Project Background
With the shift in modern work and study habits, sitting in front of a computer for long stretches has become the norm. Poor sitting posture not only leads to health problems such as cervical spondylosis and lumbar disc herniation, but also hurts productivity and mental well-being. This project develops a deep-learning-based human posture recognition system that monitors the user's sitting posture in real time and offers corrective suggestions.
Tech Stack
Backend
- Python 3.8+: primary development language
- Flask/FastAPI: web framework providing a RESTful API
- OpenCV: image processing and video-stream capture
- MediaPipe/OpenPose: human pose estimation models
- NumPy: numerical computation
- SQLite/MySQL: data storage
Frontend
- Vue.js 3: frontend framework
- Element Plus: UI component library
- WebSocket: real-time communication
- Chart.js: data visualization
Deep Learning Frameworks
- TensorFlow/PyTorch: model training and inference
- ONNX: model conversion and optimization
System Architecture
┌──────────────┐   WebSocket/HTTP    ┌────────────────┐
│   Frontend   │ ◄─────────────────► │ Flask backend  │
│   (Vue.js)   │                     │                │
└──────────────┘                     └───────┬────────┘
                                             │
                                             ▼
                                   ┌────────────────────┐
                                   │  Pose detection    │
                                   │  (MediaPipe)       │
                                   └─────────┬──────────┘
                                             │
                                             ▼
                                   ┌────────────────────┐
                                   │ Posture evaluation │
                                   └─────────┬──────────┘
                                             │
                                             ▼
                                   ┌────────────────────┐
                                   │   Data storage     │
                                   └────────────────────┘
Core Features
1. Human Pose Keypoint Detection
The MediaPipe Pose model detects 33 body landmarks, covering the head, shoulders, spine, hips, and other key positions.
import cv2
import mediapipe as mp

class PoseDetector:
    def __init__(self):
        self.mp_pose = mp.solutions.pose
        self.pose = self.mp_pose.Pose(
            static_image_mode=False,
            model_complexity=1,
            min_detection_confidence=0.5,
            min_tracking_confidence=0.5
        )
        self.mp_drawing = mp.solutions.drawing_utils

    def detect_pose(self, frame):
        # Convert BGR (OpenCV default) to RGB, which MediaPipe expects
        image_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = self.pose.process(image_rgb)
        if results.pose_landmarks:
            # Extract normalized landmark coordinates
            landmarks = []
            for landmark in results.pose_landmarks.landmark:
                landmarks.append({
                    'x': landmark.x,
                    'y': landmark.y,
                    'z': landmark.z,
                    'visibility': landmark.visibility
                })
            return landmarks
        return None
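A quick way to smoke-test the detector is to feed it webcam frames directly; a minimal sketch, assuming a default camera at index 0:

# Minimal smoke test for PoseDetector, assuming a webcam at index 0.
import cv2

detector = PoseDetector()
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    landmarks = detector.detect_pose(frame)
    if landmarks:
        # Landmark 0 is the nose in MediaPipe Pose's 33-point topology
        print('nose:', landmarks[0]['x'], landmarks[0]['y'])
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()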
2. Posture Evaluation Algorithm
Angles and distances computed from the landmark positions determine whether the sitting posture is acceptable.
import numpy as np

class PostureEvaluator:
    def __init__(self):
        # Angle thresholds (degrees) that define an acceptable sitting posture
        self.thresholds = {
            'neck_angle': (40, 60),   # forward head tilt
            'back_angle': (75, 95),   # back inclination
            'shoulder_level': 5,      # max shoulder height difference (% of frame)
        }

    def calculate_angle(self, p1, p2, p3):
        """Compute the angle at p2 formed by the segments p2-p1 and p2-p3."""
        v1 = np.array([p1['x'] - p2['x'], p1['y'] - p2['y']])
        v2 = np.array([p3['x'] - p2['x'], p3['y'] - p2['y']])
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
        return np.degrees(angle)

    def evaluate_posture(self, landmarks):
        """Score posture quality and collect detected issues."""
        issues = []
        # Relevant landmarks (MediaPipe Pose indices)
        nose = landmarks[0]
        left_shoulder = landmarks[11]
        right_shoulder = landmarks[12]
        left_hip = landmarks[23]
        right_hip = landmarks[24]
        # 1. Forward head tilt: angle at the nose between the shoulder
        #    midpoint and a vertical reference above the nose
        neck_angle = self.calculate_angle(
            {'x': (left_shoulder['x'] + right_shoulder['x']) / 2,
             'y': (left_shoulder['y'] + right_shoulder['y']) / 2},
            nose,
            {'x': nose['x'], 'y': 0}
        )
        if neck_angle < self.thresholds['neck_angle'][0]:
            issues.append({
                'type': 'neck_forward',
                'severity': 'high',
                'message': 'Head is leaning too far forward; please adjust your head position'
            })
        # 2. Shoulder levelness (landmark coordinates are normalized to
        #    [0, 1], so the percentage threshold is divided by 100)
        shoulder_diff = abs(left_shoulder['y'] - right_shoulder['y'])
        if shoulder_diff > self.thresholds['shoulder_level'] / 100:
            issues.append({
                'type': 'shoulder_uneven',
                'severity': 'medium',
                'message': 'Shoulders are uneven; please adjust your posture'
            })
        # 3. Back inclination: angle at the hip midpoint between the
        #    shoulder midpoint and a vertical reference below the hips
        back_angle = self.calculate_angle(
            {'x': (left_shoulder['x'] + right_shoulder['x']) / 2,
             'y': (left_shoulder['y'] + right_shoulder['y']) / 2},
            {'x': (left_hip['x'] + right_hip['x']) / 2,
             'y': (left_hip['y'] + right_hip['y']) / 2},
            {'x': (left_hip['x'] + right_hip['x']) / 2, 'y': 1}
        )
        if back_angle < self.thresholds['back_angle'][0]:
            issues.append({
                'type': 'slouching',
                'severity': 'high',
                'message': 'Back is hunched; please sit up straight'
            })
        return {
            'is_correct': len(issues) == 0,
            'issues': issues,
            'score': max(0, 100 - len(issues) * 20)  # 20 points off per issue
        }
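The angle helper is easy to sanity-check on known geometry; a minimal sketch, with points chosen so the expected angle at the vertex is exactly 90 degrees:

# Sanity check: a right angle at p2 should come out as 90 degrees.
evaluator = PostureEvaluator()
p1 = {'x': 0.0, 'y': 1.0}   # directly below p2
p2 = {'x': 0.0, 'y': 0.0}   # vertex
p3 = {'x': 1.0, 'y': 0.0}   # directly to the right of p2
print(evaluator.calculate_angle(p1, p2, p3))  # -> 90.0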
3. Flask Backend API Design
from flask import Flask, jsonify
from flask_socketio import SocketIO, emit
from flask_cors import CORS
import base64
import cv2
import numpy as np

app = Flask(__name__)
CORS(app)
socketio = SocketIO(app, cors_allowed_origins="*")

# The PoseDetector and PostureEvaluator classes defined above
pose_detector = PoseDetector()
posture_evaluator = PostureEvaluator()

@app.route('/api/health', methods=['GET'])
def health_check():
    return jsonify({'status': 'ok'})

@socketio.on('video_frame')
def handle_video_frame(data):
    """Handle a video frame sent from the frontend."""
    try:
        # Decode the base64 data URL (strip the "data:image/...;base64," prefix)
        image_data = base64.b64decode(data['frame'].split(',')[1])
        nparr = np.frombuffer(image_data, np.uint8)
        frame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
        # Pose detection
        landmarks = pose_detector.detect_pose(frame)
        if landmarks:
            # Posture evaluation
            evaluation = posture_evaluator.evaluate_posture(landmarks)
            # Send the result back to the client
            emit('posture_result', {
                'landmarks': landmarks,
                'evaluation': evaluation,
                'timestamp': data.get('timestamp')
            })
        else:
            emit('posture_result', {
                'error': 'No human pose detected'
            })
    except Exception as e:
        emit('error', {'message': str(e)})

@app.route('/api/history', methods=['GET'])
def get_history():
    """Fetch history records."""
    # Query historical data from the database (simplified here)
    return jsonify({
        'records': []
    })

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000, debug=True)
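The Socket.IO endpoint can be exercised without the frontend. A minimal sketch using the python-socketio client package (an assumption, not part of the project code above); the test.jpg path is hypothetical:

# Hypothetical test client for the 'video_frame' event, assuming the
# server above is running and `pip install "python-socketio[client]"`.
import base64
import socketio

sio = socketio.Client()

@sio.on('posture_result')
def on_result(data):
    print('result:', data)
    sio.disconnect()

sio.connect('http://localhost:5000')
with open('test.jpg', 'rb') as f:  # hypothetical test image
    encoded = base64.b64encode(f.read()).decode()
sio.emit('video_frame', {'frame': 'data:image/jpeg;base64,' + encoded,
                         'timestamp': 0})
sio.wait()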
4. Vue.js Frontend Implementation
<template>
  <div class="posture-monitor">
    <div class="video-container">
      <video ref="video" autoplay></video>
      <canvas ref="canvas"></canvas>
      <div class="overlay" v-if="evaluation">
        <div :class="['status', evaluation.is_correct ? 'good' : 'bad']">
          {{ evaluation.is_correct ? 'Good posture ✓' : 'Posture needs adjustment ✗' }}
        </div>
        <div class="score">Score: {{ evaluation.score }}</div>
      </div>
    </div>
    <div class="alerts">
      <transition-group name="alert">
        <div
          v-for="issue in currentIssues"
          :key="issue.type"
          :class="['alert', issue.severity]"
        >
          {{ issue.message }}
        </div>
      </transition-group>
    </div>
    <div class="statistics">
      <el-card>
        <h3>Today's Statistics</h3>
        <el-row :gutter="20">
          <el-col :span="8">
            <div class="stat-item">
              <div class="stat-value">{{ todayStats.totalTime }}</div>
              <div class="stat-label">Monitoring time</div>
            </div>
          </el-col>
          <el-col :span="8">
            <div class="stat-item">
              <div class="stat-value">{{ todayStats.correctRate }}%</div>
              <div class="stat-label">Correct rate</div>
            </div>
          </el-col>
          <el-col :span="8">
            <div class="stat-item">
              <div class="stat-value">{{ todayStats.warnings }}</div>
              <div class="stat-label">Reminders</div>
            </div>
          </el-col>
        </el-row>
      </el-card>
    </div>
  </div>
</template>
<script>
import { ref, onMounted, onUnmounted } from 'vue';
import { io } from 'socket.io-client';

export default {
  name: 'PostureMonitor',
  setup() {
    const video = ref(null);
    const canvas = ref(null);
    const socket = ref(null);
    const evaluation = ref(null);
    const currentIssues = ref([]);
    const todayStats = ref({
      totalTime: '0 h',
      correctRate: 0,
      warnings: 0
    });
    let stream = null;
    let captureTimer = null;

    const initCamera = async () => {
      try {
        stream = await navigator.mediaDevices.getUserMedia({
          video: { width: 640, height: 480 }
        });
        video.value.srcObject = stream;
      } catch (error) {
        console.error('Camera initialization failed:', error);
      }
    };

    const initSocket = () => {
      socket.value = io('http://localhost:5000');
      socket.value.on('posture_result', (data) => {
        evaluation.value = data.evaluation;
        currentIssues.value = data.evaluation?.issues || [];
        // Update statistics
        if (data.evaluation) {
          updateStats(data.evaluation);
        }
      });
      socket.value.on('error', (error) => {
        console.error('Socket error:', error);
      });
    };

    const captureAndSend = () => {
      if (!video.value || !canvas.value || !socket.value) return;
      const ctx = canvas.value.getContext('2d');
      canvas.value.width = video.value.videoWidth;
      canvas.value.height = video.value.videoHeight;
      ctx.drawImage(video.value, 0, 0);
      const frameData = canvas.value.toDataURL('image/jpeg', 0.8);
      socket.value.emit('video_frame', {
        frame: frameData,
        timestamp: Date.now()
      });
      // Send one frame every 500 ms; keep the timer id so it can be
      // cleared on unmount
      captureTimer = setTimeout(captureAndSend, 500);
    };

    const updateStats = (evaluationData) => {
      // Statistics update logic (simplified)
      if (!evaluationData.is_correct) {
        todayStats.value.warnings++;
      }
    };

    onMounted(() => {
      initCamera();
      initSocket();
      setTimeout(() => {
        captureAndSend();
      }, 1000);
    });

    onUnmounted(() => {
      if (stream) {
        stream.getTracks().forEach(track => track.stop());
      }
      if (socket.value) {
        socket.value.disconnect();
      }
      if (captureTimer) {
        clearTimeout(captureTimer);
      }
    });

    return {
      video,
      canvas,
      evaluation,
      currentIssues,
      todayStats
    };
  }
};
</script>
<style scoped>
.posture-monitor {
padding: 20px;
}
.video-container {
position: relative;
width: 640px;
margin: 0 auto;
}
video {
width: 100%;
border-radius: 8px;
}
canvas {
display: none;
}
.overlay {
position: absolute;
top: 20px;
right: 20px;
background: rgba(0, 0, 0, 0.7);
padding: 15px;
border-radius: 8px;
color: white;
}
.status.good {
color: #67C23A;
}
.status.bad {
color: #F56C6C;
}
.alerts {
margin-top: 20px;
}
.alert {
padding: 12px 20px;
margin-bottom: 10px;
border-radius: 4px;
animation: slideIn 0.3s;
}
.alert.high {
background: #FEF0F0;
color: #F56C6C;
border-left: 4px solid #F56C6C;
}
.alert.medium {
background: #FDF6EC;
color: #E6A23C;
border-left: 4px solid #E6A23C;
}
@keyframes slideIn {
from {
transform: translateX(100%);
opacity: 0;
}
to {
transform: translateX(0);
opacity: 1;
}
}
.statistics {
margin-top: 30px;
}
.stat-item {
text-align: center;
}
.stat-value {
font-size: 32px;
font-weight: bold;
color: #409EFF;
}
.stat-label {
color: #909399;
margin-top: 8px;
}
</style>
Database Design
-- Users table
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Posture records table
CREATE TABLE posture_records (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    score INTEGER,
    is_correct BOOLEAN,
    issues TEXT, -- issue list stored as JSON
    FOREIGN KEY (user_id) REFERENCES users(id)
);

-- Daily statistics table
CREATE TABLE daily_stats (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL,
    date DATE NOT NULL,
    total_time INTEGER, -- seconds
    correct_count INTEGER,
    warning_count INTEGER,
    average_score FLOAT,
    FOREIGN KEY (user_id) REFERENCES users(id),
    UNIQUE(user_id, date)
);
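Persisting an evaluation result into posture_records is straightforward with Python's built-in sqlite3 module. A minimal sketch, assuming a posture.db database file and an existing user with id 1 (both assumptions):

# Hypothetical sketch: persist one evaluation result with sqlite3.
import json
import sqlite3

conn = sqlite3.connect('posture.db')  # assumed database file
evaluation = {'is_correct': False, 'score': 80,
              'issues': [{'type': 'slouching', 'severity': 'high'}]}
conn.execute(
    'INSERT INTO posture_records (user_id, score, is_correct, issues) '
    'VALUES (?, ?, ?, ?)',
    (1, evaluation['score'], evaluation['is_correct'],
     json.dumps(evaluation['issues']))
)
conn.commit()
conn.close()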
Optimization and Extensions
1. Performance Optimization
- Use ONNX Runtime to speed up model inference
- Implement a frame-skipping strategy to reduce compute load (see the sketch after this list)
- Use a Web Worker for image processing to avoid blocking the main thread
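As a rough illustration of the first two points, a minimal sketch combining frame skipping with ONNX Runtime inference; the pose.onnx file and its input layout are assumptions, since exporting a pose model to ONNX depends on the chosen framework:

# Hypothetical sketch: run inference only on every Nth frame with
# ONNX Runtime; 'pose.onnx' and its 1x3x256x256 input are assumptions.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('pose.onnx')   # assumed exported model
input_name = session.get_inputs()[0].name
SKIP = 5  # process 1 frame out of every 5

cap = cv2.VideoCapture(0)
frame_idx = 0
last_output = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % SKIP == 0:
        # Resize and reorder to the assumed NCHW float32 input
        blob = cv2.resize(frame, (256, 256)).astype(np.float32) / 255.0
        blob = blob.transpose(2, 0, 1)[np.newaxis, ...]
        last_output = session.run(None, {input_name: blob})
    frame_idx += 1
cap.release()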
2. Feature Extensions
- Add voice reminders (a minimal sketch follows this list)
- Support multi-person detection and identification
- Allow user-defined posture standards
- Develop a mobile app
- Add a posture training mode and daily check-ins
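For the voice reminder idea, one option is the pyttsx3 offline text-to-speech library (an assumption; the project does not prescribe a TTS engine). A minimal sketch:

# Hypothetical voice reminder using pyttsx3 (pip install pyttsx3).
import pyttsx3

def speak_alert(message):
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()  # blocks until speech finishes

speak_alert('Back is hunched; please sit up straight')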
3. Deployment
- Containerize with Docker
- Nginx reverse proxy
- Redis caching for better performance
- Cloud server deployment (Alibaba Cloud, Tencent Cloud, etc.)
Project Summary
This project walks through a full-stack workflow from frontend to backend and from model to application. It applies deep learning for real-time pose recognition and web technologies for a user-friendly interface, making it genuinely practical: it can help users build good sitting habits, prevent occupational strain, and improve quality of life.
Technical Highlights
- Real-time video stream processing
- Bidirectional WebSocket communication
- Applied deep learning models
- Responsive frontend design
- Data visualization
Areas for Improvement
- Higher model accuracy
- Detection of more posture types
- Algorithm performance optimization
- Richer data analysis features
- Stronger user incentive mechanisms
Building this project covers the core skills of Python full-stack development: deep learning, web development, database design, and frontend frameworks.