Complete Implementation of a Smart-Car Autonomous Driving System Based on Multi-Camera Fusion

Abstract

This article presents the complete design and implementation of an autonomous-driving system for a smart car based on multi-camera visual fusion. Four cameras (front, left, right, and rear) provide 360° environmental perception, and core algorithms for deep-learning object detection, lane detection, obstacle detection, and multi-view geometric reconstruction enable autonomous navigation, path planning, obstacle avoidance, and decision control. The article walks through the theoretical foundations, hardware architecture, software framework, and concrete code, offering a complete technical reference for intelligent-driving system development.

Keywords: multi-camera fusion, autonomous driving, computer vision, depth estimation, SLAM, path planning, STM32, embedded systems


Contents

  1. System Overview and Technical Architecture
  2. Theoretical Foundations and Algorithm Principles
  3. Hardware System Design
  4. Software Architecture Design
  5. Core Module Implementation
  6. Multi-Camera Calibration and Fusion
  7. Perception Layer Implementation
  8. Decision and Planning Layer Implementation
  9. Control Layer Implementation
  10. Full System Integration
  11. Testing and Optimization
  12. Summary and Outlook

1. System Overview and Technical Architecture

1.1 Overall System Architecture

The autonomous-driving system uses a layered architecture, organized as follows:

┌─────────────────────────────────────────────────────────┐
│               Application Layer (Application)            │
│       High-level decisions | Task planning | HMI         │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│            Decision & Planning Layer (Planning)          │
│  Global path planning | Local avoidance | Behavior       │
│  decisions | Trajectory generation                       │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│               Perception Layer (Perception)              │
│  Object detection | Lane detection | Depth estimation    │
│  Multi-sensor fusion | SLAAM                             │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                  Sensor Layer (Sensors)                  │
│  Front | Left | Right | Rear cameras | IMU | GPS         │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                  Control Layer (Control)                 │
│  Lateral control (steering) | Longitudinal control       │
│  (speed) | Chassis actuators                             │
└─────────────────────────────────────────────────────────┘

1.2 Multi-Camera Layout

The smart car uses a four-camera surround-view configuration:

  • Front camera: 120° field of view; forward lane detection, object recognition, and ranging
  • Left/right side cameras: 90° field of view; blind-spot monitoring and lane-change assistance
  • Rear camera: 120° field of view; reversing and rear obstacle detection

Camera mounting positions:

        [Front camera]
              |
    [Left]---Body---[Right]
              |
        [Rear camera]

1.3 Technical Highlights

  1. Multi-view fused perception: 360° coverage with no blind spots
  2. Real-time performance: multi-threaded parallel processing, running stably at 30 fps
  3. Robustness: sensor redundancy, so a single-point failure does not take down the system
  4. Modular architecture: decoupled functional modules for easier development and maintenance
  5. Extensibility: LiDAR, millimeter-wave radar, and other sensors can be added

2. Theoretical Foundations and Algorithm Principles

2.1 Computer Vision Basics

2.1.1 Camera Imaging Model

Imaging follows the pinhole camera model: a 3D world point $P_w = (X_w, Y_w, Z_w)$ maps to an image point $p = (u, v)$ via:

$$
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_c} K [R|t] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$

where:

  • $K$ is the camera intrinsic matrix: $K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$
  • $[R|t]$ are the camera extrinsics (rotation and translation)
  • $Z_c$ is the depth value

2.1.2 Distortion Correction

Real lenses exhibit radial and tangential distortion.

Radial distortion:

$$
\begin{cases}
x_{corrected} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \\
y_{corrected} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
\end{cases}
$$

Tangential distortion:

$$
\begin{cases}
x_{corrected} = x + [2 p_1 x y + p_2 (r^2 + 2x^2)] \\
y_{corrected} = y + [p_1 (r^2 + 2y^2) + 2 p_2 x y]
\end{cases}
$$

where $r^2 = x^2 + y^2$.
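
To make the model concrete, here is a small sketch that projects one 3D point with OpenCV; the intrinsics and distortion coefficients are illustrative placeholders, not values calibrated in this article.

python
import cv2
import numpy as np

# Hypothetical intrinsics: fx = fy = 500 px, principal point at (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # k1, k2, p1, p2, k3

# Camera at the world origin, looking down +Z (R = I, t = 0)
rvec = np.zeros(3)
tvec = np.zeros(3)

# One world point 2 m in front of the camera, 0.5 m to the right
point_w = np.array([[0.5, 0.0, 2.0]])

# cv2.projectPoints applies [R|t], the pinhole projection, and the
# distortion model in a single call
img_pts, _ = cv2.projectPoints(point_w, rvec, tvec, K, dist)
print(img_pts.ravel())  # pixel coordinates (u, v)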

2.2 Stereo Vision and Depth Estimation

2.2.1 Binocular Stereo

Depth is computed from the disparity between the left and right cameras:

$$Z = \frac{f \cdot B}{d}$$

where:

  • $Z$ is the object depth
  • $f$ is the focal length
  • $B$ is the baseline (distance between the two cameras)
  • $d$ is the disparity (the x-coordinate difference of the same point in the left and right images)

A short sketch using these quantities follows.
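
A minimal sketch of this relation using OpenCV's block matcher on an already-rectified stereo pair; the focal length and the 6 cm baseline are assumed example values.

python
import cv2
import numpy as np

def depth_from_disparity(left_gray, right_gray, fx=500.0, baseline_m=0.06):
    """Dense depth via Z = f * B / d from a rectified grayscale stereo pair."""
    # StereoBM returns fixed-point disparities scaled by 16
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    depth = np.zeros_like(disp)
    valid = disp > 0  # zero or negative disparity means "no match"
    depth[valid] = fx * baseline_m / disp[valid]
    return depth  # meters, same resolution as the inputs
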
2.2.2 Monocular Depth Estimation

Deep-learning methods use a convolutional neural network to estimate a depth map from a single image:

Input image → CNN encoder → feature extraction → CNN decoder → depth map

Common network architectures: MonoDepth, DenseDepth, BTS
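
As one concrete baseline (not a method this article depends on), the publicly released MiDaS model can be loaded through torch.hub; this is a sketch, assuming network access to download the weights and a local frame.jpg.

python
import cv2
import torch

# MiDaS small model and its matching preprocessing transforms
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transforms = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transforms(img))  # relative (not metric) inverse depth
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze().numpy()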

2.3 Object Detection Algorithms

2.3.1 The YOLO Family

YOLO (You Only Look Once) casts object detection as a regression problem:

  1. The input image is divided into an $S \times S$ grid
  2. Each grid cell predicts $B$ bounding boxes with confidence scores
  3. Each box predicts $(x, y, w, h, \text{confidence})$ plus class probabilities

Loss function:

$$
\begin{aligned}
\mathcal{L} ={} & \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
 & + \lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ (\sqrt{w_i} - \sqrt{\hat{w}_i})^2 + (\sqrt{h_i} - \sqrt{\hat{h}_i})^2 \right] \\
 & + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} (C_i - \hat{C}_i)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} (C_i - \hat{C}_i)^2 \\
 & + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes} (p_i(c) - \hat{p}_i(c))^2
\end{aligned}
$$

2.4 Lane Detection Algorithms

2.4.1 Classical Method: the Hough Transform

The Hough transform moves line detection from image space into parameter space:

Line equation: $\rho = x\cos\theta + y\sin\theta$

Peaks are then sought in the $(\rho, \theta)$ parameter space.
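
A minimal OpenCV sketch using the probabilistic variant, which returns line segments directly; road.jpg is a placeholder input.

python
import cv2
import numpy as np

img = cv2.imread("road.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# rho = 1 px, theta = 1 degree, 50 votes minimum
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)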

2.4.2 Deep-Learning Methods

Semantic-segmentation networks (e.g., SegNet, DeepLab) classify lane-line pixels at the pixel level:

# Lane segmentation network architecture
Input (640x480x3)
    ↓
Encoder: ResNet50 backbone
    ↓
ASPP (Atrous Spatial Pyramid Pooling)
    ↓
Decoder: Upsampling
    ↓
Output: Lane mask (640x480x1)
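
A minimal PyTorch sketch of this encoder-decoder shape, using a ResNet-50 backbone and a plain upsampling head; the ASPP block is omitted for brevity, and this is an illustration rather than the article's trained network.

python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class LaneSegNet(nn.Module):
    """Encoder-decoder lane segmentation: ResNet-50 encoder, bilinear decoder."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the last residual stage (output stride 32)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.head = nn.Sequential(
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1))              # 1-channel lane logit map

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.encoder(x)                # (N, 2048, H/32, W/32)
        logits = self.head(feats)
        # Upsample back to input resolution, then sigmoid -> lane mask
        return torch.sigmoid(nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False))

net = LaneSegNet().eval()
mask = net(torch.randn(1, 3, 480, 640))       # (1, 1, 480, 640)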

2.5 Visual SLAM Principles

2.5.1 The SLAM Problem

Simultaneous Localization and Mapping (SLAM):

  • State estimation: estimate the vehicle pose $\mathbf{x}_t = (x, y, \theta)_t$
  • Mapping: build the environment map $\mathbf{m}$

As a probability distribution:

$$P(\mathbf{x}_{0:t}, \mathbf{m} \mid \mathbf{z}_{0:t}, \mathbf{u}_{0:t})$$

where:

  • $\mathbf{x}_{0:t}$: all poses from time 0 to t
  • $\mathbf{z}_{0:t}$: all observations
  • $\mathbf{u}_{0:t}$: all control inputs
2.5.2 ORB-SLAM2 Pipeline

Image input → ORB feature extraction → feature matching → motion estimation
    ↓
Keyframe selection → local mapping → loop-closure detection → global optimization

ORB features:

  • FAST corner detection
  • BRIEF descriptors
  • Rotation invariance
  • Scale invariance (image pyramid)
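
A minimal sketch of this ORB front end with OpenCV, extracting and matching features between two consecutive frames; frame1.jpg and frame2.jpg are placeholder filenames.

python
import cv2

img1 = cv2.imread("frame1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)  # FAST corners + rotated BRIEF
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits binary BRIEF descriptors; crossCheck enforces 1:1 matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")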

2.6 Path-Planning Algorithms

2.6.1 Global Path Planning: A*

A heuristic search with the evaluation function:

$$f(n) = g(n) + h(n)$$

where:

  • $g(n)$: actual cost from the start to node n
  • $h(n)$: heuristic estimate from node n to the goal (Euclidean distance)

A compact implementation sketch follows this list.
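
A compact sketch of A* on a 4-connected occupancy grid (0 = free, 1 = occupied); the grid, start, and goal here are toy values.

python
import heapq
import itertools
import math

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid; returns a list of cells or None."""
    h = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])  # Euclidean heuristic
    tie = itertools.count()                 # tie-breaker so the heap never compares nodes
    open_set = [(h(start, goal), next(tie), 0.0, start, None)]
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                        # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:                    # walk the parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1.0                # every grid step costs 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt, goal), next(tie), ng, nxt, node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
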
2.6.2 Local Path Planning: DWA

The Dynamic Window Approach:

  1. Sample the velocity space $(v, \omega)$
  2. Forward-simulate a trajectory for each sampled velocity pair
  3. Score each trajectory: $G(v, \omega) = \sigma(\alpha \cdot heading + \beta \cdot dist + \gamma \cdot velocity)$
  4. Execute the best-scoring pair (see the sketch after this list)
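
A toy sketch of this loop for a unicycle-model robot; the sample counts, weights, and collision radius are illustrative choices, not tuned values from this project.

python
import math
import numpy as np

def dwa_step(state, obstacles, dt=0.1, horizon=1.0,
             v_range=(0.0, 1.0), w_range=(-1.5, 1.5), goal=(3.0, 0.0)):
    """Pick (v, w) by sampling the velocity window and scoring rollouts."""
    best, best_score = (0.0, 0.0), -math.inf
    for v in np.linspace(*v_range, 8):          # linear-velocity samples
        for w in np.linspace(*w_range, 15):     # angular-velocity samples
            x, y, th = state
            min_dist = math.inf
            for _ in range(int(horizon / dt)):  # forward-simulate the unicycle model
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                th += w * dt
                for ox, oy in obstacles:
                    min_dist = min(min_dist, math.hypot(x - ox, y - oy))
            if min_dist < 0.2:                  # trajectory collides: discard it
                continue
            heading = -abs(math.atan2(goal[1] - y, goal[0] - x) - th)
            score = 1.0 * heading + 0.5 * min(min_dist, 1.0) + 0.3 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best

print(dwa_step((0.0, 0.0, 0.0), obstacles=[(1.0, 0.1)]))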

2.7 Control-Theory Basics

2.7.1 Pure Pursuit

The front-wheel steering angle is computed as:

$$\delta = \arctan\left(\frac{2L\sin\alpha}{l_d}\right)$$

where:

  • $L$: wheelbase
  • $\alpha$: angle between the vehicle heading and the line to the target point
  • $l_d$: lookahead distance

A minimal sketch follows this list.
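
A minimal sketch of this law; the 0.25 m wheelbase is an assumed value for a small car, and atan2 is used in place of a plain arctangent for numerical safety.

python
import math

def pure_pursuit_steer(pose, target, wheelbase=0.25):
    """Steering angle (rad) toward a lookahead point, bicycle model.

    pose   = (x, y, yaw) of the rear axle in world coordinates
    target = (x, y) lookahead point on the planned path
    """
    x, y, yaw = pose
    dx, dy = target[0] - x, target[1] - y
    ld = math.hypot(dx, dy)                 # lookahead distance l_d
    alpha = math.atan2(dy, dx) - yaw        # heading error to the target
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)

# Target 1 m ahead and 0.2 m to the left -> small left steer
print(math.degrees(pure_pursuit_steer((0, 0, 0), (1.0, 0.2))))
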
2.7.2 Model Predictive Control (MPC)

The optimization problem:

$$\min_{\mathbf{u}} \sum_{k=0}^{N-1} \left[ \mathbf{x}_k^T Q \mathbf{x}_k + \mathbf{u}_k^T R \mathbf{u}_k \right]$$

subject to:

$$\begin{cases} \mathbf{x}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k) \\ \mathbf{u}_{min} \leq \mathbf{u}_k \leq \mathbf{u}_{max} \\ \mathbf{x}_{min} \leq \mathbf{x}_k \leq \mathbf{x}_{max} \end{cases}$$
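
A toy sketch of this formulation for 1-D double-integrator dynamics, solved with scipy.optimize.minimize; a real controller would use a dedicated QP/NMPC solver, and the weights and bounds here are illustrative.

python
import numpy as np
from scipy.optimize import minimize

# Double integrator: x = [position, velocity], u = acceleration
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])
N = 10                                    # horizon length

def cost(u_seq, x0):
    """Sum of stage costs x^T Q x + u^T R u along the rollout."""
    x, total = x0.copy(), 0.0
    for u in u_seq:
        total += x @ Q @ x + u * R[0, 0] * u
        x = A @ x + B.flatten() * u       # x_{k+1} = f(x_k, u_k)
    return total

x0 = np.array([1.0, 0.0])                 # 1 m from the origin, at rest
res = minimize(cost, np.zeros(N), args=(x0,),
               bounds=[(-2.0, 2.0)] * N)  # u_min <= u_k <= u_max
print("first control to apply:", res.x[0])  # receding horizon: apply u_0 only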


3. Hardware System Design

3.1 Hardware List

| Component | Model/Spec | Qty | Notes |
| --- | --- | --- | --- |
| Main controller | Raspberry Pi 4B (8 GB) | 1 | Runs vision and decision algorithms |
| Auxiliary controller | STM32F407ZGT6 | 1 | Low-level motion control |
| Cameras | OV5647 (5 MP) | 4 | Front/rear/left/right surround view |
| IMU | MPU6050 | 1 | Attitude measurement |
| GPS module | NEO-M8N | 1 | Positioning |
| Motor driver | TB6612FNG | 2 | DC motor drive |
| Motors | Geared DC motors (12 V) | 4 | Four-wheel drive |
| Servo | MG996R | 1 | Front-wheel steering |
| Battery | LiPo (11.1 V, 3S) | 1 | System power |
| Voltage conversion | DC-DC buck modules | 2 | 5 V / 3.3 V rails |
| Display | 7-inch touchscreen | 1 | HMI |

3.2 Wiring Diagram

                    +12V Battery
                        |
            +-----------+-----------+
            |           |           |
        [DC-DC 5V]  [Motor Driver] [Servo]
            |           |
    +-------+-------+   +--[Motors x4]
    |       |       |
[RPi 4B] [STM32] [Screen]
    |       |
+---+-------+---+
|   |   |   |   |
[F] [L] [R] [B] <- Cameras
|   |
[IMU] [GPS]

Legend:
F - Front camera
L - Left camera
R - Right camera
B - Back camera

3.3 Raspberry Pi ↔ STM32 Communication Interface

Communication protocol: UART (serial)

  • Baud rate: 115200
  • Data bits: 8
  • Stop bits: 1
  • Parity: none

Packet format:

[Header: 0xAA] [Length] [CMD] [Data...] [Checksum] [Tail: 0x55]
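
A small sketch of framing and sending such a packet with pyserial; the XOR checksum, the length convention, and the port name /dev/ttyAMA0 are assumptions, since the article does not pin these down here.

python
import serial

def build_packet(cmd: int, data: bytes) -> bytes:
    """Frame: 0xAA | length | cmd | data... | checksum | 0x55.

    Assumptions: length counts the cmd+data bytes, and the checksum is
    the XOR of everything between the header and the checksum byte.
    """
    body = bytes([len(data) + 1, cmd]) + data
    checksum = 0
    for b in body:
        checksum ^= b
    return bytes([0xAA]) + body + bytes([checksum, 0x55])

# Example: hypothetical command 0x01 = "set speed", payload = two speed bytes
packet = build_packet(0x01, bytes([0x64, 0x00]))
print(packet.hex(" "))

port = serial.Serial("/dev/ttyAMA0", baudrate=115200, timeout=0.1)
port.write(packet)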

3.4 Camera Interface Configuration

Camera connections:

  • Camera 0 (front): CSI-0 port
  • Camera 1 (rear): USB 3.0 (via adapter)
  • Camera 2 (left): USB 3.0
  • Camera 3 (right): USB 3.0

V4L2 device mapping:

bash
/dev/video0 -> Front Camera
/dev/video1 -> Back Camera
/dev/video2 -> Left Camera
/dev/video3 -> Right Camera

4. Software Architecture Design

4.1 System Software Framework

┌─────────────────────────────────────────────────────────┐
│                   Application Layer                      │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│   │ Autonomous   │  │ Manual       │  │ HMI          │  │
│   │ driving mode │  │ control mode │  │ interaction  │  │
│   └──────────────┘  └──────────────┘  └──────────────┘  │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│               Decision & Planning Layer                  │
│   ┌─────────────┐  ┌──────────────┐  ┌──────────────┐   │
│   │ Behavior FSM│  │ Global path  │  │ Local        │   │
│   │             │  │ planning     │  │ avoidance DWA│   │
│   └─────────────┘  └──────────────┘  └──────────────┘   │
│   ┌─────────────────────────────────────────────────┐   │
│   │       Trajectory Generator (with smoothing)     │   │
│   └─────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                   Perception Layer                       │
│   ┌──────────────┐  ┌──────────────┐  ┌─────────────┐   │
│   │ Multi-camera │  │ Image        │  │ Feature     │   │
│   │ management   │  │ preprocessing│  │ extraction  │   │
│   └──────────────┘  └──────────────┘  └─────────────┘   │
│   ┌──────────────┐  ┌──────────────┐  ┌─────────────┐   │
│   │ YOLO object  │  │ Lane         │  │ Depth       │   │
│   │ detection    │  │ detection    │  │ estimation  │   │
│   └──────────────┘  └──────────────┘  └─────────────┘   │
│   ┌──────────────┐  ┌──────────────┐  ┌─────────────┐   │
│   │ Visual SLAM  │  │ Multi-sensor │  │ Obstacle    │   │
│   │              │  │ fusion       │  │ recognition │   │
│   └──────────────┘  └──────────────┘  └─────────────┘   │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                     Control Layer                        │
│   ┌─────────────────┐        ┌─────────────────┐        │
│   │ Lateral control │        │ Longitudinal    │        │
│   │ (steering):     │        │ control (speed):│        │
│   │ Pure Pursuit    │        │ PID Control     │        │
│   └─────────────────┘        └─────────────────┘        │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                    Actuator Layer                        │
│       [STM32 low-level control] → [Motors] [Servo]      │
└─────────────────────────────────────────────────────────┘

4.2 Process/Thread Architecture

Main Process
├── Camera Thread 0 (front-camera capture)
├── Camera Thread 1 (rear-camera capture)
├── Camera Thread 2 (left-camera capture)
├── Camera Thread 3 (right-camera capture)
├── Perception Thread
│   ├── Object Detection
│   ├── Lane Detection
│   └── Depth Estimation
├── SLAM Thread (localization and mapping)
├── Planning Thread (path planning)
├── Control Thread
├── Communication Thread (STM32 link)
└── HMI Thread (human-machine interface)

4.3 Data-Flow Design

python
# Use queues to pass data between threads
from queue import Queue
from threading import Thread, Lock

# Global frame queues
frame_queues = {
    'front': Queue(maxsize=2),
    'back': Queue(maxsize=2),
    'left': Queue(maxsize=2),
    'right': Queue(maxsize=2)
}

perception_queue = Queue(maxsize=5)
planning_queue = Queue(maxsize=5)
control_queue = Queue(maxsize=5)

# Shared data structures (access must be locked)
shared_data = {
    'vehicle_state': {},      # vehicle state
    'perception_result': {},  # perception output
    'planning_path': [],      # planned path
    'obstacles': []           # obstacle list
}
shared_data_lock = Lock()

5. Core Module Implementation

5.1 Multi-Camera Management Module

5.1.1 Camera Class
python
# camera_manager.py
import cv2
import numpy as np
from threading import Thread, Lock
from queue import Queue, Empty
import time

class CameraDevice:
    """Single camera device."""
    
    def __init__(self, camera_id, position, resolution=(640, 480), fps=30):
        """
        Initialize the camera.
        
        Args:
            camera_id: camera device ID (/dev/videoX)
            position: camera position ('front', 'back', 'left', 'right')
            resolution: resolution (width, height)
            fps: frame rate
        """
        self.camera_id = camera_id
        self.position = position
        self.resolution = resolution
        self.fps = fps
        
        # Open the camera
        self.cap = cv2.VideoCapture(camera_id, cv2.CAP_V4L2)
        
        # Configure capture parameters
        self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, resolution[0])
        self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, resolution[1])
        self.cap.set(cv2.CAP_PROP_FPS, fps)
        self.cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
        self.cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # Small buffer to reduce latency
        
        # Calibration parameters (must be calibrated in advance)
        self.camera_matrix = None
        self.dist_coeffs = None
        self.load_calibration()
        
        # Status flags
        self.is_running = False
        self.frame_count = 0
        self.fps_actual = 0
        
        # Frame buffer queue
        self.frame_queue = Queue(maxsize=2)
        
        # Capture thread
        self.capture_thread = None
        
    def load_calibration(self):
        """Load the camera calibration parameters."""
        try:
            calib_file = f"calibration_{self.position}.npz"
            data = np.load(calib_file)
            self.camera_matrix = data['camera_matrix']
            self.dist_coeffs = data['dist_coeffs']
            print(f"[{self.position}] Calibration loaded")
        except (IOError, KeyError):
            print(f"[{self.position}] Warning: no calibration file found, using defaults")
            # Default parameters (adjust for the actual camera)
            fx = fy = 500  # focal length
            cx, cy = self.resolution[0] / 2, self.resolution[1] / 2
            self.camera_matrix = np.array([
                [fx, 0, cx],
                [0, fy, cy],
                [0, 0, 1]
            ], dtype=np.float32)
            self.dist_coeffs = np.zeros(5, dtype=np.float32)
    
    def start(self):
        """Start the capture thread."""
        if not self.cap.isOpened():
            print(f"[{self.position}] Error: cannot open camera {self.camera_id}")
            return False
        
        self.is_running = True
        self.capture_thread = Thread(target=self._capture_loop, daemon=True)
        self.capture_thread.start()
        print(f"[{self.position}] Camera started")
        return True
    
    def stop(self):
        """Stop capturing."""
        self.is_running = False
        if self.capture_thread:
            self.capture_thread.join(timeout=2)
        self.cap.release()
        print(f"[{self.position}] Camera stopped")
    
    def _capture_loop(self):
        """Capture loop (runs in its own thread)."""
        last_time = time.time()
        frame_times = []
        
        while self.is_running:
            ret, frame = self.cap.read()
            
            if not ret:
                print(f"[{self.position}] Warning: failed to read frame")
                time.sleep(0.01)
                continue
            
            # Undistort
            if self.camera_matrix is not None:
                frame = cv2.undistort(frame, self.camera_matrix, self.dist_coeffs)
            
            # Timestamp the frame
            timestamp = time.time()
            
            # Push the frame to the queue (drop the stale one if full)
            if self.frame_queue.full():
                try:
                    self.frame_queue.get_nowait()
                except Empty:
                    pass
            
            self.frame_queue.put({
                'frame': frame,
                'timestamp': timestamp,
                'frame_id': self.frame_count
            })
            
            # Track the actual FPS over a sliding window
            self.frame_count += 1
            frame_times.append(time.time())
            if len(frame_times) > 30:
                frame_times.pop(0)
                time_diff = frame_times[-1] - frame_times[0]
                self.fps_actual = len(frame_times) / time_diff if time_diff > 0 else 0
    
    def get_frame(self, timeout=0.1):
        """
        Get the latest frame.
        
        Returns:
            dict: {'frame': numpy.ndarray, 'timestamp': float, 'frame_id': int}
                  or None on timeout
        """
        try:
            return self.frame_queue.get(timeout=timeout)
        except Empty:
            return None
    
    def get_info(self):
        """Return camera status information."""
        return {
            'position': self.position,
            'camera_id': self.camera_id,
            'resolution': self.resolution,
            'fps_target': self.fps,
            'fps_actual': self.fps_actual,
            'frame_count': self.frame_count,
            'is_running': self.is_running
        }


class MultiCameraManager:
    """Multi-camera manager."""
    
    def __init__(self):
        self.cameras = {}
        self.is_running = False
        
        # Camera configuration
        self.camera_configs = {
            'front': {'id': 0, 'resolution': (640, 480), 'fps': 30},
            'back': {'id': 1, 'resolution': (640, 480), 'fps': 30},
            'left': {'id': 2, 'resolution': (640, 480), 'fps': 30},
            'right': {'id': 3, 'resolution': (640, 480), 'fps': 30}
        }
        
    def initialize(self):
        """Initialize all cameras."""
        print("Initializing the multi-camera system...")
        
        for position, config in self.camera_configs.items():
            camera = CameraDevice(
                camera_id=config['id'],
                position=position,
                resolution=config['resolution'],
                fps=config['fps']
            )
            self.cameras[position] = camera
        
        print(f"Initialization done, {len(self.cameras)} cameras configured")
        return True
    
    def start_all(self):
        """Start all cameras."""
        success_count = 0
        for position, camera in self.cameras.items():
            if camera.start():
                success_count += 1
        
        self.is_running = (success_count > 0)
        print(f"Started {success_count}/{len(self.cameras)} cameras")
        return self.is_running
    
    def stop_all(self):
        """Stop all cameras."""
        for camera in self.cameras.values():
            camera.stop()
        self.is_running = False
        print("All cameras stopped")
    
    def get_frames(self):
        """
        Get the latest frame from every camera.
        
        Returns:
            dict: {'front': frame_data, 'back': frame_data, ...}
        """
        frames = {}
        for position, camera in self.cameras.items():
            frame_data = camera.get_frame()
            if frame_data is not None:
                frames[position] = frame_data
        return frames
    
    def get_frame(self, position):
        """Get a frame from the camera at the given position."""
        if position in self.cameras:
            return self.cameras[position].get_frame()
        return None
    
    def get_status(self):
        """Get the status of all cameras."""
        status = {}
        for position, camera in self.cameras.items():
            status[position] = camera.get_info()
        return status
    
    def visualize(self, window_name="Multi-Camera View"):
        """Tile all camera views into one image (for debugging)."""
        frames = self.get_frames()
        
        if len(frames) == 0:
            return None
        
        # Compose a 2x2 grid
        rows = []
        for row_positions in [['front', 'back'], ['left', 'right']]:
            row_frames = []
            for pos in row_positions:
                if pos in frames:
                    frame = frames[pos]['frame'].copy()
                    # Label the view
                    cv2.putText(frame, pos.upper(), (10, 30),
                              cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
                    row_frames.append(frame)
                else:
                    # Blank placeholder frame
                    blank = np.zeros((480, 640, 3), dtype=np.uint8)
                    row_frames.append(blank)
            
            rows.append(np.hstack(row_frames))
        
        combined = np.vstack(rows)
        return combined


# Test code
if __name__ == "__main__":
    manager = MultiCameraManager()
    
    if manager.initialize():
        manager.start_all()
        
        try:
            while True:
                # Show all camera views
                combined_view = manager.visualize()
                if combined_view is not None:
                    cv2.imshow("Multi-Camera System", combined_view)
                
                # Print the status
                status = manager.get_status()
                print("\nCamera status:")
                for pos, info in status.items():
                    print(f"{pos}: FPS={info['fps_actual']:.1f}, "
                          f"Frames={info['frame_count']}")
                
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
                
                time.sleep(0.1)
        
        except KeyboardInterrupt:
            print("\nInterrupted by user")
        
        finally:
            manager.stop_all()
            cv2.destroyAllWindows()
5.1.2 Camera Calibration Tool
python
# camera_calibration.py
import cv2
import numpy as np
import glob
import os

class CameraCalibrator:
    """Camera calibration tool."""
    
    def __init__(self, pattern_size=(9, 6), square_size=25.0):
        """
        Initialize the calibrator.
        
        Args:
            pattern_size: number of inner chessboard corners (cols, rows)
            square_size: chessboard square side length (mm)
        """
        self.pattern_size = pattern_size
        self.square_size = square_size
        
        # World coordinates of the calibration-board corners
        self.objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        self.objp[:, :2] = np.mgrid[0:pattern_size[0], 
                                     0:pattern_size[1]].T.reshape(-1, 2)
        self.objp *= square_size
        
        # Accumulated calibration data
        self.obj_points = []  # points in world coordinates
        self.img_points = []  # points in image coordinates
        self.image_size = None
        
    def capture_calibration_images(self, camera_id, save_dir, num_images=20):
        """
        Capture calibration images.
        
        Args:
            camera_id: camera ID
            save_dir: output directory
            num_images: number of images to capture
        """
        os.makedirs(save_dir, exist_ok=True)
        
        cap = cv2.VideoCapture(camera_id)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
        
        captured = 0
        print(f"Capturing calibration images, target: {num_images}")
        print("Press SPACE to capture, Q to quit")
        
        while captured < num_images:
            ret, frame = cap.read()
            if not ret:
                continue
            
            # Show the live view
            display = frame.copy()
            cv2.putText(display, f"Captured: {captured}/{num_images}", 
                       (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.putText(display, "Press SPACE to capture, Q to quit",
                       (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
            
            # Try to detect the chessboard
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            ret_corners, corners = cv2.findChessboardCorners(
                gray, self.pattern_size, None)
            
            if ret_corners:
                cv2.drawChessboardCorners(display, self.pattern_size, 
                                         corners, ret_corners)
                cv2.putText(display, "Pattern Detected!", (10, 90),
                           cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            
            cv2.imshow("Calibration", display)
            
            key = cv2.waitKey(1) & 0xFF
            if key == ord(' ') and ret_corners:
                # Save the frame
                filename = os.path.join(save_dir, f"calib_{captured:02d}.jpg")
                cv2.imwrite(filename, frame)
                captured += 1
                print(f"Captured: {captured}/{num_images}")
            elif key == ord('q'):
                break
        
        cap.release()
        cv2.destroyAllWindows()
        print(f"Done, {captured} images captured")
        
    def calibrate_from_images(self, image_dir):
        """
        Calibrate from image files.
        
        Args:
            image_dir: directory of calibration images
            
        Returns:
            dict: calibration result
        """
        images = glob.glob(os.path.join(image_dir, "*.jpg"))
        
        if len(images) == 0:
            print(f"Error: no images found in {image_dir}")
            return None
        
        print(f"Found {len(images)} calibration images")
        
        valid_count = 0
        for img_path in images:
            img = cv2.imread(img_path)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            
            if self.image_size is None:
                self.image_size = gray.shape[::-1]
            
            # Find the chessboard corners
            ret, corners = cv2.findChessboardCorners(gray, self.pattern_size, None)
            
            if ret:
                # Sub-pixel refinement
                criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 
                           30, 0.001)
                corners_refined = cv2.cornerSubPix(gray, corners, (11, 11), 
                                                   (-1, -1), criteria)
                
                self.obj_points.append(self.objp)
                self.img_points.append(corners_refined)
                valid_count += 1
                print(f"✓ {os.path.basename(img_path)}")
            else:
                print(f"✗ {os.path.basename(img_path)} - no chessboard found")
        
        print(f"\nValid images: {valid_count}/{len(images)}")
        
        if valid_count < 10:
            print("Error: not enough valid images, at least 10 are required")
            return None
        
        # Run the calibration
        print("\nRunning calibration...")
        ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
            self.obj_points, self.img_points, self.image_size, None, None)
        
        if not ret:
            print("Calibration failed!")
            return None
        
        # Compute the reprojection error
        total_error = 0
        for i in range(len(self.obj_points)):
            img_points_reproj, _ = cv2.projectPoints(
                self.obj_points[i], rvecs[i], tvecs[i], 
                camera_matrix, dist_coeffs)
            error = cv2.norm(self.img_points[i], img_points_reproj, 
                           cv2.NORM_L2) / len(img_points_reproj)
            total_error += error
        
        mean_error = total_error / len(self.obj_points)
        
        print("\nCalibration complete!")
        print(f"Reprojection error: {mean_error:.4f} pixels")
        print("\nCamera intrinsic matrix:")
        print(camera_matrix)
        print("\nDistortion coefficients:")
        print(dist_coeffs.ravel())
        
        return {
            'camera_matrix': camera_matrix,
            'dist_coeffs': dist_coeffs,
            'rvecs': rvecs,
            'tvecs': tvecs,
            'reprojection_error': mean_error
        }
    
    def save_calibration(self, calib_result, output_file):
        """Save the calibration result."""
        np.savez(output_file,
                camera_matrix=calib_result['camera_matrix'],
                dist_coeffs=calib_result['dist_coeffs'],
                reprojection_error=calib_result['reprojection_error'])
        print(f"Calibration saved to: {output_file}")
    
    def test_undistortion(self, camera_id, calib_result):
        """Preview the undistortion result."""
        cap = cv2.VideoCapture(camera_id)
        
        camera_matrix = calib_result['camera_matrix']
        dist_coeffs = calib_result['dist_coeffs']
        
        print("Showing the undistorted view, press q to quit")
        
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            
            # Undistort
            undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
            
            # Side-by-side display
            combined = np.hstack([frame, undistorted])
            cv2.putText(combined, "Original", (10, 30),
                       cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.putText(combined, "Undistorted", (650, 30),
                       cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            
            cv2.imshow("Undistortion Test", combined)
            
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        
        cap.release()
        cv2.destroyAllWindows()


# Usage example
if __name__ == "__main__":
    calibrator = CameraCalibrator(pattern_size=(9, 6), square_size=25.0)
    
    # Camera positions
    positions = ['front', 'back', 'left', 'right']
    camera_ids = [0, 1, 2, 3]
    
    for position, cam_id in zip(positions, camera_ids):
        print(f"\n{'='*50}")
        print(f"Calibrating the {position} camera (Camera {cam_id})")
        print(f"{'='*50}")
        
        # Capture images
        calib_dir = f"calibration_images_{position}"
        calibrator.capture_calibration_images(cam_id, calib_dir, num_images=20)
        
        # Run the calibration
        result = calibrator.calibrate_from_images(calib_dir)
        
        if result:
            # Save the result
            output_file = f"calibration_{position}.npz"
            calibrator.save_calibration(result, output_file)
            
            # Preview undistortion
            calibrator.test_undistortion(cam_id, result)

6. Multi-Camera Calibration and Fusion

6.1 Extrinsic Calibration (Relative Poses Between Cameras)

python
# multi_camera_extrinsics.py
import cv2
import numpy as np

class ExtrinsicCalibrator:
    """Multi-camera extrinsic calibration."""
    
    def __init__(self, reference_camera='front'):
        """
        Initialize.
        
        Args:
            reference_camera: reference camera (origin of the world frame)
        """
        self.reference_camera = reference_camera
        self.extrinsics = {}  # pose of each camera relative to the reference
        
    def calibrate_stereo_pair(self, cam1_matrix, cam1_dist, cam2_matrix, cam2_dist,
                             obj_points, img_points1, img_points2, image_size):
        """
        Calibrate a stereo camera pair.
        
        Returns:
            R: rotation matrix (cam1 -> cam2)
            T: translation vector (cam1 -> cam2)
        """
        # Stereo calibration
        ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
            obj_points, img_points1, img_points2,
            cam1_matrix, cam1_dist,
            cam2_matrix, cam2_dist,
            image_size,
            criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-5),
            flags=cv2.CALIB_FIX_INTRINSIC
        )
        
        return R, T, E, F
    
    def set_camera_pose(self, camera_name, R, T):
        """
        Set a camera pose (relative to the reference camera).
        
        Args:
            camera_name: camera name
            R: 3x3 rotation matrix
            T: 3x1 translation vector
        """
        self.extrinsics[camera_name] = {
            'R': R,
            'T': T,
            'transform': self._build_transform_matrix(R, T)
        }
    
    def _build_transform_matrix(self, R, T):
        """Build the 4x4 homogeneous transform."""
        transform = np.eye(4)
        transform[:3, :3] = R
        transform[:3, 3] = T.flatten()
        return transform
    
    def get_relative_pose(self, from_camera, to_camera):
        """
        Get the relative pose between two cameras.
        
        Returns:
            R, T: rotation matrix and translation vector
        """
        if from_camera == self.reference_camera:
            return self.extrinsics[to_camera]['R'], self.extrinsics[to_camera]['T']
        elif to_camera == self.reference_camera:
            R = self.extrinsics[from_camera]['R'].T
            T = -R @ self.extrinsics[from_camera]['T']
            return R, T
        else:
            # from -> reference -> to
            T_from_ref = self.extrinsics[from_camera]['transform']
            T_ref_to = self.extrinsics[to_camera]['transform']
            T_from_to = np.linalg.inv(T_from_ref) @ T_ref_to
            R = T_from_to[:3, :3]
            T = T_from_to[:3, 3:4]
            return R, T
    
    def project_point_between_cameras(self, point_3d, from_camera, to_camera,
                                     to_camera_matrix):
        """
        Project a 3D point from one camera's frame into another camera's image.
        
        Args:
            point_3d: 3D point in the from_camera frame
            from_camera: source camera
            to_camera: target camera
            to_camera_matrix: target camera intrinsic matrix
            
        Returns:
            image_point: 2D image coordinates
        """
        R, T = self.get_relative_pose(from_camera, to_camera)
        
        # Change of coordinates
        point_in_to_cam = R @ point_3d + T
        
        # Project into the image
        x = point_in_to_cam[0] / point_in_to_cam[2]
        y = point_in_to_cam[1] / point_in_to_cam[2]
        
        fx = to_camera_matrix[0, 0]
        fy = to_camera_matrix[1, 1]
        cx = to_camera_matrix[0, 2]
        cy = to_camera_matrix[1, 2]
        
        u = fx * x + cx
        v = fy * y + cy
        
        return np.array([u, v])
    
    def save_extrinsics(self, filename):
        """Save the extrinsics."""
        np.savez(filename, **self.extrinsics)
        print(f"Extrinsics saved to: {filename}")
    
    def load_extrinsics(self, filename):
        """Load the extrinsics."""
        data = np.load(filename, allow_pickle=True)
        for camera_name in data.files:
            camera_data = data[camera_name].item()
            self.extrinsics[camera_name] = camera_data
        print(f"Extrinsics loaded from {filename}")


# Example of setting the extrinsics by hand (from physical measurements)
def setup_vehicle_cameras():
    """
    Set the vehicle camera extrinsics
    based on the actual vehicle dimensions and mounting positions.
    """
    calibrator = ExtrinsicCalibrator(reference_camera='front')
    
    # The front camera is the reference: vehicle front center, facing forward.
    # All other camera poses are expressed relative to it.
    
    # Rear camera: at the back of the vehicle, facing backwards (180° turn)
    R_back = np.array([
        [-1, 0, 0],
        [0, 1, 0],
        [0, 0, -1]
    ])  # 180° rotation about the Y axis
    T_back = np.array([[0], [0], [-500]])  # 500 mm behind
    calibrator.set_camera_pose('back', R_back, T_back)
    
    # Left camera: left side of the vehicle, facing left (+90° turn)
    R_left = np.array([
        [0, 0, 1],
        [0, 1, 0],
        [-1, 0, 0]
    ])  # 90° rotation about the Y axis
    T_left = np.array([[-200], [0], [-250]])  # 200 mm left, 250 mm back
    calibrator.set_camera_pose('left', R_left, T_left)
    
    # Right camera: right side of the vehicle, facing right (-90° turn)
    R_right = np.array([
        [0, 0, -1],
        [0, 1, 0],
        [1, 0, 0]
    ])  # -90° rotation about the Y axis
    T_right = np.array([[200], [0], [-250]])  # 200 mm right, 250 mm back
    calibrator.set_camera_pose('right', R_right, T_right)
    
    calibrator.save_extrinsics("camera_extrinsics.npz")
    
    return calibrator


if __name__ == "__main__":
    # 设置并保存车辆摄像头外参
    calibrator = setup_vehicle_cameras()
    
    # 打印外参信息
    print("\n摄像头外参(相对于前摄像头):")
    for cam_name, extrinsic in calibrator.extrinsics.items():
        print(f"\n{cam_name.upper()}:")
        print("旋转矩阵 R:")
        print(extrinsic['R'])
        print("平移向量 T (mm):")
        print(extrinsic['T'].flatten())

6.2 Multi-View Fusion Module

python
# multi_view_fusion.py
import numpy as np
import cv2

class MultiViewFusion:
    """Multi-view fusion processing."""
    
    def __init__(self, camera_calibrations, extrinsic_calibrator):
        """
        Initialize.
        
        Args:
            camera_calibrations: dict of per-camera calibration parameters
            extrinsic_calibrator: an ExtrinsicCalibrator instance
        """
        self.cameras = camera_calibrations
        self.extrinsics = extrinsic_calibrator
        
        # Bird's-eye-view parameters
        self.bev_resolution = 10  # mm/pixel
        self.bev_size = (800, 1000)  # width x height (pixels)
        self.bev_range = (4000, 5000)  # 4 m ahead, 1 m behind (mm)
        
        # Precompute the projection matrices
        self.bev_transforms = {}
        self._prepare_bev_transforms()
    
    def _prepare_bev_transforms(self):
        """Precompute each camera's projection into the BEV image."""
        for camera_name, calib in self.cameras.items():
            # Camera intrinsics
            K = calib['camera_matrix']
            
            # Extrinsics (relative to the vehicle frame)
            if camera_name == 'front':
                R = np.eye(3)
                T = np.zeros((3, 1))
            else:
                R, T = self.extrinsics.extrinsics[camera_name]['R'], \
                       self.extrinsics.extrinsics[camera_name]['T']
            
            # Build the transform
            # Assume the ground plane is Z=0 (relative to camera height)
            height = 300  # camera height above ground, mm
            
            # This is a simplified IPM (Inverse Perspective Mapping);
            # a real deployment needs a more accurate model
            self.bev_transforms[camera_name] = {
                'K': K,
                'R': R,
                'T': T,
                'height': height
            }
    
    def create_bev_image(self, camera_frames):
        """
        Create the bird's-eye view (BEV).
        
        Args:
            camera_frames: dict, {'camera_name': frame_data}
            
        Returns:
            bev_image: the BEV image
        """
        # Blank BEV canvas
        bev = np.zeros((self.bev_size[1], self.bev_size[0], 3), dtype=np.uint8)
        
        # Target region of each camera within the BEV
        regions = {
            'front': {'roi': (200, 500, 0, 500)},  # x1, x2, y1, y2
            'back': {'roi': (200, 600, 500, 1000)},
            'left': {'roi': (0, 200, 200, 800)},
            'right': {'roi': (600, 800, 200, 800)}
        }
        
        for camera_name, frame_data in camera_frames.items():
            if camera_name not in regions:
                continue
            
            frame = frame_data['frame']
            roi = regions[camera_name]['roi']
            
            # Perspective-warp into the BEV
            warped = self._warp_to_bev(frame, camera_name)
            
            if warped is not None:
                # Crop and place into the corresponding BEV region
                x1, x2, y1, y2 = roi
                h, w = warped.shape[:2]
                
                # Resize to fit the ROI
                roi_h, roi_w = y2 - y1, x2 - x1
                warped_resized = cv2.resize(warped, (roi_w, roi_h))
                
                # Paste into the BEV (simple overwrite; weighted blending
                # would be an improvement)
                bev[y1:y2, x1:x2] = warped_resized
        
        # Draw the vehicle outline
        self._draw_vehicle_on_bev(bev)
        
        return bev
    
    def _warp_to_bev(self, image, camera_name):
        """Perspective-warp a camera image into the BEV."""
        if camera_name not in self.bev_transforms:
            return None
        
        # Transform parameters
        transform_params = self.bev_transforms[camera_name]
        K = transform_params['K']
        height = transform_params['height']
        
        # Trapezoidal ground region in the source image
        h, w = image.shape[:2]
        
        # The source region depends on the camera position
        if camera_name == 'front':
            src_points = np.float32([
                [w * 0.2, h * 0.6],  # top-left
                [w * 0.8, h * 0.6],  # top-right
                [w * 0.1, h],        # bottom-left
                [w * 0.9, h]         # bottom-right
            ])
        elif camera_name == 'back':
            src_points = np.float32([
                [w * 0.1, h],
                [w * 0.9, h],
                [w * 0.2, h * 0.6],
                [w * 0.8, h * 0.6]
            ])
        elif camera_name == 'left':
            src_points = np.float32([
                [w * 0.2, h * 0.6],
                [w * 0.8, h * 0.6],
                [0, h],
                [w, h]
            ])
        else:  # right
            src_points = np.float32([
                [w * 0.2, h * 0.6],
                [w * 0.8, h * 0.6],
                [0, h],
                [w, h]
            ])
        
        # Target BEV region
        bev_h, bev_w = 300, 200  # per-view BEV patch size
        dst_points = np.float32([
            [0, 0],
            [bev_w, 0],
            [0, bev_h],
            [bev_w, bev_h]
        ])
        
        # Perspective-transform matrix
        M = cv2.getPerspectiveTransform(src_points, dst_points)
        
        # Apply the warp
        warped = cv2.warpPerspective(image, M, (bev_w, bev_h))
        
        return warped
    
    def _draw_vehicle_on_bev(self, bev_image):
        """Draw the vehicle outline on the BEV."""
        h, w = bev_image.shape[:2]
        
        # Vehicle size (pixels)
        vehicle_length = 400  # corresponds to 4 m
        vehicle_width = 200   # corresponds to 2 m
        
        # Vehicle center
        center_x = w // 2
        center_y = h // 2
        
        # Vehicle rectangle
        x1 = center_x - vehicle_width // 2
        x2 = center_x + vehicle_width // 2
        y1 = center_y - vehicle_length // 4
        y2 = center_y + vehicle_length * 3 // 4
        
        cv2.rectangle(bev_image, (x1, y1), (x2, y2), (0, 255, 255), 2)
        
        # Heading arrow
        arrow_end = (center_x, y1 - 30)
        cv2.arrowedLine(bev_image, (center_x, y1), arrow_end, 
                       (0, 255, 255), 2, tipLength=0.3)
    
    def fuse_depth_maps(self, depth_maps):
        """
        Fuse several depth maps.
        
        Args:
            depth_maps: dict, {'camera_name': depth_map}
            
        Returns:
            fused_depth: fused global point set
        """
        # Build a point cloud in the global frame
        global_points = []
        global_colors = []
        
        for camera_name, depth_map in depth_maps.items():
            # Camera intrinsics
            K = self.cameras[camera_name]['camera_matrix']
            
            # Extrinsics
            if camera_name == 'front':
                R = np.eye(3)
                T = np.zeros((3, 1))
            else:
                R = self.extrinsics.extrinsics[camera_name]['R']
                T = self.extrinsics.extrinsics[camera_name]['T']
            
            # Back-project into a 3D point cloud
            points_3d = self._backproject_depth(depth_map, K)
            
            # Transform into the global frame
            points_global = (R.T @ (points_3d.T - T)).T
            
            global_points.append(points_global)
        
        # Merge all point clouds
        if len(global_points) > 0:
            fused_points = np.vstack(global_points)
        else:
            fused_points = np.array([])
        
        return fused_points
    
    def _backproject_depth(self, depth_map, K):
        """Back-project a depth map into a 3D point cloud."""
        h, w = depth_map.shape
        
        # Pixel-coordinate grid
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        u = u.flatten()
        v = v.flatten()
        depth = depth_map.flatten()
        
        # Drop invalid depths
        valid = depth > 0
        u = u[valid]
        v = v[valid]
        depth = depth[valid]
        
        # Back-projection
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        z = depth
        
        points_3d = np.stack([x, y, z], axis=1)
        
        return points_3d


# Test code
if __name__ == "__main__":
    from camera_manager import MultiCameraManager
    from multi_camera_extrinsics import setup_vehicle_cameras, ExtrinsicCalibrator
    
    # Start the camera system
    camera_manager = MultiCameraManager()
    camera_manager.initialize()
    camera_manager.start_all()
    
    # Load the extrinsics
    extrinsic_calibrator = ExtrinsicCalibrator()
    extrinsic_calibrator.load_extrinsics("camera_extrinsics.npz")
    
    # Load the per-camera calibration
    camera_calibrations = {}
    for position in ['front', 'back', 'left', 'right']:
        try:
            data = np.load(f"calibration_{position}.npz")
            camera_calibrations[position] = {
                'camera_matrix': data['camera_matrix'],
                'dist_coeffs': data['dist_coeffs']
            }
        except IOError:
            print(f"Warning: no calibration file for the {position} camera")
    
    # Start the fusion module
    fusion = MultiViewFusion(camera_calibrations, extrinsic_calibrator)
    
    try:
        while True:
            # Grab all camera frames
            frames = camera_manager.get_frames()
            
            if len(frames) > 0:
                # Build the BEV
                bev_image = fusion.create_bev_image(frames)
                
                # Show it
                cv2.imshow("Bird's Eye View", bev_image)
                
                # Show the raw multi-camera view
                combined = camera_manager.visualize()
                if combined is not None:
                    cv2.imshow("Multi-Camera", combined)
            
            if cv2.waitKey(30) & 0xFF == ord('q'):
                break
    
    except KeyboardInterrupt:
        print("\nInterrupted by user")
    
    finally:
        camera_manager.stop_all()
        cv2.destroyAllWindows()

7. Perception Layer Implementation

7.1 Object Detection Module (YOLO)

python
# object_detection.py
import cv2
import numpy as np
import time
from threading import Thread, Lock

class YOLODetector:
    """YOLO object detector."""
    
    def __init__(self, model_type='yolov4-tiny', conf_threshold=0.5, nms_threshold=0.4):
        """
        Initialize the YOLO detector.
        
        Args:
            model_type: model type (a Darknet-format model such as 'yolov4-tiny';
                        PyTorch-format YOLOv5/YOLOv8 weights cannot be loaded
                        by cv2.dnn.readNetFromDarknet)
            conf_threshold: confidence threshold
            nms_threshold: NMS threshold
        """
        self.model_type = model_type
        self.conf_threshold = conf_threshold
        self.nms_threshold = nms_threshold
        
        # Model handles
        self.net = None
        self.output_layers = None
        self.classes = None
        self.colors = None
        
        self.is_initialized = False
        self._initialize_model()
        
    def _initialize_model(self):
        """Initialize the YOLO model."""
        try:
            # Load YOLO through the OpenCV DNN module.
            # Requires Darknet-format files, e.g. yolov4-tiny.weights/.cfg
            weights_path = "models/yolov4-tiny.weights"
            config_path = "models/yolov4-tiny.cfg"
            classes_path = "models/coco.names"
            
            # Load the network
            self.net = cv2.dnn.readNetFromDarknet(config_path, weights_path)
            
            # Select backend and target device
            self.net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
            self.net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
            # With a CUDA-enabled OpenCV build: DNN_BACKEND_CUDA / DNN_TARGET_CUDA
            
            # Get the output layers
            layer_names = self.net.getLayerNames()
            self.output_layers = [layer_names[i - 1] for i in self.net.getUnconnectedOutLayers()]
            
            # Load the class names
            with open(classes_path, 'r') as f:
                self.classes = [line.strip() for line in f.readlines()]
            
            # Generate per-class colors
            np.random.seed(42)
            self.colors = np.random.randint(0, 255, size=(len(self.classes), 3), dtype=np.uint8)
            
            self.is_initialized = True
            print("YOLO model loaded")
            
        except Exception as e:
            print(f"Failed to load the YOLO model: {e}")
            print("Hint: download Darknet-format model files into the models/ directory")
            self.is_initialized = False
    
    def detect(self, image):
        """
        Run object detection on an image.
        
        Args:
            image: input image (numpy.ndarray)
            
        Returns:
            detections: list of dicts, each:
                        {'class': str, 'confidence': float, 
                         'box': [x, y, w, h], 'center': [cx, cy]}
        """
        if not self.is_initialized:
            return []
        
        height, width = image.shape[:2]
        
        # Preprocess into a blob
        blob = cv2.dnn.blobFromImage(image, 1/255.0, (416, 416), 
                                     swapRB=True, crop=False)
        
        # Forward pass
        self.net.setInput(blob)
        outs = self.net.forward(self.output_layers)
        
        # Parse the detections
        class_ids = []
        confidences = []
        boxes = []
        
        for out in outs:
            for detection in out:
                scores = detection[5:]
                class_id = np.argmax(scores)
                confidence = scores[class_id]
                
                if confidence > self.conf_threshold:
                    # Bounding-box coordinates (normalized -> pixels)
                    center_x = int(detection[0] * width)
                    center_y = int(detection[1] * height)
                    w = int(detection[2] * width)
                    h = int(detection[3] * height)
                    
                    # Top-left corner
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)
                    
                    boxes.append([x, y, w, h])
                    confidences.append(float(confidence))
                    class_ids.append(class_id)
        
        # Non-maximum suppression
        indices = cv2.dnn.NMSBoxes(boxes, confidences, 
                                   self.conf_threshold, self.nms_threshold)
        
        detections = []
        if len(indices) > 0:
            for i in indices.flatten():
                x, y, w, h = boxes[i]
                class_id = class_ids[i]
                confidence = confidences[i]
                
                detections.append({
                    'class': self.classes[class_id],
                    'class_id': class_id,
                    'confidence': confidence,
                    'box': [x, y, w, h],
                    'center': [x + w//2, y + h//2]
                })
        
        return detections
    
    def draw_detections(self, image, detections):
        """Draw the detection results on the image."""
        result = image.copy()
        
        for det in detections:
            x, y, w, h = det['box']
            class_name = det['class']
            confidence = det['confidence']
            class_id = det['class_id']
            
            # Per-class color
            color = tuple(int(c) for c in self.colors[class_id])
            
            # Bounding box
            cv2.rectangle(result, (x, y), (x + w, y + h), color, 2)
            
            # Label
            label = f"{class_name}: {confidence:.2f}"
            label_size, baseline = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 1)
            y_label = max(y, label_size[1])
            
            cv2.rectangle(result, (x, y_label - label_size[1] - baseline),
                         (x + label_size[0], y_label + baseline), color, -1)
            cv2.putText(result, label, (x, y_label),
                       cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
        
        return result


class MultiCameraObjectDetector:
    """Multi-camera object-detection manager."""
    
    def __init__(self, camera_manager):
        self.camera_manager = camera_manager
        self.detector = YOLODetector()
        
        # Per-camera detection results
        self.detections = {
            'front': [],
            'back': [],
            'left': [],
            'right': []
        }
        self.detections_lock = Lock()
        
        # Run state
        self.is_running = False
        self.detection_thread = None
        
    def start(self):
        """Start the detection thread."""
        if not self.detector.is_initialized:
            print("Error: YOLO model not initialized")
            return False
        
        self.is_running = True
        self.detection_thread = Thread(target=self._detection_loop, daemon=True)
        self.detection_thread.start()
        print("Object-detection thread started")
        return True
    
    def stop(self):
        """Stop the detection thread."""
        self.is_running = False
        if self.detection_thread:
            self.detection_thread.join(timeout=2)
        print("Object-detection thread stopped")
    
    def _detection_loop(self):
        """Detection loop."""
        while self.is_running:
            start_time = time.time()
            
            # Grab frames from every camera
            frames = self.camera_manager.get_frames()
            
            # Run detection per camera
            new_detections = {}
            for position, frame_data in frames.items():
                frame = frame_data['frame']
                detections = self.detector.detect(frame)
                new_detections[position] = detections
            
            # Publish the results
            with self.detections_lock:
                self.detections.update(new_detections)
            
            # Cap the detection rate (10 times per second)
            elapsed = time.time() - start_time
            sleep_time = max(0, 0.1 - elapsed)
            time.sleep(sleep_time)
    
    def get_detections(self):
        """Get the detections for all cameras."""
        with self.detections_lock:
            return self.detections.copy()
    
    def get_detections_by_camera(self, camera_position):
        """Get the detections for one camera."""
        with self.detections_lock:
            return self.detections.get(camera_position, [])
    
    def filter_detections_by_class(self, class_names):
        """
        Filter the detections by class.
        
        Args:
            class_names: list of str, class names to keep
            
        Returns:
            filtered_detections: dict
        """
        all_detections = self.get_detections()
        filtered = {}
        
        for camera, dets in all_detections.items():
            filtered[camera] = [d for d in dets if d['class'] in class_names]
        
        return filtered
    
    def get_closest_obstacle(self, camera_position='front'):
        """
        Get the nearest obstacle in a camera's field of view.
        
        Returns:
            closest_detection: dict or None
        """
        detections = self.get_detections_by_camera(camera_position)
        
        if len(detections) == 0:
            return None
        
        # Approximate distance by the bottom edge of the bounding box:
        # a larger y means closer to the vehicle
        closest = max(detections, key=lambda d: d['box'][1] + d['box'][3])
        
        return closest


# Test code
if __name__ == "__main__":
    from camera_manager import MultiCameraManager
    
    # Start the camera system
    camera_manager = MultiCameraManager()
    camera_manager.initialize()
    camera_manager.start_all()
    
    # Start the object detector
    detector = MultiCameraObjectDetector(camera_manager)
    detector.start()
    
    try:
        while True:
            # Grab frames and detections
            frames = camera_manager.get_frames()
            all_detections = detector.get_detections()
            
            # Visualize
            for position, frame_data in frames.items():
                if position in all_detections:
                    frame = frame_data['frame']
                    detections = all_detections[position]
                    
                    # Draw the detections
                    result = detector.detector.draw_detections(frame, detections)
                    
                    # Overlay statistics
                    info_text = f"{position.upper()}: {len(detections)} objects"
                    cv2.putText(result, info_text, (10, 30),
                              cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                    
                    cv2.imshow(f"Detection - {position}", result)
            
            # Report the most important targets
            for camera in ['front', 'back', 'left', 'right']:
                closest = detector.get_closest_obstacle(camera)
                if closest:
                    print(f"[{camera}] Closest: {closest['class']} "
                          f"(conf: {closest['confidence']:.2f})")
            
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    
    except KeyboardInterrupt:
        print("\nInterrupted by user")
    
    finally:
        detector.stop()
        camera_manager.stop_all()
        cv2.destroyAllWindows()

Due to space constraints, the lane-detection, depth-estimation, SLAM, and related modules continue in the next code block...

7.2 Lane Detection Module

python
# lane_detection.py
import cv2
import numpy as np
from collections import deque

class LaneDetector:
    """Lane-line detector."""
    
    def __init__(self):
        # Image-processing parameters
        self.roi_vertices = None
        
        # Hough-transform parameters
        self.rho = 2
        self.theta = np.pi / 180
        self.threshold = 50
        self.min_line_length = 50
        self.max_line_gap = 150
        
        # Lane history (for temporal smoothing)
        self.left_lines_history = deque(maxlen=10)
        self.right_lines_history = deque(maxlen=10)
        
        # Lane parameters
        self.lane_width = 350  # pixels
        
    def detect_lanes(self, image):
        """
        Detect the lane lines.
        
        Args:
            image: input image
            
        Returns:
            lane_image: image annotated with the lanes
            lane_info: lane information dict
        """
        # 1. Preprocess
        processed = self._preprocess(image)
        
        # 2. Edge detection
        edges = cv2.Canny(processed, 50, 150)
        
        # 3. ROI mask
        masked_edges = self._apply_roi_mask(edges, image.shape)
        
        # 4. Hough transform for line segments
        lines = cv2.HoughLinesP(masked_edges, self.rho, self.theta, self.threshold,
                                np.array([]), minLineLength=self.min_line_length,
                                maxLineGap=self.max_line_gap)
        
        # 5. Split into left and right lanes
        left_lane, right_lane = self._separate_lanes(lines, image.shape)
        
        # 6. Fit and smooth the lane lines
        left_lane_smooth = self._smooth_lane(left_lane, 'left')
        right_lane_smooth = self._smooth_lane(right_lane, 'right')
        
        # 7. Draw the lanes
        lane_image = self._draw_lanes(image, left_lane_smooth, right_lane_smooth)
        
        # 8. Compute the lane information
        lane_info = self._calculate_lane_info(left_lane_smooth, right_lane_smooth, image.shape)
        
        return lane_image, lane_info
    
    def _preprocess(self, image):
        """Image preprocessing."""
        # Convert to HLS (more robust to lighting changes)
        hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
        
        # Extract the L and S channels
        l_channel = hls[:, :, 1]
        s_channel = hls[:, :, 2]
        
        # Threshold the L channel (white lane lines)
        l_binary = np.zeros_like(l_channel)
        l_binary[(l_channel >= 220) & (l_channel <= 255)] = 255
        
        # Threshold the S channel (yellow lane lines)
        s_binary = np.zeros_like(s_channel)
        s_binary[(s_channel >= 170) & (s_channel <= 255)] = 255
        
        # Combine the two channels
        combined = np.zeros_like(l_channel)
        combined[(l_binary == 255) | (s_binary == 255)] = 255
        
        return combined
    
    def _apply_roi_mask(self, image, shape):
        """Apply the region-of-interest mask."""
        height, width = shape[:2]
        
        # Trapezoidal ROI
        if self.roi_vertices is None:
            self.roi_vertices = np.array([[
                (width * 0.1, height),
                (width * 0.4, height * 0.6),
                (width * 0.6, height * 0.6),
                (width * 0.9, height)
            ]], dtype=np.int32)
        
        # Build the mask
        mask = np.zeros_like(image)
        cv2.fillPoly(mask, self.roi_vertices, 255)
        
        masked_image = cv2.bitwise_and(image, mask)
        return masked_image
    
    def _separate_lanes(self, lines, shape):
        """分离左右车道线"""
        if lines is None:
            return None, None
        
        height, width = shape[:2]
        left_lines = []
        right_lines = []
        
        for line in lines:
            x1, y1, x2, y2 = line[0]
            
            # 计算斜率
            if x2 - x1 == 0:
                continue
            slope = (y2 - y1) / (x2 - x1)
            
            # 根据斜率和位置分离左右车道线
            if slope < -0.5:  # 左车道线(负斜率)
                left_lines.append((x1, y1, x2, y2, slope))
            elif slope > 0.5:  # 右车道线(正斜率)
                right_lines.append((x1, y1, x2, y2, slope))
        
        return left_lines, right_lines
    
    def _fit_lane_line(self, lines):
        """拟合车道线"""
        if not lines:
            return None
        
        # 提取所有点
        points_x = []
        points_y = []
        
        for x1, y1, x2, y2, _ in lines:
            points_x.extend([x1, x2])
            points_y.extend([y1, y2])
        
        if len(points_x) < 2:
            return None
        
        # 线性拟合
        poly = np.polyfit(points_y, points_x, 1)
        
        return poly
    
    def _smooth_lane(self, lines, side):
        """平滑车道线"""
        poly = self._fit_lane_line(lines)
        
        if poly is None:
            return None
        
        # 添加到历史
        if side == 'left':
            self.left_lines_history.append(poly)
            history = list(self.left_lines_history)
        else:
            self.right_lines_history.append(poly)
            history = list(self.right_lines_history)
        
        # 计算平均
        if len(history) > 0:
            avg_poly = np.mean(history, axis=0)
            return avg_poly
        
        return poly
    
    def _draw_lanes(self, image, left_poly, right_poly):
        """绘制车道线"""
        result = image.copy()
        height, width = image.shape[:2]
        
        # 创建透明覆盖层
        overlay = np.zeros_like(image)
        
        # 定义车道线的y坐标范围
        y_start = height
        y_end = int(height * 0.6)
        
        # 绘制左车道线
        if left_poly is not None:
            x_start_left = int(np.polyval(left_poly, y_start))
            x_end_left = int(np.polyval(left_poly, y_end))
            cv2.line(result, (x_start_left, y_start), (x_end_left, y_end), 
                    (0, 255, 0), 10)
        
        # 绘制右车道线
        if right_poly is not None:
            x_start_right = int(np.polyval(right_poly, y_start))
            x_end_right = int(np.polyval(right_poly, y_end))
            cv2.line(result, (x_start_right, y_start), (x_end_right, y_end), 
                    (0, 255, 0), 10)
        
        # 绘制车道区域
        if left_poly is not None and right_poly is not None:
            # 定义车道多边形
            lane_poly = np.array([[
                [x_start_left, y_start],
                [x_end_left, y_end],
                [x_end_right, y_end],
                [x_start_right, y_start]
            ]], dtype=np.int32)
            
            cv2.fillPoly(overlay, lane_poly, (0, 255, 0))
            cv2.addWeighted(result, 1, overlay, 0.3, 0, result)
        
        return result
    
    def _calculate_lane_info(self, left_poly, right_poly, shape):
        """计算车道信息"""
        height, width = shape[:2]
        
        info = {
            'left_detected': left_poly is not None,
            'right_detected': right_poly is not None,
            'lane_center_offset': 0,
            'lane_curvature': 0,
            'lane_width': 0
        }
        
        if left_poly is None and right_poly is None:
            return info
        
        # 计算车道中心偏移
        y_eval = height  # 在图像底部评估
        vehicle_center = width / 2
        
        if left_poly is not None and right_poly is not None:
            x_left = np.polyval(left_poly, y_eval)
            x_right = np.polyval(right_poly, y_eval)
            lane_center = (x_left + x_right) / 2
            
            # 偏移量(像素)
            offset_pixels = vehicle_center - lane_center
            
            # 转换为米(假设车道宽度为3.7米)
            meters_per_pixel = 3.7 / (x_right - x_left)
            offset_meters = offset_pixels * meters_per_pixel
            
            info['lane_center_offset'] = offset_meters
            info['lane_width'] = x_right - x_left
        
        # 计算曲率(简化计算)
        # 注意: 一阶(直线)拟合的真实曲率恒为0,下式仅以斜率构造
        # 一个启发式占位指标;真实曲率需要二阶拟合(见本节末尾的示意)
        if left_poly is not None:
            # 一阶系数即斜率
            slope = left_poly[0]
            info['lane_curvature'] = 1 / (1 + slope**2)**1.5 if slope != 0 else 0
        
        return info


# 测试代码
if __name__ == "__main__":
    from camera_manager import MultiCameraManager
    
    # 初始化摄像头
    camera_manager = MultiCameraManager()
    camera_manager.initialize()
    camera_manager.start_all()
    
    # 初始化车道检测器
    lane_detector = LaneDetector()
    
    try:
        while True:
            # 获取前摄像头画面
            frame_data = camera_manager.get_frame('front')
            
            if frame_data is not None:
                frame = frame_data['frame']
                
                # 检测车道线
                lane_image, lane_info = lane_detector.detect_lanes(frame)
                
                # 显示车道信息
                info_text = [
                    f"Left: {'OK' if lane_info['left_detected'] else 'NO'}",
                    f"Right: {'OK' if lane_info['right_detected'] else 'NO'}",
                    f"Offset: {lane_info['lane_center_offset']:.2f}m",
                    f"Width: {lane_info['lane_width']:.0f}px"
                ]
                
                y_offset = 30
                for text in info_text:
                    cv2.putText(lane_image, text, (10, y_offset),
                              cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 0), 2)
                    y_offset += 30
                
                cv2.imshow("Lane Detection", lane_image)
            
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    
    except KeyboardInterrupt:
        print("\n用户中断")
    
    finally:
        camera_manager.stop_all()
        cv2.destroyAllWindows()
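
上面 _calculate_lane_info 中的曲率只是一阶拟合下的占位指标。若需要真实曲率,可改用二阶多项式拟合 x = ay² + by + c,再按公式 κ = |2a| / (1 + (2ay + b)²)^(3/2) 在图像底部求值。下面是一个最小示意(compute_lane_curvature 为本文自拟的示例函数,输入假设为车道点的像素坐标,与 _fit_lane_line 一致以y为自变量):

python 复制代码
import numpy as np

def compute_lane_curvature(points_y, points_x, y_eval):
    """用二阶拟合计算车道线曲率半径(像素单位)
    
    Args:
        points_y, points_x: 车道点坐标数组(以y为自变量)
        y_eval: 求值位置(通常取图像底部)
    """
    # 二阶拟合: x = a*y^2 + b*y + c
    a, b, _ = np.polyfit(points_y, points_x, 2)
    
    # 曲率 κ = |2a| / (1 + (2a*y + b)^2)^(3/2),曲率半径 R = 1/κ
    kappa = abs(2 * a) / (1 + (2 * a * y_eval + b) ** 2) ** 1.5
    return 1.0 / kappa if kappa > 1e-9 else float('inf')

# 用法示意
ys = np.array([700.0, 600.0, 500.0, 400.0])
xs = np.array([300.0, 320.0, 345.0, 375.0])
print(f"曲率半径约 {compute_lane_curvature(ys, xs, y_eval=700):.0f} px")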

7.3 深度估计模块

python 复制代码
# depth_estimation.py
import cv2
import numpy as np
import torch
import torch.nn as nn

class MonoDepthEstimator:
    """单目深度估计器"""
    
    def __init__(self, model_path=None):
        """
        初始化深度估计模型
        
        Args:
            model_path: 预训练模型路径
        """
        self.model = None
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        
        # 图像预处理参数
        self.input_size = (384, 640)  # 高x宽
        self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
        self.std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
        
        if model_path:
            self._load_model(model_path)
    
    def _load_model(self, model_path):
        """加载深度估计模型"""
        try:
            # 这里使用MiDaS作为示例
            self.model = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small')
            self.model.to(self.device)
            self.model.eval()
            print("深度估计模型加载成功")
        except Exception as e:
            print(f"深度估计模型加载失败: {e}")
            self.model = None
    
    def estimate_depth(self, image):
        """
        估计深度图
        
        Args:
            image: 输入RGB图像 (numpy.ndarray)
            
        Returns:
            depth_map: 深度图 (numpy.ndarray), 值越大表示越近
        """
        if self.model is None:
            # 返回简单的基于位置的深度估计(作为fallback)
            return self._simple_depth_estimation(image)
        
        # 预处理
        input_tensor = self._preprocess(image)
        
        # 前向传播
        with torch.no_grad():
            depth_tensor = self.model(input_tensor)
        
        # 后处理
        depth_map = self._postprocess(depth_tensor, image.shape[:2])
        
        return depth_map
    
    def _preprocess(self, image):
        """图像预处理"""
        # 调整大小
        img_resized = cv2.resize(image, (self.input_size[1], self.input_size[0]))
        
        # BGR to RGB
        img_rgb = cv2.cvtColor(img_resized, cv2.COLOR_BGR2RGB)
        
        # 归一化
        img_tensor = torch.from_numpy(img_rgb).permute(2, 0, 1).float() / 255.0
        img_tensor = img_tensor.unsqueeze(0).to(self.device)
        
        # 标准化
        img_tensor = (img_tensor - self.mean.to(self.device)) / self.std.to(self.device)
        
        return img_tensor
    
    def _postprocess(self, depth_tensor, original_size):
        """后处理深度图"""
        # 转换为numpy
        depth = depth_tensor.squeeze().cpu().numpy()
        
        # 调整回原始尺寸
        depth_resized = cv2.resize(depth, (original_size[1], original_size[0]))
        
        # 归一化到0-255
        depth_normalized = cv2.normalize(depth_resized, None, 0, 255, cv2.NORM_MINMAX)
        depth_normalized = depth_normalized.astype(np.uint8)
        
        return depth_normalized
    
    def _simple_depth_estimation(self, image):
        """简单的基于位置的深度估计(作为fallback)"""
        h, w = image.shape[:2]
        
        # 创建线性深度图(从上到下,从远到近)
        depth_map = np.zeros((h, w), dtype=np.uint8)
        for i in range(h):
            depth_map[i, :] = int(255 * i / h)
        
        return depth_map
    
    def visualize_depth(self, image, depth_map):
        """可视化深度图"""
        # 应用颜色映射
        depth_colored = cv2.applyColorMap(depth_map, cv2.COLORMAP_MAGMA)
        
        # 与原图混合
        blended = cv2.addWeighted(image, 0.6, depth_colored, 0.4, 0)
        
        return blended
    
    def get_distance_at_point(self, depth_map, point, calibration):
        """
        根据深度图和相机标定获取某点的实际距离
        
        Args:
            depth_map: 深度图
            point: 图像坐标 (u, v)
            calibration: 相机标定参数
            
        Returns:
            distance: 实际距离(米)
        """
        u, v = point
        h, w = depth_map.shape
        
        if not (0 <= u < w and 0 <= v < h):
            return None
        
        # 获取深度值(归一化值 0-255)
        depth_value = depth_map[v, u]
        
        # 转换为实际距离(需要根据实际情况校准)
        # 这里使用简单的线性映射
        min_distance = 0.5  # 最近距离 0.5m
        max_distance = 10.0  # 最远距离 10m
        
        distance = min_distance + (max_distance - min_distance) * (255 - depth_value) / 255.0
        
        return distance


class StereoDepthEstimator:
    """双目深度估计器"""
    
    def __init__(self, baseline, focal_length):
        """
        初始化双目深度估计
        
        Args:
            baseline: 基线距离(mm)
            focal_length: 焦距(pixels)
        """
        self.baseline = baseline
        self.focal_length = focal_length
        
        # 立体匹配参数
        self.stereo = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,  # 必须是16的倍数
            blockSize=5,
            P1=8 * 3 * 5 ** 2,
            P2=32 * 3 * 5 ** 2,
            disp12MaxDiff=1,
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=32,
            preFilterCap=63,
            mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
        )
        
        # WLS滤波器(用于视差图后处理)
        self.wls_filter = cv2.ximgproc.createDisparityWLSFilter(self.stereo)
        self.wls_filter.setLambda(8000)
        self.wls_filter.setSigmaColor(1.5)
    
    def estimate_depth(self, left_image, right_image):
        """
        从立体图像对估计深度
        
        Args:
            left_image: 左图像
            right_image: 右图像
            
        Returns:
            depth_map: 深度图(单位: mm)
            disparity_map: 视差图
        """
        # 转换为灰度图
        left_gray = cv2.cvtColor(left_image, cv2.COLOR_BGR2GRAY)
        right_gray = cv2.cvtColor(right_image, cv2.COLOR_BGR2GRAY)
        
        # 计算视差图
        disparity = self.stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        
        # 应用WLS滤波
        right_matcher = cv2.ximgproc.createRightMatcher(self.stereo)
        disparity_right = right_matcher.compute(right_gray, left_gray).astype(np.float32) / 16.0
        disparity_filtered = self.wls_filter.filter(disparity, left_gray, None, disparity_right)
        
        # 计算深度
        # Z = (f * B) / d
        # 避免除零
        disparity_filtered[disparity_filtered <= 0] = 0.1
        depth_map = (self.focal_length * self.baseline) / disparity_filtered
        
        # 限制深度范围
        depth_map = np.clip(depth_map, 0, 10000)  # 最大10米
        
        return depth_map, disparity_filtered
    
    def visualize_disparity(self, disparity):
        """可视化视差图"""
        # 归一化视差图
        disparity_normalized = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
        disparity_normalized = disparity_normalized.astype(np.uint8)
        
        # 应用颜色映射
        disparity_colored = cv2.applyColorMap(disparity_normalized, cv2.COLORMAP_JET)
        
        return disparity_colored


# 测试代码
if __name__ == "__main__":
    from camera_manager import MultiCameraManager
    
    # 初始化摄像头
    camera_manager = MultiCameraManager()
    camera_manager.initialize()
    camera_manager.start_all()
    
    # 初始化深度估计器
    mono_depth = MonoDepthEstimator()
    
    try:
        while True:
            # 获取前摄像头画面
            frame_data = camera_manager.get_frame('front')
            
            if frame_data is not None:
                frame = frame_data['frame']
                
                # 估计深度
                depth_map = mono_depth.estimate_depth(frame)
                
                # 可视化
                depth_vis = mono_depth.visualize_depth(frame, depth_map)
                
                # 在中心点显示距离
                h, w = frame.shape[:2]
                center = (w // 2, h // 2)
                distance = mono_depth.get_distance_at_point(depth_map, center, None)
                
                cv2.circle(depth_vis, center, 5, (0, 0, 255), -1)
                if distance is not None:
                    text = f"Distance: {distance:.2f}m"
                    cv2.putText(depth_vis, text, (center[0] - 100, center[1] - 20),
                              cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                
                cv2.imshow("Depth Estimation", depth_vis)
            
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    
    except KeyboardInterrupt:
        print("\n用户中断")
    
    finally:
        camera_manager.stop_all()
        cv2.destroyAllWindows()
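
StereoDepthEstimator 中的深度公式 Z = f·B / d 可以用一组数值做快速验算(以下参数为本文自拟示例,并非实测标定值):

python 复制代码
# 双目测距验算: 焦距f=700像素, 基线B=60mm, 视差d=35像素
focal_length = 700.0  # 像素
baseline = 60.0       # 毫米
disparity = 35.0      # 像素

depth_mm = focal_length * baseline / disparity
print(f"深度 ≈ {depth_mm:.0f} mm")  # 700*60/35 = 1200 mm,即1.2米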

8. 决策与规划层实现

8.1 全局路径规划(A*算法)

python 复制代码
# path_planning.py
import numpy as np
import heapq
from collections import defaultdict

class Node:
    """A*算法节点"""
    def __init__(self, position, parent=None):
        self.position = position  # (x, y)
        self.parent = parent
        self.g = 0  # 起点到当前节点的代价
        self.h = 0  # 当前节点到目标的启发式估计
        self.f = 0  # f = g + h
    
    def __lt__(self, other):
        return self.f < other.f
    
    def __eq__(self, other):
        return self.position == other.position


class AStarPlanner:
    """A*全局路径规划器"""
    
    def __init__(self, grid_map, grid_resolution=0.1):
        """
        初始化A*规划器
        
        Args:
            grid_map: 栅格地图 (numpy.ndarray), 0=可通行, 1=障碍物
            grid_resolution: 栅格分辨率(米/格)
        """
        self.grid_map = grid_map
        self.grid_resolution = grid_resolution
        self.height, self.width = grid_map.shape
        
        # 8邻域移动
        self.motions = [
            (0, 1, 1.0),    # 右
            (0, -1, 1.0),   # 左
            (1, 0, 1.0),    # 下
            (-1, 0, 1.0),   # 上
            (1, 1, 1.414),  # 右下
            (1, -1, 1.414), # 左下
            (-1, 1, 1.414), # 右上
            (-1, -1, 1.414) # 左上
        ]
    
    def plan(self, start, goal):
        """
        A*路径规划
        
        Args:
            start: 起点 (x, y) in grid coordinates
            goal: 终点 (x, y) in grid coordinates
            
        Returns:
            path: 路径点列表 [(x, y), ...] 或 None
        """
        start_node = Node(start)
        goal_node = Node(goal)
        
        # 开放列表和关闭列表
        open_list = []
        heapq.heappush(open_list, start_node)
        closed_set = set()
        
        # 用于快速查找节点
        open_dict = {start: start_node}
        
        while open_list:
            # 获取f值最小的节点
            current = heapq.heappop(open_list)
            
            # 懒删除: 同一位置可能被重复入堆,跳过已扩展位置的过期条目
            if current.position in closed_set:
                continue
            open_dict.pop(current.position, None)
            
            # 到达目标
            if current == goal_node:
                return self._reconstruct_path(current)
            
            closed_set.add(current.position)
            
            # 遍历邻居
            for dx, dy, cost in self.motions:
                neighbor_pos = (current.position[0] + dx, current.position[1] + dy)
                
                # 检查边界
                if not self._is_valid(neighbor_pos):
                    continue
                
                # 已在关闭列表
                if neighbor_pos in closed_set:
                    continue
                
                # 计算代价
                tentative_g = current.g + cost
                
                # 创建或更新邻居节点
                if neighbor_pos in open_dict:
                    neighbor = open_dict[neighbor_pos]
                    if tentative_g < neighbor.g:
                        neighbor.g = tentative_g
                        neighbor.f = neighbor.g + neighbor.h
                        neighbor.parent = current
                else:
                    neighbor = Node(neighbor_pos, current)
                    neighbor.g = tentative_g
                    neighbor.h = self._heuristic(neighbor_pos, goal)
                    neighbor.f = neighbor.g + neighbor.h
                    heapq.heappush(open_list, neighbor)
                    open_dict[neighbor_pos] = neighbor
        
        return None  # 未找到路径
    
    def _is_valid(self, position):
        """检查位置是否有效"""
        x, y = position
        
        # 边界检查
        if x < 0 or x >= self.width or y < 0 or y >= self.height:
            return False
        
        # 障碍物检查
        if self.grid_map[y, x] > 0:
            return False
        
        return True
    
    def _heuristic(self, pos1, pos2):
        """启发式函数(欧氏距离)"""
        dx = pos1[0] - pos2[0]
        dy = pos1[1] - pos2[1]
        return np.sqrt(dx**2 + dy**2)
    
    def _reconstruct_path(self, node):
        """重建路径"""
        path = []
        current = node
        
        while current is not None:
            path.append(current.position)
            current = current.parent
        
        return path[::-1]  # 反转路径
    
    def smooth_path(self, path, weight_data=0.5, weight_smooth=0.3, tolerance=0.001):
        """
        路径平滑(梯度下降法)
        
        Args:
            path: 原始路径
            weight_data: 数据权重(保持原始路径)
            weight_smooth: 平滑权重
            tolerance: 收敛阈值
        """
        if len(path) <= 2:
            return path
        
        # 复制路径
        new_path = [list(p) for p in path]
        
        # 迭代优化
        change = tolerance
        while change >= tolerance:
            change = 0.0
            
            # 不修改起点和终点
            for i in range(1, len(path) - 1):
                for j in range(2):  # x和y
                    old_value = new_path[i][j]
                    
                    # 梯度下降更新
                    new_path[i][j] += weight_data * (path[i][j] - new_path[i][j])
                    new_path[i][j] += weight_smooth * (new_path[i-1][j] + new_path[i+1][j] - 2.0 * new_path[i][j])
                    
                    # 以本轮迭代的更新量判断收敛;
                    # 若与原始路径比较,change永远不会降到阈值以下,导致死循环
                    change += abs(old_value - new_path[i][j])
        
        return [tuple(p) for p in new_path]
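
需要注意,AStarPlanner 构造函数保存的 grid_resolution 在上面的实现中尚未参与计算: plan 的输入输出都是栅格坐标,下发给控制器之前需要换算成米制坐标。下面是换算的最小示意(函数名与 origin 参数为本文自拟假设):

python 复制代码
def grid_to_world(grid_pos, resolution=0.1, origin=(0.0, 0.0)):
    """栅格坐标 -> 世界坐标(米),origin为栅格(0,0)对应的世界坐标"""
    return (origin[0] + grid_pos[0] * resolution,
            origin[1] + grid_pos[1] * resolution)

def world_to_grid(world_pos, resolution=0.1, origin=(0.0, 0.0)):
    """世界坐标(米) -> 栅格坐标(四舍五入取整)"""
    return (int(round((world_pos[0] - origin[0]) / resolution)),
            int(round((world_pos[1] - origin[1]) / resolution)))

# 用法示意: 把A*输出的栅格路径换算成米制路径
path_grid = [(10, 10), (11, 11), (12, 11)]
path_world = [grid_to_world(p) for p in path_grid]
print(path_world)  # [(1.0, 1.0), (1.1, 1.1), (1.2, 1.1)]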


8.2 局部避障规划(DWA算法)

python 复制代码
# path_planning.py(续)

class DWAPlanner:
    """动态窗口法局部路径规划"""
    
    def __init__(self):
        # 机器人参数
        self.max_speed = 1.0  # m/s
        self.min_speed = -0.5  # m/s (后退)
        self.max_yaw_rate = 40.0 * np.pi / 180.0  # rad/s
        self.max_accel = 0.5  # m/s^2
        self.max_dyaw_rate = 40.0 * np.pi / 180.0  # rad/s^2
        self.v_resolution = 0.01  # m/s
        self.yaw_rate_resolution = 0.1 * np.pi / 180.0  # rad/s
        
        # 预测参数
        self.dt = 0.1  # 时间步长(s)
        self.predict_time = 3.0  # 预测时间(s)
        
        # 评估函数权重
        self.heading_weight = 0.15
        self.clearance_weight = 1.0
        self.velocity_weight = 1.0
        
        # 机器人几何
        self.robot_radius = 0.3  # m
    
    def plan(self, state, goal, obstacles):
        """
        DWA局部规划
        
        Args:
            state: 当前状态 [x, y, yaw, v, omega]
            goal: 目标点 [x, y]
            obstacles: 障碍物列表 [[x, y], ...]
            
        Returns:
            best_u: 最优控制输入 [v, omega] 或 None
            best_trajectory: 最优轨迹
        """
        # 计算动态窗口
        dw = self._calculate_dynamic_window(state)
        
        # 在动态窗口中采样轨迹并评估
        best_trajectory = None
        best_u = None
        max_score = -float('inf')
        
        for v in np.arange(dw[0], dw[1], self.v_resolution):
            for omega in np.arange(dw[2], dw[3], self.yaw_rate_resolution):
                # 生成轨迹
                trajectory = self._predict_trajectory(state, v, omega)
                
                # 检查碰撞
                if self._check_collision(trajectory, obstacles):
                    continue
                
                # 评估轨迹
                score = self._evaluate_trajectory(trajectory, goal, state)
                
                if score > max_score:
                    max_score = score
                    best_trajectory = trajectory
                    best_u = [v, omega]
        
        return best_u, best_trajectory
    
    def _calculate_dynamic_window(self, state):
        """
        计算动态窗口
        
        Returns:
            [v_min, v_max, omega_min, omega_max]
        """
        x, y, yaw, v, omega = state
        
        # 速度限制窗口
        Vs = [self.min_speed, self.max_speed,
              -self.max_yaw_rate, self.max_yaw_rate]
        
        # 加速度限制窗口
        Vd = [v - self.max_accel * self.dt,
              v + self.max_accel * self.dt,
              omega - self.max_dyaw_rate * self.dt,
              omega + self.max_dyaw_rate * self.dt]
        
        # 取交集
        dw = [max(Vs[0], Vd[0]), min(Vs[1], Vd[1]),
              max(Vs[2], Vd[2]), min(Vs[3], Vd[3])]
        
        return dw
    
    def _predict_trajectory(self, state, v, omega):
        """预测轨迹"""
        x, y, yaw, _, _ = state
        trajectory = [[x, y, yaw, v, omega]]
        
        time = 0
        while time <= self.predict_time:
            x += v * np.cos(yaw) * self.dt
            y += v * np.sin(yaw) * self.dt
            yaw += omega * self.dt
            trajectory.append([x, y, yaw, v, omega])
            time += self.dt
        
        return np.array(trajectory)
    
    def _check_collision(self, trajectory, obstacles):
        """检查轨迹是否与障碍物碰撞"""
        for point in trajectory:
            for obstacle in obstacles:
                dist = np.sqrt((point[0] - obstacle[0])**2 + (point[1] - obstacle[1])**2)
                if dist <= self.robot_radius:
                    return True
        return False
    
    def _evaluate_trajectory(self, trajectory, goal, state):
        """
        评估轨迹
        
        评估函数: G(v,ω) = σ(α·heading + β·dist + γ·velocity)
        """
        # 目标方向评价(角度差需归一化到[-π, π],否则yaw累积越界后评分失真)
        dx = goal[0] - trajectory[-1, 0]
        dy = goal[1] - trajectory[-1, 1]
        goal_angle = np.arctan2(dy, dx)
        angle_diff = np.arctan2(np.sin(trajectory[-1, 2] - goal_angle),
                                np.cos(trajectory[-1, 2] - goal_angle))
        heading_score = np.pi - abs(angle_diff)
        
        # 速度评价(速度越快越好)
        velocity_score = trajectory[-1, 3]
        
        # 障碍物距离评价(这里简化为常数)
        # 实际应用中应计算轨迹到最近障碍物的距离,见本节末尾的示意实现
        clearance_score = 1.0
        
        # 总评价
        score = (self.heading_weight * heading_score + 
                self.velocity_weight * velocity_score + 
                self.clearance_weight * clearance_score)
        
        return score


# 测试代码
if __name__ == "__main__":
    # 创建测试地图
    map_size = (100, 100)
    grid_map = np.zeros(map_size)
    
    # 添加障碍物
    grid_map[30:70, 40:45] = 1  # 垂直墙
    grid_map[40:45, 30:70] = 1  # 水平墙
    
    # A*规划
    astar = AStarPlanner(grid_map, grid_resolution=0.1)
    start = (10, 10)
    goal = (80, 80)
    
    path = astar.plan(start, goal)
    
    if path:
        print(f"找到路径,长度: {len(path)}")
        
        # 平滑路径
        smooth_path = astar.smooth_path(path)
        print(f"平滑后路径长度: {len(smooth_path)}")
    else:
        print("未找到路径")
    
    # DWA规划测试
    dwa = DWAPlanner()
    state = [0, 0, 0, 0, 0]  # [x, y, yaw, v, omega]
    goal = [5, 5]
    obstacles = [[2, 2], [3, 3]]
    
    best_u, trajectory = dwa.plan(state, goal, obstacles)
    
    if best_u:
        print(f"DWA最优控制: v={best_u[0]:.2f} m/s, omega={best_u[1]:.2f} rad/s")

9. 控制层实现

9.1 Pure Pursuit横向控制

python 复制代码
# vehicle_control.py
import numpy as np

class PurePursuitController:
    """Pure Pursuit横向控制器"""
    
    def __init__(self, wheelbase, lookahead_gain=0.5, lookahead_min=1.0, lookahead_max=5.0):
        """
        初始化Pure Pursuit控制器
        
        Args:
            wheelbase: 轴距(m)
            lookahead_gain: 前视距离增益
            lookahead_min: 最小前视距离(m)
            lookahead_max: 最大前视距离(m)
        """
        self.L = wheelbase
        self.k_ld = lookahead_gain
        self.ld_min = lookahead_min
        self.ld_max = lookahead_max
    
    def compute_steering(self, vehicle_state, path):
        """
        计算转向角
        
        Args:
            vehicle_state: [x, y, yaw, v]
            path: 参考路径 [[x, y], ...]
            
        Returns:
            steering_angle: 转向角(rad)
            target_point: 目标点
        """
        x, y, yaw, v = vehicle_state
        
        # 计算前视距离
        ld = np.clip(self.k_ld * v, self.ld_min, self.ld_max)
        
        # 找到前视点
        target_point = self._find_lookahead_point(vehicle_state, path, ld)
        
        if target_point is None:
            return 0.0, None
        
        # 计算目标点在车辆坐标系中的位置
        dx = target_point[0] - x
        dy = target_point[1] - y
        
        # 转换到车辆坐标系
        alpha = np.arctan2(dy, dx) - yaw
        
        # Pure Pursuit公式
        steering_angle = np.arctan(2.0 * self.L * np.sin(alpha) / ld)
        
        # 限制转向角
        max_steering = np.pi / 6  # 30度
        steering_angle = np.clip(steering_angle, -max_steering, max_steering)
        
        return steering_angle, target_point
    
    def _find_lookahead_point(self, vehicle_state, path, lookahead_distance):
        """找到前视点"""
        x, y, yaw, v = vehicle_state
        
        # 找到距离车辆最近的路径点
        min_dist = float('inf')
        closest_idx = 0
        
        for i, point in enumerate(path):
            dist = np.sqrt((point[0] - x)**2 + (point[1] - y)**2)
            if dist < min_dist:
                min_dist = dist
                closest_idx = i
        
        # 从最近点开始查找前视点
        for i in range(closest_idx, len(path)):
            dist = np.sqrt((path[i][0] - x)**2 + (path[i][1] - y)**2)
            if dist >= lookahead_distance:
                return path[i]
        
        # 如果没找到,返回路径终点
        return path[-1] if len(path) > 0 else None


class PIDController:
    """PID控制器"""
    
    def __init__(self, kp, ki, kd, output_limits=None):
        """
        初始化PID控制器
        
        Args:
            kp: 比例增益
            ki: 积分增益
            kd: 微分增益
            output_limits: 输出限制 (min, max)
        """
        self.kp = kp
        self.ki = ki
        self.kd = kd
        self.output_limits = output_limits
        
        # 状态变量
        self.integral = 0
        self.prev_error = 0
        self.prev_time = None
    
    def compute(self, setpoint, measurement, dt=None):
        """
        计算控制输出
        
        Args:
            setpoint: 目标值
            measurement: 测量值
            dt: 时间步长(可选)
            
        Returns:
            output: 控制输出
        """
        # 计算误差
        error = setpoint - measurement
        
        # 比例项
        p_term = self.kp * error
        
        # 积分项
        self.integral += error * (dt if dt else 1.0)
        i_term = self.ki * self.integral
        
        # 微分项
        if dt and dt > 0:
            derivative = (error - self.prev_error) / dt
        else:
            derivative = error - self.prev_error
        d_term = self.kd * derivative
        
        # 总输出
        output = p_term + i_term + d_term
        
        # 输出限制
        if self.output_limits:
            output = np.clip(output, self.output_limits[0], self.output_limits[1])
            
            # 抗积分饱和
            if output == self.output_limits[0] or output == self.output_limits[1]:
                self.integral -= error * (dt if dt else 1.0)
        
        # 更新状态
        self.prev_error = error
        
        return output
    
    def reset(self):
        """重置PID状态"""
        self.integral = 0
        self.prev_error = 0


class VehicleController:
    """车辆运动控制器"""
    
    def __init__(self, wheelbase=0.3):
        """
        初始化车辆控制器
        
        Args:
            wheelbase: 轴距(m)
        """
        # 横向控制(转向)
        self.lateral_controller = PurePursuitController(wheelbase)
        
        # 纵向控制(速度)
        self.longitudinal_controller = PIDController(
            kp=1.0, ki=0.1, kd=0.05,
            output_limits=(-1.0, 1.0)  # 油门/刹车输出
        )
        
        # 目标速度
        self.target_speed = 0.5  # m/s
    
    def compute_control(self, vehicle_state, path, dt=0.1):
        """
        计算控制输出
        
        Args:
            vehicle_state: [x, y, yaw, v]
            path: 参考路径
            dt: 时间步长
            
        Returns:
            steering: 转向角(rad)
            throttle: 油门(-1到1, 负值为刹车)
            target_point: 目标跟踪点
        """
        x, y, yaw, v = vehicle_state
        
        # 横向控制
        steering, target_point = self.lateral_controller.compute_steering(
            vehicle_state, path)
        
        # 纵向控制
        throttle = self.longitudinal_controller.compute(
            self.target_speed, v, dt)
        
        return steering, throttle, target_point
    
    def set_target_speed(self, speed):
        """设置目标速度"""
        self.target_speed = speed


# 测试代码
if __name__ == "__main__":
    # 创建测试路径(圆形)
    num_points = 100
    radius = 5.0
    angles = np.linspace(0, 2 * np.pi, num_points)
    path = [[radius * np.cos(a), radius * np.sin(a)] for a in angles]
    
    # 初始化控制器
    controller = VehicleController(wheelbase=0.3)
    controller.set_target_speed(1.0)
    
    # 初始状态
    state = [0, -radius, np.pi/2, 0]  # [x, y, yaw, v]
    
    # 仿真循环
    dt = 0.1
    total_time = 30.0
    time = 0
    
    trajectory = [state[:2]]
    
    while time < total_time:
        # 计算控制输出
        steering, throttle, target = controller.compute_control(state, path, dt)
        
        # 更新车辆状态(简单的运动学模型)
        x, y, yaw, v = state
        
        # 速度更新
        v += throttle * dt * 2.0  # 加速度 = 油门 * 2.0 m/s^2
        v = max(0, min(v, 2.0))  # 限制速度
        
        # 位置更新
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += v * np.tan(steering) / controller.lateral_controller.L * dt
        
        state = [x, y, yaw, v]
        trajectory.append([x, y])
        
        time += dt
        
        if time % 1.0 < dt:  # 每秒打印一次
            print(f"t={time:.1f}s: pos=({x:.2f}, {y:.2f}), "
                  f"yaw={np.degrees(yaw):.1f}°, v={v:.2f}m/s, "
                  f"steering={np.degrees(steering):.1f}°")
    
    print(f"仿真完成,轨迹点数: {len(trajectory)}")

10. 完整系统集成

10.1 系统主循环与线程

系统集成模块将感知、规划、控制组织为独立线程: 规划线程以10Hz运行DWA局部规划,控制线程以20Hz执行轨迹跟踪并与STM32通信,主循环负责可视化与人工干预。下面是 autonomous_vehicle_system.py 的核心实现(节选,以下均为 AutonomousVehicleSystem 类的方法;_planning_loop 方法的前半部分在原文中缺失,以注释标出):

python 复制代码
# autonomous_vehicle_system.py(节选)

    def _planning_loop(self):
        """规划循环线程"""
        while self.is_running:
            start_time = time.time()
            
            try:
                # ……(state、goal、obstacles 的构造在此省略,原文缺失)
                best_u, trajectory = self.local_planner.plan(state, goal, obstacles)
                
                if best_u is not None:
                    # 保存规划结果
                    self.target_path = trajectory
                    
                    # 设置目标速度
                    self.controller.set_target_speed(abs(best_u[0]))
                
                # 控制规划频率(10Hz)
                elapsed = time.time() - start_time
                sleep_time = max(0, 1/10.0 - elapsed)
                time.sleep(sleep_time)
            
            except Exception as e:
                print(f"规划循环错误: {e}")
                time.sleep(0.1)
    
    def _control_loop(self):
        """控制循环线程"""
        print("控制线程启动")
        
        while self.is_running:
            start_time = time.time()
            
            try:
                # 获取车辆状态
                stm32_state = self.stm32.get_state()
                
                if stm32_state:
                    # 更新车辆状态(简化的航位推算)
                    dt = 0.05  # 20Hz
                    v = (stm32_state['speed_left'] + stm32_state['speed_right']) / 2.0
                    
                    self.vehicle_state['x'] += v * np.cos(self.vehicle_state['yaw']) * dt
                    self.vehicle_state['y'] += v * np.sin(self.vehicle_state['yaw']) * dt
                    self.vehicle_state['v'] = v
                
                # 如果有目标路径,执行控制
                if len(self.target_path) > 0:
                    state = [
                        self.vehicle_state['x'],
                        self.vehicle_state['y'],
                        self.vehicle_state['yaw'],
                        self.vehicle_state['v']
                    ]
                    
                    # 计算控制指令
                    steering, throttle, target_point = self.controller.compute_control(
                        state, self.target_path, dt=0.05)
                    
                    # 发送到STM32
                    self.stm32.set_steering(np.degrees(steering))
                    
                    # 根据转向计算左右轮速度(差速近似)
                    base_speed = self.controller.target_speed
                    speed_left = base_speed + steering * 0.1
                    speed_right = base_speed - steering * 0.1
                    self.stm32.set_speed(speed_left, speed_right)
                
                # 控制频率(20Hz)
                elapsed = time.time() - start_time
                sleep_time = max(0, 1/20.0 - elapsed)
                time.sleep(sleep_time)
            
            except Exception as e:
                print(f"控制循环错误: {e}")
                time.sleep(0.1)
    
    def _extract_obstacles(self):
        """从感知结果中提取障碍物"""
        obstacles = []
        
        if 'detections' in self.perception_results:
            for det in self.perception_results['detections']:
                # 简化: 使用图像位置估算障碍物位置
                # 实际应该结合深度信息
                x = det['center'][0] / 100.0  # 简单映射
                y = det['center'][1] / 100.0
                obstacles.append([x, y])
        
        return obstacles
    
    def run(self):
        """主运行循环(用于可视化)"""
        print("进入主循环...")
        
        try:
            while self.is_running:
                # 获取多摄像头画面
                frames = self.camera_manager.get_frames()
                
                # 可视化
                if 'front' in frames:
                    frame = frames['front']['frame'].copy()
                    
                    # 绘制感知结果(detect_lanes返回(图像, 信息),注意解包顺序)
                    if 'lane_info' in self.perception_results:
                        lane_vis, _ = self.lane_detector.detect_lanes(frame)
                        frame = lane_vis
                    
                    # 绘制检测框
                    detections = self.object_detector.get_detections_by_camera('front')
                    frame = self.object_detector.detector.draw_detections(frame, detections)
                    
                    # 绘制状态信息
                    info_texts = [
                        f"Pos: ({self.vehicle_state['x']:.1f}, {self.vehicle_state['y']:.1f})",
                        f"Speed: {self.vehicle_state['v']:.2f} m/s",
                        f"Obstacles: {len(self._extract_obstacles())}"
                    ]
                    
                    y_pos = 30
                    for text in info_texts:
                        cv2.putText(frame, text, (10, y_pos),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
                        y_pos += 30
                    
                    cv2.imshow("Autonomous Driving - Front View", frame)
                
                # 显示鸟瞰图
                bev = self.fusion.create_bev_image(frames)
                cv2.imshow("Bird's Eye View", bev)
                
                # 按键处理
                key = cv2.waitKey(1) & 0xFF
                if key == ord('q'):
                    break
                elif key == ord('s'):  # 急停
                    self.stm32.emergency_stop()
                    print("急停!")
                elif key == ord('g'):  # 恢复运行
                    print("恢复运行")
        
        except KeyboardInterrupt:
            print("\n用户中断")
        
        finally:
            cv2.destroyAllWindows()

10.2 STM32通信模块

上位机通过串口与STM32通信,采用自定义帧协议: 帧头0xAA + 数据长度 + 命令字 + 数据 + 异或校验(对命令字与数据) + 帧尾0x55。

python 复制代码
# stm32_communication.py
import serial
import struct
import time
from threading import Lock

class STM32Controller:
    """STM32底层控制器通信"""
    
    def __init__(self, port, baudrate=115200):
        self.port = port
        self.baudrate = baudrate
        self.ser = None
        self.lock = Lock()
        
        # 命令定义
        self.CMD_SET_SPEED = 0x01
        self.CMD_SET_STEERING = 0x02
        self.CMD_GET_STATE = 0x03
        self.CMD_EMERGENCY_STOP = 0x04
        
        self.FRAME_HEADER = 0xAA
        self.FRAME_TAIL = 0x55
    
    def connect(self):
        """连接STM32"""
        try:
            self.ser = serial.Serial(
                port=self.port,
                baudrate=self.baudrate,
                bytesize=serial.EIGHTBITS,
                parity=serial.PARITY_NONE,
                stopbits=serial.STOPBITS_ONE,
                timeout=0.1
            )
            print(f"已连接到STM32: {self.port}")
            return True
        except Exception as e:
            print(f"连接STM32失败: {e}")
            return False
    
    def disconnect(self):
        """断开连接"""
        if self.ser and self.ser.is_open:
            self.ser.close()
            print("STM32连接已关闭")
    
    def _send_command(self, cmd, data=b''):
        """发送命令"""
        with self.lock:
            if not self.ser or not self.ser.is_open:
                return False
            
            # 构建数据包
            packet = bytearray()
            packet.append(self.FRAME_HEADER)
            packet.append(len(data) + 1)  # 长度
            packet.append(cmd)            # 命令
            packet.extend(data)           # 数据
            
            # 校验和(对命令字和数据逐字节异或)
            checksum = 0
            for b in packet[2:]:
                checksum ^= b
            packet.append(checksum)
            packet.append(self.FRAME_TAIL)
            
            try:
                self.ser.write(packet)
                return True
            except Exception as e:
                print(f"发送命令失败: {e}")
                return False
    
    def set_speed(self, speed_left, speed_right):
        """设置速度"""
        data = struct.pack('ff', speed_left, speed_right)
        return self._send_command(self.CMD_SET_SPEED, data)
    
    def set_steering(self, angle):
        """设置转向角"""
        data = struct.pack('f', angle)
        return self._send_command(self.CMD_SET_STEERING, data)
    
    def emergency_stop(self):
        """急停"""
        return self._send_command(self.CMD_EMERGENCY_STOP)
    
    def get_state(self):
        """获取车辆状态"""
        if self._send_command(self.CMD_GET_STATE):
            # 读取响应
            try:
                response = self.ser.read(32)
                if len(response) >= 25:
                    # 解析状态数据
                    state = {
                        'speed_left': struct.unpack('f', response[3:7])[0],
                        'speed_right': struct.unpack('f', response[7:11])[0],
                        'steering_angle': struct.unpack('f', response[11:15])[0],
                        'encoder_left': struct.unpack('h', response[15:17])[0],
                        'encoder_right': struct.unpack('h', response[17:19])[0],
                        'battery_voltage': struct.unpack('f', response[19:23])[0],
                        'emergency_stop': response[23]
                    }
                    return state
            except Exception:
                pass
        return None

10.3 主程序入口

python 复制代码
if __name__ == "__main__":
    # 创建系统实例
    system = AutonomousVehicleSystem()
    
    try:
        # 启动系统
        system.start()
        
        # 运行主循环
        system.run()
    
    except Exception as e:
        print(f"系统错误: {e}")
        import traceback
        traceback.print_exc()
    
    finally:
        # 停止系统
        system.stop()


12. 测试与优化

12.1 单元测试

python 复制代码
# test_modules.py
import unittest
import numpy as np
import cv2
from camera_manager import CameraDevice
from object_detection import YOLODetector
from lane_detection import LaneDetector
from path_planning import AStarPlanner
from vehicle_control import PurePursuitController, PIDController

class TestCameraModule(unittest.TestCase):
    """摄像头模块测试"""
    
    def test_camera_initialization(self):
        """测试摄像头初始化"""
        camera = CameraDevice(0, 'test', resolution=(640, 480))
        self.assertIsNotNone(camera.camera_matrix)
        self.assertIsNotNone(camera.dist_coeffs)
    
    def test_calibration_loading(self):
        """测试标定参数加载"""
        camera = CameraDevice(0, 'test')
        camera.load_calibration()
        self.assertEqual(camera.camera_matrix.shape, (3, 3))


class TestPerceptionModules(unittest.TestCase):
    """感知模块测试"""
    
    def setUp(self):
        """准备测试数据"""
        self.test_image = np.zeros((480, 640, 3), dtype=np.uint8)
        cv2.rectangle(self.test_image, (100, 100), (300, 300), (255, 255, 255), -1)
    
    def test_lane_detection(self):
        """测试车道线检测"""
        detector = LaneDetector()
        result, lane_info = detector.detect_lanes(self.test_image)
        
        self.assertIsNotNone(result)
        self.assertIn('left_detected', lane_info)
        self.assertIn('right_detected', lane_info)
    
    def test_depth_estimation(self):
        """测试深度估计"""
        from depth_estimation import MonoDepthEstimator
        estimator = MonoDepthEstimator()
        depth = estimator.estimate_depth(self.test_image)
        
        self.assertEqual(depth.shape[:2], self.test_image.shape[:2])


class TestPlanningModules(unittest.TestCase):
    """规划模块测试"""
    
    def test_astar_planning(self):
        """测试A*路径规划"""
        # 创建简单地图
        grid_map = np.zeros((50, 50))
        grid_map[20:30, 20:30] = 1  # 障碍物
        
        planner = AStarPlanner(grid_map)
        path = planner.plan((5, 5), (45, 45))
        
        self.assertIsNotNone(path)
        self.assertGreater(len(path), 0)
        self.assertEqual(path[0], (5, 5))
        self.assertEqual(path[-1], (45, 45))
    
    def test_path_smoothing(self):
        """测试路径平滑"""
        grid_map = np.zeros((50, 50))
        planner = AStarPlanner(grid_map)
        
        raw_path = [(0, 0), (10, 10), (20, 20), (30, 30)]
        smooth_path = planner.smooth_path(raw_path)
        
        self.assertEqual(len(smooth_path), len(raw_path))


class TestControlModules(unittest.TestCase):
    """控制模块测试"""
    
    def test_pure_pursuit(self):
        """测试Pure Pursuit控制"""
        controller = PurePursuitController(wheelbase=0.3)
        
        # 简单直线路径
        path = [[i, 0] for i in range(10)]
        state = [0, 0, 0, 1.0]  # [x, y, yaw, v]
        
        steering, target = controller.compute_steering(state, path)
        
        self.assertIsNotNone(steering)
        self.assertIsNotNone(target)
    
    def test_pid_controller(self):
        """测试PID控制器"""
        pid = PIDController(kp=1.0, ki=0.1, kd=0.01)
        
        # 测试阶跃响应
        setpoint = 10.0
        measurement = 0.0
        
        for _ in range(100):
            output = pid.compute(setpoint, measurement, dt=0.01)
            measurement += output * 0.1  # 模拟系统响应
        
        # 检查是否收敛到目标值附近
        self.assertAlmostEqual(measurement, setpoint, delta=1.0)


# 运行所有测试
if __name__ == '__main__':
    # 配置测试
    unittest.main(verbosity=2)

12.2 性能基准测试

python 复制代码
# performance_benchmark.py
import time
import numpy as np
import cv2
from memory_profiler import profile

class PerformanceBenchmark:
    """性能基准测试"""
    
    def __init__(self):
        self.results = {}
    
    def benchmark_camera_capture(self, duration=10.0):
        """测试摄像头采集性能"""
        from camera_manager import CameraDevice
        
        camera = CameraDevice(0, 'test')
        camera.start()
        
        frame_count = 0
        start_time = time.time()
        
        while time.time() - start_time < duration:
            frame_data = camera.get_frame()
            if frame_data:
                frame_count += 1
        
        camera.stop()
        
        fps = frame_count / duration
        self.results['camera_fps'] = fps
        print(f"摄像头采集FPS: {fps:.2f}")
        
        return fps
    
    def benchmark_object_detection(self, num_iterations=100):
        """测试目标检测性能"""
        from object_detection import YOLODetector
        
        detector = YOLODetector()
        test_image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
        
        # 预热
        for _ in range(5):
            detector.detect(test_image)
        
        # 测试
        start_time = time.time()
        for _ in range(num_iterations):
            detector.detect(test_image)
        elapsed = time.time() - start_time
        
        avg_time = elapsed / num_iterations
        fps = 1.0 / avg_time
        
        self.results['detection_fps'] = fps
        self.results['detection_latency'] = avg_time * 1000  # ms
        
        print(f"目标检测FPS: {fps:.2f}, 延迟: {avg_time*1000:.2f}ms")
        
        return fps
    
    def benchmark_lane_detection(self, num_iterations=100):
        """测试车道线检测性能"""
        from lane_detection import LaneDetector
        
        detector = LaneDetector()
        test_image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
        
        start_time = time.time()
        for _ in range(num_iterations):
            detector.detect_lanes(test_image)
        elapsed = time.time() - start_time
        
        avg_time = elapsed / num_iterations
        fps = 1.0 / avg_time
        
        self.results['lane_fps'] = fps
        self.results['lane_latency'] = avg_time * 1000
        
        print(f"车道检测FPS: {fps:.2f}, 延迟: {avg_time*1000:.2f}ms")
        
        return fps
    
    def benchmark_planning(self, num_iterations=100):
        """测试路径规划性能"""
        from path_planning import AStarPlanner, DWAPlanner
        
        # A*测试
        grid_map = np.zeros((100, 100))
        astar = AStarPlanner(grid_map)
        
        start_time = time.time()
        for _ in range(num_iterations):
            path = astar.plan((10, 10), (90, 90))
        elapsed = time.time() - start_time
        
        self.results['astar_time'] = elapsed / num_iterations * 1000
        print(f"A*规划平均时间: {self.results['astar_time']:.2f}ms")
        
        # DWA测试
        dwa = DWAPlanner()
        state = [0, 0, 0, 0, 0]
        goal = [5, 5]
        obstacles = [[2, 2], [3, 3]]
        
        start_time = time.time()
        for _ in range(num_iterations):
            best_u, traj = dwa.plan(state, goal, obstacles)
        elapsed = time.time() - start_time
        
        self.results['dwa_time'] = elapsed / num_iterations * 1000
        print(f"DWA规划平均时间: {self.results['dwa_time']:.2f}ms")
    
    @profile
    def benchmark_memory(self):
        """测试内存使用"""
        from autonomous_vehicle_system import AutonomousVehicleSystem
        
        system = AutonomousVehicleSystem()
        system.start()
        time.sleep(5)  # 运行5秒
        system.stop()
    
    def run_all_benchmarks(self):
        """运行所有基准测试"""
        print("="*50)
        print("性能基准测试")
        print("="*50)
        
        print("\n1. 摄像头采集性能")
        self.benchmark_camera_capture(duration=5.0)
        
        print("\n2. 目标检测性能")
        self.benchmark_object_detection()
        
        print("\n3. 车道线检测性能")
        self.benchmark_lane_detection()
        
        print("\n4. 路径规划性能")
        self.benchmark_planning()
        
        print("\n="*50)
        print("测试结果汇总:")
        for key, value in self.results.items():
            print(f"  {key}: {value:.2f}")
        print("="*50)


if __name__ == "__main__":
    benchmark = PerformanceBenchmark()
    benchmark.run_all_benchmarks()

12.3 系统优化建议

12.3.1 算法优化
  1. 多线程并行处理

    • 将目标检测、车道检测、深度估计放在独立线程
    • 使用GPU加速深度学习推理(CUDA)
    • 采用异步I/O处理摄像头数据
  2. 模型优化

    • 使用TensorRT或ONNX Runtime加速推理(示意代码见本小节末尾)
    • 模型量化(INT8)降低计算量
    • 采用更轻量的模型(如MobileNet, YOLOv8-nano)
  3. 数据处理优化

    • 图像金字塔多尺度处理
    • ROI裁剪减少计算范围
    • 时间连贯性利用(帧间差分)
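
其中ONNX Runtime加速推理的思路可参考下面的最小示意(模型文件 yolov5s.onnx 为假设路径,需替换为实际导出的模型;GPU推理需另行安装 onnxruntime-gpu):

python 复制代码
import numpy as np
import onnxruntime as ort

# 加载ONNX模型,优先使用CUDA执行提供器,不可用时回退CPU
session = ort.InferenceSession(
    "yolov5s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])

input_name = session.get_inputs()[0].name

# 构造一帧 1x3x640x640 的归一化输入(实际应来自摄像头预处理)
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])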
12.3.2 硬件优化
  1. CPU优化

    • 升级到更强CPU(如Intel i7或树莓派5)
    • 使能CPU的SIMD指令(SSE, AVX)
    • 优化CPU亲和性和调度策略
  2. GPU加速

    python 复制代码
    # 启用CUDA加速
    import torch
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
  3. 存储优化

    • 使用SSD提升数据读写速度
    • 内存映射文件减少拷贝开销
12.3.3 实时性优化
python 复制代码
# real_time_optimization.py

# 1. 优先级设置
import os
os.nice(-20)  # 提高进程优先级(需要root)

# 2. CPU绑定
import psutil
p = psutil.Process()
p.cpu_affinity([0, 1])  # 绑定到CPU 0和1

# 3. 实时调度
import ctypes
import ctypes.util

# SCHED_FIFO实时调度
SCHED_FIFO = 1
sched_param = ctypes.c_int * 1
param = sched_param(99)  # 最高优先级
libc = ctypes.CDLL(ctypes.util.find_library('c'))
libc.sched_setscheduler(0, SCHED_FIFO, ctypes.byref(param))

# 4. 内存锁定(防止swap)
# 标准库mmap没有全局锁定接口,这里通过libc的mlockall
# 锁定当前及后续分配的内存页
MCL_CURRENT = 1
MCL_FUTURE = 2
libc.mlockall(MCL_CURRENT | MCL_FUTURE)
12.3.4 通信优化
python 复制代码
# 使用共享内存加速进程间通信
from multiprocessing import shared_memory
import numpy as np

# 创建共享内存
shm = shared_memory.SharedMemory(create=True, size=640*480*3)
shared_array = np.ndarray((480, 640, 3), dtype=np.uint8, buffer=shm.buf)

# 写入数据(生产者)
shared_array[:] = frame[:]

# 读取数据(消费者)
frame_copy = np.copy(shared_array)
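
消费者进程可按名称附着到同一块共享内存(共享内存名需通过配置或消息传递告知,"psm_example" 仅为假设值):

python 复制代码
from multiprocessing import shared_memory
import numpy as np

# 消费者进程: 按名称附着到已创建的共享内存
existing = shared_memory.SharedMemory(name="psm_example")
view = np.ndarray((480, 640, 3), dtype=np.uint8, buffer=existing.buf)
frame_copy = view.copy()  # 拷贝出一帧,避免读取过程中被生产者覆盖

# 消费者只close解除映射;由创建者负责unlink释放
existing.close()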

12.4 调试工具

python 复制代码
# debug_tools.py
import cv2
import numpy as np
import matplotlib.pyplot as plt

class DebugVisualizer:
    """调试可视化工具"""
    
    def __init__(self):
        plt.ion()
        self.fig, self.axes = plt.subplots(2, 3, figsize=(15, 10))
        self.fig.suptitle('自动驾驶系统调试界面')
    
    def update_displays(self, data):
        """更新显示"""
        # 前视图像
        if 'front_image' in data:
            self.axes[0, 0].clear()
            self.axes[0, 0].imshow(cv2.cvtColor(data['front_image'], cv2.COLOR_BGR2RGB))
            self.axes[0, 0].set_title('前摄像头')
            self.axes[0, 0].axis('off')
        
        # 车道检测
        if 'lane_image' in data:
            self.axes[0, 1].clear()
            self.axes[0, 1].imshow(cv2.cvtColor(data['lane_image'], cv2.COLOR_BGR2RGB))
            self.axes[0, 1].set_title('车道检测')
            self.axes[0, 1].axis('off')
        
        # 目标检测
        if 'detection_image' in data:
            self.axes[0, 2].clear()
            self.axes[0, 2].imshow(cv2.cvtColor(data['detection_image'], cv2.COLOR_BGR2RGB))
            self.axes[0, 2].set_title('目标检测')
            self.axes[0, 2].axis('off')
        
        # 深度图
        if 'depth_map' in data:
            self.axes[1, 0].clear()
            self.axes[1, 0].imshow(data['depth_map'], cmap='plasma')
            self.axes[1, 0].set_title('深度估计')
            self.axes[1, 0].axis('off')
        
        # 鸟瞰图
        if 'bev_image' in data:
            self.axes[1, 1].clear()
            self.axes[1, 1].imshow(cv2.cvtColor(data['bev_image'], cv2.COLOR_BGR2RGB))
            self.axes[1, 1].set_title('鸟瞰图')
            self.axes[1, 1].axis('off')
        
        # 轨迹图
        if 'trajectory' in data:
            self.axes[1, 2].clear()
            traj = np.array(data['trajectory'])
            if len(traj) > 0:
                self.axes[1, 2].plot(traj[:, 0], traj[:, 1], 'b-', linewidth=2, label='规划路径')
                self.axes[1, 2].plot(traj[0, 0], traj[0, 1], 'go', markersize=10, label='起点')
                self.axes[1, 2].plot(traj[-1, 0], traj[-1, 1], 'ro', markersize=10, label='终点')
            self.axes[1, 2].set_title('规划轨迹')
            self.axes[1, 2].legend()
            self.axes[1, 2].grid(True)
        
        plt.pause(0.01)


class PerformanceMonitor:
    """性能监控器"""
    
    def __init__(self, window_size=100):
        self.window_size = window_size
        self.metrics = {
            'fps': [],
            'latency': [],
            'cpu_usage': [],
            'memory_usage': []
        }
    
    def update(self, metric_name, value):
        """更新指标"""
        if metric_name in self.metrics:
            self.metrics[metric_name].append(value)
            if len(self.metrics[metric_name]) > self.window_size:
                self.metrics[metric_name].pop(0)
    
    def get_stats(self, metric_name):
        """获取统计信息"""
        if metric_name not in self.metrics or len(self.metrics[metric_name]) == 0:
            return None
        
        data = self.metrics[metric_name]
        return {
            'mean': np.mean(data),
            'std': np.std(data),
            'min': np.min(data),
            'max': np.max(data),
            'current': data[-1]
        }
    
    def print_summary(self):
        """打印性能摘要"""
        print("\n" + "="*50)
        print("性能监控摘要")
        print("="*50)
        
        for metric_name in self.metrics.keys():
            stats = self.get_stats(metric_name)
            if stats:
                print(f"\n{metric_name}:")
                print(f"  当前值: {stats['current']:.2f}")
                print(f"  平均值: {stats['mean']:.2f}")
                print(f"  标准差: {stats['std']:.2f}")
                print(f"  范围: [{stats['min']:.2f}, {stats['max']:.2f}]")

13. 总结与展望

13.1 系统特点总结

本文详细介绍了一个基于多摄像头融合的智能小车自动驾驶系统的完整实现方案。系统具有以下特点:

  1. 全方位感知能力

    • 四摄像头360度环境感知
    • 多模态感知融合(视觉+IMU+GPS)
    • 实时目标检测和车道识别
  2. 鲁棒的规划控制

    • 全局A*路径规划
    • 局部DWA动态避障
    • Pure Pursuit + PID混合控制
  3. 模块化架构设计

    • 各模块高度解耦
    • 易于测试和维护
    • 支持功能扩展
  4. 完整的软硬件方案

    • 树莓派4B主控
    • STM32底层控制
    • 成熟的硬件接口设计

13.2 关键技术指标

| 指标 | 数值 | 说明 |
| --- | --- | --- |
| 视觉处理帧率 | 30 FPS | 前摄像头实时处理 |
| 目标检测延迟 | <50 ms | YOLOv5s |
| 车道检测帧率 | 30 FPS | Hough+深度学习 |
| 规划频率 | 10 Hz | DWA局部规划 |
| 控制频率 | 20 Hz | Pure Pursuit |
| 最大速度 | 2.0 m/s | 可调节 |
| 定位精度 | ±5 cm | 视觉SLAM |

13.3 应用场景

  1. 教育科研

    • 自动驾驶算法验证平台
    • 计算机视觉实验平台
    • 机器人技术教学
  2. 工业应用

    • 仓库自动搬运
    • 工厂巡检机器人
    • 智能导览车
  3. 服务领域

    • 无人配送车
    • 清洁机器人
    • 安防巡逻车

13.4 未来改进方向

13.4.1 感知增强
  1. 传感器融合

    • 添加激光雷达(LiDAR)提升3D感知
    • 毫米波雷达增强恶劣天气感知
    • 超声波传感器近距离检测
  2. 高级视觉算法

    • Transformer架构的目标检测(DETR)
    • 实时语义分割(BiSeNet)
    • 端到端深度学习规划控制
13.4.2 定位建图升级
  1. 多传感器SLAM融合

    python 复制代码
    # 融合视觉、IMU、轮速计的SLAM(架构示意,以下组件均为占位接口)
    class MultiSensorSLAM:
        def __init__(self):
            self.visual_slam = ORB_SLAM3()                 # 视觉SLAM前端
            self.imu_preintegration = IMUPreintegration()  # IMU预积分
            self.wheel_odometry = WheelOdometry()          # 轮式里程计
            self.fusion_ekf = ExtendedKalmanFilter()       # EKF多源融合
  2. 高精度地图构建

    • 3D点云地图
    • 语义地图
    • 拓扑地图
13.4.3 规划决策智能化
  1. 学习型规划

    python 复制代码
    # 强化学习路径规划
    class RLPlanner:
        def __init__(self):
            self.policy_network = PPO()  # Proximal Policy Optimization
            self.value_network = ValueNet()
        
        def plan(self, state, goal):
            action = self.policy_network.forward(state)
            return action
  2. 端到端学习

    • 从感知到控制的端到端神经网络
    • 模仿学习(Imitation Learning)
    • 逆强化学习(Inverse RL)
13.4.4 系统工程化
  1. ROS2集成

    bash 复制代码
    # 使用ROS2实现模块化通信
    ros2 launch autonomous_vehicle system.launch.py
  2. 云端协同

    • 地图云端更新
    • 远程监控和遥操作
    • 车队协同调度
  3. 安全冗余

    • 双MCU冗余控制
    • 故障检测与恢复
    • 安全停车策略

13.5 学习资源推荐

13.5.1 书籍
  1. 《多视图几何计算机视觉》- Hartley & Zisserman
  2. 《概率机器人》- Sebastian Thrun
  3. 《自动驾驶汽车技术》- Sören Kammel
13.5.2 在线课程
  1. Udacity - 自动驾驶工程师纳米学位
  2. Coursera - 自动驾驶专项课程
  3. 深蓝学院 - 自动驾驶感知与决策
13.5.3 开源项目
  1. Apollo - 百度自动驾驶平台
  2. Autoware - 开源自动驾驶软件栈
  3. CARLA - 自动驾驶仿真器

13.6 结语

本文从理论基础、系统架构、模块实现到完整集成,全面阐述了多摄像头智能小车自动驾驶系统的设计与实现。通过模块化的代码实现和详细的技术说明,为自动驾驶初学者提供了一个完整的实践参考。

自动驾驶技术正在快速发展,未来将朝着更智能、更安全、更可靠的方向演进。希望本文能够帮助读者理解自动驾驶系统的核心原理,并为实际项目开发提供有价值的参考。

关键要点回顾:

  • ✅ 多摄像头提供全方位环境感知
  • ✅ 分层架构确保系统可维护性
  • ✅ 感知-规划-控制完整闭环
  • ✅ 软硬件协同实现高性能
  • ✅ 模块化设计支持功能扩展

实践建议:

  1. 先实现单个模块,逐步集成
  2. 充分测试每个算法模块
  3. 注重系统实时性和鲁棒性
  4. 重视安全机制设计
  5. 持续学习和改进

附录A: 常见问题解答

Q1: 系统对硬件有什么要求?
A: 最低配置为树莓派4B 4GB + STM32F407,推荐8GB内存版本以获得更好性能。

Q2: 可以在其他平台上运行吗?
A: 可以,代码设计为跨平台,可在Ubuntu PC、Jetson Nano等平台运行。

Q3: 如何提升检测精度?
A: 可以使用更大的YOLO模型(如YOLOv8-m)或进行针对性的模型训练。

Q4: 系统延迟如何优化?
A: 参考12.3节的优化建议,使用GPU加速、模型量化、多线程并行等技术。

Q5: 如何添加新的传感器?
A: 在感知层添加新的传感器接口类,然后在融合模块中整合数据即可。
