YOLOv3 Explained: A Pinnacle of Real-Time Object Detection

Abstract: YOLOv3 is a major milestone in object detection: it pushed detection speed to a new level while remaining highly accurate. This article dissects YOLOv3's core principles, network architecture, and key innovations, and provides a complete code implementation with a training walkthrough, to give readers a thorough understanding of this excellent real-time detector.

1. A Review of the YOLO Family's Evolution

Before diving into YOLOv3, let us briefly review how the YOLO family evolved.

YOLOv1 (2016) was the first to cast object detection as a regression problem, enabling an end-to-end detection pipeline. Its core idea is to divide the input image into an S×S grid, with each cell predicting B bounding boxes and corresponding confidence scores. It was extremely fast, but suffered from imprecise localization and low recall.

YOLOv2 (2017) improved on v1 in several ways: batch normalization, a high-resolution classifier, anchor boxes, dimension clustering to choose prior-box sizes, and a passthrough layer for fine-grained features, all of which significantly raised detection accuracy.

YOLOv3 (2018) further improved accuracy while retaining the speed advantage, standing out particularly on small objects. Its main contributions are the deeper Darknet-53 backbone, multi-scale feature fusion, and logistic regression for objectness scores.

2. YOLOv3 Core Principles

2.1 Network Architecture

YOLOv3 uses a backbone called Darknet-53 for feature extraction. It borrows the residual-network idea, greatly improving feature extraction capability while staying efficient.

Darknet-53 network structure

text

Layer     Type      Size/Stride       Input -> Output
---------------------------------------------------
0         Conv      3x3/1     416x416x3 -> 416x416x32
1         Conv      3x3/2     416x416x32 -> 208x208x64
2         Conv      1x1/1     208x208x64 -> 208x208x32
          Conv      3x3/1     208x208x32 -> 208x208x64
          Residual  208x208x64 -> 208x208x64
... (similar blocks repeat several times) ...

Darknet-53 contains 53 convolutional layers, built from alternating 3×3 and 1×1 convolutions plus residual connections. Compared with ResNet-101 and ResNet-152, Darknet-53 is faster at comparable accuracy.

2.2 Multi-Scale Prediction

One of YOLOv3's biggest innovations is its multi-scale prediction mechanism. The network makes predictions on feature maps at three different scales:

  1. Scale 1: prediction on the deepest backbone feature map (13×13), suited to large objects

  2. Scale 2: the previous feature map is upsampled and fused with a shallower feature map; prediction at 26×26, suited to medium objects

  3. Scale 3: upsampled and fused once more; prediction at 52×52, suited to small objects

This multi-scale fusion strategy resembles a Feature Pyramid Network (FPN) and makes full use of features from different levels.
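
To make the fusion concrete, the short shape walkthrough below (a standalone illustration, not part of the original implementation; the channel counts anticipate the Darknet-53 code in section 3) shows how nearest-neighbor upsampling and channel concatenation combine two scales:

python

import torch
import torch.nn as nn

coarse = torch.randn(1, 256, 13, 13)   # route from the 13x13 branch after a 1x1 conv
mid = torch.randn(1, 512, 26, 26)      # backbone feature map at 26x26

up = nn.Upsample(scale_factor=2, mode='nearest')(coarse)  # -> (1, 256, 26, 26)
fused = torch.cat([up, mid], dim=1)                       # -> (1, 768, 26, 26)
print(fused.shape)  # torch.Size([1, 768, 26, 26])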

2.3 Bounding-Box Prediction

For each bounding box, YOLOv3 predicts four coordinates: t_x, t_y, t_w, t_h. If the cell is offset from the top-left corner of the image by (c_x, c_y), and the prior (anchor) box has width p_w and height p_h, the predictions correspond to:

b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)

where σ is the sigmoid function, which squashes the prediction into (0, 1) and guarantees the box center stays inside the current cell.
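
As a concrete check of these formulas, the snippet below decodes one anchor's raw outputs by hand (all values are made up for illustration; multiplying by the stride converts grid units to pixels):

python

import math

# Hypothetical raw outputs for one anchor in the cell (c_x, c_y) = (6, 6)
# of a 13x13 grid, with anchor prior (p_w, p_h) = (116, 90) pixels
tx, ty, tw, th = 0.2, -0.1, 0.3, 0.1
cx, cy, pw, ph = 6, 6, 116, 90
stride = 416 / 13  # pixels per grid cell

sigmoid = lambda v: 1 / (1 + math.exp(-v))
bx = (sigmoid(tx) + cx) * stride  # box center x in pixels
by = (sigmoid(ty) + cy) * stride  # box center y in pixels
bw = pw * math.exp(tw)            # box width in pixels
bh = ph * math.exp(th)            # box height in pixels
print(bx, by, bw, bh)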

2.4 Objectness and Class Prediction

YOLOv3 replaces softmax with independent logistic classifiers for class prediction. This lets one bounding box belong to several classes at once, which suits multi-label settings.

The objectness score is the probability that a bounding box contains an object, computed with a sigmoid:

confidence = σ(t_o)
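
A tiny illustration of why independent sigmoids enable multi-label prediction (the values are made up): under a softmax, the scores below would compete and sum to one, whereas independent logistic outputs can all exceed the threshold, e.g. for overlapping labels such as "person" and "woman".

python

import torch

logits = torch.tensor([2.0, 1.5, -3.0])  # raw class scores for one box
probs = torch.sigmoid(logits)            # each class scored independently
print(probs)         # tensor([0.8808, 0.8176, 0.0474])
print(probs > 0.5)   # tensor([ True,  True, False]) -- two classes at once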

3. A Complete YOLOv3 Implementation

Below we implement the full YOLOv3 model with PyTorch.

3.1 Basic Building Blocks

First, we implement the basic components: the convolution block and the residual block.

python

import torch
import torch.nn as nn
import numpy as np

def conv_bn_leaky(in_channels, out_channels, kernel_size, stride=1, padding=0):
    """卷积+批量归一化+LeakyReLU激活"""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.LeakyReLU(0.1, inplace=True)
    )

class ResidualBlock(nn.Module):
    """残差块"""
    def __init__(self, in_channels):
        super(ResidualBlock, self).__init__()
        self.conv1 = conv_bn_leaky(in_channels, in_channels//2, 1)
        self.conv2 = conv_bn_leaky(in_channels//2, in_channels, 3, padding=1)
    
    def forward(self, x):
        residual = x
        out = self.conv1(x)
        out = self.conv2(out)
        out += residual
        return out

class Darknet53(nn.Module):
    """Darknet-53骨干网络"""
    def __init__(self, num_classes=1000):
        super(Darknet53, self).__init__()
        self.conv1 = conv_bn_leaky(3, 32, 3, padding=1)
        self.conv2 = conv_bn_leaky(32, 64, 3, stride=2, padding=1)
        
        # Residual stages
        self.residual_block1 = self._make_layer(64, 32, 1)
        self.conv3 = conv_bn_leaky(64, 128, 3, stride=2, padding=1)
        
        self.residual_block2 = self._make_layer(128, 64, 2)
        self.conv4 = conv_bn_leaky(128, 256, 3, stride=2, padding=1)
        
        self.residual_block3 = self._make_layer(256, 128, 8)
        self.conv5 = conv_bn_leaky(256, 512, 3, stride=2, padding=1)
        
        self.residual_block4 = self._make_layer(512, 256, 8)
        self.conv6 = conv_bn_leaky(512, 1024, 3, stride=2, padding=1)
        
        self.residual_block5 = self._make_layer(1024, 512, 4)
        
        # Classification head (used only for ImageNet-style pretraining)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(1024, num_classes)
    
    def _make_layer(self, in_channels, hidden_channels, blocks):
        layers = []
        for _ in range(blocks):
            layers.append(ResidualBlock(in_channels))
        return nn.Sequential(*layers)
    
    def forward(self, x):
        # The three intermediate maps below feed the detection head
        
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.residual_block1(x)
        x = self.conv3(x)
        x = self.residual_block2(x)
        
        x = self.conv4(x)
        x = self.residual_block3(x)
        feature1 = x  # 52x52 map, 256 channels
        
        x = self.conv5(x)
        x = self.residual_block4(x)
        feature2 = x  # 26x26 map, 512 channels
        
        x = self.conv6(x)
        x = self.residual_block5(x)
        feature3 = x  # 13x13 map, 1024 channels
        
        # Classification (unused during detection)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        
        return feature1, feature2, feature3, x
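
A quick sanity check of the three feature-map shapes (assuming a 416×416 input):

python

model = Darknet53()
f1, f2, f3, logits = model(torch.randn(1, 3, 416, 416))
print(f1.shape)  # torch.Size([1, 256, 52, 52])
print(f2.shape)  # torch.Size([1, 512, 26, 26])
print(f3.shape)  # torch.Size([1, 1024, 13, 13])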

3.2 The YOLOv3 Detection Head

Next we implement the YOLOv3 detection head, covering multi-scale prediction and feature fusion.

python

class YOLOv3Head(nn.Module):
    """YOLOv3 detection head: three prediction branches with top-down fusion.
    The channel wiring follows the feature maps produced by Darknet-53
    (1024/512/256 channels at 13x13/26x26/52x52 respectively)."""
    def __init__(self, num_anchors, num_classes):
        super(YOLOv3Head, self).__init__()
        self.num_anchors = num_anchors
        self.num_classes = num_classes
        out_channels = num_anchors * (5 + num_classes)
        
        # Input channels per scale: 1024 (13x13), 512 + 256 after fusion (26x26),
        # 256 + 128 after fusion (52x52)
        in_channels = [1024, 512 + 256, 256 + 128]
        mid_channels = [512, 256, 128]
        
        self.conv_blocks = nn.ModuleList()  # five-conv block per scale
        self.pred_convs = nn.ModuleList()   # 3x3 conv + 1x1 prediction conv
        for c_in, c_mid in zip(in_channels, mid_channels):
            self.conv_blocks.append(nn.Sequential(
                conv_bn_leaky(c_in, c_mid, 1),
                conv_bn_leaky(c_mid, c_mid * 2, 3, padding=1),
                conv_bn_leaky(c_mid * 2, c_mid, 1),
                conv_bn_leaky(c_mid, c_mid * 2, 3, padding=1),
                conv_bn_leaky(c_mid * 2, c_mid, 1)
            ))
            self.pred_convs.append(nn.Sequential(
                conv_bn_leaky(c_mid, c_mid * 2, 3, padding=1),
                nn.Conv2d(c_mid * 2, out_channels, 1)
            ))
        
        # 1x1 convolutions that shrink the route features before upsampling
        self.route_convs = nn.ModuleList([
            conv_bn_leaky(512, 256, 1),
            conv_bn_leaky(256, 128, 1)
        ])
        
        # Upsampling layer
        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
    
    def forward(self, features):
        """
        features: [feature1, feature2, feature3]
        feature1: (batch, 256, 52, 52)   fine-grained features
        feature2: (batch, 512, 26, 26)   mid-level features
        feature3: (batch, 1024, 13, 13)  coarse features
        """
        outputs = []
        
        # Large-object prediction (13x13); the route branch is taken *before*
        # the prediction convs, as in the original Darknet configuration
        route = self.conv_blocks[0](features[2])
        outputs.append(self.pred_convs[0](route))
        
        # Upsample and fuse with the mid-level features
        x = self.upsample(self.route_convs[0](route))
        x = torch.cat([x, features[1]], 1)
        
        # Medium-object prediction (26x26)
        route = self.conv_blocks[1](x)
        outputs.append(self.pred_convs[1](route))
        
        # Upsample again and fuse with the fine-grained features
        x = self.upsample(self.route_convs[1](route))
        x = torch.cat([x, features[0]], 1)
        
        # Small-object prediction (52x52)
        route = self.conv_blocks[2](x)
        outputs.append(self.pred_convs[2](route))
        
        return outputs

3.3 The Full YOLOv3 Model

Now we combine the backbone and the detection head into the full YOLOv3 model.

python

class YOLOv3(nn.Module):
    """完整的YOLOv3模型"""
    def __init__(self, num_classes=80, anchors=None):
        super(YOLOv3, self).__init__()
        self.num_classes = num_classes
        
        # Default COCO anchors, (width, height) in pixels
        if anchors is None:
            self.anchors = [
                [(116, 90), (156, 198), (373, 326)],   # 13x13 scale (large objects)
                [(30, 61), (62, 45), (59, 119)],       # 26x26 scale (medium objects)
                [(10, 13), (16, 30), (33, 23)]         # 52x52 scale (small objects)
            ]
        else:
            self.anchors = anchors
        
        self.num_anchors = len(self.anchors[0])
        
        # Backbone (its ImageNet classification head is unused during detection)
        self.backbone = Darknet53()
        
        # Detection head
        self.head = YOLOv3Head(self.num_anchors, num_classes)
        
        # Weight initialization
        self._initialize_weights()
    
    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='leaky_relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)
    
    def forward(self, x):
        # Extract multi-scale features
        feature1, feature2, feature3, _ = self.backbone(x)
        features = [feature1, feature2, feature3]
        
        # Predictions at the three scales
        outputs = self.head(features)
        
        return outputs
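
A quick shape check of the assembled model: with 80 classes and 3 anchors per scale, each output map should have 3 × (5 + 80) = 255 channels.

python

model = YOLOv3(num_classes=80)
outputs = model(torch.randn(1, 3, 416, 416))
for out in outputs:
    print(out.shape)
# torch.Size([1, 255, 13, 13])
# torch.Size([1, 255, 26, 26])
# torch.Size([1, 255, 52, 52])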

3.4 The Loss Function

The YOLOv3 loss has three parts: bounding-box coordinate loss, objectness loss, and classification loss.

python

class YOLOv3Loss(nn.Module):
    """YOLOv3损失函数"""
    def __init__(self, anchors, num_classes, img_size=416):
        super(YOLOv3Loss, self).__init__()
        self.anchors = anchors
        self.num_anchors = len(anchors[0])
        self.num_classes = num_classes
        self.img_size = img_size
        self.mse_loss = nn.MSELoss(reduction='sum')
        self.bce_loss = nn.BCEWithLogitsLoss(reduction='sum')
        
        # Loss weights
        self.lambda_coord = 5
        self.lambda_noobj = 0.5
    
    def forward(self, predictions, targets):
        """
        predictions: raw model outputs at the three scales
        targets: per-scale target tensors shaped like the reshaped predictions
        """
        coord_loss = 0
        conf_loss = 0
        cls_loss = 0
        batch_size = predictions[0].size(0)
        
        for i, prediction in enumerate(predictions):
            grid_size = prediction.size(2)
            
            # Reshape: (batch, anchors*(5+num_classes), grid, grid) ->
            # (batch, anchors, grid, grid, 5+num_classes)
            prediction = prediction.view(
                batch_size, self.num_anchors, 5 + self.num_classes,
                grid_size, grid_size
            ).permute(0, 1, 3, 4, 2).contiguous()
            
            # Split the prediction; raw logits are kept where BCEWithLogitsLoss
            # is applied (applying it to sigmoid outputs would double-squash)
            x = torch.sigmoid(prediction[..., 0])  # center x offset
            y = torch.sigmoid(prediction[..., 1])  # center y offset
            w = prediction[..., 2]                 # width (log-space)
            h = prediction[..., 3]                 # height (log-space)
            conf_logit = prediction[..., 4]        # objectness logit
            cls_logit = prediction[..., 5:]        # class logits
            
            # Positive/negative masks from the target objectness channel.
            # A complete implementation also needs anchor matching and an
            # "ignore" region for near-miss anchors; only the skeleton is shown.
            obj_mask = targets[i][..., 4] > 0
            noobj_mask = targets[i][..., 4] == 0
            
            # Coordinate loss: only cells responsible for an object contribute
            coord_loss += self.mse_loss(x[obj_mask], targets[i][..., 0][obj_mask]) + \
                          self.mse_loss(y[obj_mask], targets[i][..., 1][obj_mask]) + \
                          self.mse_loss(w[obj_mask], targets[i][..., 2][obj_mask]) + \
                          self.mse_loss(h[obj_mask], targets[i][..., 3][obj_mask])
            
            # Objectness loss: negatives are down-weighted by lambda_noobj
            conf_loss += self.bce_loss(conf_logit[obj_mask], targets[i][..., 4][obj_mask])
            conf_loss += self.lambda_noobj * self.bce_loss(
                conf_logit[noobj_mask], targets[i][..., 4][noobj_mask])
            
            # Classification loss: independent logistic regression per class
            cls_loss += self.bce_loss(
                cls_logit[obj_mask],
                targets[i][..., 5:][obj_mask]
            )
        
        total_loss = (self.lambda_coord * coord_loss +
                      conf_loss +
                      cls_loss) / batch_size
        
        return total_loss, coord_loss, conf_loss, cls_loss
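
Since the target-matching logic is only sketched above, here is a minimal single-image target builder to make the skeleton concrete. It is an illustration under simplifying assumptions (no ignore region, one anchor per ground truth; build_targets and its label format are our additions, not the article's code): each box is assigned to the anchor whose shape matches it best.

python

import math

def build_targets(labels, anchors, grid_sizes, num_classes, img_size=416):
    """
    labels: iterable of (class_id, cx, cy, w, h), coordinates normalized to [0, 1]
    anchors: the per-scale YOLOv3 anchor list (pixel units)
    grid_sizes: e.g. (13, 26, 52), ordered to match `anchors`
    Returns one tensor per scale: (num_anchors, g, g, 5 + num_classes);
    a batch dimension is added later when samples are collated.
    """
    targets = [torch.zeros(len(a), g, g, 5 + num_classes)
               for a, g in zip(anchors, grid_sizes)]
    for cls_id, cx, cy, w, h in labels:
        # Best-matching anchor across all scales, by shape-only IoU
        best_iou, best_s, best_a = 0.0, 0, 0
        for s, scale_anchors in enumerate(anchors):
            for a, (aw, ah) in enumerate(scale_anchors):
                aw, ah = aw / img_size, ah / img_size  # normalize to [0, 1]
                inter = min(w, aw) * min(h, ah)
                iou = inter / (w * h + aw * ah - inter)
                if iou > best_iou:
                    best_iou, best_s, best_a = iou, s, a
        g = grid_sizes[best_s]
        gx = min(int(cx * g), g - 1)  # responsible cell, clamped to the grid
        gy = min(int(cy * g), g - 1)
        aw, ah = (v / img_size for v in anchors[best_s][best_a])
        t = targets[best_s]
        t[best_a, gy, gx, 0] = cx * g - gx               # x offset inside the cell
        t[best_a, gy, gx, 1] = cy * g - gy               # y offset inside the cell
        t[best_a, gy, gx, 2] = math.log(w / aw + 1e-16)  # t_w regression target
        t[best_a, gy, gx, 3] = math.log(h / ah + 1e-16)  # t_h regression target
        t[best_a, gy, gx, 4] = 1.0                       # objectness
        t[best_a, gy, gx, 5 + int(cls_id)] = 1.0         # one-hot class
    return targets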

4. The YOLOv3 Training Pipeline

4.1 Data Preprocessing and Augmentation

Training YOLOv3 relies heavily on data augmentation to improve the model's generalization.

python

import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
import cv2
import numpy as np

class YOLODataset(Dataset):
    """YOLO数据集类"""
    def __init__(self, image_paths, annotations, img_size=416, augment=True):
        self.image_paths = image_paths
        self.annotations = annotations
        self.img_size = img_size
        self.augment = augment
        
        # Data augmentation
        if augment:
            self.transform = transforms.Compose([
                transforms.ToTensor(),
                # More augmentations (flips, color jitter, ...) can be added here
            ])
        else:
            self.transform = transforms.Compose([
                transforms.ToTensor()
            ])
    
    def __len__(self):
        return len(self.image_paths)
    
    def __getitem__(self, idx):
        # Load the image
        img_path = self.image_paths[idx]
        image = cv2.imread(img_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        
        # Load the annotation
        annotation = self.annotations[idx]
        
        # Preprocess
        image, annotation = self.preprocess(image, annotation)
        
        # Convert to a tensor
        image = self.transform(image)
        
        return image, annotation
    
    def preprocess(self, image, annotation):
        """Resize the image and keep the annotation consistent with it"""
        # Plain resize (a letterbox resize would preserve the aspect ratio)
        h, w, _ = image.shape
        image = cv2.resize(image, (self.img_size, self.img_size))
        
        # Normalized [0, 1] YOLO-format boxes are unchanged by a plain resize;
        # absolute pixel boxes would need scaling by (img_size/w, img_size/h)
        
        return image, annotation
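
Annotations vary in length from image to image, so the default DataLoader collate cannot stack them. Below is a minimal custom collate_fn; it assumes annotations are lists of (class_id, cx, cy, w, h) tuples and reuses the hypothetical build_targets sketch from section 3.4 (both are our assumptions, not the article's code):

python

COCO_ANCHORS = [
    [(116, 90), (156, 198), (373, 326)],
    [(30, 61), (62, 45), (59, 119)],
    [(10, 13), (16, 30), (33, 23)]
]

def yolo_collate(batch, num_classes=80, grid_sizes=(13, 26, 52)):
    """Stack images and build one per-scale target tensor for the whole batch"""
    images, annotations = zip(*batch)
    images = torch.stack(images, 0)
    per_image = [build_targets(a, COCO_ANCHORS, grid_sizes, num_classes)
                 for a in annotations]
    # One tensor per scale: (batch, num_anchors, g, g, 5 + num_classes)
    targets = [torch.stack([t[s] for t in per_image], 0) for s in range(3)]
    return images, targets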

4.2 The Training Loop

python

def train_yolov3(model, train_loader, val_loader, epochs, device):
    """训练YOLOv3模型"""
    model.to(device)
    
    # Optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
    
    # Learning-rate scheduler
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
    
    # Loss function
    criterion = YOLOv3Loss(model.anchors, model.num_classes)
    
    for epoch in range(epochs):
        model.train()
        train_loss = 0
        
        for batch_idx, (images, targets) in enumerate(train_loader):
            images = images.to(device)
            targets = [target.to(device) for target in targets]
            
            # Forward pass
            predictions = model(images)
            loss, coord_loss, conf_loss, cls_loss = criterion(predictions, targets)
            
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            
            train_loss += loss.item()
            
            if batch_idx % 100 == 0:
                print(f'Epoch: {epoch} | Batch: {batch_idx} | Loss: {loss.item():.4f}')
        
        # Validate
        val_loss = validate(model, val_loader, criterion, device)
        
        # Step the learning-rate schedule
        scheduler.step()
        
        print(f'Epoch: {epoch} | Train Loss: {train_loss/len(train_loader):.4f} | '
              f'Val Loss: {val_loss:.4f}')

def validate(model, val_loader, criterion, device):
    """验证模型"""
    model.eval()
    val_loss = 0
    
    with torch.no_grad():
        for images, targets in val_loader:
            images = images.to(device)
            targets = [target.to(device) for target in targets]
            
            predictions = model(images)
            loss, _, _, _ = criterion(predictions, targets)
            val_loss += loss.item()
    
    return val_loss / len(val_loader)
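
The pieces above can be wired together as follows. This is a hypothetical sketch: the data lists are placeholders, and yolo_collate is the helper assumed in section 4.1.

python

if __name__ == "__main__":
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = YOLOv3(num_classes=80)
    
    # Placeholder data lists -- substitute your own image paths and annotations
    train_paths, train_annos = [], []
    val_paths, val_annos = [], []
    
    train_loader = DataLoader(YOLODataset(train_paths, train_annos, augment=True),
                              batch_size=8, shuffle=True, collate_fn=yolo_collate)
    val_loader = DataLoader(YOLODataset(val_paths, val_annos, augment=False),
                            batch_size=8, shuffle=False, collate_fn=yolo_collate)
    
    train_yolov3(model, train_loader, val_loader, epochs=100, device=device)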

5. Inference and Post-Processing

5.1 Non-Maximum Suppression (NMS)

At inference time, non-maximum suppression filters out overlapping detection boxes.

python

def non_max_suppression(prediction, conf_thres=0.5, nms_thres=0.4):
    """
    非极大值抑制
    prediction: 模型原始输出
    conf_thres: 置信度阈值
    nms_thres: NMS阈值
    """
    # 将预测值从 (center x, center y, width, height) 转换为 (x1, y1, x2, y2)
    box_corner = prediction.new(prediction.shape)
    box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2
    box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2
    box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2
    box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2
    prediction[:, :, :4] = box_corner[:, :, :4]
    
    output = [None for _ in range(len(prediction))]
    
    for image_i, image_pred in enumerate(prediction):
        # Drop low-confidence predictions
        conf_mask = (image_pred[:, 4] >= conf_thres).squeeze()
        image_pred = image_pred[conf_mask]
        
        if not image_pred.size(0):
            continue
        
        # Highest-scoring class and its score
        class_conf, class_pred = torch.max(image_pred[:, 5:], 1, keepdim=True)
        
        # Detections: (x1, y1, x2, y2, object_conf, class_conf, class_pred)
        detections = torch.cat((image_pred[:, :5], class_conf.float(), class_pred.float()), 1)
        
        # All classes present among the detections (kept on the same device)
        unique_labels = detections[:, -1].unique()
        
        for c in unique_labels:
            # Detections of this class
            detections_class = detections[detections[:, -1] == c]
            
            # Sort by objectness, highest first
            _, conf_sort_index = torch.sort(detections_class[:, 4], descending=True)
            detections_class = detections_class[conf_sort_index]
            
            # Perform NMS
            max_detections = []
            while detections_class.size(0):
                # Keep the current highest-confidence detection
                max_detections.append(detections_class[0].unsqueeze(0))
                
                if len(detections_class) == 1:
                    break
                
                # IoU of the kept box against the rest
                ious = bbox_iou(max_detections[-1], detections_class[1:])
                
                # Discard detections whose IoU exceeds the threshold
                detections_class = detections_class[1:][ious < nms_thres]
            
            max_detections = torch.cat(max_detections).data
            
            # Append to the output
            output[image_i] = max_detections if output[image_i] is None else torch.cat(
                (output[image_i], max_detections))
    
    return output

def bbox_iou(box1, box2):
    """
    计算两个边界框之间的IoU
    box1: (1, 4) [x1, y1, x2, y2]
    box2: (N, 4) [x1, y1, x2, y2]
    """
    # 计算交集区域
    inter_x1 = torch.max(box1[:, 0], box2[:, 0])
    inter_y1 = torch.max(box1[:, 1], box2[:, 1])
    inter_x2 = torch.min(box1[:, 2], box2[:, 2])
    inter_y2 = torch.min(box1[:, 3], box2[:, 3])
    
    inter_area = torch.clamp(inter_x2 - inter_x1, 0) * torch.clamp(inter_y2 - inter_y1, 0)
    
    # Union area
    box1_area = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
    box2_area = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
    
    union_area = box1_area + box2_area - inter_area
    
    return inter_area / (union_area + 1e-16)  # epsilon guards against division by zero
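
In practice, the hand-written suppression loop above can be replaced by torchvision's built-in operator. A minimal per-class equivalent (assuming torchvision is installed):

python

import torchvision

def nms_per_class(detections, nms_thres=0.4):
    """detections: (N, 7) rows of [x1, y1, x2, y2, obj_conf, class_conf, class_pred]"""
    kept = []
    for c in detections[:, -1].unique():
        det_c = detections[detections[:, -1] == c]
        keep = torchvision.ops.nms(det_c[:, :4], det_c[:, 4], nms_thres)
        kept.append(det_c[keep])
    return torch.cat(kept)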

5.2 End-to-End Inference

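The model's forward pass returns three raw feature maps, but non_max_suppression expects a single (batch, N, 5 + num_classes) tensor of decoded boxes. The original Darknet code performs this decoding inside its YOLO layer; below is a minimal PyTorch sketch of that step (our own helper, assuming torch >= 1.10 for the meshgrid indexing argument), which detect_image then uses.

python

def decode_predictions(outputs, anchors, num_classes=80, img_size=416):
    """Convert the three raw head outputs into (batch, N, 5 + num_classes)
    boxes in pixel units -- the format non_max_suppression expects"""
    decoded = []
    for output, scale_anchors in zip(outputs, anchors):
        b, _, g, _ = output.shape
        stride = img_size / g
        na = len(scale_anchors)
        pred = output.view(b, na, 5 + num_classes, g, g).permute(0, 1, 3, 4, 2)
        
        device = output.device
        grid_y, grid_x = torch.meshgrid(torch.arange(g, device=device),
                                        torch.arange(g, device=device),
                                        indexing='ij')
        anchor_w = output.new_tensor([a[0] for a in scale_anchors]).view(1, na, 1, 1)
        anchor_h = output.new_tensor([a[1] for a in scale_anchors]).view(1, na, 1, 1)
        
        bx = (torch.sigmoid(pred[..., 0]) + grid_x) * stride  # center x (pixels)
        by = (torch.sigmoid(pred[..., 1]) + grid_y) * stride  # center y (pixels)
        bw = anchor_w * torch.exp(pred[..., 2])               # width (pixels)
        bh = anchor_h * torch.exp(pred[..., 3])               # height (pixels)
        conf = torch.sigmoid(pred[..., 4])                    # objectness
        cls = torch.sigmoid(pred[..., 5:])                    # class scores
        
        box = torch.stack([bx, by, bw, bh, conf], dim=-1)
        decoded.append(torch.cat([box, cls], dim=-1).reshape(b, -1, 5 + num_classes))
    return torch.cat(decoded, 1)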
python

def detect_image(model, image_path, device, img_size=416):
    """在单张图像上进行目标检测"""
    # Load and preprocess the image
    image = cv2.imread(image_path)
    orig_image = image.copy()
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    
    # Plain resize, matching the training preprocessing
    h, w, _ = image.shape
    image = cv2.resize(image, (img_size, img_size))
    image = image.astype(np.float32) / 255.0
    image = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).to(device)
    
    # Inference
    model.eval()
    with torch.no_grad():
        outputs = model(image)
    
    # Decode the raw head outputs into boxes, then apply NMS
    # (decode_predictions is the sketch defined above)
    predictions = decode_predictions(outputs, model.anchors, model.num_classes, img_size)
    detections = non_max_suppression(predictions, conf_thres=0.5, nms_thres=0.4)
    
    # Visualize the results
    if detections[0] is not None:
        detections = detections[0].cpu().numpy()
        
        for detection in detections:
            x1, y1, x2, y2, obj_conf, class_conf, class_pred = detection
            
            # Map coordinates back to the original image. A plain resize was
            # used above, so each axis is rescaled independently (letterbox
            # resizing would require pad/scale compensation instead)
            x1 = int(x1 * w / img_size)
            y1 = int(y1 * h / img_size)
            x2 = int(x2 * w / img_size)
            y2 = int(y2 * h / img_size)
            
            # Draw the bounding box
            cv2.rectangle(orig_image, (x1, y1), (x2, y2), (0, 255, 0), 2)
            
            # Add a label
            label = f'Class: {int(class_pred)}, Conf: {class_conf:.2f}'
            cv2.putText(orig_image, label, (x1, y1-10), 
                       cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
    
    return orig_image

# Usage example
if __name__ == "__main__":
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    
    # Load the model
    model = YOLOv3(num_classes=80)
    model.load_state_dict(torch.load('yolov3_weights.pth', map_location=device))
    
    # Run detection
    result_image = detect_image(model, 'test_image.jpg', device)
    cv2.imwrite('result.jpg', result_image)

6. Performance Analysis and Optimization

6.1 Benchmark Results

On the COCO dataset, YOLOv3 performs as follows:

  • YOLOv3-608: 57.9% mAP@0.5, 33.0% mAP@0.5:0.95, 51 ms inference time

  • YOLOv3-416: 55.3% mAP@0.5, 31.0% mAP@0.5:0.95, 29 ms inference time

  • YOLOv3-320: 51.5% mAP@0.5, 28.2% mAP@0.5:0.95, 22 ms inference time

Compared with contemporary detectors, YOLOv3 strikes an excellent balance between speed and accuracy.

6.2 Optimization Techniques

  1. Model pruning: remove channels or layers that contribute little to accuracy

  2. Knowledge distillation: use a large model to guide the training of a small one

  3. Quantization: convert FP32 weights to INT8 to shrink the model and speed up inference (a simpler half-precision variant is sketched after this list)

  4. Hardware acceleration: use inference libraries such as TensorRT or OpenVINO
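
Of these, the quickest win on modern GPUs is half-precision inference. The sketch below is a minimal illustration (FP16 via .half(), a simpler relative of INT8 quantization; TensorRT or OpenVINO would need their own export pipelines):

python

model = YOLOv3(num_classes=80).eval().half().cuda()
image = torch.randn(1, 3, 416, 416).half().cuda()
with torch.no_grad():
    outputs = model(image)  # same three output maps, roughly half the memory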

7. Limitations and Future Directions

Despite its strong performance, YOLOv3 has some limitations:

  1. Limited performance on dense small objects: heavy downsampling tends to wash out small-object features

  2. Localization accuracy can still improve: it trails two-stage methods in box precision

  3. Imbalance between positive and negative samples: an image contains far more negatives than positives

Later versions such as YOLOv4 and YOLOv5 addressed these issues with more advanced training tricks and network designs.
