Vision - Grounded SAM2, an Open-Source Visual Segmentation Framework: Configuration and Inference Tutorial (1)

Welcome to follow my CSDN blog: https://spike.blog.csdn.net/

Article URL: https://spike.blog.csdn.net/article/details/143388189

Disclaimer: This article is based on personal knowledge and public materials and is intended for academic exchange only. Discussion is welcome; reposting is not permitted.


Grounded SAM2 is a vision AI framework that integrates several state-of-the-art models. By combining GroundingDINO, Florence-2, and SAM2, it covers open-vocabulary object detection, segmentation, and tracking: it localizes targets in an image from a natural-language description, generates fine-grained segmentation masks for them, and tracks them across a video sequence while keeping their IDs consistent.

Paper: Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks (the SAM component has since been upgraded from 1.0 to 2.0)

1. Environment Setup

GitHub: Grounded-SAM-2

bash
git clone https://github.com/IDEA-Research/Grounded-SAM-2
cd Grounded-SAM-2

Download the SAM 2.1 checkpoint (a .pt file) and the GroundingDINO checkpoint (a .pth file):

bash
wget https://huggingface.co/facebook/sam2.1-hiera-large/resolve/main/sam2.1_hiera_large.pt?download=true -O sam2.1_hiera_large.pt
wget https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth

Link the latest model files into place (here reusing checkpoints from a local ComfyUI installation):

bash
cd checkpoints
ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/sam2/sam2_hiera_large.pt sam2_hiera_large.pt

cd gdino_checkpoints
ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/grounding-dino/groundingdino_swinb_cogcoor.pth groundingdino_swinb_cogcoor.pth
ln -s [your path]/llm/workspace_comfyui/ComfyUI/models/grounding-dino/groundingdino_swint_ogc.pth groundingdino_swint_ogc.pth
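
A quick sanity check (file names taken from the symlinks above): ls -lL follows the links, so a broken symlink shows up as a missing file.

bash
ls -lL checkpoints/*.pt gdino_checkpoints/*.pth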

Activate the environment:

bash
conda activate sam2
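
If the sam2 environment does not exist yet, a minimal creation sketch (the Python and PyTorch versions here are assumptions; adjust them to your CUDA setup):

bash
conda create -n sam2 python=3.10 -y
conda activate sam2
pip install torch torchvision  # choose the build matching your CUDA version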

Test PyTorch:

bash
python
import torch
print(torch.__version__)  # 2.5.0+cu124
print(torch.cuda.is_available())  # True
exit()
echo $CUDA_HOME
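
If echo $CUDA_HOME prints nothing, the Grounding DINO CUDA extension may build without GPU support (or fail to build), so point it at your CUDA toolkit first. The path below is an assumption; adjust it to your installation:

bash
export CUDA_HOME=/usr/local/cuda-12.4
export PATH=$CUDA_HOME/bin:$PATH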

Install Grounding DINO:

bash
pip install --no-build-isolation -e grounding_dino
pip show groundingdino

Install SAM2:

bash
pip install --no-build-isolation -e .
pip install --no-build-isolation -e ".[notebooks]"  # for Jupyter support
pip show SAM-2
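
An optional import check: both packages should import cleanly once the editable installs succeed.

bash
python -c "import sam2, groundingdino; print('ok')"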

For the SAM2 parameters, see: Open-Source Visual Segmentation Algorithm SAM2 (Segment Anything 2) - Configuration and Inference

Install the remaining dependencies:

bash
cd grounding_dino/
pip install -r requirements.txt --verbose

2. Testing on an Image

Test script: grounded_sam2_local_demo.py

Import the required packages:

python
import os
import cv2
import json
import torch
import numpy as np
import supervision as sv
import pycocotools.mask as mask_util
from pathlib import Path
from torchvision.ops import box_convert
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor
from grounding_dino.groundingdino.util.inference import load_model, load_image, predict

from PIL import Image
import matplotlib.pyplot as plt

Configure the data and the runtime environment, including:

  • the input text prompt, e.g. socks and guitar
  • the input image
  • the SAM2 model (v2.1) and its config
  • the GroundingDINO (DETR with Improved deNoising anchOr boxes) model and its config
  • the box threshold and the text threshold
  • the output directory and the JSON dump flag

That is:

python
TEXT_PROMPT = "socks. guitar."
#IMG_PATH = "notebooks/images/truck.jpg"
IMG_PATH = "[your path]/llm/vision_test_data/image2.png"

image = Image.open(IMG_PATH)
plt.figure(figsize=(9, 6))
plt.title("input_image")
plt.imshow(image)

SAM2_CHECKPOINT = "./checkpoints/sam2.1_hiera_large.pt"
SAM2_MODEL_CONFIG = "configs/sam2.1/sam2.1_hiera_l.yaml"
GROUNDING_DINO_CONFIG = "grounding_dino/groundingdino/config/GroundingDINO_SwinT_OGC.py"
GROUNDING_DINO_CHECKPOINT = "gdino_checkpoints/groundingdino_swint_ogc.pth"
BOX_THRESHOLD = 0.35
TEXT_THRESHOLD = 0.25
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
OUTPUT_DIR = Path("outputs/grounded_sam2_local_demo")
DUMP_JSON_RESULTS = True

# create output directory
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
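
Note that GroundingDINO expects lowercase category names, each terminated with a period, as in "socks. guitar." above. A small helper (hypothetical, not part of the demo script) that builds such prompts from a category list:

python
def to_gdino_prompt(categories):
    # lowercase each category and terminate it with a period,
    # e.g. ["Socks", "Guitar"] -> "socks. guitar."
    return " ".join(c.strip().lower().rstrip(".") + "." for c in categories)

print(to_gdino_prompt(["Socks", "Guitar"]))  # socks. guitar.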

Load the SAM2 model to obtain sam2_predictor:

python
# build SAM2 image predictor
sam2_checkpoint = SAM2_CHECKPOINT
model_cfg = SAM2_MODEL_CONFIG
sam2_model = build_sam2(model_cfg, sam2_checkpoint, device=DEVICE)
sam2_predictor = SAM2ImagePredictor(sam2_model)
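
As an alternative (assuming the huggingface_hub package and network access), the sam2 package can also pull the checkpoint directly from Hugging Face, which avoids managing local checkpoint and config paths:

python
from sam2.sam2_image_predictor import SAM2ImagePredictor

# downloads and builds the model from the Hugging Face model hub
sam2_predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-large")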

Load the GroundingDINO model to obtain grounding_model:

python
# build grounding dino model
grounding_model = load_model(
    model_config_path=GROUNDING_DINO_CONFIG, 
    model_checkpoint_path=GROUNDING_DINO_CHECKPOINT,
    device=DEVICE
)

Load the image data into SAM2:

python
text = TEXT_PROMPT
img_path = IMG_PATH

# image_source: the original image; image: the normalized, transformed tensor
image_source, image = load_image(img_path)
sam2_predictor.set_image(image_source)
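
Since set_image caches the image embedding, issuing additional prompts against the same image is cheap. A sketch (the point coordinate below is made up for illustration): query the same image with a single foreground point instead of a box.

python
import numpy as np

point_masks, point_scores, _ = sam2_predictor.predict(
    point_coords=np.array([[320, 240]]),  # one (x, y) pixel coordinate, assumed
    point_labels=np.array([1]),           # 1 marks a foreground point
    multimask_output=True,                # return several candidate masks
)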

GroundingDINO predicts the bounding boxes, taking the model, image, text, and the box/text thresholds as input:

  • load_image() and predict() both come from GroundingDINO, so the data format matches the model.

python
boxes, confidences, labels = predict(
    model=grounding_model,
    image=image,
    caption=text,
    box_threshold=BOX_THRESHOLD,
    text_threshold=TEXT_THRESHOLD,
)

Convert the boxes from normalized (cx, cy, w, h) to pixel (x1, y1, x2, y2) format:

python
h, w, _ = image_source.shape
boxes = boxes * torch.Tensor([w, h, w, h])
input_boxes = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
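
A worked example of this conversion: GroundingDINO returns boxes normalized to [0, 1] in (cx, cy, w, h) order, so for a 640x480 image a box of (0.5, 0.5, 0.25, 0.5) maps to pixel corners as follows (values are illustrative):

python
import torch
from torchvision.ops import box_convert

# scale to pixels: (cx, cy, w, h) = (320, 240, 160, 240)
b = torch.tensor([[0.5, 0.5, 0.25, 0.5]]) * torch.tensor([640.0, 480.0, 640.0, 480.0])
print(box_convert(boxes=b, in_fmt="cxcywh", out_fmt="xyxy"))
# tensor([[240., 120., 400., 360.]])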

PyTorch settings that SAM2 relies on:

python
# FIXME: figure how does this influence the G-DINO model
torch.autocast(device_type="cuda", dtype=torch.bfloat16).__enter__()

if torch.cuda.get_device_properties(0).major >= 8:
    # turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices)
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

SAM2 predicts the masks for the image:

python
masks, scores, logits = sam2_predictor.predict(
    point_coords=None,
    point_labels=None,
    box=input_boxes,
    multimask_output=False,
)

Post-process the predictions:

python
"""
Post-process the output of the model to get the masks, scores, and logits for visualization
"""
# convert the shape to (n, H, W)
if masks.ndim == 4:
    masks = masks.squeeze(1)


confidences = confidences.numpy().tolist()
class_names = labels

class_ids = np.array(list(range(len(class_names))))

labels = [
    f"{class_name} {confidence:.2f}"
    for class_name, confidence
    in zip(class_names, confidences)
]

Visualize the output:

python
"""
Visualize image with supervision useful API
"""
img = cv2.imread(img_path)
detections = sv.Detections(
    xyxy=input_boxes,  # (n, 4)
    mask=masks.astype(bool),  # (n, h, w)
    class_id=class_ids
)

box_annotator = sv.BoxAnnotator()
annotated_frame = box_annotator.annotate(scene=img.copy(), detections=detections)

label_annotator = sv.LabelAnnotator()
annotated_frame = label_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels)
cv2.imwrite(os.path.join(OUTPUT_DIR, "groundingdino_annotated_image.jpg"), annotated_frame)
plt.figure(figsize=(9, 6))
plt.title("annotated_frame")
plt.imshow(annotated_frame[:,:,::-1])

mask_annotator = sv.MaskAnnotator()
annotated_frame = mask_annotator.annotate(scene=annotated_frame, detections=detections)
cv2.imwrite(os.path.join(OUTPUT_DIR, "grounded_sam2_annotated_image_with_mask.jpg"), annotated_frame)
plt.figure(figsize=(9, 6))
plt.title("annotated_frame")
plt.imshow(annotated_frame[:,:,::-1])

GroundingDINO's box results accurately detect the two entity classes, socks and guitar:

SAM2's segmentation results are shown below:

Convert the results into the COCO (RLE) data format:

python
def single_mask_to_rle(mask):
    rle = mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0]
    rle["counts"] = rle["counts"].decode("utf-8")
    return rle

if DUMP_JSON_RESULTS:
    # convert mask into rle format
    mask_rles = [single_mask_to_rle(mask) for mask in masks]

    input_boxes = input_boxes.tolist()
    scores = scores.tolist()
    # save the results in standard format
    results = {
        "image_path": img_path,
        "annotations" : [
            {
                "class_name": class_name,
                "bbox": box,
                "segmentation": mask_rle,
                "score": score,
            }
            for class_name, box, mask_rle, score in zip(class_names, input_boxes, mask_rles, scores)
        ],
        "box_format": "xyxy",
        "img_width": w,
        "img_height": h,
    }
    
    with open(os.path.join(OUTPUT_DIR, "grounded_sam2_local_image_demo_results.json"), "w") as f:
        json.dump(results, f, indent=4)
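
An optional sanity check (not part of the demo script): decode one RLE back into a binary mask and confirm the round trip preserves the pixels. Note that pycocotools expects the counts field as bytes, while the JSON dump stores it as a string:

python
import numpy as np
import pycocotools.mask as mask_util

rle = dict(mask_rles[0])
rle["counts"] = rle["counts"].encode("utf-8")  # decode() expects bytes
decoded = mask_util.decode(rle)                # (H, W) uint8 binary mask
assert np.array_equal(decoded.astype(bool), masks[0].astype(bool))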