Qwen2-VL multimodal vision-language model: image and video usage examples

Reference:

https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct

Model:

```bash
export HF_ENDPOINT=https://hf-mirror.com

huggingface-cli download --resume-download --local-dir-use-symlinks False Qwen/Qwen2-VL-2B-Instruct --local-dir qwen2-vl
```
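
Equivalently, the download can be scripted with `huggingface_hub` (a minimal sketch; it assumes the same hf-mirror endpoint as above):

```python
# Sketch: the same download via huggingface_hub instead of the CLI.
# The mirror endpoint must be set before importing huggingface_hub.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Qwen/Qwen2-VL-2B-Instruct",
    local_dir="qwen2-vl",
)
```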

Installation:

Versions installed here: transformers-4.45.0.dev0, accelerate-0.34.2, safetensors-0.4.5

```bash
pip install git+https://github.com/huggingface/transformers
pip install 'accelerate>=0.26.0'
```
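
A quick sanity check that the expected versions landed (a sketch; exact version strings will differ as the transformers dev branch moves):

```python
# Print the installed versions to compare against those listed above.
import transformers
import accelerate
import safetensors

print(transformers.__version__)  # a 4.45.0.dev0-style dev build at the time of writing
print(accelerate.__version__)    # should satisfy >= 0.26.0
print(safetensors.__version__)
```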

Code:

Single image

```python
from PIL import Image
import requests
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "/ai/qwen2-vl", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("/ai/qwen2-vl")
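
# Optional (assumes flash-attn is installed): the model card also shows
# loading with FlashAttention-2 to reduce memory use on long visual inputs:
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "/ai/qwen2-vl",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )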

# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]


# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'

inputs = processor(
    text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
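
The generate, trim, and decode steps repeat in every example below, so they can be factored into a small helper (a sketch; the name `generate_text` is not from the original post):

```python
def generate_text(model, processor, inputs, max_new_tokens=128):
    """Generate and return only the newly produced text.

    Trims the prompt tokens from each output sequence before decoding,
    exactly as the inline code above does.
    """
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    trimmed = [
        out_ids[len(in_ids):]
        for in_ids, out_ids in zip(inputs.input_ids, output_ids)
    ]
    return processor.batch_decode(
        trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=True
    )
```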

This is the image (demo.jpeg, linked above):


Asking in Chinese

```python
# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
            },
            {"type": "text", "text": "描述下这张图片."},
        ],
    }
]


# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>描述下这张图片.<|im_end|>\n<|im_start|>assistant\n'

inputs = processor(
    text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
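
The same conversation list can be extended into a multi-turn exchange: append the assistant's answer and a new user message, then re-apply the chat template (a sketch; the follow-up question is illustrative):

```python
# Append the previous answer and a follow-up question to the conversation.
conversation.append(
    {"role": "assistant", "content": [{"type": "text", "text": output_text[0]}]}
)
conversation.append(
    {"role": "user", "content": [{"type": "text", "text": "How many people are in the image?"}]}
)

# Re-run the template and generation; the same image is passed in again.
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(
    text=[text_prompt], images=[image], padding=True, return_tensors="pt"
).to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=128)
```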

Multiple images

```python
def load_images(image_info):
    images = []
    for info in image_info:
        if "image" in info:
            if info["image"].startswith("http"):
                image = Image.open(requests.get(info["image"], stream=True).raw)
            else:
                image = Image.open(info["image"])
            images.append(image)
    return images

# Messages containing multiple images and a text query
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "/ai/fight.png"},
            {"type": "image", "image": "/ai/long.png"},
            {"type": "text", "text": "描述下这两张图片"},
        ],
    }
]

# Load images
image_info = messages[0]["content"][:2]  # Extract image info from the message
images = load_images(image_info)

# Preprocess the inputs
text_prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

inputs = processor(
    text=[text_prompt], images=images, padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
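
With several images the visual token count grows quickly. Per the model card, the processor accepts min_pixels / max_pixels arguments that cap each image's resolution (the bounds below are the model card's suggested range; tune them for your GPU):

```python
# Bound per-image resolution to control visual token count and memory use.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "/ai/qwen2-vl", min_pixels=min_pixels, max_pixels=max_pixels
)
```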

Video

Install

```bash
pip install qwen-vl-utils
```

```python
from qwen_vl_utils import process_vision_info

# Messages containing a list of image frames treated as a video, plus a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": [
                    "file:///path/to/frame1.jpg",
                    "file:///path/to/frame2.jpg",
                    "file:///path/to/frame3.jpg",
                    "file:///path/to/frame4.jpg",
                ],
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]
# Messages containing a video file and a text query
# (this assignment replaces the frame-list variant above; only it is used below)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "/ai/血液从上肢流入上腔静脉.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "描述下这个视频"},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
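
To debug video sampling, the processor's outputs can be inspected before generation; pixel_values_videos and video_grid_thw are the Qwen2-VL processor's output keys for video inputs (a quick sketch):

```python
# Inspect the processed video tensors before calling generate().
print(inputs.keys())
print(inputs["pixel_values_videos"].shape)  # flattened patch sequence for all frames
print(inputs["video_grid_thw"])             # temporal/height/width grid per video
```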