Alibaba Tongyi Qianwen Open-Sources the Qwen2.5 Model Series: Qwen2-VL-72B Rivals GPT-4

The Tongyi Qianwen (Qwen) team has announced that, three months after the release of Qwen2, the newest members of the Qwen family, the Qwen2.5 series of language models, are now open source. This is possibly one of the largest open-source releases to date, covering the general-purpose Qwen2.5 language models as well as Qwen2.5-Coder and Qwen2.5-Math, which specialize in coding and mathematics.

The Qwen2.5 series is pretrained on the latest large-scale dataset, containing up to 18 trillion tokens. Compared with Qwen2, the new models show significant gains in knowledge, coding ability, and mathematics. They support long-text processing, can generate up to 8K tokens of output, and maintain support for more than 29 languages.


The new models also bring marked improvements in instruction following, long-text generation, structured-data understanding, and structured output generation. In coding and mathematics in particular, Qwen2.5-Coder and Qwen2.5-Math are trained on domain-specific datasets and demonstrate much stronger specialist capabilities.

What's New in Qwen2-VL?

Key enhancements:

  • SoTA understanding of images of various resolutions and aspect ratios: Qwen2-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, and others.

  • Understanding of videos longer than 20 minutes: Qwen2-VL can comprehend videos over 20 minutes long for high-quality video-based question answering, dialogue, content creation, and more.

  • Agents that can operate phones, robots, and other devices: with complex reasoning and decision-making abilities, Qwen2-VL can be integrated with devices such as mobile phones and robots to act automatically based on the visual environment and text instructions.

  • Multilingual support: to serve global users, beyond English and Chinese, Qwen2-VL now understands text in images in many other languages, including most European languages, Japanese, Korean, Arabic, Vietnamese, and more.

Model architecture updates:

  • Naive Dynamic Resolution: unlike before, Qwen2-VL can handle arbitrary image resolutions, mapping them to a dynamic number of visual tokens for a more human-like visual processing experience.

  • Multimodal Rotary Position Embedding (M-RoPE): decomposes the positional embedding into parts that capture 1D textual, 2D visual, and 3D video positional information, strengthening the model's multimodal processing (see the sketch below).
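
To make the M-RoPE idea more concrete, here is a minimal illustrative sketch (our own simplification, not the official Qwen2-VL implementation): each token receives a (temporal, height, width) position triple, with all three components identical for text tokens, the spatial components varying over an image's patch grid, and the temporal component advancing per video frame. Rotary embeddings would then be applied per component on separate slices of the head dimension.

```python
# Illustrative sketch of M-RoPE-style position IDs (an assumption for exposition,
# not the official Qwen2-VL code). Each token gets a (temporal, height, width) triple.

def text_position_ids(num_tokens, start=0):
    # Plain text: all three components share the same sequential index.
    return [(start + i, start + i, start + i) for i in range(num_tokens)]

def image_position_ids(h_patches, w_patches, start=0):
    # Single image: the temporal component is constant; height/width index the patch grid.
    return [(start, start + h, start + w)
            for h in range(h_patches) for w in range(w_patches)]

def video_position_ids(frames, h_patches, w_patches, start=0):
    # Video: the temporal component advances once per frame.
    return [(start + t, start + h, start + w)
            for t in range(frames)
            for h in range(h_patches) for w in range(w_patches)]

# Example: 4 text tokens followed by the patches of a 2x3 image.
print(text_position_ids(4) + image_position_ids(2, 3, start=4))
```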

Image Benchmarks

| Benchmark | Previous SoTA (open-source LVLM) | Claude-3.5 Sonnet | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|
| MMMU (val) | 58.3 | 68.3 | 69.1 | 64.5 |
| DocVQA (test) | 94.1 | 95.2 | 92.8 | 96.5 |
| InfoVQA (test) | 82.0 | - | - | 84.5 |
| ChartQA (test) | 88.4 | 90.8 | 85.7 | 88.3 |
| TextVQA (val) | 84.4 | - | - | 85.5 |
| OCRBench | 852 | 788 | 736 | 877 |
| MTVQA | 17.3 | 25.7 | 27.8 | 30.9 |
| VCR (en easy) | 84.67 | 63.85 | 91.55 | 91.93 |
| VCR (zh easy) | 22.09 | 1.0 | 14.87 | 65.37 |
| RealWorldQA | 72.2 | 60.1 | 75.4 | 77.8 |
| MME (sum) | 2414.7 | 1920.0 | 2328.7 | 2482.7 |
| MMBench-EN (test) | 86.5 | 79.7 | 83.4 | 86.5 |
| MMBench-CN (test) | 86.3 | 80.7 | 82.1 | 86.6 |
| MMBench-V1.1 (test) | 85.5 | 78.5 | 82.2 | 85.9 |
| MMT-Bench (test) | 63.4 | - | 65.5 | 71.7 |
| MMStar | 67.1 | 62.2 | 63.9 | 68.3 |
| MMVet (GPT-4-Turbo) | 65.7 | 66.0 | 69.1 | 74.0 |
| HallBench (avg) | 55.2 | 49.9 | 55.0 | 58.1 |
| MathVista (testmini) | 67.5 | 67.7 | 63.8 | 70.5 |
| MathVision | 16.97 | - | 30.4 | 25.9 |

Video Benchmarks

| Benchmark | Previous SoTA (open-source LVLM) | Gemini 1.5-Pro | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|
| MVBench | 69.6 | - | - | 73.6 |
| PerceptionTest (test) | 66.9 | - | - | 68.0 |
| EgoSchema (test) | 62.0 | 63.2 | 72.2 | 77.9 |
| Video-MME (wo/w subs) | 66.3/69.6 | 75.0/81.3 | 71.9/77.2 | 71.2/77.8 |

Agent Benchmarks

| Category | Benchmark | Metric | Previous SoTA | GPT-4o | Qwen2-VL-72B |
|---|---|---|---|---|---|
| General | FnCall [1] | TM | - | 90.2 | 93.1 |
| | | EM | - | 50.0 | 53.2 |
| Game | Number Line | SR | 89.4 [2] | 91.5 | 100.0 |
| | BlackJack | SR | 40.2 [2] | 34.5 | 42.6 |
| | EZPoint | SR | 50.0 [2] | 85.5 | 100.0 |
| | Point24 | SR | 2.6 [2] | 3.0 | 4.5 |
| Android | AITZ | TM | 83.0 [3] | 70.0 | 89.6 |
| | | EM | 47.7 [3] | 35.3 | 72.1 |
| AI2THOR | ALFRED (valid-unseen) | SR | 67.7 [4] | - | 67.8 |
| | | GC | 75.3 [4] | - | 75.8 |
| VLN | R2R (valid-unseen) | SR | 79.0 | 43.7 [5] | 51.7 |
| | REVERIE (valid-unseen) | SR | 61.0 | 31.6 [5] | 31.0 |

SR, GC, TM, and EM are short for success rate, goal-condition success rate, type match, and exact match. ALFRED is supported by SAM [6].

  1. Self-curated function-calling benchmark by the Qwen team
  2. Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning
  3. Android in the Zoo: Chain-of-Action-Thought for GUI Agents
  4. ThinkBot: Embodied Instruction Following with Thought Chain Reasoning
  5. MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation
  6. Segment Anything

Multilingual Benchmarks

| Models | AR | DE | FR | IT | JA | KO | RU | TH | VI | AVG |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2-VL-72B | 20.7 | 36.5 | 44.1 | 42.8 | 21.6 | 37.4 | 15.6 | 17.7 | 41.6 | 30.9 |
| GPT-4o | 20.2 | 34.2 | 41.2 | 32.7 | 20.0 | 33.9 | 11.5 | 22.5 | 34.2 | 27.8 |
| Claude3 Opus | 15.1 | 33.4 | 40.6 | 34.4 | 19.4 | 27.2 | 13.0 | 19.5 | 29.1 | 25.7 |
| Gemini Ultra | 14.7 | 32.3 | 40.0 | 31.8 | 12.3 | 17.2 | 11.8 | 20.3 | 28.6 | 23.2 |

Quickstart

We provide a toolkit to make handling different kinds of visual input more convenient, including base64, URLs, and interleaved images and videos. Install it with the following command:

```bash
pip install qwen-vl-utils
```

The following code snippet shows how to use the chat model with transformers and qwen_vl_utils:

```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2-VL-72B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct")

# The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
# min_pixels = 256*28*28
# max_pixels = 1280*28*28
# processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Without qwen_vl_utils:

```python
from PIL import Image
import requests
import torch
from torchvision import io
from typing import Dict
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor

# Load the model in half-precision on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-72B-Instruct")

# Image
url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]


# Preprocess the inputs
text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'

inputs = processor(
    text=[text_prompt], images=[image], padding=True, return_tensors="pt"
)
inputs = inputs.to("cuda")

# Inference: Generation of the output
output_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(inputs.input_ids, output_ids)
]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```

Multi-Image Inference

```python
# Messages containing multiple images and a text query
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "Identify the similarities between these images."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Video Inference

```python
# Messages containing a list of image frames treated as a video, plus a text query
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": [
                    "file:///path/to/frame1.jpg",
                    "file:///path/to/frame2.jpg",
                    "file:///path/to/frame3.jpg",
                    "file:///path/to/frame4.jpg",
                ],
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]
# Alternative: messages containing a video file and a text query (this overwrites the frame-list example above)
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video1.mp4",
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

Batch Inference

```python
# Sample messages for batch inference
messages1 = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image1.jpg"},
            {"type": "image", "image": "file:///path/to/image2.jpg"},
            {"type": "text", "text": "What are the common elements in these pictures?"},
        ],
    }
]
messages2 = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]
# Combine messages for batch processing
messages = [messages1, messages2]

# Preparation for batch inference
texts = [
    processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    for msg in messages
]
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=texts,
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Batch Inference
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_texts = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_texts)
```

More Usage Tips

For input images, we support local files, base64, and URLs. For videos, we currently support local files only.

```python
# You can directly insert a local file path, a URL, or a base64-encoded image into the position where you want in the text.
## Local file path
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Image URL
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "http://path/to/your/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
## Base64 encoded image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "data:image;base64,/9j/..."},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```
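
If you need the base64 form, a small helper built only on the Python standard library can produce the data URI shown above (read_image_as_data_uri is our own illustrative name, not part of qwen-vl-utils):

```python
import base64

def read_image_as_data_uri(path: str) -> str:
    # Encode a local image as a base64 data URI matching the
    # "data:image;base64,..." format accepted above.
    with open(path, "rb") as f:
        return "data:image;base64," + base64.b64encode(f.read()).decode("utf-8")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": read_image_as_data_uri("/path/to/your/image.jpg")},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```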

Image Resolution for Better Performance

The model supports a wide range of input resolutions. By default it processes inputs at their native resolution, but higher resolutions improve performance at the cost of more computation. Users can set minimum and maximum pixel counts to find the configuration that best fits their needs, for example a token-count range of 256-1280, to balance speed and memory usage.

```python
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
```
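
As a rough guide to how the pixel bounds map to the visual-token budget: each visual token corresponds to a 28x28 pixel patch (which is why the bounds above are written as multiples of 28*28), so an image's token count is approximately its resized area divided by 784. The arithmetic below is our own illustration of that relationship, not library code:

```python
# Approximate relationship between pixel budget and visual-token count
# (illustration only; the processor applies its own resizing rules).
PATCH = 28

min_pixels = 256 * PATCH * PATCH    # lower bound, roughly 256 visual tokens
max_pixels = 1280 * PATCH * PATCH   # upper bound, roughly 1280 visual tokens

def approx_visual_tokens(height: int, width: int) -> int:
    return (height * width) // (PATCH * PATCH)

print(min_pixels, max_pixels)          # 200704 1003520
print(approx_visual_tokens(280, 420))  # 150 tokens for a 280x420 image
```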

In addition, we provide two methods for fine-grained control over the image size fed to the model:

Define min_pixels and max_pixels: the image will be resized to stay within the min_pixels and max_pixels range while preserving its aspect ratio.

Specify exact dimensions: set resized_height and resized_width directly. These values will be rounded to the nearest multiple of 28.

```python
# resized_height and resized_width
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "resized_height": 280,
                "resized_width": 420,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# min_pixels and max_pixels
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/your/image.jpg",
                "min_pixels": 50176,
                "max_pixels": 50176,
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
```

Qwen2.5

The corresponding quickstart for the text-only Qwen2.5-72B-Instruct model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-72B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
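
If you would rather see tokens as they are generated instead of decoding at the end, transformers' TextStreamer can be passed to generate. This is an optional sketch (ours, not from the original post) that reuses the model, tokenizer, and model_inputs objects defined above; since the introduction notes Qwen2.5 can generate up to 8K tokens, max_new_tokens can also be raised well beyond 512 for longer outputs.

```python
# Optional: stream the completion token by token instead of decoding at the end.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,  # raise this if you need longer outputs
    streamer=streamer,
)
```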

Limitations

Although Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known ones:

  • No audio support: the current model cannot understand the audio track of a video.
  • Data timeliness: the image dataset is current only up to June 2023, so information after that date may not be covered.
  • Restrictions on individuals and intellectual property (IP): the model has limited ability to recognize specific individuals or IP, and may not comprehensively cover all well-known people or brands.
  • Limited handling of complex instructions: understanding and executing complex multi-step instructions still needs strengthening.
  • Insufficient counting accuracy: object counting is not accurate, especially in complex scenes, and needs further improvement.
  • Weak spatial reasoning: especially in 3D space, the model's reasoning about object positional relationships is inadequate, making it difficult to judge relative positions precisely.

These limitations are ongoing directions for optimization and improvement, and the team is committed to continually improving the model's performance and range of applications.
