[Deep Learning] PixArt-Sigma in Practice [3]: Speed Test

python
import time

import torch
from diffusers import Transformer2DModel, PixArtSigmaPipeline
from diffusers import ConsistencyDecoderVAE

device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
weight_dtype = torch.float16

pipe = PixArtSigmaPipeline.from_pretrained(
    "./PixArt-Sigma-XL-2-1024-MS",
    torch_dtype=weight_dtype,
    use_safetensors=True,
)
pipe.to(device)

# transformer = Transformer2DModel.from_pretrained(
#     # "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
#     # "/ssd/xiedong/PixArt/PixArt-Sigma-XL-2-2K-MS",
#     "/ssd/xiedong/PixArt/PixArt-Sigma-XL-2-2K-MS",
#     subfolder='transformer',
#     torch_dtype=weight_dtype,
# )
# pipe = PixArtSigmaPipeline.from_pretrained(
#     # "PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers",
#     "/ssd/xiedong/PixArt/PixArt-sigma/output/pixart_sigma_sdxlvae_T5_diffusers",
#     transformer=transformer,
#     torch_dtype=weight_dtype,
#     use_safetensors=True,
# )
# pipe.vae = ConsistencyDecoderVAE.from_pretrained("/ssd/xiedong/PixArt/consistency-decoder", torch_dtype=torch.float16)
# pipe.to(device)

# Enable memory optimizations.
# pipe.enable_model_cpu_offload()

time1 = time.time()
prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt).images[0]
time2 = time.time()
print(f"time use:{time2 - time1}")
image.save("./cactus.png")

time1 = time.time()
prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt).images[0]
time2 = time.time()
print(f"time use:{time2 - time1}")
image.save("./cactus.png")

On an A100, 20 denoising steps take about 4.4 seconds (second run, after warm-up).

Loading pipeline components...: 0%| | 0/5 [00:00<?, ?it/s]You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565

Loading pipeline components...: 60%|██████ | 3/5 [00:01<00:01, 1.65it/s]

Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]

Loading checkpoint shards: 50%|█████ | 1/2 [00:01<00:01, 1.83s/it]

Loading checkpoint shards: 100%|██████████| 2/2 [00:03<00:00, 1.70s/it]

Loading pipeline components...: 100%|██████████| 5/5 [00:11<00:00, 2.29s/it]

100%|██████████| 20/20 [00:05<00:00, 3.89it/s]

time use:6.027105093002319

100%|██████████| 20/20 [00:04<00:00, 4.94it/s]

time use:4.406545162200928
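The two timings above show why the first call is slower: the initial run pays warm-up costs (CUDA context setup, kernel compilation, caches), so only later runs reflect steady-state speed. A small helper that discards warm-up runs and averages over several timed runs gives steadier numbers. This is a minimal sketch, not from the original post; the `benchmark` helper and the stand-in workload are illustrative, and with the pipeline above you would pass something like `lambda: pipe(prompt).images[0]` (the `pipe(...)` call blocks until images are ready, so wall-clock timing is valid; if you time lower-level CUDA code yourself, call `torch.cuda.synchronize()` before reading the clock):

```python
import time

def benchmark(fn, warmup=1, runs=3):
    """Average wall-clock time of fn over `runs` calls, after `warmup`
    untimed calls that absorb one-time costs (compilation, caching)."""
    for _ in range(warmup):
        fn()  # warm-up runs, not timed
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Cheap stand-in workload; replace with the real pipeline call, e.g.
#   avg = benchmark(lambda: pipe(prompt).images[0])
avg = benchmark(lambda: sum(range(100_000)))
print(f"avg time per run: {avg:.6f}s")
```

`time.perf_counter()` is preferred over `time.time()` for intervals because it is monotonic and has higher resolution.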
