Randomly generating PyTorch operator test sequences while keeping operator parameters legal

Background:

1. Some operators that manipulate tensor dimensions produce correct results in single-operator tests, but when several of them are combined the results are wrong. Given a list of operators, can we randomly generate combinations of them? (A minimal illustration of this failure mode follows.)
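One plausible way such combinations go wrong (my illustration, not from the original post): an operator like transpose returns a non-contiguous view, and a following view either raises an error or, on a backend that mishandles strides, silently reads the data in the wrong order. A minimal sketch:

```python
import torch

t = torch.arange(6).reshape(2, 3)
tt = t.transpose(0, 1)              # non-contiguous view over the same storage
print(tt.is_contiguous())           # False
try:
    tt.view(6)                      # view() requires contiguous memory
except RuntimeError as e:
    print("view failed:", e)
print(tt.contiguous().view(6))      # tensor([0, 3, 1, 4, 2, 5])
print(tt.reshape(6))                # reshape() copies when needed, same values
```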

Functional description:

1. This program generates random tensors in a CUDA environment and applies a series of randomly selected operations to them.

2. The program first randomly generates a tensor's shape and contents, then randomly selects an operation (such as reshape, transpose, or matmul) and generates valid parameters for executing it.

3. Finally, it outputs the transformed tensor and prints information about the operation performed.

4. The whole process performs 10 operations under each of a series of different random seeds (100 in the original description; the demo code below uses 2) to ensure operation diversity and result randomness.

Summary: generating PyTorch operator-combination test cases via multi-turn LLM dialogue

Original goal: given a list of operators, automatically generate random combination tests of the operators in the list, covering different shapes and supporting combinations of arbitrarily many operators.

Original plan (fully automatic LLM generation):
1. Tested qwen-max, kimi moonshot-v1-128k, ERNIE-4.0-8K, sparkai (3.5), and yi-large (each vendor's latest model at the time).
2. All of these models could generate unit-test cases as requested, but almost all of the generated code failed at runtime (over 95% of the failures were shape mismatches).
3. Some models could fix the bugs after a few rounds of interaction, but the overall results were unsatisfactory.
4. Perhaps the LLMs do not know the constraints of PyTorch operators well enough; feeding the operators' interface documentation to the LLM in a few-shot prompt might help (see the sketch below).
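A minimal sketch of that few-shot idea (the prompt text, doc excerpts, and structure here are hypothetical, not from the original experiment):

```python
# Hypothetical few-shot prompt builder: pair each operator with a documentation
# excerpt stating its constraint, plus one known-good example call.
OPERATOR_DOCS = {
    "view": "Tensor.view(*shape): prod(shape) must equal numel(); "
            "the tensor must be contiguous.",
    "transpose": "Tensor.transpose(dim0, dim1): swaps two dimensions "
                 "and returns a non-contiguous view.",
}
FEW_SHOT_EXAMPLE = (
    "x = torch.randn(6, 4)   # 24 elements\n"
    "y = x.view(2, 12)       # legal: 2 * 12 == 24\n"
)
prompt = (
    "Write a runnable PyTorch test that chains the operators below, "
    "respecting each stated constraint.\n"
    + "\n".join(f"- {name}: {doc}" for name, doc in OPERATOR_DOCS.items())
    + "\nExample:\n" + FEW_SHOT_EXAMPLE
)
```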

The compromise:
1. The requirement was therefore broken down and, through several rounds of interaction with GPT-4o, the code for this module was generated. It works correctly and was subsequently added to the unit tests.

Code

```python
import torch
import random
from functools import reduce
from operator import mul
import numpy as np

max_size = 4096  # maximum size of each dimension
max_tensor_elements = 1 * 4096 * 4096  # cap on the total number of elements in a tensor

min_dim_size = 1  # minimum dimension size
max_dim_size = max_size  # enlarging this range can generate qualifying sizes faster

def generate_random_shape(dim, max_attempts=10):
    for _ in range(max_attempts):
        shape = [random.randint(min_dim_size, max_dim_size) for _ in range(dim)]
        if reduce(mul, shape, 1) <= max_tensor_elements:
            return tuple(shape)
    # Fallback when attempts run out: take one more random shape and halve its dimensions one by one until the element cap is met
    shape = [random.randint(1, max_size) for _ in range(dim)]
    current_elements = reduce(mul, shape, 1)
    while current_elements > max_tensor_elements:
        for i in range(len(shape)):
            if shape[i] > 1:
                shape[i] //= 2
                current_elements = reduce(mul, shape, 1)
                if current_elements <= max_tensor_elements:
                    break
    return tuple(shape)

def generate_random_input(shape):
    return torch.randn(shape).to("cuda").half()

def generate_random_operator(input_shape):
    operators = ['unsqueeze', 'repeat', 'permute', 'transpose', 'reshape', 'expand', 'contiguous', 'matmul', 'mul', 'concat', 'view']
    return random.choice(operators)

def generate_random_reshape(input_shape):
    # total number of elements in the input tensor
    total_elements = np.prod(input_shape)
    divisors = []
    # collect all divisors of total_elements
    for i in range(1, int(np.sqrt(total_elements)) + 1):
        if total_elements % i == 0:
            divisors.append(i)
            if i != total_elements // i:
                divisors.append(total_elements // i)
    dimensions = []
    remaining_elements = total_elements
    # randomly pick new dimensions while keeping the total element count unchanged
    while remaining_elements > 1 and len(dimensions) < len(input_shape):
        divisor = np.random.choice(divisors)
        dimensions.append(divisor)
        remaining_elements //= divisor
        divisors = [d for d in divisors if remaining_elements % d == 0]
    if remaining_elements > 1:
        dimensions.append(remaining_elements)    
    np.random.shuffle(dimensions)    
    return tuple(dimensions)
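# Note on generate_random_reshape: the factorization above guarantees
# prod(dimensions) == total_elements, so reshape/view on a freshly generated
# (contiguous) tensor is always legal.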

def generate_reshape_params(tensor):
    return generate_random_reshape(tensor.shape)

def random_transpose_params(tensor):
    return random.sample(range(tensor.dim()), 2)

def generate_repeat_params(input_shape):
    while True:
        repeats = [random.randint(1, 4) for _ in input_shape]
        if reduce(mul, [dim * repeat for dim, repeat in zip(input_shape, repeats)], 1) <= max_tensor_elements:
            return tuple(repeats)
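# Note on generate_repeat_params: repeat() multiplies each dimension by its
# repeat factor; candidate factors are re-drawn above until the resulting
# element count fits within max_tensor_elements.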

def generate_expand_params(input_shape):
    expanded_shape = []
    while True:
        expanded_shape = [random.randint(min(2,dim), dim*2) if dim == 1 else dim for dim in input_shape]
        if reduce(mul, expanded_shape, 1) <= max_tensor_elements:
            break
    return expanded_shape
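# Note on generate_expand_params: expand() may only enlarge dimensions of
# size 1, so non-singleton dimensions are kept unchanged above; that is what
# keeps the generated parameters legal.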

def generate_random_operator_parameters(input_shape, operator, input_tensor):
    if operator == 'unsqueeze':
        return (random.randint(0, len(input_shape) - 1),)
    if operator == 'repeat':
        return generate_repeat_params(input_shape)
    if operator == 'permute':
        return random.sample(range(len(input_shape)), len(input_shape))
    if operator == 'transpose':
        return random_transpose_params(input_tensor)
    if operator in ['reshape', 'view']:
        return generate_reshape_params(input_tensor)
    if operator == 'expand':
        return generate_expand_params(input_shape)
    if operator == 'matmul':
        if input_tensor.dim() == 1:
            return ()  # 1-D matmul is skipped in execute_operator
        # right-hand matrix shape: (last dim of input, random width); cap the
        # width so that the matmul result also respects max_tensor_elements
        rows = input_tensor.numel() // input_tensor.size(-1)
        max_width = max(1, min(max_size, max_tensor_elements // rows))
        return (input_tensor.size(-1), random.randint(1, max_width))
    if operator in ['contiguous','mul']:
        return ()
    if operator == 'concat':
        return (random.randint(0, len(input_shape) - 1),)

def execute_operator(input_tensor, operator, operator_parameters):
    if operator == 'unsqueeze':
        return input_tensor.unsqueeze(*operator_parameters)
    if operator == 'repeat':
        return input_tensor.repeat(operator_parameters)
    if operator == 'permute':
        return input_tensor.permute(operator_parameters)
    if operator == 'transpose':
        return input_tensor.transpose(*operator_parameters)
    if operator == 'reshape':
        return input_tensor.reshape(operator_parameters)
    if operator == 'view':
        return input_tensor.view(operator_parameters)    
    if operator == 'expand':
        return input_tensor.expand(operator_parameters)
    if operator == 'contiguous':
        return input_tensor.contiguous()
    if operator == 'matmul':
        if input_tensor.dim() == 1:
            return input_tensor  # 1-D inputs are skipped (parameters were ())
        other = torch.randn(*operator_parameters).to(input_tensor.device).type_as(input_tensor)
        return torch.matmul(input_tensor, other)
    if operator == 'mul':
        return input_tensor * input_tensor
    if operator == 'concat':
        return torch.cat((input_tensor, input_tensor), dim=operator_parameters[0])

def main():
    for seed in range(2):  # raise to e.g. range(100) for broader coverage
        random.seed(seed)
        np.random.seed(seed)
        torch.random.manual_seed(seed)
        for i in range(10):
            input_shape = generate_random_shape(random.randint(2, 5))
            input_tensor = generate_random_input(input_shape)
            operator = generate_random_operator(input_shape)
            operator_parameters = generate_random_operator_parameters(input_shape, operator, input_tensor)
            output_tensor = execute_operator(input_tensor, operator, operator_parameters)
            print(f"seed:{seed:03d} seq:{i:02d} {operator:<10} input:{str(input_shape):<32} param:{str(operator_parameters):<32} output:{str(output_tensor.shape):<32}")
        print(output_tensor.cpu().numpy().reshape(-1)[:8])  # sample a few values of the last output as a sanity check
        torch.cuda.empty_cache()

if __name__ == '__main__':
    main()
```

Output

```bash
seed:000 seq:00 repeat     input:(7, 42, 26, 36, 56)              param:(1, 1, 1, 1, 1)                  output:torch.Size([7, 42, 26, 36, 56])
seed:000 seq:01 view       input:(248, 227, 276)                  param:(92, 908, 186)                   output:torch.Size([92, 908, 186])
seed:000 seq:02 view       input:(18, 21, 51, 32, 17)             param:(17, 4536, 136)                  output:torch.Size([17, 4536, 136])
seed:000 seq:03 reshape    input:(2548, 3565)                     param:(644, 65, 217)                   output:torch.Size([644, 65, 217])
seed:000 seq:04 reshape    input:(46, 42, 14, 57, 7)              param:(28, 266, 3, 483)                output:torch.Size([28, 266, 3, 483])
seed:000 seq:05 contiguous input:(222, 100, 597)                  param:()                               output:torch.Size([222, 100, 597])
seed:000 seq:06 view       input:(15, 27, 56, 8, 59)              param:(3, 3, 20160, 1, 59)             output:torch.Size([3, 3, 20160, 1, 59])
seed:000 seq:07 view       input:(1461, 1161)                     param:(188469, 9)                      output:torch.Size([188469, 9])
seed:000 seq:08 reshape    input:(19, 29, 19, 17, 54)             param:(31407, 1, 3, 17, 6, 1)          output:torch.Size([31407, 1, 3, 17, 6, 1])
seed:000 seq:09 transpose  input:(12, 126, 46, 157)               param:[2, 3]                           output:torch.Size([12, 126, 157, 46])
[-0.581   0.568   1.187   2.46   -0.1392 -0.3362  0.2076 -0.662 ]
seed:001 seq:00 view       input:(119, 354, 236)                  param:(4, 1, 17, 146202)               output:torch.Size([4, 1, 17, 146202])
seed:001 seq:01 reshape    input:(60, 961, 178)                   param:(3, 3421160)                     output:torch.Size([3, 3421160])
seed:001 seq:02 expand     input:(16, 10, 34, 37, 58)             param:[16, 10, 34, 37, 58]             output:torch.Size([16, 10, 34, 37, 58])
seed:001 seq:03 concat     input:(12, 44, 12, 26, 55)             param:(1,)                             output:torch.Size([12, 88, 12, 26, 55])
seed:001 seq:04 expand     input:(48, 9, 28, 20, 68)              param:[48, 9, 28, 20, 68]              output:torch.Size([48, 9, 28, 20, 68])
seed:001 seq:05 repeat     input:(16, 16, 162, 233)               param:(1, 1, 1, 1)                     output:torch.Size([16, 16, 162, 233])
seed:001 seq:06 expand     input:(25, 426, 19, 63)                param:[25, 426, 19, 63]                output:torch.Size([25, 426, 19, 63])
seed:001 seq:07 permute    input:(153, 153, 380)                  param:[2, 1, 0]                        output:torch.Size([380, 153, 153])
seed:001 seq:08 permute    input:(3091, 1445)                     param:[1, 0]                           output:torch.Size([1445, 3091])
seed:001 seq:09 mul        input:(142, 254, 388)                  param:()                               output:torch.Size([142, 254, 388])
[3.31   0.3372 0.2354 0.1373 0.594  2.326  0.7344 2.16  ]
```
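As the log shows, the module applies each random operator to a freshly generated tensor. The original goal also calls for combinations of arbitrarily many operators; below is a minimal sketch of how the same building blocks could be chained, feeding each output back in as the next input (my extension, not part of the generated module; it reuses the functions defined above):

```python
def run_chain(seed, chain_len=5):
    """Apply a random sequence of operators to one tensor, end to end."""
    random.seed(seed); np.random.seed(seed); torch.random.manual_seed(seed)
    tensor = generate_random_input(generate_random_shape(random.randint(2, 5)))
    for step in range(chain_len):
        if tensor.dim() < 2:
            break  # several parameter generators assume at least 2 dimensions
        shape = tuple(tensor.shape)
        op = generate_random_operator(shape)
        params = generate_random_operator_parameters(shape, op, tensor)
        if op == 'view' and not tensor.is_contiguous():
            tensor = tensor.contiguous()  # view() requires contiguous memory
        tensor = execute_operator(tensor, op, params)
        print(f"step:{step:02d} {op:<10} params:{params} -> {tuple(tensor.shape)}")
    return tensor
```

Chaining makes contiguity and shape constraints interact across steps, which is exactly the class of combined-operator bugs described in the background section.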