ONNX Quantization
https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html
Quantization Overview
Quantization in ONNX Runtime refers to 8-bit linear quantization of an ONNX model.
During quantization, floating point values are mapped to an 8-bit quantization space of the form: val_fp32 = scale * (val_quantized - zero_point)
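As an illustration of this affine mapping, here is a minimal NumPy sketch (not the ONNX Runtime implementation); the helper names, the signed int8 range [-128, 127], and the example scale/zero_point values are assumptions chosen for demonstration only:

```python
import numpy as np

def quantize(x_fp32, scale, zero_point, qmin=-128, qmax=127):
    """Map float32 values into the (assumed int8) quantization space."""
    q = np.round(x_fp32 / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(x_q, scale, zero_point):
    """Recover approximate floats: val_fp32 = scale * (val_quantized - zero_point)."""
    return scale * (x_q.astype(np.float32) - zero_point)

# Round trip with illustrative parameters
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 128, 0          # hypothetical symmetric int8 parameters
x_q = quantize(x, scale, zero_point)       # [-128, 0, 64, 127]
x_hat = dequantize(x_q, scale, zero_point) # [-1.0, 0.0, 0.5, ~0.992]
```

The round trip is lossy: values are rounded to the nearest quantization step and clipped to the quantization range, which is the source of quantization error.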
scale is a positive real number used to map the floating point numbers to a quantization space. It is calculated as follows:
For asymmetric quantization:
scale = (data_range_max - data_range_min) / (quantization_range_max - quantization_range_min)
For symmetric quantization:
scale = max(abs(data_range_max), abs(data_range_min)) * 2 / (quantization_range_max - quantization_range_min)
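The two scale formulas above can be written directly in code. The sketch below assumes a uint8 target range [0, 255] for asymmetric quantization and an int8 range [-128, 127] for symmetric quantization; the function names and example data range are illustrative only:

```python
def asymmetric_scale(data_min, data_max, qmin=0, qmax=255):
    # scale = (data_range_max - data_range_min) / (quantization_range_max - quantization_range_min)
    return (data_max - data_min) / (qmax - qmin)

def symmetric_scale(data_min, data_max, qmin=-128, qmax=127):
    # scale = max(abs(data_range_max), abs(data_range_min)) * 2 / (quantization_range_max - quantization_range_min)
    return max(abs(data_max), abs(data_min)) * 2 / (qmax - qmin)

# Example: a tensor observed to span [-0.4, 1.2]
print(asymmetric_scale(-0.4, 1.2))  # 1.6 / 255  ≈ 0.00627
print(symmetric_scale(-0.4, 1.2))   # 2.4 / 255  ≈ 0.00941
```

Note that symmetric quantization sizes the scale from the larger absolute end of the data range, so a strongly skewed range wastes part of the quantization space but keeps zero_point fixed.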
zero_point represents zero in the quantization space. It is important that the floating point zero value be exactly representable in the quantization space, because zero padding is used in many CNNs; if 0 cannot be represented uniquely after quantization, accuracy errors result.
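One common way to guarantee this (a sketch under the same assumed uint8 range as above, not necessarily how ONNX Runtime computes it internally) is to derive zero_point as a rounded integer inside the quantization range, so that dequantizing zero_point yields exactly 0.0:

```python
def asymmetric_zero_point(data_min, scale, qmin=0, qmax=255):
    # Choose the integer quantized value that maps back to fp32 zero:
    # scale * (zero_point - zero_point) == 0.0 exactly, since zero_point is an integer
    # clamped into the quantization range.
    zp = round(qmin - data_min / scale)
    return int(min(max(zp, qmin), qmax))

# Example using the asymmetric scale for a [-0.4, 1.2] data range
scale = asymmetric_scale(-0.4, 1.2)
zp = asymmetric_zero_point(-0.4, scale)  # 64 for this range
```

For symmetric quantization over a signed range such as [-128, 127], zero_point is simply 0, so floating point zero is representable by construction.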