ValueError: You cannot perform fine-tuning on purely quantized models.

When fine-tuning an 8-bit or 4-bit quantized model with peft, you may hit this error:

ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co/docs/transformers/peft for more details

The quantized base weights are frozen and cannot receive gradients directly, so the Trainer refuses to start. The fix is what the message suggests: attach trainable adapters (e.g. LoRA) on top of the quantized model instead of training it directly.