Preface: while deploying GLM-4 I ran into several problems such as attributes that could not be found and unexpected arguments. Errors like these are usually caused by package version mismatches, so pay attention to whether the model code and the package versions you have installed are actually compatible.
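Since both of the issues below came down to versions, a quick first step is to print the versions actually installed in the environment and compare them against the repo's `requirements.txt`. A minimal sketch (the package list is my own choice, not something taken from the official demos):

```python
# Print the installed versions of the packages this post touches, so they can be
# compared against the versions pinned by the GLM-4 repo.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("transformers", "tokenizers", "torch", "vllm"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```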
> 【Running the official `vllm_cli_demo.py`】throws an error
**GLM-4:**

```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/app/glm4/code/GLM-4-main/basic_demo/vllm_cli_demo_self.py", line 176, in <module>
[rank0]:     asyncio.run(chat())
[rank0]:   File "/opt/conda/envs/chatglm4/lib/python3.9/asyncio/runners.py", line 44, in run
[rank0]:     return loop.run_until_complete(main)
[rank0]:   File "/opt/conda/envs/chatglm4/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
[rank0]:     return future.result()
[rank0]:   File "/app/glm4/code/GLM-4-main/basic_demo/vllm_cli_demo_self.py", line 169, in chat
[rank0]:     async for output in vllm_gen(LORA_PATH, enable_lora, messages, top_p, temperature, max_length):
[rank0]:   File "/app/glm4/code/GLM-4-main/basic_demo/vllm_cli_demo_self.py", line 100, in vllm_gen
[rank0]:     inputs = tokenizer.apply_chat_template(
[rank0]: AttributeError: 'tuple' object has no attribute 'apply_chat_template'
```

This problem is still unsolved. My guess is that the demo needs vLLM 0.6.4, but I cannot find that version at the moment, so I had to fall back to 0.6.3.post1; most likely it is this version mismatch that makes the `apply_chat_template` attribute unavailable. Since the vLLM package is large, trying out versions one by one is tedious, so for now transformers can simply be used in place of vLLM (a minimal sketch of that path is at the end of this post)!

> 【Running the official `trans_cli_demo.py`】throws an error

```
Traceback (most recent call last):
  File "/app/glm4/code/GLM-4-main/basic_demo/trans_cli_demo.py", line 64, in <module>
    model_inputs = tokenizer.apply_chat_template(
  File "/root/.cache/huggingface/modules/transformers_modules/glm4-models/tokenization_chatglm.py", line 220, in apply_chat_template
    output = self.batch_encode_plus(
  File "/opt/conda/envs/chatglm4/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3311, in batch_encode_plus
    return self._batch_encode_plus(
  File "/opt/conda/envs/chatglm4/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 892, in _batch_encode_plus
    batch_outputs = self._batch_prepare_for_model(
  File "/opt/conda/envs/chatglm4/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 970, in _batch_prepare_for_model
    batch_outputs = self.pad(
  File "/opt/conda/envs/chatglm4/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 3527, in pad
    outputs = self._pad(
TypeError: _pad() got an unexpected keyword argument 'padding_side'
```

For details, see this blog post: [GLM-4V-9B TypeError: ChatGLMTokenizer._pad() got an unexpected keyword argument 'padding_side'](https://blog.csdn.net/m0_60801087/article/details/143160274)

My fix was to downgrade transformers to 4.44.0. A screenshot of the final run is shown below:
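If downgrading transformers is not an option, another route I considered (my own assumption based on the error above, not an official fix) is to wrap the model repo's custom `_pad` so that keywords it does not recognize, such as `padding_side`, are dropped instead of raising a `TypeError`:

```python
# Hedged workaround sketch: wrap ChatGLMTokenizer._pad so that keyword arguments it
# does not accept (e.g. padding_side, passed by newer transformers) are dropped.
# MODEL_PATH is a placeholder for the local GLM-4 model directory.
import inspect

from transformers import AutoTokenizer

MODEL_PATH = "/path/to/glm-4-9b-chat"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

_orig_pad = tokenizer._pad                                # bound method from tokenization_chatglm.py
_accepted = set(inspect.signature(_orig_pad).parameters)  # keywords the custom _pad knows about

def _pad_compat(*args, **kwargs):
    # Drop anything the custom _pad would reject; the ChatGLM tokenizer pads on the left regardless.
    kwargs = {k: v for k, v in kwargs.items() if k in _accepted}
    return _orig_pad(*args, **kwargs)

tokenizer._pad = _pad_compat  # instance-level override; pad() will now call the wrapper
```

Downgrading, as described above, is still the simpler and better-tested option.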
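For reference, this is roughly what falling back from vLLM to transformers boils down to: a single chat turn driven by `apply_chat_template` and `generate`. A minimal sketch only; the model path and sampling parameters are my own placeholders, not the values used in the official demos:

```python
# Minimal transformers-only chat turn for GLM-4 (placeholder model path and
# generation parameters; requires a GPU with enough memory for the 9B weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/path/to/glm-4-9b-chat"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

messages = [{"role": "user", "content": "你好"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.8, temperature=0.8)

# Strip the prompt tokens and decode only the newly generated part.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```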