ChatGLM2-6B Fine-Tuning Log [2]

  • Model inference test
    • The chatglm2-6b model before fine-tuning
    • Run python predict.py --mode glm2 --model_path chatglm2-6b/ (an inference sketch follows the log below)
    • Result log
bash
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2024-11-08 17:21:16.440515: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-08 17:21:16.562434: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-11-08 17:21:17.121689: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:21:17.121775: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:21:17.121784: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
Loading checkpoint shards:   0%|                                                                             | 0/7 [00:00<?, ?it/s]/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/modeling_utils.py:488: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(checkpoint_file, map_location=map_location)
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████| 7/7 [00:10<00:00,  1.52s/it]
Based on the text you provided, I can extract the following three related triples:

1. Performance fault -> engine coolant temperature is high
2. Component fault -> the fan always runs at low speed
3. Composition -> the high-speed gear does not work
4. Testing tool -> turning on the air conditioner

Among them, the first triple indicates that the engine has a performance fault, which causes the engine coolant temperature to be high. The second triple indicates that the fan has a component fault and always rotates at low speed. The third triple indicates that the high-speed gear does not work, possibly because some component has failed. The last triple indicates that a performance fault occurs when the air conditioner is turned on.
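For reference, the core of such an inference run is small. The sketch below is only an approximation of what predict.py does for the base model; the prompt text and generation call are assumptions, not copied from the script, and it relies on the standard ChatGLM2 trust_remote_code interface.

python
# Minimal sketch of base-model inference; the prompt below is a hypothetical
# extraction instruction, not the one predict.py actually uses.
from transformers import AutoTokenizer, AutoModel

model_path = "chatglm2-6b/"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda()
model = model.eval()

query = ("请从下面的文本中抽取出故障三元组：\n"
         "开空调时发动机水温高，风扇始终是低速转动，高速档不工作。")
response, history = model.chat(tokenizer, query, history=[])
print(response)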
  • The chatglm2-6b model after fine-tuning
    • Run python predict.py --mode glm2 --model_path output-glm2/epoch-1-step-90/
    • Result log
bash
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2024-11-08 17:30:12.764824: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-08 17:30:12.887176: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-11-08 17:30:13.455307: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:30:13.455391: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:30:13.455400: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
Loading checkpoint shards:   0%|                                                                             | 0/7 [00:00<?, ?it/s]/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/modeling_utils.py:488: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(checkpoint_file, map_location=map_location)
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████| 7/7 [00:10<00:00,  1.55s/it]
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/peft/utils/save_and_load.py:198: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  adapters_weights = torch.load(filename, map_location=torch.device(device))
发动机_部件故障_水温高
风扇_部件故障_始终是低速转动
高速档_部件故障_不工作
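Note the peft/utils/save_and_load.py warning above: the fine-tuned checkpoint is applied as a PEFT adapter on top of the base weights rather than loaded as a full model. Below is a minimal sketch of that loading path, assuming a standard PEFT adapter directory under output-glm2/epoch-1-step-90/; the exact adapter type depends on the training configuration.

python
# Sketch of attaching the fine-tuned PEFT adapter to the base ChatGLM2 model;
# assumes the checkpoint directory contains adapter_config.json plus adapter weights.
from transformers import AutoTokenizer, AutoModel
from peft import PeftModel

base_path = "chatglm2-6b/"
adapter_path = "output-glm2/epoch-1-step-90/"

tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_path, trust_remote_code=True).half().cuda()

# PeftModel.from_pretrained calls torch.load on the adapter weights,
# which is what triggers the weights_only FutureWarning seen in the log.
model = PeftModel.from_pretrained(base_model, adapter_path).eval()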
    • Run python predict.py --mode glm2 --model_path output-glm2/epoch-2-step-180/
    • Result log
bash
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2024-11-08 17:34:23.878094: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-08 17:34:24.001155: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-11-08 17:34:24.560807: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:34:24.560894: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-11.8/lib64:/usr/local/cuda-11.8/lib64:
2024-11-08 17:34:24.560903: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/utils/generic.py:311: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
Loading checkpoint shards:   0%|                                                                             | 0/7 [00:00<?, ?it/s]/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/transformers/modeling_utils.py:488: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(checkpoint_file, map_location=map_location)
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████| 7/7 [00:10<00:00,  1.55s/it]
/data/user23262833/.conda/envs/chatglm/lib/python3.8/site-packages/peft/utils/save_and_load.py:198: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  adapters_weights = torch.load(filename, map_location=torch.device(device))
发动机_部件故障_水温高
风扇_部件故障_低速转动
高速档_部件故障_不工作
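Both fine-tuned checkpoints now emit one triple per line in an underscore-separated entity_relation_description format, which is easy to post-process. The parser below is only an illustration of how these lines could be turned into tuples for comparison against gold annotations; it is not part of predict.py.

python
# Illustrative parser for the underscore-separated triples shown above.
from typing import List, Tuple

def parse_triples(text: str) -> List[Tuple[str, ...]]:
    triples = []
    for line in text.strip().splitlines():
        parts = line.strip().split("_", 2)  # entity, relation, fault description
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

output = "发动机_部件故障_水温高\n风扇_部件故障_低速转动\n高速档_部件故障_不工作"
print(parse_triples(output))
# -> [('发动机', '部件故障', '水温高'), ('风扇', '部件故障', '低速转动'), ('高速档', '部件故障', '不工作')]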