Converting HF models with llama.cpp and running GGUF with vLLM

Installing large models via Hugging Face on Linux

Hugging Face website:
https://huggingface.co/

bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda
eval "$($HOME/miniconda/bin/conda shell.bash hook)"
echo 'export PATH="$HOME/miniconda/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
conda create -n huggingface python=3.12
conda activate huggingface
pip install -U huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
export HF_HOME=/root/autodl-tmp/huggingface
(huggingface) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/huggingface# huggingface-cli login

    _|    _|  _|    _|    _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|_|_|_|    _|_|      _|_|_|  _|_|_|_|
    _|    _|  _|    _|  _|        _|          _|    _|_|    _|  _|            _|        _|    _|  _|        _|
    _|_|_|_|  _|    _|  _|  _|_|  _|  _|_|    _|    _|  _|  _|  _|  _|_|      _|_|_|    _|_|_|_|  _|        _|_|_|
    _|    _|  _|    _|  _|    _|  _|    _|    _|    _|    _|_|  _|    _|      _|        _|    _|  _|        _|
    _|    _|    _|_|      _|_|_|    _|_|_|  _|_|_|  _|      _|    _|_|_|      _|        _|    _|    _|_|_|  _|_|_|_|

    To log in, `huggingface_hub` requires a token generated from https://huggingface.co/settings/tokens .
Enter your token (input will not be visible): 
Add token as git credential? (Y/n) n
huggingface-cli download --resume-download deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
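
If you prefer to script the download instead of using the CLI, huggingface_hub exposes the same functionality from Python. A minimal sketch, assuming the same mirror endpoint and cache location as the exports above:

python
import os

# The mirror endpoint and cache location must be set before huggingface_hub is imported.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
os.environ["HF_HOME"] = "/root/autodl-tmp/huggingface"

from huggingface_hub import snapshot_download

# Downloads the full repository into the HF_HOME cache (resuming if interrupted)
# and returns the local snapshot path that vllm serve expects later.
local_path = snapshot_download(repo_id="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")
print(local_path)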

Running the model with vLLM

https://github.com/vllm-project/vllm
https://docs.vllm.ai/en/latest/getting_started/quickstart.html

bash
conda create -n vllm python=3.12
conda activate vllm
pip install vllm
Once the install succeeds, check the CUDA environment:

(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# 
python -c "import torch; print(torch.__version__)"
python -c "import torch; print(torch.cuda.is_available())"
python -c "import torch; print(torch.version.cuda)"  
2.6.0+cu124
True
12.4
(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# 

(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# vllm serve /root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60 --max_model_len 4096
(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# nvidia-smi
Wed Mar 26 17:00:01 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.107.02             Driver Version: 550.107.02     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:5A:00.0 Off |                  Off |
| 30%   31C    P8             27W /  450W |   22495MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# 

(base) root@autodl-container-16494bbe83-56d7d7c3:~# curl http://localhost:8000/v1/models
{"object":"list","data":[{"id":"/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60","object":"model","created":1742994474,"owned_by":"vllm","root":"/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60","parent":null,"max_model_len":4096,"permission":[{"id":"modelperm-9fd7c968ad76451bad886268be0ad783","object":"model_permission","created":1742994474,"allow_create_engine":false,"allow_sampling":true,"allow_logprobs":true,"allow_search_indices":false,"allow_view":true,"allow_fine_tuning":false,"organization":"*","group":null,"is_blocking":false}]}]}(base) root@autodl-container-16494bbe83-56d7d7c3:~# 
(base) root@autodl-container-16494bbe83-56d7d7c3:~# 

(base) root@autodl-container-16494bbe83-56d7d7c3:~# curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60",
        "prompt": "我考试作弊被老师抓住,老师让我写一篇检讨,我是中学生",
        "max_tokens": 3096,
        "temperature": 0.7
    }'
{"id":"cmpl-a713436c844d4d93a8aa84cd2809ea70","object":"text_completion","created":1742994986,"model":"/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60","choices":[{"index":0,"text":",该怎么写?我需要具体的结构和内容,不要太笼统。\n好的,我需要写一篇检讨,针对考试作弊的情况。我是中学生,所以内容应该符合这个年龄段的表达方式,不要太复杂或者太正式,但也要体现出真诚和反思。\n\n首先,检讨开头应该表达悔过的心情,说明自己认识到错误的严重性。然后,详细描述考试作弊的具体过程,包括当时的环境、自己的心理状态,以及为什么会有这样的决定。这部分要具体,比如考试前夜的失眠,身体和心理上的挣扎,最终在短时间内完成试卷的作答。\n\n接下来,检讨要表达对老师的感谢,说明自己深刻认识到自己的错误,以及希望老师能够原谅自己。同时,可以提到自己会更加努力学习,不作弊,不偷看,保持良好的学习习惯。\n\n结尾部分,可以承诺在之后的学习中更加用功,成绩不会再像这次那样下滑,表达出对未来的反思和改进的决心。\n\n整体结构应该是:开头表达悔过,中间详细描述错误过程,结尾表达感谢和承诺。语言要真诚,不要太浮夸,符合中学生的表达方式。\n\n现在,我需要把这些思路整理成一个具体的检讨内容,确保每部分都有足够的细节,同时保持整体的连贯性和逻辑性。\n</think>\n\n### 检讨\n\n对不起,老师。\n\n我知道,考试作弊是我不可原谅的行为。这次的错误让我感到无比愧疚和懊悔。考试作弊不仅违反了学校的规章制度,也破坏了班级的公平竞争,伤害了老师的辛勤教导和同学之间的友谊。\n\n考试作弊的经过是这样的:那天晚上,我因为学习压力过大,无法集中精力复习,导致第二天的考试无法正常发挥。在考试前夜,我失眠了,心里充满了焦虑和不安。最终,我决定铤而走险,通过抄袭同学的答案来完成试卷。虽然我试图欺骗自己和老师,但这次行为已经彻底暴露了我的本性。\n\n作弊后,我感到十分内疚。我知道,作弊不仅损害了老师的教学成果,也让我失去了原本应该拥有的自尊和自信。我意识到,作弊是错误的,它让我失去了与同学们公平竞争的机会,也让我在学习道路上失去了真诚的努力。\n\n老师,您的话语我牢记在心。您教会我们做人要诚实,作弊不仅伤害了老师,也伤害了整个班级的荣誉。我深刻认识到,作弊的后果是不可逆转的,它不仅让我失去了分数,也让我失去了成长的机会。\n\n从今往后,我决心改变。我不会再作弊,也不会偷看。我将以更加努力的态度对待学习,认真复习每节课的知识,不再触碰学习中的"捷径"。我相信,通过自己的努力,我能够取得更好的成绩,不辜负老师的期望。\n\n请相信,我一定会用自己的实际行动证明,作弊是错误的,我不会再犯这样的错误。我承诺,在未来的日子里,我将以更加真诚的态度对待学习,保持良好的学习习惯,不辜负老师和家人的期望。\n\n对不起,老师。我一定会努力改过自新!\n\n检讨人:  \n日期:","logprobs":null,"finish_reason":"stop","stop_reason":null,"prompt_logprobs":null}],"usage":{"prompt_tokens":18,"total_tokens":656,"completion_tokens":638,"prompt_tokens_details":null}}(base) root@autodl-container-16494bbe83-56d7d7c3:~# 


Watching the monitoring output:
INFO 03-26 21:16:23 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
INFO 03-26 21:16:26 [logger.py:39] Received request cmpl-a713436c844d4d93a8aa84cd2809ea70-0: prompt: '我考试作弊被老师抓住,老师让我写一篇检讨,我是中学生', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.95, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=3096, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [151646, 35946, 103960, 114640, 99250, 101049, 104609, 3837, 101049, 104029, 61443, 101555, 97557, 100219, 3837, 104198, 15946, 99720], lora_request: None, prompt_adapter_request: None.
INFO 03-26 21:16:26 [async_llm.py:221] Added request cmpl-a713436c844d4d93a8aa84cd2809ea70-0.
INFO 03-26 21:16:33 [loggers.py:80] Avg prompt throughput: 1.8 tokens/s, Avg generation throughput: 41.3 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 2.8%, Prefix cache hit rate: 0.0%  # the model is generating here at 41.3 tokens/s, a solid rate.
INFO:     127.0.0.1:49442 - "POST /v1/completions HTTP/1.1" 200 OK
INFO 03-26 21:16:43 [loggers.py:80] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 22.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
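
Besides the stdout log lines, the vLLM server also exposes Prometheus-style counters at /metrics on the same port. A minimal sketch for spot-checking token throughput; the exact metric names (e.g. vllm:generation_tokens_total) vary between vLLM versions, so treat them as assumptions:

python
import urllib.request

# Fetch the Prometheus metrics that vLLM serves next to the OpenAI endpoints.
with urllib.request.urlopen("http://localhost:8000/metrics") as resp:
    text = resp.read().decode()

# Keep only the token counters; metric names differ across vLLM versions.
for line in text.splitlines():
    if "tokens_total" in line and not line.startswith("#"):
        print(line)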


--max-model-len mainly bounds the context length of each request (prompt plus generated tokens) and thus how many requests fit in the KV cache concurrently; it does not directly change GPU memory usage, since vLLM preallocates its KV-cache pool from the gpu-memory-utilization budget either way.
(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# vllm serve /root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60 --max_model_len 17600
A value of 17600 exceeds what fits in GPU memory:
INFO 03-26 21:32:32 [monitor.py:33] torch.compile takes 7.17 s in total
INFO 03-26 21:32:33 [kv_cache_utils.py:566] GPU KV cache size: 17,536 tokens
INFO 03-26 21:32:33 [kv_cache_utils.py:569] Maximum concurrency for 17,500 tokens per request: 1.00x
So the maximum here is 17,536 tokens.
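
That 17,536-token ceiling is easy to sanity-check. Each cached token stores a key and a value vector for every layer; a back-of-the-envelope sketch, assuming the Qwen2.5-7B architecture this distill is based on (28 layers, 4 KV heads with GQA, head dim 128, fp16 cache):

python
# Rough KV-cache sizing for DeepSeek-R1-Distill-Qwen-7B.
# The architecture numbers are assumptions taken from the Qwen2.5-7B config.
layers, kv_heads, head_dim, dtype_bytes = 28, 4, 128, 2  # fp16 cache

bytes_per_token = 2 * layers * kv_heads * head_dim * dtype_bytes  # K and V
print(bytes_per_token // 1024, "KiB per token")  # 56 KiB

# ~15 GB of fp16 weights on a 24 GB card under the default
# gpu_memory_utilization=0.9 leaves roughly 1 GiB for the cache:
free_for_cache = 1 * 1024**3
print(free_for_cache // bytes_per_token, "tokens")  # on the order of 18k

The result lands in the same ballpark as the 17,536 tokens vLLM reports.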


Use --quantization to lower precision and shrink the model. With fp8 the weights take roughly half the memory, which frees space for a far larger KV cache: 124,736 tokens, i.e. 124,736 / 4,096 ≈ 30 concurrent full-length requests, matching the 30.45x concurrency reported below.
(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# vllm serve /root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60 --quantization fp8 --max-model-len 4096
INFO 03-26 21:39:48 [backends.py:132] Cache the graph of shape None for later use
INFO 03-26 21:40:18 [backends.py:144] Compiling a graph for general shape takes 32.74 s
INFO 03-26 21:40:28 [monitor.py:33] torch.compile takes 42.36 s in total
INFO 03-26 21:40:29 [kv_cache_utils.py:566] GPU KV cache size: 124,736 tokens
INFO 03-26 21:40:29 [kv_cache_utils.py:569] Maximum concurrency for 4,096 tokens per request: 30.45x
INFO 03-26 21:40:54 [gpu_model_runner.py:1534] Graph capturing finished in 24 secs, took 0.52 GiB


Querying the server from a Python script:
(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# cat test.py 
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ]
)
print("Chat response:", chat_response)
(vllm) root@autodl-container-16494bbe83-56d7d7c3:~# 

However you run it, vLLM claims nearly all GPU memory up front (gpu_memory_utilization defaults to 0.9); if you pass no flags at all, it loads with the model's default maximum context length.
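
The same knobs are available from Python for offline inference, which is useful when you want to cap memory explicitly. A minimal sketch, assuming the snapshot path from earlier; gpu_memory_utilization and max_model_len mirror the CLI flags:

python
from vllm import LLM, SamplingParams

# gpu_memory_utilization caps the fraction of VRAM vLLM preallocates;
# max_model_len bounds the per-request context, as with the CLI flags.
llm = LLM(
    model="/root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60",
    max_model_len=4096,
    gpu_memory_utilization=0.8,
)
outputs = llm.generate(["Tell me a joke."],
                       SamplingParams(max_tokens=128, temperature=0.7))
print(outputs[0].outputs[0].text)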

https://github.com/ggml-org/llama.cpp
https://www.youtube.com/watch?v=tcvXNThnHok&t=435s

Searching LM Studio's pre-quantized models:
https://huggingface.co/lmstudio-community
https://github.com/ggml-org/llama.cpp/blob/master/examples/quantize/README.md

Quantizing with llama.cpp

bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
conda create -n llamacpp python=3.12
conda activate llamacpp
pip install -r requirements.txt
mkdir build
cd build
cmake ..
make
(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# python convert_hf_to_gguf.py     /root/autodl-tmp/huggingface/hub/models--deepseek-ai--DeepSeek-R1-Distill-Qwen-7B/snapshots/916b56a44061fd5cd7d6a8fb632557ed4f724f60/
(convert_hf_to_gguf.py also accepts --outfile and --outtype to control the output path and precision; the ~15 GB deepseek7b.gguf used below is the 16-bit conversion output.)

(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp/build/bin# ./llama-quantize  ../../deepseek7b.gguf ../../deepseek7b-Q4_K_M.gguf Q4_K_M

(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# du -sh deepseek7b-Q4_K_M.gguf 
4.4G    deepseek7b-Q4_K_M.gguf
(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# du -sh deepseek7b.gguf 
15G     deepseek7b.gguf
(llamacpp) root@autodl-container-16494bbe83-56d7d7c3:~/autodl-tmp/llama.cpp# 
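
The size drop is about what Q4_K_M should give. A quick sanity check, assuming roughly 7.6B parameters (the du figures are rounded GiB, so treat the results as approximate):

python
# Effective bits per weight before and after quantization.
params = 7.6e9                             # approximate parameter count (assumption)
sizes_gib = {"fp16": 15.0, "Q4_K_M": 4.4}  # from du -sh above

for name, gib in sizes_gib.items():
    bits = gib * 1024**3 * 8 / params
    print(f"{name}: {bits:.1f} bits/weight")
# fp16 comes out near its nominal 16 bits; Q4_K_M lands around 5,
# consistent with its mixed 4/6-bit K-quant layout.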

https://vllm.hyper.ai/docs/quantization/gguf/

Testing after quantization. Note that vLLM's GGUF support is experimental; the vLLM docs recommend also passing --tokenizer with the original HF model path, since extracting the tokenizer from the GGUF is slower.

bash
vllm serve deepseek7b-Q4_K_M.gguf 
(base) root@autodl-container-16494bbe83-56d7d7c3:~# curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "deepseek7b-Q4_K_M.gguf",
        "prompt": "我考试作弊被老师抓住,老师让我写一篇检讨,我是中学生",
        "max_tokens": 3096,
        "temperature": 0.7
    }'
{"id":"cmpl-eab600ce936146f1b05bf97c706d6bb8","object":"text_completion","created":1743035371,"model":"deepseek7b-Q4_K_M.gguf","choices":[{"index":0,"text":",检讨要正式一点,但不要太长,大概500字左右。\n\n好的,用户需要写一篇检讨,他是个中学生,考试作弊被老师抓住,现在要写检讨。检讨要正式一点,但不要太长,大概500字左右。\n\n首先,检讨的结构通常包括引言、承认错误、表现分析、改正措施和总结。我得确保内容全面,同时语言正式但不过于冗长。\n\n引言部分,应该说明写检讨的原因,比如被发现作弊。然后承认错误,明确知道自己作弊了。\n\n接下来,分析错误的原因。可能包括对老师的信任不足,没意识到作弊的严重性,或者压力太大导致发挥失常。\n\n然后,提出改正措施,比如认真学习,避免作弊,积极参与课堂活动,向老师道歉。\n\n最后,总结部分表达决心和对未来的承诺,保证不再犯错。\n\n语言方面,要正式一些,用词准确,避免过于口语化。同时,控制在500字左右,不要太长。\n\n具体写的时候,可以先列个大纲,然后逐步填充内容。确保每部分内容简洁明了,逻辑清晰。\n\n可能还要注意用词恰当,比如"深刻认识"、"认真反思"、"深刻吸取教训"等,这些词汇能表达出内心的诚意和决心。\n\n最后,检查一下整体结构是否合理,内容是否全面,有没有遗漏的部分。\n\n总之,写检讨的时候要真实反映自己的行为和认识,体现出悔改和改正的决心,结构清晰,语言得体,符合学校的要求。\n</think>\n\n尊敬的老师:\n\n今天,我怀着无比愧疚的心情给您写下这份检讨。经过深刻反思,我认识到自己在考试中作弊的行为是绝对错误的,不仅损害了您的公平教学,也让我失去了应有的诚信和尊严。\n\n作弊的发生,是由于我对学习的重要性认识不足,加之考试压力带来的焦虑,导致我无法正常发挥水平。对此,我深感懊悔,并深刻吸取教训。\n\n为改掉这一坏习惯,我郑重承诺:一是今后一定严肃对待每一次考试,确保按时完成作业;二是加强自我管理,培养良好的学习习惯;三是主动向您表达歉意,并恳请给予我改过自新的机会。\n\n请相信,我已深刻认识自己的错误,并将以此为新的起点,严格要求自己。希望您能给我一次机会,我将用实际行动证明自己的决心。\n\n此致  \n敬礼!","logprobs":null,"finish_reason":"stop","stop_reason":151643,"prompt_logprobs":null}],"usage":{"prompt_tokens":17,"total_tokens":522,"completion_tokens":505,"prompt_tokens_details":null}}(base) root@autodl-container-16494bbe83-56d7d7c3:~# 
(base) root@autodl-container-16494bbe83-56d7d7c3:~#
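
The OpenAI-compatible endpoints behave the same against the GGUF server, so the earlier test.py only needs the model name changed. A minimal sketch using the completions endpoint:

python
from openai import OpenAI

# Same vLLM server as above, now backed by the Q4_K_M GGUF file.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")

completion = client.completions.create(
    model="deepseek7b-Q4_K_M.gguf",  # model id as registered by vllm serve
    prompt="Tell me a joke.",
    max_tokens=128,
    temperature=0.7,
)
print(completion.choices[0].text)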