Meta Llama 3 Local Deployment

Thanks for reading.

Environment setup

Project files

After downloading the project files (the official Llama 3 repository), open a command terminal in the repository root (cmd on Windows, a terminal on Linux; if you use conda, activate your environment first).

Run:

bash
pip install -e .

Don't close this console, since the model still needs to be downloaded; leaving the install running in the background saves time.
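
Once the editable install finishes, you can sanity-check it from a Python shell (a minimal check; it assumes the repo installs its package under the name llama, the same name the chat script below imports from):

python
# Verify the editable install: the llama package should be importable
# and should resolve to the repository you just installed from.
import llama
print(llama.__file__)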

Model access request link

Copy the download link shown in the figure.

Then, back in the console from before, run:

bash
bash download.sh

When the script asks you to verify, just paste the link you copied.

If you get an error that wget is not found, click here to download wget,

then place it under C:\Windows\System32.
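
You can then confirm that Windows actually finds wget on the PATH (a quick check using only Python's standard library):

python
# Check whether wget is discoverable on PATH after copying it into System32.
import shutil
print(shutil.which("wget"))  # prints the full path to wget.exe, or None if it is still missing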

bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
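
If torchrun fails with CUDA or out-of-memory errors, it is worth confirming that PyTorch can actually see your GPU before digging further (a minimal check; how much memory the 8B Instruct model needs depends on your precision and settings):

python
# Quick check that PyTorch sees a CUDA device and how much memory it has.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GiB")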

Wrapping up

Create a chat.py script:

python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed in accordance with the terms of the Llama 3 Community License Agreement.

from typing import List, Optional

import fire

from llama import Dialog, Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    """
    Examples to run with the models finetuned for chat. Prompts correspond to chat
    turns between the user and the assistant, with the final turn always coming from the user.

    An optional system prompt at the beginning to control how the model should respond
    is also supported.

    The context window of llama3 models is 8192 tokens, so `max_seq_len` needs to be <= 8192.

    `max_gen_len` is optional because finetuned models are able to stop generations naturally.
    """
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # A single-turn dialog: each request carries only the latest user message
    dialogs: List[Dialog] = [
        [{"role": "user", "content": ""}],  # placeholder, overwritten every loop iteration
    ]

    # Start the conversation loop
    while True:
        # Get user input
        user_input = input("You: ")
        
        # Exit loop if user inputs 'exit'
        if user_input.lower() == 'exit':
            break
        
        # Overwrite the single user turn with the new input (no history is kept)
        dialogs[0][0]["content"] = user_input

        # Use the generator to get model response
        result = generator.chat_completion(
            dialogs,
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]

        # Print model response
        print(f"Model: {result['generation']['content']}")

if __name__ == "__main__":
    fire.Fire(main)

Then run:

bash
torchrun --nproc_per_node 1 chat.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
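
Note that chat.py sends only the latest user message on every turn, so the model has no memory of earlier exchanges. If you want multi-turn context, one option is to accumulate the conversation in the dialog list, as in the sketch below (it replaces the while-loop body in chat.py and reuses the generator and sampling arguments defined there; the history will eventually exceed max_seq_len=512, so raise it or trim old turns):

python
# Sketch: keep conversation history so the model sees earlier turns.
# Drop-in replacement for the while-loop in chat.py; assumes the same
# generator built by Llama.build() and the same CLI arguments.
dialog = []  # grows by one user and one assistant message per turn

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break

    # Add the new user turn to the running history.
    dialog.append({"role": "user", "content": user_input})

    result = generator.chat_completion(
        [dialog],                 # chat_completion expects a batch of dialogs
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )[0]

    reply = result["generation"]["content"]
    print(f"Model: {reply}")

    # Add the assistant reply as well, so the next request includes it.
    dialog.append({"role": "assistant", "content": reply})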