Meta Llama 3 Local Deployment

Thanks for reading.

Environment Setup

Project files: download Meta's llama3 repository, which contains download.sh and the example scripts used below.

After downloading, open a command terminal in the project root directory (cmd on Windows, a terminal on Linux; if you use conda, activate your environment first).

Run:

pip install -e .
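
To sanity-check the editable install, you can ask Python where the package landed. A minimal sketch using only the standard library (the package name llama comes from the repo):

# Minimal sketch: confirm the editable install of the llama package is visible.
import importlib.util

spec = importlib.util.find_spec("llama")
print("llama installed at:", spec.origin if spec else "NOT FOUND")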

Don't close this console, because you still need to download the model; keeping it open saves time.

Model access request link (apply on Meta's official Llama download page).

Copy the download link shown in the figure.

Then, back in the console from earlier, run:

bash download.sh

When the script prompts for verification, just paste the link you copied.

If the script fails because wget is missing, download a Windows build of wget and place wget.exe under C:\Windows\System32.
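
To confirm Windows can now find wget, a quick standard-library check (a minimal sketch):

# Minimal sketch: verify wget is reachable on PATH after copying it to System32.
import shutil

path = shutil.which("wget")
print("wget found at:", path if path else "NOT FOUND - re-check System32")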

Once the download completes, test the checkpoint with the repo's bundled example script:
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
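
Note that --nproc_per_node must match the model-parallel size of the checkpoint: 1 for the 8B models, 8 for 70B. Before launching, you can also check that the download produced the expected files; the file names below follow Meta's release layout for the 8B-Instruct weights (an assumption worth verifying against your own directory):

# Minimal sketch: check that the 8B-Instruct checkpoint directory looks complete.
from pathlib import Path

ckpt_dir = Path("Meta-Llama-3-8B-Instruct")
for name in ["consolidated.00.pth", "params.json", "tokenizer.model"]:
    status = "OK" if (ckpt_dir / name).exists() else "MISSING"
    print(f"{name}: {status}")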

Wrapping Up

Create a chat.py script:

# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed in accordance with the terms of the Llama 3 Community License Agreement.

from typing import List, Optional

import fire

from llama import Dialog, Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    """
    Interactive chat loop for the models finetuned for chat. Prompts consist of
    chat turns between the user and the assistant, with the final turn always
    coming from the user.

    An optional system prompt at the beginning to control how the model should respond
    is also supported.

    The context window of llama3 models is 8192 tokens, so `max_seq_len` needs to be <= 8192.

    `max_gen_len` is optional because finetuned models are able to stop generations naturally.
    """
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # Hold a single one-turn dialog; its user message is overwritten each loop iteration
    dialogs: List[Dialog] = [
        [{"role": "user", "content": ""}],  # Initialize with an empty user input
    ]

    # Start the conversation loop
    while True:
        # Get user input
        user_input = input("You: ")
        
        # Exit loop if user inputs 'exit'
        if user_input.lower() == 'exit':
            break
        
        # Overwrite the single user message with the new input (no history is kept)
        dialogs[0][0]["content"] = user_input

        # Use the generator to get model response
        result = generator.chat_completion(
            dialogs,
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]

        # Print model response
        print(f"Model: {result['generation']['content']}")

if __name__ == "__main__":
    fire.Fire(main)
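
As written, the loop overwrites the single user turn on every iteration, so the model never sees earlier exchanges. If you want multi-turn memory, here is a minimal sketch of a history-keeping variant of the loop (it assumes a generator created by Llama.build() exactly as in main(); note that with max_seq_len 512 the history fills the context window quickly):

# Sketch: a history-keeping chat loop, reusing the sampling defaults from main().
def chat_loop(generator, temperature=0.6, top_p=0.9, max_gen_len=None):
    dialog = []  # accumulates alternating {"role", "content"} user/assistant turns
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        dialog.append({"role": "user", "content": user_input})
        result = generator.chat_completion(
            [dialog],  # chat_completion takes a batch of dialogs
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]
        reply = result["generation"]["content"]
        dialog.append({"role": "assistant", "content": reply})
        print(f"Model: {reply}")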

Then run:

torchrun --nproc_per_node 1 chat.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6