Meta Llama 3 Local Deployment

Thanks for reading.

Environment Setup

Project Files

After downloading the project files, open a command terminal in the repository root (cmd on Windows, a terminal on Linux; if you use conda, activate your environment first).

Then run:

```bash
pip install -e .
```
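To confirm the editable install worked, one quick sanity check (a minimal sketch; it only assumes the repo's `llama` package, which the chat.py script below imports, is now importable) is:

```python
# Sanity check: `pip install -e .` should make the repo's `llama`
# package importable; this raises ImportError if the install failed.
import llama

print(llama.__file__)  # path of the editable install
```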

Don't close this console, because we still need it to download the model; letting the install run in the background here saves time.

Model request link

Once your request is approved, copy the download URL shown on the page (illustrated with a screenshot in the original post).

Then, in the same console, run:

```bash
bash download.sh
```

When the script asks you to verify, paste the URL you just copied.

If the script fails because wget is not found, download a Windows build of wget and place wget.exe under C:\Windows\System32.
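To verify that wget is now reachable before re-running the download, a small Python check (an illustrative sketch, not from the original post) can be used:

```python
# Confirm wget is on PATH; download.sh relies on it to fetch the weights.
import shutil

wget_path = shutil.which("wget")
if wget_path:
    print(f"wget found at: {wget_path}")
else:
    print(r"wget not found -- add it to PATH or C:\Windows\System32")
```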

Once the weights have downloaded, test the checkpoint with the bundled example:

```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
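Before launching, it can help to confirm that PyTorch actually sees a GPU; this sketch assumes torch was pulled in by the repo's requirements. The 8B-Instruct checkpoint ships as a single model-parallel shard, which is why `--nproc_per_node 1` is used above:

```python
# Pre-flight check for the torchrun command above: is a CUDA device visible?
import torch

print("CUDA available:", torch.cuda.is_available())
print("Visible GPUs:", torch.cuda.device_count())
```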

Wrapping Up

Create a chat.py script:

```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed in accordance with the terms of the Llama 3 Community License Agreement.

from typing import List, Optional

import fire

from llama import Dialog, Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    """
    Examples to run with the models finetuned for chat. Prompts consist of chat
    turns between the user and assistant, with the final turn always coming from the user.

    An optional system prompt at the beginning to control how the model should respond
    is also supported.

    The context window of llama3 models is 8192 tokens, so `max_seq_len` needs to be <= 8192.

    `max_gen_len` is optional because finetuned models are able to stop generations naturally.
    """
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # A single-turn dialog holding one user message, overwritten each loop iteration
    dialogs: List[Dialog] = [
        [{"role": "user", "content": ""}],  # Initialize with an empty user input
    ]

    # Start the conversation loop
    while True:
        # Get user input
        user_input = input("You: ")
        
        # Exit loop if user inputs 'exit'
        if user_input.lower() == 'exit':
            break
        
        # Overwrite the single user message with the latest input (no history is kept)
        dialogs[0][0]["content"] = user_input

        # Use the generator to get model response
        result = generator.chat_completion(
            dialogs,
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]

        # Print model response
        print(f"Model: {result['generation']['content']}")

if __name__ == "__main__":
    fire.Fire(main)

```

Then run:

```bash
torchrun --nproc_per_node 1 chat.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
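Note that chat.py overwrites its single user message on every turn, so the model never sees earlier exchanges. If you want multi-turn context, a possible variant of the loop (a sketch that assumes `generator`, `max_gen_len`, `temperature`, and `top_p` are in scope exactly as in chat.py) accumulates both sides of the conversation:

```python
# Sketch: a history-keeping replacement for chat.py's while-loop.
# Assumes generator, max_gen_len, temperature, top_p are defined as above.
dialog = []  # accumulates {"role": ..., "content": ...} turns

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    dialog.append({"role": "user", "content": user_input})

    # chat_completion takes a batch of dialogs; we send a batch of one
    result = generator.chat_completion(
        [dialog],
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )[0]

    reply = result["generation"]["content"]
    dialog.append({"role": "assistant", "content": reply})  # keep context
    print(f"Model: {reply}")
```

Accumulated history counts against `max_seq_len` (512 in the commands above; Llama 3 supports up to 8192), so long sessions will eventually need the oldest turns trimmed.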