Meta Llama 3 Local Deployment

Thanks for reading

Environment Setup

Project Files
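The original post links the project files here. If that link is unavailable, the code can also be fetched from what is, to my knowledge, the official GitHub repository (treat the URL as an assumption and verify it yourself):

git clone https://github.com/meta-llama/llama3.git
cd llama3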

After downloading, open a command terminal in the project root (cmd on Windows, a regular terminal on Linux; if you use conda, activate your environment first).

Run:

pip install -e .
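While that installs, it is worth confirming that PyTorch can see your GPU. A minimal sanity check, assuming a CUDA build of PyTorch was pulled in as a dependency:

import torch

# Print the installed PyTorch version and whether a CUDA-capable GPU is visible
print(torch.__version__)
print(torch.cuda.is_available())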

Don't close this console when the install finishes; we still need it to download the model. Keeping it open here saves time.

Model Application Link

Copy the link shown in the figure (the signed download URL Meta provides once your request is approved).

Then, back in the same console, run:

bash download.sh

At the verification prompt, paste the link you just copied; the script will then ask which model sizes to download.

If the script fails because wget is missing, download a Windows build of wget,

then place it in C:\Windows\System32.

Once the download finishes, verify the model with the bundled example script. The 8B checkpoint is a single shard, so --nproc_per_node stays at 1:
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

Wrapping Up

Create a chat.py script:

# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed in accordance with the terms of the Llama 3 Community License Agreement.

from typing import List, Optional

import fire

from llama import Dialog, Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    """
    Examples to run with the models finetuned for chat. Prompts consist of chat
    turns between the user and the assistant, with the final turn always from the user.

    An optional system prompt at the beginning to control how the model should respond
    is also supported.

    The context window of llama3 models is 8192 tokens, so `max_seq_len` needs to be <= 8192.

    `max_gen_len` is optional because finetuned models are able to stop generations naturally.
    """
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # Single-slot dialog: each turn overwrites this one user message, so the
    # model sees no conversation history (see the multi-turn sketch after this script)
    dialogs: List[Dialog] = [
        [{"role": "user", "content": ""}],  # placeholder, filled in each loop iteration
    ]

    # Start the conversation loop
    while True:
        # Get user input
        user_input = input("You: ")
        
        # Exit loop if user inputs 'exit'
        if user_input.lower() == 'exit':
            break
        
        # Append user input to the dialogs list
        dialogs[0][0]["content"] = user_input

        # Use the generator to get model response
        result = generator.chat_completion(
            dialogs,
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]

        # Print model response
        print(f"Model: {result['generation']['content']}")

if __name__ == "__main__":
    fire.Fire(main)
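As written, the loop overwrites the single user message every turn, so the model never sees earlier exchanges. A minimal multi-turn sketch, assuming the growing dialog stays within max_seq_len (a real version would truncate old turns), replaces the loop like this:

# Sketch: multi-turn variant of the loop above that keeps history.
# Assumes the accumulated dialog still fits within max_seq_len tokens.
dialog: Dialog = []

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break

    # Keep the user's turn in the running dialog
    dialog.append({"role": "user", "content": user_input})

    result = generator.chat_completion(
        [dialog],
        max_gen_len=max_gen_len,
        temperature=temperature,
        top_p=top_p,
    )[0]

    reply = result["generation"]["content"]
    # Feed the assistant's reply back in so the next turn has context
    dialog.append({"role": "assistant", "content": reply})
    print(f"Model: {reply}")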

Then run (type exit at the prompt to end the session):

torchrun --nproc_per_node 1 chat.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6