Local Deployment of Meta Llama 3

Thanks for reading.

Environment Setup

Project files (the official meta-llama/llama3 repository)

After downloading, open a command terminal in the project root directory (cmd on Windows, a terminal on Linux; if you use conda, activate your environment first).

Run:

bash
pip install -e .

Don't close this console, because we still need to download the model; leaving it open saves time.
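Optionally, once the install finishes, you can check that PyTorch can see your GPU, since the reference code runs the 8B model on CUDA. This is a minimal sanity check using only standard PyTorch calls, nothing Llama-specific:

python
# Quick sanity check that PyTorch was installed with CUDA support.
import torch

print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True if a CUDA-capable GPU is usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU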

Model download request link

Copy the link as shown in the screenshot (the signed download URL you receive once your request is approved).

Then, back in the console from before, run:

bash
bash download.sh

When the script asks for verification, just paste the link you copied.

If you get an error saying wget is not found, click here to download wget,

then place it under C:\Windows\System32.

Once the model files have finished downloading, test the setup with the bundled example script (--nproc_per_node 1 matches the single model-parallel shard of the 8B checkpoint):

bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

Wrapping Up

Create a chat.py script:

python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed in accordance with the terms of the Llama 3 Community License Agreement.

from typing import List, Optional

import fire

from llama import Dialog, Llama


def main(
    ckpt_dir: str,
    tokenizer_path: str,
    temperature: float = 0.6,
    top_p: float = 0.9,
    max_seq_len: int = 512,
    max_batch_size: int = 4,
    max_gen_len: Optional[int] = None,
):
    """
    Examples to run with the models finetuned for chat. Prompts correspond of chat
    turns between the user and assistant with the final one always being the user.

    An optional system prompt at the beginning to control how the model should respond
    is also supported.

    The context window of llama3 models is 8192 tokens, so `max_seq_len` needs to be <= 8192.

    `max_gen_len` is optional because finetuned models are able to stop generations naturally.
    """
    generator = Llama.build(
        ckpt_dir=ckpt_dir,
        tokenizer_path=tokenizer_path,
        max_seq_len=max_seq_len,
        max_batch_size=max_batch_size,
    )

    # A single one-message dialog; its user content is overwritten each turn
    dialogs: List[Dialog] = [
        [{"role": "user", "content": ""}],  # Initialize with an empty user input
    ]

    # Start the conversation loop
    while True:
        # Get user input
        user_input = input("You: ")
        
        # Exit loop if user inputs 'exit'
        if user_input.lower() == 'exit':
            break
        
        # Put the latest user input into the single-message dialog (no history is kept)
        dialogs[0][0]["content"] = user_input

        # Use the generator to get model response
        result = generator.chat_completion(
            dialogs,
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]

        # Print model response
        print(f"Model: {result['generation']['content']}")

if __name__ == "__main__":
    fire.Fire(main)

Then run it:

bash
torchrun --nproc_per_node 1 chat.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
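The script above sends only the latest message each turn, so the model has no memory of earlier exchanges. Below is a minimal sketch of how you might keep multi-turn history with the same Dialog/chat_completion API used in chat.py; the helper name chat_with_history is hypothetical. Note that with --max_seq_len 512 the accumulated history will hit the length limit after a few turns, so raise it if you go this route.

python
# Hypothetical extension of chat.py: keep the whole conversation so the model
# sees earlier turns. Uses the same llama API as the script above.
from typing import Optional

from llama import Dialog


def chat_with_history(generator, temperature: float = 0.6, top_p: float = 0.9,
                      max_gen_len: Optional[int] = None) -> None:
    dialog: Dialog = []
    # Optionally steer the model with a system prompt as the first message:
    # dialog.append({"role": "system", "content": "Answer concisely."})
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        dialog.append({"role": "user", "content": user_input})
        result = generator.chat_completion(
            [dialog],                     # chat_completion expects a list of dialogs
            max_gen_len=max_gen_len,
            temperature=temperature,
            top_p=top_p,
        )[0]
        reply = result["generation"]["content"]
        print(f"Model: {reply}")
        # Feed the assistant's reply back into the history for the next turn.
        dialog.append({"role": "assistant", "content": reply})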