【Datawhale AI Summer Camp x Inspur】LLM Application Development Learning Notes

LLM Application Development

Summary: a learning activity run by the Datawhale 2024 AI Summer Camp (Session 4) jointly with Inspur Information (浪潮信息).

Training and studying large models on your own still costs money, and I can't afford a GPU upgrade right now, so my strategy is simple: grab free compute wherever it's offered.

The LLM application development activity is organized as:

  • LLM deployment
  • Hands-on LLM RAG
  • Hands-on LLM fine-tuning

First, set up the environment:

The steps are simple: activate the Alibaba Cloud PAI-DSW free trial, bind your ModelScope (魔搭) account, create an instance, and once you can open JupyterLab the free compute is yours (other participants have already posted illustrated guides for the sign-up and binding steps; search for one if you get stuck).
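
Once the instance is up, a quick way to confirm the GPU is actually visible (a minimal snippet, assuming the image's preinstalled PyTorch):

python
import torch

# Quick check that the DSW instance exposes a GPU (the trial machine used here is an A10)
print(torch.__version__)
print(torch.cuda.is_available())      # expect True
print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA A10"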

Straight to the point.

Task 1: Run the model

It's just running a model, nothing difficult.

Just paste the commands into the terminal and run them:

shell
git lfs install
git clone https://www.modelscope.cn/datasets/Datawhale/AICamp_yuan_baseline.git
pip install streamlit==1.24.0
streamlit run AICamp_yuan_baseline/Task\ 1:零基础玩转源大模型/web_demo_2b.py --server.address 127.0.0.1 --server.port 6006

Once it's running, an access URL appears; click it and you land on the model's page.

Then just ask.

The demo code is as follows:

python
# Import the required libraries
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import streamlit as st

# Create the page title
st.title("💬 Yuan2.0 智能编程助手")

# Download the Yuan2.0 model
from modelscope import snapshot_download
model_dir = snapshot_download('IEITYuan/Yuan2-2B-Mars-hf', cache_dir='./')

# Define the model path
path = './IEITYuan/Yuan2-2B-Mars-hf'

# Define the model dtype
torch_dtype = torch.bfloat16 # A10
# torch_dtype = torch.float16 # P100

# Define a function to load the model and tokenizer
@st.cache_resource
def get_model():
    print("Creat tokenizer...")
    tokenizer = AutoTokenizer.from_pretrained(path, add_eos_token=False, add_bos_token=False, eos_token='<eod>')
    tokenizer.add_tokens(['<sep>', '<pad>', '<mask>', '<predict>', '<FIM_SUFFIX>', '<FIM_PREFIX>', '<FIM_MIDDLE>','<commit_before>','<commit_msg>','<commit_after>','<jupyter_start>','<jupyter_text>','<jupyter_code>','<jupyter_output>','<empty_output>'], special_tokens=True)

    print("Creat model...")
    model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch_dtype, trust_remote_code=True).cuda()

    print("Done.")
    return tokenizer, model

# Load the model and tokenizer
tokenizer, model = get_model()

# On the first run, session_state has no "messages", so create an empty list
if "messages" not in st.session_state:
    st.session_state["messages"] = []

# On every rerun, replay all messages stored in session_state in the chat UI
for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

# If the user typed something into the chat input box, do the following
if prompt := st.chat_input():
    # Append the user's input to the messages list in session_state
    st.session_state.messages.append({"role": "user", "content": prompt})

    # Display the user's input in the chat UI
    st.chat_message("user").write(prompt)

    # Call the model
    prompt = "<n>".join(msg["content"] for msg in st.session_state.messages) + "<sep>" # concatenate the conversation history
    inputs = tokenizer(prompt, return_tensors="pt")["input_ids"].cuda()
    outputs = model.generate(inputs, do_sample=False, max_length=1024) # decoding strategy and maximum generation length
    output = tokenizer.decode(outputs[0])
    response = output.split("<sep>")[-1].replace("<eod>", '')

    # Append the model's output to the messages list in session_state
    st.session_state.messages.append({"role": "assistant", "content": response})

    # Display the model's output in the chat UI
    st.chat_message("assistant").write(response)

Task 1 END!!!

Task 2: RAG

PS: most dependencies are already installed on the machine instance, so it's easy to run the code directly.

Download the embedding model

First, download an embedding model.

The project ships an ipynb you can run directly; if you do it yourself, create a downloaded_model.py and paste in the code below.

python
# Download the embedding model
from modelscope import snapshot_download
model_dir = snapshot_download("AI-ModelScope/bge-small-zh-v1.5", cache_dir='.')

Hand-written RAG

Then, following the tutorial, write a RAG yourself. The tutorial's example uses the faiss vector database, but for our own version the tutorial's embedding approach is enough (a faiss variant is sketched after the demo); a proper vector store can be integrated later together with langchain.

Create a rag_demo.py and a knowledge-base file knowledge.txt.

Put whatever you like in knowledge.txt; mine is as follows:

text
非洲野犬,属于食肉目犬科非洲野犬属哺乳动物。 又称四趾猎狗或非洲猎犬; 其腿长身短、体形细长;身上有鲜艳的黑棕色、黄色和白色斑块;吻通常黑色,头部中间有一黑带,颈背有一块浅黄色斑;尾基呈浅黄色,中段呈黑色,末端为白色,因此又有"杂色狼"之称。 非洲野犬分布于非洲东部、中部、南部和西南部一带。 栖息于开阔的热带疏林草原或稠密的森林附近,有时也到高山地区活动。其结群生活,没有固定的地盘,一般在一个较大的范围内逗留时间较长。非洲野犬性情凶猛,以各种羚羊、斑马、啮齿类等为食。奔跑速度仅次于猎; 雌犬妊娠期为69-73天,一窝十只仔,哺乳期持续6-12个星期。 其寿命11年。 非洲野犬正处在灭绝边缘,自然界中仅存两三千只。 非洲野犬被列入《世界自然保护联盟濒危物种红色名录》中,为濒危(EN)保护等级。 ",非洲野犬共有42颗牙齿(具体分布为:i=3/3;c=1/1;p=4/4;m=2/3x2),前臼齿比相对比其他犬科动物要大,因此可以磨碎大量的骨头,这一点很像鬣狗。 主要生活在非洲的干燥草原和半荒漠地带,活跃于草原、稀树草原和开阔的干燥灌木丛,甚至包括撒哈拉沙漠南部一些多山的地带。非洲野犬从来不到密林中活动。 
周杰伦(Jay Chou),1979年1月18日出生于台湾省新北市,祖籍福建省永春县,华语流行乐男歌手、音乐人、演员、导演、编剧,毕业于淡江中学。2000年,发行个人首张音乐专辑《Jay》 。2001年,凭借专辑《范特西》奠定其融合中西方音乐的风格 。2002年,举行"The One"世界巡回演唱会 。2003年,成为美国《时代》杂志封面人物;同年,发行音乐专辑《叶惠美》 ,该专辑获得第15届台湾金曲奖最佳流行音乐演唱专辑奖 。2004年,发行音乐专辑《七里香》 ,该专辑在全亚洲的首月销量达到300万张 ;同年,获得世界音乐大奖中国区最畅销艺人奖 。2005年,主演个人首部电影《头文字D》,并凭借该片获得第25届香港电影金像奖和第42届台湾电影金马奖的最佳新演员奖 。2006年起,他连续三年获得世界音乐大奖中国区最畅销艺人奖 。2007年,自编自导爱情电影《不能说的秘密》 ,同年,成立杰威尔音乐有限公司 。2008年,凭借歌曲《青花瓷》获得第19届台湾金曲奖最佳作曲人奖 。2009年,入选美国CNN"25位亚洲最具影响力人物" ;同年,凭借专辑《魔杰座》获得第20届台湾金曲奖最佳国语男歌手奖 。2010年,入选美国《Fast Company》杂志评出的"全球百大创意人物"。2011年,凭借专辑《跨时代》获得第22届台湾金曲奖最佳国语男歌手奖 。2012年,登上福布斯中国名人榜榜首 。2014年,发行个人首张数字音乐专辑《哎呦,不错哦》 。2023年,凭借专辑《最伟大的作品》成为首位获得国际唱片业协会"全球畅销专辑榜"冠军的华语歌手。

The code is as follows:

python
from typing import List

import numpy as np
from transformers import AutoTokenizer, AutoModel, AutoModelForCausalLM
import torch
import streamlit as st


# Define the embedding model class
# To build the index, we wrap the embedding model in an EmbeddingModel class
class EmbeddingModel:
    """
    class for EmbeddingModel
    """

    def __init__(self, path: str) -> None:
        self.tokenizer = AutoTokenizer.from_pretrained(path)

        self.model = AutoModel.from_pretrained(path).cuda()
        print(f'Loading EmbeddingModel from {path}.')

    def get_embeddings(self, texts: List) -> List[float]:
        """
        calculate embedding for text list
        :param texts: list of pre-chunked document texts
        :return: embedding vectors of the documents
        """
        encoded_input = self.tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
        encoded_input = {k: v.cuda() for k, v in encoded_input.items()}
        with torch.no_grad():
            model_output = self.model(**encoded_input)
            sentence_embeddings = model_output[0][:, 0]
        sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
        return sentence_embeddings.tolist()


print("> Create embedding model...")
# this path points to the embedding model you downloaded
embed_model_path = './AI-ModelScope/bge-small-zh-v1___5'
embed_model = EmbeddingModel(embed_model_path)


# To implement vector retrieval, we define a vector-store index class, VectorStoreIndex
# Define the vector-store index class
class VectorStoreIndex:
    """
    class for VectorStoreIndex
    """

    def __init__(self, document_path: str, embed_model: EmbeddingModel) -> None:
        self.documents = []
        for line in open(document_path, 'r', encoding='utf-8'):
            line = line.strip()
            self.documents.append(line)

        self.embed_model = embed_model
        self.vectors = self.embed_model.get_embeddings(self.documents)

        print(f'Loading {len(self.documents)} documents for {document_path}.')

    def get_similarity(self, vector1: List[float], vector2: List[float]) -> float:
        """
        calculate cosine similarity between two vectors
        """
        dot_product = np.dot(vector1, vector2)
        magnitude = np.linalg.norm(vector1) * np.linalg.norm(vector2)
        if not magnitude:
            return 0
        return dot_product / magnitude

    def query(self, question: str, k: int = 1) -> List[str]:
        question_vector = self.embed_model.get_embeddings([question])[0]
        # compute similarity against every stored vector
        result = np.array([self.get_similarity(question_vector, vector) for vector in self.vectors])
        # return the top-k most similar documents
        return np.array(self.documents)[result.argsort()[-k:][::-1]].tolist()


print("> Create index...")
# you can add your own content to knowledge.txt
document_path = './knowledge.txt'
index = VectorStoreIndex(document_path, embed_model)

# question = '介绍一下周杰伦'
# print('> Question:', question)
#
# context = index.query(question)
# print('> Context:', context)


# For RAG-based generation we also need an LLM class; it essentially wraps the prompt, rewriting the user question in RAG form before the model receives it
# Define the large language model class
class LLM:
    """
    class for Yuan2.0 LLM
    """

    def __init__(self, model_path: str) -> None:
        print("Creat tokenizer...")
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, add_eos_token=False, add_bos_token=False,
                                                       eos_token='<eod>')
        self.tokenizer.add_tokens(
            ['<sep>', '<pad>', '<mask>', '<predict>', '<FIM_SUFFIX>', '<FIM_PREFIX>', '<FIM_MIDDLE>', '<commit_before>',
             '<commit_msg>', '<commit_after>', '<jupyter_start>', '<jupyter_text>', '<jupyter_code>',
             '<jupyter_output>', '<empty_output>'], special_tokens=True)

        print("Creat model...")
        self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16,
                                                          trust_remote_code=True).cuda()

        print(f'Loading Yuan2.0 model from {model_path}.')

    def generate(self, question: str, context: List):
        if context:
            prompt = f'背景:{context}\n问题:{question}\n请基于背景,回答问题。'
        else:
            prompt = question

        prompt += "<sep>"
        inputs = self.tokenizer(prompt, return_tensors="pt")["input_ids"].cuda()
        outputs = self.model.generate(inputs, do_sample=False, max_length=1024)
        output = self.tokenizer.decode(outputs[0])

        res = output.split("<sep>")[-1]
        print(res)
        return res


print("> Create Yuan2.0 LLM...")
# path to the large language model
model_path = './IEITYuan/Yuan2-2B-Mars-hf'
llm = LLM(model_path)

# print('> Without RAG:')
# llm.generate(question, [])
#
# print('> With RAG:')
# llm.generate(question, context)


def clear_chat_history():
    del st.session_state.messages


def init_chat_history():
    if "messages" in st.session_state:
        for message in st.session_state.messages:
            avatar = '🧑‍💻' if message["role"] == "user" else '🤖'
            with st.chat_message(message["role"], avatar=avatar):
                st.markdown(message["content"])
    else:
        st.session_state.messages = []

    return st.session_state.messages


# streamlit UI: draw a simple interface
def ui():
    # Create a title
    st.title('💬 简单的RAG实现')
    messages = init_chat_history()

    if prompt := st.chat_input("Shift + Enter 换行, Enter 发送"):
        with st.chat_message("user", avatar='🧑‍💻'):
            st.markdown(prompt)
        messages.append({"role": "user", "content": prompt})
        print(f"[user] {prompt}", flush=True)
        with st.chat_message("assistant", avatar='🤖'):
            context = index.query(prompt)
            response = llm.generate(prompt, context)
            st.markdown(response)
        messages.append({"role": "assistant", "content": response})
        print(f"[ai] {response}", flush=True)
        st.button("清空对话", on_click=clear_chat_history)


if __name__ == "__main__":
    ui()

# streamlit run rag_demo.py --server.address 127.0.0.1 --server.port 6006

Launch it with: streamlit run rag_demo.py --server.address 127.0.0.1 --server.port 6006

The final result looks like this:

The backend output looks like this:

And that's a very simple RAG.
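
The tutorial's own example uses a faiss vector index instead of the brute-force cosine loop above; swapping it in would look roughly like this (a sketch, assuming faiss-cpu or faiss-gpu is installed; the bge vectors are L2-normalized, so inner product equals cosine similarity):

python
import faiss  # pip install faiss-cpu (or faiss-gpu); assumed available
import numpy as np

# Build a flat inner-product index over the embeddings already computed by VectorStoreIndex above
vectors = np.array(index.vectors, dtype='float32')
faiss_index = faiss.IndexFlatIP(vectors.shape[1])
faiss_index.add(vectors)

def faiss_query(question: str, k: int = 1):
    q = np.array(embed_model.get_embeddings([question]), dtype='float32')
    _, ids = faiss_index.search(q, k)
    return [index.documents[i] for i in ids[0]]

print(faiss_query('介绍一下周杰伦'))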

Later, if I get the chance, I'll add a simple RAG built with langchain + chroma or milvus.
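
As a preview, a langchain + chroma version would look roughly like this (a sketch only; it assumes langchain-community, chromadb and sentence-transformers are installed, and module names may differ between langchain versions):

python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

# Reuse the locally downloaded bge-small-zh model as the embedding backend
embeddings = HuggingFaceEmbeddings(model_name='./AI-ModelScope/bge-small-zh-v1___5')

# Build a Chroma vector store from the same knowledge file, one chunk per line
docs = [line.strip() for line in open('./knowledge.txt', encoding='utf-8') if line.strip()]
store = Chroma.from_texts(docs, embeddings)

# Retrieve the most similar chunk and pass it to the LLM class defined in rag_demo.py
question = '介绍一下周杰伦'
context = [d.page_content for d in store.similarity_search(question, k=1)]
print(llm.generate(question, context))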

Task 2 END!!!!

Task 3: Fine-tuning

Fine-tuning on your own means building your own fine-tuning dataset; here we just use the one provided. Making your own is tedious, and dataset quality matters a lot.
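
For reference, the provided data.json is just a JSON array of records with "input" and "output" fields (the first record is printed verbatim in the training log further down), so a home-made dataset only needs the same shape. A minimal sketch (the file name and the abbreviated prompt text are placeholders):

python
import json

# Sketch of the expected data.json structure: a JSON array of {"input": ..., "output": ...} records
records = [
    {
        "input": "# 任务描述\n假设你是一个AI简历助手,...\n# 当前简历\n高勇:男,中国国籍,无境外居留权,\n...",
        "output": '{"姓名": ["高勇"], "国籍": ["中国国籍"]}',
    },
]
with open('my_data.json', 'w', encoding='utf-8') as f:  # placeholder file name
    json.dump(records, f, ensure_ascii=False, indent=2)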

Here is the fine-tuning code directly; the code and comments are self-explanatory. Create a file sft_demo.py with the following:

python
# Imports
import torch
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, Trainer

# Read the data
df = pd.read_json('./data.json')
ds = Dataset.from_pandas(df)
# Inspect the data
print(len(ds))
print("第一条数据为:" + str(ds[:1]))

# Load the tokenizer
path = './IEITYuan/Yuan2-2B-Mars-hf'

tokenizer = AutoTokenizer.from_pretrained(path, add_eos_token=False, add_bos_token=False, eos_token='<eod>')
tokenizer.add_tokens(
    ['<sep>', '<pad>', '<mask>', '<predict>', '<FIM_SUFFIX>', '<FIM_PREFIX>', '<FIM_MIDDLE>', '<commit_before>',
     '<commit_msg>', '<commit_after>', '<jupyter_start>', '<jupyter_text>', '<jupyter_code>', '<jupyter_output>',
     '<empty_output>'], special_tokens=True)
tokenizer.pad_token = tokenizer.eos_token


# Define the data preprocessing function
def process_func(example):
    MAX_LENGTH = 384  # the Llama tokenizer may split one Chinese character into several tokens, so allow a longer max length to keep samples intact

    instruction = tokenizer(f"{example['input']}<sep>")
    response = tokenizer(f"{example['output']}<eod>")
    input_ids = instruction["input_ids"] + response["input_ids"]
    # The attention mask is a list of 1s with the same length as input_ids. It tells the model which positions
    # are valid and should take part in attention; here every position is valid (no padding or ignored tokens), so all values are 1.
    attention_mask = [1] * len(input_ids)
    labels = [-100] * len(instruction["input_ids"]) + response["input_ids"]  # no loss on the instruction part; -100 means "ignore in the loss"

    if len(input_ids) > MAX_LENGTH:  # truncate if too long
        input_ids = input_ids[:MAX_LENGTH]
        attention_mask = attention_mask[:MAX_LENGTH]
        labels = labels[:MAX_LENGTH]

    return {
        "input_ids": input_ids,
        "attention_mask": attention_mask,
        "labels": labels
    }


# Check the tokenized data
tokenized_id = ds.map(process_func, remove_columns=ds.column_names)
print(tokenizer.decode(tokenized_id[0]['input_ids']))
print(tokenizer.decode(list(filter(lambda x: x != -100, tokenized_id[2]['labels']))))

model = AutoModelForCausalLM.from_pretrained(path, device_map='auto', torch_dtype=torch.bfloat16,
                                             trust_remote_code=True)
print(model)
model.enable_input_require_grads()
#model.dtype()

# Configure LoRA
from peft import LoraConfig, TaskType, get_peft_model

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    inference_mode=False,  # training mode
    r=8,  # LoRA rank
    lora_alpha=32,  # LoRA alpha
    lora_dropout=0.1  # dropout rate
)

# Build the PeftModel
model = get_peft_model(model, config)
model.print_trainable_parameters()

# Set the training arguments
args = TrainingArguments(
    output_dir="./output/qingchen-sft",
    per_device_train_batch_size=12,
    gradient_accumulation_steps=1,
    logging_steps=1,
    save_strategy="epoch",
    num_train_epochs=3,
    learning_rate=5e-5,
    save_on_each_node=True,
    gradient_checkpointing=True,
    bf16=True
)

# Initialize a Trainer
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_id,
    data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, padding=True)
)

# Start training!
trainer.train()

Partway through, the run errored out:

A dependency package was missing; just install it with pip install tf-keras.

Start training!

The full training log is as follows:

shell
root@dsw-615380-55b9944446-z7zqc:/mnt/workspace# python sft_demo.py 
2024-08-12 22:22:03.338640: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-12 22:22:03.351631: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-12 22:22:03.367559: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-12 22:22:03.372253: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-12 22:22:03.383663: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-12 22:22:04.394840: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/usr/local/lib/python3.10/site-packages/_distutils_hack/__init__.py:55: UserWarning: Reliance on distutils from stdlib is deprecated. Users must rely on setuptools to provide the distutils module. Avoid importing distutils or import setuptools first, and avoid setting SETUPTOOLS_USE_DISTUTILS=stdlib. Register concerns at https://github.com/pypa/setuptools/issues/new?template=distutils-deprecation.yml
  warnings.warn(
第一条数据为:{'input': ['# 任务描述\n假设你是一个AI简历助手,能从简历中识别出所有的命名实体,并以json格式返回结果。\n\n# 任务要求\n实体的类别包括:姓名、国籍、种族、职位、教育背景、专业、组织名、地名。\n返回的json格式是一个字典,其中每个键是实体的类别,值是一个列表,包含实体的文本。\n\n# 样例\n输入:\n张三,男,中国籍,工程师\n输出:\n{"姓名": ["张三"], "国籍": ["中国"], "职位": ["工程师"]}\n\n# 当前简历\n高勇:男,中国国籍,无境外居留权,\n\n# 任务重述\n请参考样例,按照任务要求,识别出当前简历中所有的命名实体,并以json格式返回结果。'], 'output': ['{"姓名": ["高勇"], "国籍": ["中国国籍"]}']}
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Map: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:00<00:00, 2241.91 examples/s]
# 任务描述
假设你是一个AI简历助手,能从简历中识别出所有的命名实体,并以json格式返回结果。

# 任务要求
实体的类别包括:姓名、国籍、种族、职位、教育背景、专业、组织名、地名。
返回的json格式是一个字典,其中每个键是实体的类别,值是一个列表,包含实体的文本。

# 样例
输入:
张三,男,中国籍,工程师
输出:
{"姓名": ["张三"], "国籍": ["中国"], "职位": ["工程师"]}

# 当前简历
高勇:男,中国国籍,无境外居留权,

# 任务重述
请参考样例,按照任务要求,识别出当前简历中所有的命名实体,并以json格式返回结果。<sep> {"姓名": ["高勇"], "国籍": ["中国国籍"]}<eod>
{"组织名": ["人和投资"], "职位": ["董事"]}<eod>
YuanForCausalLM(
  (model): YuanModel(
    (embed_tokens): Embedding(135040, 2048, padding_idx=77185)
    (layers): ModuleList(
      (0-23): 24 x YuanDecoderLayer(
        (self_attn): YuanAttention(
          (v_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (rotary_emb): YuanRotaryEmbedding()
          (lf_gate): LocalizedFiltering(
            (conv1): Conv2d(2048, 1024, kernel_size=(2, 1), stride=(1, 1), padding=(1, 0))
            (conv2): Conv2d(1024, 2048, kernel_size=(2, 1), stride=(1, 1), padding=(1, 0))
            (output_layernorm): YuanRMSNorm()
          )
          (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
          (k_proj): Linear(in_features=2048, out_features=2048, bias=False)
        )
        (mlp): YuanMLP(
          (up_proj): Linear(in_features=2048, out_features=8192, bias=False)
          (gate_proj): Linear(in_features=2048, out_features=8192, bias=False)
          (down_proj): Linear(in_features=8192, out_features=2048, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): YuanRMSNorm()
        (post_attention_layernorm): YuanRMSNorm()
      )
    )
    (norm): YuanRMSNorm()
  )
  (lm_head): Linear(in_features=2048, out_features=135040, bias=False)
)
trainable params: 9,043,968 || all params: 2,097,768,448 || trainable%: 0.4311
Detected kernel version 4.19.24, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
开始训练!
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore `gradient_checkpointing_kwargs` in case you passed it).Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method `_set_gradient_checkpointing` in your model.
[2024-08-12 22:22:17,900] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
df: /root/.triton/autotune: 没有那个文件或目录
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
 [WARNING]  sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
 [WARNING]  using untested triton version (2.3.1), only 1.0.0 is known to be compatible
  0%|                                                                                                                                                                                         | 0/51 [00:00<?, ?it/s]/usr/local/lib/python3.10/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
{'loss': 1.29, 'grad_norm': 8.8125, 'learning_rate': 4.901960784313725e-05, 'epoch': 0.06}                                                                                                                           
{'loss': 1.3519, 'grad_norm': 8.25, 'learning_rate': 4.803921568627452e-05, 'epoch': 0.12}                                                                                                                           
{'loss': 0.9244, 'grad_norm': 6.90625, 'learning_rate': 4.705882352941177e-05, 'epoch': 0.18}                                                                                                                        
{'loss': 0.6975, 'grad_norm': 3.703125, 'learning_rate': 4.607843137254902e-05, 'epoch': 0.24}                                                                                                                       
{'loss': 0.5886, 'grad_norm': 2.84375, 'learning_rate': 4.5098039215686275e-05, 'epoch': 0.29}                                                                                                                       
{'loss': 0.6119, 'grad_norm': 3.03125, 'learning_rate': 4.411764705882353e-05, 'epoch': 0.35}                                                                                                                        
{'loss': 0.4495, 'grad_norm': 2.109375, 'learning_rate': 4.313725490196079e-05, 'epoch': 0.41}                                                                                                                       
{'loss': 0.652, 'grad_norm': 2.03125, 'learning_rate': 4.215686274509804e-05, 'epoch': 0.47}                                                                                                                         
{'loss': 0.4105, 'grad_norm': 1.875, 'learning_rate': 4.11764705882353e-05, 'epoch': 0.53}                                                                                                                           
{'loss': 0.3181, 'grad_norm': 1.71875, 'learning_rate': 4.0196078431372555e-05, 'epoch': 0.59}                                                                                                                       
{'loss': 0.4638, 'grad_norm': 2.046875, 'learning_rate': 3.9215686274509805e-05, 'epoch': 0.65}                                                                                                                      
{'loss': 0.4387, 'grad_norm': 2.046875, 'learning_rate': 3.8235294117647055e-05, 'epoch': 0.71}                                                                                                                      
{'loss': 0.3739, 'grad_norm': 1.4453125, 'learning_rate': 3.725490196078432e-05, 'epoch': 0.76}                                                                                                                      
{'loss': 0.3642, 'grad_norm': 1.6484375, 'learning_rate': 3.627450980392157e-05, 'epoch': 0.82}                                                                                                                      
{'loss': 0.5388, 'grad_norm': 1.9140625, 'learning_rate': 3.529411764705883e-05, 'epoch': 0.88}                                                                                                                      
{'loss': 0.531, 'grad_norm': 2.359375, 'learning_rate': 3.431372549019608e-05, 'epoch': 0.94}                                                                                                                        
{'loss': 0.2869, 'grad_norm': 1.828125, 'learning_rate': 3.3333333333333335e-05, 'epoch': 1.0}                                                                                                                       
 33%|██████████████████████████████████████████████████████████▋                                                                                                                     | 17/51 [00:32<00:45,  1.35s/it]/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py:195: UserWarning: Could not find a config file in ./IEITYuan/Yuan2-2B-Mars-hf - will assume that the vocabulary was not modified.
  warnings.warn(
/usr/local/lib/python3.10/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
{'loss': 0.2343, 'grad_norm': 1.3671875, 'learning_rate': 3.235294117647059e-05, 'epoch': 1.06}                                                                                                                      
{'loss': 0.3194, 'grad_norm': 1.4921875, 'learning_rate': 3.137254901960784e-05, 'epoch': 1.12}                                                                                                                      
{'loss': 0.2696, 'grad_norm': 1.515625, 'learning_rate': 3.0392156862745097e-05, 'epoch': 1.18}                                                                                                                      
{'loss': 0.4031, 'grad_norm': 1.703125, 'learning_rate': 2.9411764705882354e-05, 'epoch': 1.24}                                                                                                                      
{'loss': 0.3262, 'grad_norm': 1.6015625, 'learning_rate': 2.8431372549019608e-05, 'epoch': 1.29}                                                                                                                     
{'loss': 0.3407, 'grad_norm': 1.328125, 'learning_rate': 2.7450980392156865e-05, 'epoch': 1.35}                                                                                                                      
{'loss': 0.2062, 'grad_norm': 1.1171875, 'learning_rate': 2.647058823529412e-05, 'epoch': 1.41}                                                                                                                      
{'loss': 0.3124, 'grad_norm': 1.65625, 'learning_rate': 2.5490196078431373e-05, 'epoch': 1.47}                                                                                                                       
{'loss': 0.1567, 'grad_norm': 1.09375, 'learning_rate': 2.4509803921568626e-05, 'epoch': 1.53}                                                                                                                       
{'loss': 0.2854, 'grad_norm': 1.2109375, 'learning_rate': 2.3529411764705884e-05, 'epoch': 1.59}                                                                                                                     
{'loss': 0.2808, 'grad_norm': 1.453125, 'learning_rate': 2.2549019607843138e-05, 'epoch': 1.65}                                                                                                                      
{'loss': 0.335, 'grad_norm': 1.5859375, 'learning_rate': 2.1568627450980395e-05, 'epoch': 1.71}                                                                                                                      
{'loss': 0.2329, 'grad_norm': 1.71875, 'learning_rate': 2.058823529411765e-05, 'epoch': 1.76}                                                                                                                        
{'loss': 0.2538, 'grad_norm': 1.4140625, 'learning_rate': 1.9607843137254903e-05, 'epoch': 1.82}                                                                                                                     
{'loss': 0.249, 'grad_norm': 1.2265625, 'learning_rate': 1.862745098039216e-05, 'epoch': 1.88}                                                                                                                       
{'loss': 0.2484, 'grad_norm': 1.484375, 'learning_rate': 1.7647058823529414e-05, 'epoch': 1.94}                                                                                                                      
{'loss': 0.0806, 'grad_norm': 0.88671875, 'learning_rate': 1.6666666666666667e-05, 'epoch': 2.0}                                                                                                                     
 67%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎                                                          | 34/51 [01:00<00:26,  1.53s/it]/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py:195: UserWarning: Could not find a config file in ./IEITYuan/Yuan2-2B-Mars-hf - will assume that the vocabulary was not modified.
  warnings.warn(
/usr/local/lib/python3.10/site-packages/torch/utils/checkpoint.py:464: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
{'loss': 0.1616, 'grad_norm': 1.1640625, 'learning_rate': 1.568627450980392e-05, 'epoch': 2.06}                                                                                                                      
{'loss': 0.2704, 'grad_norm': 1.2734375, 'learning_rate': 1.4705882352941177e-05, 'epoch': 2.12}                                                                                                                     
{'loss': 0.1591, 'grad_norm': 1.34375, 'learning_rate': 1.3725490196078432e-05, 'epoch': 2.18}                                                                                                                       
{'loss': 0.2045, 'grad_norm': 1.1640625, 'learning_rate': 1.2745098039215686e-05, 'epoch': 2.24}                                                                                                                     
{'loss': 0.1846, 'grad_norm': 1.3125, 'learning_rate': 1.1764705882352942e-05, 'epoch': 2.29}                                                                                                                        
{'loss': 0.1164, 'grad_norm': 0.9765625, 'learning_rate': 1.0784313725490197e-05, 'epoch': 2.35}                                                                                                                     
{'loss': 0.1417, 'grad_norm': 1.3515625, 'learning_rate': 9.803921568627451e-06, 'epoch': 2.41}                                                                                                                      
{'loss': 0.2702, 'grad_norm': 1.34375, 'learning_rate': 8.823529411764707e-06, 'epoch': 2.47}                                                                                                                        
{'loss': 0.1033, 'grad_norm': 0.921875, 'learning_rate': 7.84313725490196e-06, 'epoch': 2.53}                                                                                                                        
{'loss': 0.3029, 'grad_norm': 1.5703125, 'learning_rate': 6.862745098039216e-06, 'epoch': 2.59}                                                                                                                      
{'loss': 0.129, 'grad_norm': 1.3125, 'learning_rate': 5.882352941176471e-06, 'epoch': 2.65}                                                                                                                          
{'loss': 0.1083, 'grad_norm': 1.28125, 'learning_rate': 4.901960784313726e-06, 'epoch': 2.71}                                                                                                                        
{'loss': 0.2549, 'grad_norm': 1.3984375, 'learning_rate': 3.92156862745098e-06, 'epoch': 2.76}                                                                                                                       
{'loss': 0.2416, 'grad_norm': 1.3046875, 'learning_rate': 2.9411764705882355e-06, 'epoch': 2.82}                                                                                                                     
{'loss': 0.2361, 'grad_norm': 1.8203125, 'learning_rate': 1.96078431372549e-06, 'epoch': 2.88}                                                                                                                       
{'loss': 0.1691, 'grad_norm': 1.234375, 'learning_rate': 9.80392156862745e-07, 'epoch': 2.94}                                                                                                                        
{'loss': 0.1071, 'grad_norm': 1.40625, 'learning_rate': 0.0, 'epoch': 3.0}                                                                                                                                           
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 51/51 [01:28<00:00,  1.38s/it]/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py:195: UserWarning: Could not find a config file in ./IEITYuan/Yuan2-2B-Mars-hf - will assume that the vocabulary was not modified.
  warnings.warn(
/usr/local/lib/python3.10/site-packages/peft/utils/save_and_load.py:195: UserWarning: Could not find a config file in ./IEITYuan/Yuan2-2B-Mars-hf - will assume that the vocabulary was not modified.
  warnings.warn(
{'train_runtime': 89.0809, 'train_samples_per_second': 6.735, 'train_steps_per_second': 0.573, 'train_loss': 0.3526871059747303, 'epoch': 3.0}                                                                       
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 51/51 [01:29<00:00,  1.75s/it]
root@dsw-615380-55b9944446-z7zqc:/mnt/workspace# 

After training finishes, checkpoints are written under output_dir="./output/qingchen-sft".
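
As an optional extra, if you'd rather deploy a single model instead of base weights plus an adapter, the LoRA weights can be folded back into the base model with peft's merge_and_unload (a sketch; the output directory name is just an example):

python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

path = './IEITYuan/Yuan2-2B-Mars-hf'
lora_path = './output/qingchen-sft/checkpoint-51'  # your own checkpoint directory

# Load the base model, attach the LoRA adapter, then merge the adapter weights into the base weights
base = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, trust_remote_code=True)
merged = PeftModel.from_pretrained(base, lora_path).merge_and_unload()
merged.save_pretrained('./output/yuan2-2b-sft-merged')  # example output directory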

Then, in Task 4 案例:AI简历助手.py, change lora_path to the checkpoint path from your own run.

The full code is as follows:

python
# Import the required libraries
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import streamlit as st
from peft import PeftModel
import json
import pandas as pd

# Create the page title
st.title("💬 Yuan2.0 AI简历助手")

# Download the Yuan2.0 model
from modelscope import snapshot_download
model_dir = snapshot_download('IEITYuan/Yuan2-2B-Mars-hf', cache_dir='./')

# Define the model paths
path = './IEITYuan/Yuan2-2B-Mars-hf'
lora_path = './output/qingchen-sft/checkpoint-51' # change this to the checkpoint path from your own training run

# Define the model dtype
torch_dtype = torch.bfloat16  # A10
# torch_dtype = torch.float16 # P100

# Define a function to load the model and tokenizer
@st.cache_resource
def get_model():
    print("Creat tokenizer...")
    tokenizer = AutoTokenizer.from_pretrained(path, add_eos_token=False, add_bos_token=False, eos_token='<eod>')
    tokenizer.add_tokens(['<sep>', '<pad>', '<mask>', '<predict>', '<FIM_SUFFIX>', '<FIM_PREFIX>', '<FIM_MIDDLE>','<commit_before>','<commit_msg>','<commit_after>','<jupyter_start>','<jupyter_text>','<jupyter_code>','<jupyter_output>','<empty_output>'], special_tokens=True)

    print("Creat model...")
    model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch_dtype, trust_remote_code=True).cuda()
    model = PeftModel.from_pretrained(model, model_id=lora_path)

    return tokenizer, model

# Load the model and tokenizer
tokenizer, model = get_model()


template = '''
# 任务描述
假设你是一个AI简历助手,能从简历中识别出所有的命名实体,并以json格式返回结果。

# 任务要求
实体的类别包括:姓名、国籍、种族、职位、教育背景、专业、组织名、地名。
返回的json格式是一个字典,其中每个键是实体的类别,值是一个列表,包含实体的文本。

# 样例
输入:
张三,男,中国籍,工程师
输出:
{"姓名": ["张三"], "国籍": ["中国"], "职位": ["工程师"]}

# 当前简历
query

# 任务重述
请参考样例,按照任务要求,识别出当前简历中所有的命名实体,并以json格式返回结果。
'''


# Show an initial assistant message asking for the resume text
st.chat_message("assistant").write(f"请输入简历文本:")


# If the user typed something into the chat input box, do the following
if query := st.chat_input():

    # Display the user's input in the chat UI
    st.chat_message("user").write(query)

    # Call the model
    prompt = template.replace('query', query).strip()
    prompt += "<sep>"
    inputs = tokenizer(prompt, return_tensors="pt")["input_ids"].cuda()
    outputs = model.generate(inputs, do_sample=False, max_length=1024) # decoding strategy and maximum generation length
    output = tokenizer.decode(outputs[0])
    response = output.split("<sep>")[-1].replace("<eod>", '').strip()
    print(response)

    # Display the model's output in the chat UI
    st.chat_message("assistant").write(f"正在提取简历信息,请稍候...")

    st.chat_message("assistant").table(pd.DataFrame(json.loads(response)))

Then run it and check the result:

streamlit run Task\ 4\ 案例:AI简历助手.py --server.address 127.0.0.1 --server.port 6006


Task 3 END

END

Keep learning then, just keep learning, head down and not a peep!
