My Experience with MobileLLM for Android AI Development

MobileLLM is a large language model that runs on Android devices; crucially, it can also call APIs.

https://github.com/facebookresearch/MobileLLM

That's the project repository.

Looking at the author list, it still seems to be largely a Chinese team.

@article{liu2024mobilellm,
  title={MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases},
  author={Liu, Zechun and Zhao, Changsheng and Iandola, Forrest and Lai, Chen and Tian, Yuandong and Fedorov, Igor and Xiong, Yunyang and Chang, Ernie and Shi, Yangyang and Krishnamoorthi, Raghuraman and others},
  journal={arXiv preprint arXiv:2402.14905},
  year={2024}
}

Next, following the documented procedure, I reproduced the process.

My local Python is only 3.10,

so I simply used Anaconda Navigator to install PyTorch.

This is its graphical interface, which includes a pytorch environment.

I wrote about the installation process in detail in another article:

安装PyTorch深度学习框架-CSDN博客: https://blog.csdn.net/CDialog/article/details/144399243

Now the focus is on MobileLLM itself.

With the environment ready, next up is MobileLLM.

I saw a line on GitHub saying this environment needs to be installed, so here we go.

Other installs

Run this in the pytorch environment, from the project root (the same directory as requirement.txt):

pip install -r requirement.txt

If your network is flaky, just run it a few more times~

Configuring model parameters / preprocessing the data

This is MobileLLM's directory structure; at first glance it is definitely bewildering.

Worse, the README is not friendly to newcomers either:

it mentions this only in passing, and you still have to create the data directory yourself.

GitHub - LLM360/amber-data-prep: Data preparation code for Amber 7B LLM. https://github.com/LLM360/amber-data-prep

That link points to a different project; after downloading it I found a data sample I could use.

I then copied good_sample.jsonl into the MobileLLM project.
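For reference, judging from how pretrain.py's JSONLIterator reads json.loads(line)['token_ids'], each line of that file should be a JSON object carrying pre-tokenized IDs. A minimal sketch (the IDs below are made up, not taken from the real sample file):

```python
import json

# A hypothetical line of good_sample.jsonl: one JSON object per line,
# each with a "token_ids" array of pre-tokenized IDs (format inferred
# from pretrain.py's reader; these particular numbers are invented).
line = '{"token_ids": [1, 15043, 3186, 2]}'

record = json.loads(line)
print(record["token_ids"])  # → [1, 15043, 3186, 2]
```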

Set up a directory structure like this. The key part is a data root directory, whose path is passed in as an argument when the .py file is executed, so it knows where the data lives.

There is also an output path, for which I added another folder.
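The layout can be sketched in a few lines of Python; note that the folder names "basepath" and "output_path" are simply the ones I chose, nothing in the repo mandates them:

```python
import os

# Recreate the layout used below: a data root that holds good_sample.jsonl,
# plus an output folder. Both names ("basepath", "output_path") are my own
# choice and are passed to pretrain.py as command-line arguments.
project_root = "MobileLLM-main"
data_dir = os.path.join(project_root, "basepath")       # good_sample.jsonl goes here
output_dir = os.path.join(project_root, "output_path")  # training output lands here

os.makedirs(data_dir, exist_ok=True)
os.makedirs(output_dir, exist_ok=True)
print(sorted(os.listdir(project_root)))
```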

(pytorch) C:\AI\MobileLLM-main>python pretrain.py --data_path "C:\AI\MobileLLM-main\basepath" --model_config ".\configs\125M\config.json"

Run a command like that,

and pretrain.py will load the config file from the directories specified by the arguments and run.

Then the stock file throws an error; the cause is the multi-worker distributed configuration.

======================================= The part down to the next row of equals signs is just for skimming; the final code is not this.

I'm broke and only have one GPU, so...

I looked up a few of the parameters.

It looks like pretrain.py has to be modified.

Starting at line 155, this is where the changes go.

Also note that all the script files need converting to UTF-8; the ones downloaded from the web are ANSI and will throw errors.
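A small converter can be sketched like this; I'm assuming the ANSI files are GBK-encoded (the usual ANSI codepage on Chinese-locale Windows), so adjust src_encoding to whatever your editor reports:

```python
# Re-encode a source file from ANSI to UTF-8 in place. Using "gbk" as the
# source encoding is an assumption (typical ANSI codepage on Chinese
# Windows); change it if your files use a different codepage.
def to_utf8(path, src_encoding="gbk"):
    with open(path, "r", encoding=src_encoding, errors="replace") as f:
        text = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

# Usage: to_utf8("pretrain.py")
```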

Append the following environment variables in pretrain.py to configure single-machine, single-GPU mode:

os.environ['MASTER_ADDR'] = 'localhost'

Then append one more line like this.

======================================== Never mind, I'll just paste the single-machine code directly.

# coding=utf-8
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import json
import logging
import os
from logging import Logger
import re
import sys
from typing import Dict, Iterator, List, Optional
import datetime

import torch
import transformers

from utils.modeling_llama import LlamaForCausalLM
from utils.pretrain_trainer import PretrainTrainer
from utils.process_args import process_args
from torch import distributed as dist
from torch.utils.data import Dataset, DataLoader
from transformers import AutoConfig, default_data_collator

# Set environment variables for single-machine, single-process execution
os.environ['RANK'] = '0'
os.environ['WORLD_SIZE'] = '1'
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12345'  # pick a port that is not in use
os.environ["PL_TORCH_DISTRIBUTED_BACKEND"] = "gloo"

# Define a utility method for setting the logging parameters of a logger
def get_logger(logger_name: Optional[str]) -> logging.Logger:
    # Get the logger with the specified name
    logger = logging.getLogger(logger_name)

    # Set the logging level of the logger to INFO
    logger.setLevel(logging.INFO)

    # Define a formatter for the log messages
    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )

    # Create a console handler for outputting log messages to the console
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)

    # Add the console handler to the logger
    logger.addHandler(console_handler)

    return logger


log: Logger = get_logger("mobileLLM")


def get_local_rank() -> int:
    if os.environ.get("LOCAL_RANK"):
        return int(os.environ["LOCAL_RANK"])
    else:
        logging.warning(
            "LOCAL_RANK from os.environ is None, fall back to get rank from torch distributed"
        )
        return torch.distributed.get_rank()

def get_global_rank() -> int:
    """
    Get rank using torch.distributed if available. Otherwise, the RANK env var instead if initialized.
    Returns 0 if neither condition is met.
    """
    if torch.distributed.is_available() and torch.distributed.is_initialized():
        return torch.distributed.get_rank()

    environ_rank = os.environ.get("RANK", "")
    if environ_rank.isdecimal():
        return int(os.environ["RANK"])

    return 0


def get_folder_paths(directory: str) -> List[str]:
    folder_paths = [
        os.path.join(directory, item)
        for item in os.listdir(directory)
        if os.path.isdir(os.path.join(directory, item))
    ]
    return folder_paths

def get_iterable_dataloader(iterator, batch_size):
    def custom_collate_fn(batch):
        return dict(input_ids=torch.stack(batch), labels=torch.stack(batch))
    class IteratorDataset(Dataset):
        def __init__(self, iterator):
            self.iterator = iterator
        def __len__(self):
            # Return an arbitrarily large number
            return sys.maxsize
        def __getitem__(self, index):
            try:
                ids = next(self.iterator)
                return torch.tensor(ids)
            except StopIteration:
                raise IndexError
    # Create dataset
    dataset = IteratorDataset(iterator)
    # Create DataLoader with custom collate function
    dataloader = DataLoader(dataset, batch_size=batch_size, collate_fn=custom_collate_fn)
    return dataloader

class JSONLIterator:
    def __init__(
        self,
        fpath: str,
        world_size: int,
        world_rank: int,
        infinite: bool,
    ) -> None:
        assert 0 <= world_rank < world_size, (world_rank, world_size)
        self.f = open(fpath, "r", encoding="utf-8", errors="ignore")
        self.fpath = fpath
        self.world_size = world_size
        self.world_rank = world_rank
        self.line_num = 0
        self.iter = iter(self.gen(infinite))
        self.iter_id = 0

    def __iter__(self) -> "JSONLIterator":
        return self

    def __next__(self):
        return next(self.iter)

    def gen(self, infinite: bool) -> Iterator[Dict]:
        while True:
            log.info(f"Starting iteration {self.iter_id} over {self.fpath} ...")
            self.iter_id += 1
            while True:
                try:
                    line, self.line_num = self.f.readline(), self.line_num + 1
                    if not line:
                        break
                    if (self.line_num - 1) % self.world_size == self.world_rank:
                        try:
                            yield json.loads(line)['token_ids']
                        except json.JSONDecodeError as e:
                            print("Failed to parse JSON:", e)
                        except Exception as e:
                            print(f"Unexpected Jsonl error: {e}")
                        continue  # Skip to the next iteration
                except Exception as e:
                    print(f"Unexpected error while reading line: {e}")
                continue
            if not infinite:
                break
            self.f.seek(0)
            self.line_num = 0
        self.f.close()

def train() -> None:
    #dist.init_process_group(
    #    backend="cpu:gloo,cuda:nccl", timeout=datetime.timedelta(hours=8)
    #)
    #torch.distributed.init_process_group('gloo', init_method="env://?use_libuv=False")
    model_args, data_args, training_args = process_args()

    #global_rank = get_global_rank()
    #local_rank = get_local_rank()
    global_rank = 0
    local_rank = 0


    log.info(f"Global Rank: {global_rank}")
    log.info(f"Local Rank: {local_rank}")
    config = AutoConfig.from_pretrained(model_args.input_model_filename)
    config.share_embedding = model_args.share_embedding
    config.layer_sharing = model_args.layer_sharing
    model = LlamaForCausalLM(
        config=config,
    )
    log.info(
        "model size is "
        + str(sum(param.numel() for param in model.model.parameters()) / 1024 / 1024)
    )
    log.info("Start to load tokenizer...")
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path=model_args.input_model_filename,
        cache_dir=training_args.cache_dir,
        model_max_length=training_args.model_max_length,
        padding_side="right",
        use_fast=False,
    )
    log.info("Complete tokenizer loading...")

    # go to current node's data rank
    #local_data_folder = os.path.join(data_args.train_data_local_path, str(global_rank//8+1))
    # For a single-machine run that loads the full dataset, drop the distributed
    # logic: set local_data_folder to the whole dataset's path instead of
    # selecting a shard based on global_rank.
    local_data_folder = data_args.train_data_local_path
    
    # Data load locally from shard total data, so world_size is 8 and rank is the current node's local rank
    log.info("world_rank for data loader is " + str(local_rank))
    log.info("world_size for data loader is " + str(1))
    assert os.path.isdir(local_data_folder), local_data_folder
    fname_match_re: str = r"\.jsonl$"

    # get the jsonl file name. Currently only support 1 file/folder per node
    ####  fnames = [x for x in os.listdir(local_data_folder) if re.search(fname_match_re, x)][0]
    # For single-machine, single-GPU runs there is no distributed data-sharding
    # structure to walk; load the whole dataset directly by pointing straight at
    # the .jsonl file that contains all the training data.
    fnames = "good_sample.jsonl"  # replace with your file name
    data_iter = JSONLIterator(
        fpath=os.path.join(local_data_folder,fnames),
        world_rank=local_rank,
        world_size=1,
        infinite=True,
    )
    trainer = PretrainTrainer(
        model=model,
        tokenizer=tokenizer,
        args=training_args,
        train_dataset=get_iterable_dataloader(data_iter, training_args.per_device_train_batch_size) if training_args.do_train else None,
        eval_dataset=None,
        data_collator=default_data_collator,
    )
    #torch.distributed.barrier(device_ids=[local_rank])

    if training_args.do_train:
        _ = trainer.train()
        trainer.save_state()

    #torch.distributed.barrier(device_ids=[local_rank])


if __name__ == "__main__":
    train()

This is my modified, runnable single-machine code; in the end it disables all the distributed-computation code.

I'll paste the command I ran as well:

python pretrain.py  --train_data_local_path  "C:\techxixi_project\AI\MobileLLM-main\basepath"  --input_model_filename ".\configs\125M"  --output_dir ".\output_path"  --per_device_train_batch_size 8  --num_train_epochs 3  --learning_rate 1e-5  --do_train

The output looks like this:

The world_size of 8 in the log is hard-coded and printed as-is; I didn't bother changing it.

With the --do_train flag it actually executes, and training begins.

====================================

That's it for pretraining. I won't paste the full training run: the system reported that my 1050 Ti ran out of memory, and there's nothing I can do about that for now; I'll find another opportunity for it.
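For reference, here's a quick way to see how much VRAM you actually have to work with (a 1050 Ti only has 4 GB); this is just a convenience snippet of mine, not part of the repo:

```python
import torch

def describe_gpus():
    # List each visible CUDA device with its total VRAM, or report CPU-only.
    if not torch.cuda.is_available():
        return "CUDA not available; running on CPU only"
    parts = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        parts.append(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB")
    return "; ".join(parts)

print(describe_gpus())
```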

=======================================

Running eval.py

The damned README covers this in a single sentence. Fine; I downloaded the model and then ran eval.sh.

P.S. eval is a Python script that evaluates a language model's perplexity (PPL) on the WikiText-2 dataset. Perplexity is a measure of language-model quality, reflecting how well the model predicts the test set.
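As a toy illustration of that definition (with made-up loss values, not from any real model): perplexity is just the exponential of the average per-token negative log-likelihood.

```python
import math

# Made-up per-token cross-entropy losses (in nats) over a tiny test set.
per_token_losses = [3.2, 2.9, 3.5, 3.1]

avg_nll = sum(per_token_losses) / len(per_token_losses)  # 3.175
ppl = math.exp(avg_nll)
print(f"avg NLL = {avg_nll:.3f} nats -> perplexity = {ppl:.1f}")
```

A lower perplexity means the model spreads less probability mass over wrong continuations.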

I modified the eval code too, disabling the distributed-training parts in it as well.

Here is my code:

import json
import logging
import os
from logging import Logger
import sys
from typing import Dict, Iterator, List, Optional
import datetime
from tqdm import tqdm
from torch import nn
import torch
import transformers
from transformers import AutoConfig, AutoTokenizer
from torch.utils.data import Dataset, DataLoader

# Make sure these import paths are correct; adjust them to your project structure
from utils.modeling_llama import LlamaForCausalLM
from utils.process_args import process_args

def get_logger(logger_name: Optional[str]) -> logging.Logger:
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)
    return logger

log: Logger = get_logger("mobileLLM")

def train() -> None:
    model_args, data_args, training_args = process_args()
    config = AutoConfig.from_pretrained(model_args.input_model_filename)
    config.share_embedding = model_args.share_embedding
    config.layer_sharing = model_args.layer_sharing
    
    log.info("Loading model...")
    model = LlamaForCausalLM.from_pretrained(
        pretrained_model_name_or_path=model_args.input_model_filename,
        config=config,
    )
    model.cuda()
    log.info("model size is " + str(sum(param.numel() for param in model.parameters()) / 1024 / 1024))

    log.info("Loading tokenizer...")
    tokenizer = AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path=model_args.input_model_filename,
        cache_dir=training_args.cache_dir,
        padding_side="right",
        use_fast=False,
    )
    log.info("Tokenizer loaded...")

    class LocalDataset(Dataset):
        def __init__(self, file_path):
            self.file_path = file_path

        def __getitem__(self, idx):
            with open(self.file_path, 'r', encoding='utf-8') as f:
                lines = f.readlines()[idx]
            return torch.tensor(json.loads(lines)["token_ids"])

        def __len__(self):
            with open(self.file_path, 'r', encoding='utf-8') as f:
                return sum(1 for _ in f)

    local_dataset = LocalDataset(file_path=data_args.train_data_local_path)
    dataloader = DataLoader(local_dataset, batch_size=training_args.per_device_train_batch_size, shuffle=False)

    seqlen = config.max_position_embeddings  # the model's maximum sequence length

    model.eval()
    nlls = []

    for i, batch in enumerate(tqdm(dataloader, desc="Evaluating...")):
        batch = batch.to(model.device)
        
        with torch.no_grad():
            lm_logits = model(batch).logits
            shift_logits = lm_logits[:, :-1, :].contiguous().float()
            shift_labels = batch[:, 1:].reshape(-1)
            print(f"Batch {i}: shift_logits shape: {shift_logits.shape}, shift_labels shape: {shift_labels.shape}")  # debug print
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels)
            if torch.isnan(loss) or torch.isinf(loss):
                log.warning(f"Loss is NaN or inf for batch {i}.")
                continue  # skip this batch if the loss is NaN or inf
            neg_log_likelihood = loss.float() * seqlen
            nlls.append(neg_log_likelihood)

    # each entry in nlls is loss * seqlen, so the per-token average divides the
    # sum by the total token count (len(nlls) * seqlen) before exponentiating
    ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen)) if nlls else float('inf')
    print("Perplexity:", ppl)

if __name__ == "__main__":
    train()

The numbers are still a bit off, but never mind for now; I'll write a follow-up post once I've had more practice. In any case, this is roughly how PyTorch is used. Once I'm more familiar with it, I'll cover other topics in separate articles.
