【HuggingFace LLM】Classic NLP Fine-Tuning: Token Classification

This post fine-tunes the bert-base-cased model on the CoNLL-2003 dataset for a downstream token-classification (NER) task.

The walkthrough is split into three parts:

  1. data preprocessing;
  2. fine-tuning with the Trainer API;
  3. custom fine-tuning with Accelerate.

Data preprocessing

Inspecting the data
python
from datasets import load_dataset
raw_datasets = load_dataset("conll2003") # train/valid/test

ner_feature = raw_datasets["train"].features["ner_tags"]
>>>ner_feature
>>>Sequence(feature=ClassLabel(num_classes=9, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC'], names_file=None, id=None), length=-1, id=None)
>>>ner_feature.feature.names
>>>['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']

You can use the dataset's features object and its attributes to inspect the label names, the total number of classes, and so on.
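
For later steps it is convenient to keep the label list and the class count in variables; label_names in particular is reused throughout the rest of this post (a small convenience snippet, variable names chosen here):
python
# Keep the label list and the number of classes around for later use
label_names = ner_feature.feature.names       # ['O', 'B-PER', 'I-PER', ...]
num_labels = ner_feature.feature.num_classes  # 9
print(num_labels, label_names)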

python
# A helper function that prints a sample's tokens and their labels aligned in columns
def align_token_label(label_name, idx):
  labels = raw_datasets['train'].features[label_name]
  labels_name = labels.feature.names

  line1 = ''
  line2 = ''

  for token, label in zip(raw_datasets['train'][idx]['tokens'], raw_datasets['train'][idx][label_name]):
    # print(token, label)
    max_length = max(len(token), len(labels_name[label]))
    line1 += token + ' '* (max_length - len(token) + 1)
    line2 += labels_name[label] + ' '* (max_length - len(labels_name[label]) + 1)
  
  print(line1)
  print(line2)

align_token_label('chunk_tags', 0)
align_token_label('pos_tags', 0)
align_token_label('ner_tags', 0)

#EU   rejects German call to   boycott British lamb . 
#B-NP B-VP    B-NP   I-NP B-VP I-VP    B-NP    I-NP O 
#EU  rejects German call to boycott British lamb . 
#NNP VBZ     JJ     NN   TO VB      JJ      NN   . 
#EU    rejects German call to boycott British lamb . 
#B-ORG O       B-MISC O    O  O       B-MISC  O    O 

At this point the labels are attached to the pre-tokenized words, not to the tokens the model actually consumes.

tokenizer

Therefore, the next step is to run the pre-tokenized words through the tokenizer and expand the labels accordingly.

python
from transformers import AutoTokenizer

model_checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
inputs = tokenizer(raw_datasets["train"][0]["tokens"], is_split_into_words=True)
>>>inputs.tokens()
>>>['[CLS]', 'EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'la', '##mb', '.', '[SEP]']

Here is_split_into_words=True tells the tokenizer that the sample is already split into words, so it respects those word boundaries and tokenizes each word separately instead of joining the words back together and re-splitting them with its own pre-tokenizer.

The next step is to match the newly produced pieces 'la' / '##mb' with labels, which the original label list does not cover; word_ids() provides the token-to-word mapping needed for this.

python
>>>inputs.word_ids()
>>>[None, 0, 1, 2, 3, 4, 5, 6, 7, 7, 8, None]
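
To see the mapping more concretely, the tokens and their word ids can be printed side by side (a small illustrative snippet reusing the inputs object from above):
python
# 'la' and '##mb' share word id 7, i.e. they both come from the word "lamb"
for token, word_id in zip(inputs.tokens(), inputs.word_ids()):
    print(f"{token:>10}  ->  {word_id}")
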
Adding the labels

The core idea: sub-tokens whose word id repeats (i.e. that come from the same word) also need to be handled when labeling (a B- label would become I- if propagated). The snippet below takes the simpler route of marking them with -100, and keeps the B-→I- variant as commented-out code.

python
from transformers import AutoTokenizer

model_checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
inputs = tokenizer(raw_datasets["train"][0]["tokens"], is_split_into_words=True)
labels = raw_datasets['train'][0]['ner_tags']
word_ids = inputs.word_ids()

new_labels = []
current_word_idx = None
for word_idx in word_ids:
  if word_idx is None:
    # Special tokens ([CLS]/[SEP]) get -100 so the loss ignores them
    new_labels.append(-100)
  elif word_idx != current_word_idx:
    # First token of a new word: keep its original label
    current_word_idx = word_idx
    new_labels.append(labels[word_idx])
  else:
    # Extra sub-tokens of the same word: also ignored by the loss
    new_labels.append(-100)
    # Alternative: append a label instead of -100, turning B-XXX into I-XXX
    # label = labels[word_idx]
    # if label % 2 == 1:  # odd ids are the B- labels
    #   label += 1
    # new_labels.append(label)

print(inputs.tokens())
print(new_labels)

To make this work with batched processing (batched=True), the function needs to handle the nested List[List] structure.
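
The batched mapping function below calls a helper, align_labels_with_tokens, which simply wraps the per-sample loop from the previous step into a reusable function (defined here so that the next snippet runs as written):
python
def align_labels_with_tokens(labels, word_ids):
    # Same logic as above: -100 for special tokens and for extra sub-tokens of a word
    new_labels = []
    current_word_idx = None
    for word_idx in word_ids:
        if word_idx is None:
            new_labels.append(-100)
        elif word_idx != current_word_idx:
            current_word_idx = word_idx
            new_labels.append(labels[word_idx])
        else:
            new_labels.append(-100)
    return new_labels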

python
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples["tokens"], truncation=True, is_split_into_words=True
    )
    all_labels = examples["ner_tags"]
    new_labels = []
    for i, labels in enumerate(all_labels):
        word_ids = tokenized_inputs.word_ids(i)
        new_labels.append(align_labels_with_tokens(labels, word_ids))

    tokenized_inputs["labels"] = new_labels
    return tokenized_inputs

tokenized_datasets = raw_datasets.map(
    tokenize_and_align_labels,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,
)

#Tip: with batched=True the incoming examples has the form {'ids': [1, 2, 3, ...], 'tokens': [[x, xx, xx], [xxx, xxx], ...], ...}, so the newly added labels column (an input field the model recognizes) must likewise be a nested [[...], [...], ...] structure. That is why the outermost container has to be a list.
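
A quick way to confirm the result of the map call: each split now contains only model-ready columns (exact column names may vary slightly with the tokenizer):
python
print(tokenized_datasets["train"].column_names)
# e.g. ['input_ids', 'token_type_ids', 'attention_mask', 'labels']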

Fine-tuning with the Trainer API

Padding the data

The label padding has to match the padding applied to the inputs, and -100 is used so that padded positions do not contribute to the loss computation.

python
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
batch = data_collator([tokenized_datasets["train"][i] for i in range(2)])
>>>batch["labels"]
>>>tensor([[-100,    3,    0,    7,    0,    0,    0,    7,    0,    0,    0, -100],
        [-100,    1,    2, -100, -100, -100, -100, -100, -100, -100, -100, -100]])

Here you can see that the padded label positions are filled with -100.
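
For comparison, the input_ids in the same batch are padded with the tokenizer's pad token rather than -100 (a quick check on the batch built above):
python
# Both tensors are padded to the length of the longest sample in the batch, (2, 12) here
print(batch["input_ids"].shape, batch["labels"].shape)
print(tokenizer.pad_token_id)  # 0 for bert-base-cased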

Metric computation function
python
!pip install seqeval
import evaluate

metric = evaluate.load("seqeval")

seqeval treats the labels as strings such as B-ORG when computing the metrics, instead of comparing integer (one-hot) class ids.

For example,

python
labels = raw_datasets["train"][0]["ner_tags"]
labels = [label_names[i] for i in labels]
>>>labels
>>>['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O']

predictions = labels.copy()
predictions[2] = "O"
metric.compute(predictions=[predictions], references=[labels])

This is equivalent to computing the metrics directly between ['B-ORG', 'O', 'B-MISC', 'O', 'O', 'O', 'B-MISC', 'O', 'O'] and ['B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-MISC', 'O', 'O'].

The result contains precision/recall/F1 for every entity type as well as the overall scores.
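
The call above returns a nested dict; roughly, it has this shape (values omitted, per-entity keys depend on the entities present):
python
results = metric.compute(predictions=[predictions], references=[labels])
# Roughly:
# {'MISC': {'precision': ..., 'recall': ..., 'f1': ..., 'number': 2},
#  'ORG':  {'precision': ..., 'recall': ..., 'f1': ..., 'number': 1},
#  'overall_precision': ..., 'overall_recall': ..., 'overall_f1': ..., 'overall_accuracy': ...}
print(results["overall_f1"])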

python
import numpy as np

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    # Remove ignored index (special tokens) and convert to labels
    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    all_metrics = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": all_metrics["overall_precision"],
        "recall": all_metrics["overall_recall"],
        "f1": all_metrics["overall_f1"],
        "accuracy": all_metrics["overall_accuracy"],
    }

Note that the eval_preds passed in is a tuple of the model's output logits and the label ids. When computing the metrics, positions whose true label is -100 (the value assigned to special tokens during preprocessing) are excluded.

Defining the model
python
id2label = {i: label for i, label in enumerate(label_names)}
label2id = {v: k for k, v in id2label.items()}
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    id2label=id2label,
    label2id=label2id,
)

#Dev note: classification models need the num_labels argument; if you are not sure, you can pass id2label/label2id instead and the number of labels is derived from them.
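
A quick sanity check that the mappings sized the classification head as expected (illustrative):
python
# The number of output classes is inferred from id2label/label2id
print(model.config.num_labels)   # 9
print(model.config.id2label[0])  # 'O'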

python
from transformers import TrainingArguments

args = TrainingArguments(
    "bert-finetuned-ner",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=3,
    weight_decay=0.01,
    push_to_hub=True,
)

from transformers import Trainer

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    processing_class=tokenizer,
)
trainer.train()
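
With push_to_hub=True the Trainer already pushes checkpoints during training; once training finishes, the final model can also be pushed explicitly (optional, assuming you are logged in to the Hub):
python
# Upload the final weights, tokenizer and model card to the Hub repository
trainer.push_to_hub(commit_message="Training complete")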

Custom training loop

python
from torch.utils.data import DataLoader

train_dataloader = DataLoader(
    tokenized_datasets["train"],
    shuffle=True,
    collate_fn=data_collator,
    batch_size=8,
)
eval_dataloader = DataLoader(
    tokenized_datasets["validation"], collate_fn=data_collator, batch_size=8
)

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint,
    id2label=id2label,
    label2id=label2id,
)

The datasets are loaded with DataLoader; the dynamic padding collator is passed through the collate_fn argument.
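
A quick check that the dataloader yields padded tensors as expected (illustrative):
python
batch = next(iter(train_dataloader))
print({k: v.shape for k, v in batch.items()})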

python
from torch.optim import AdamW

optimizer = AdamW(model.parameters(), lr=2e-5)

AdamW is used instead of Adam; it implements decoupled (improved) weight decay.
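
If you want the same weight decay as in the Trainer setup above, it can be passed explicitly (the value mirrors the earlier TrainingArguments and is purely illustrative):
python
optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)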

python
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader, eval_dataloader
)

#Tip: pass the model, optimizer, training dataloader, and evaluation dataloader to accelerator.prepare() in a single call.

python
from transformers import get_scheduler

num_train_epochs = 3
num_update_steps_per_epoch = len(train_dataloader)
num_training_steps = num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

This is the classic linear schedule: the learning rate decays linearly from its initial value down to 0.

!NOTE

Because accelerator.prepare() can change the length of the dataloaders (the data is sharded across processes), num_training_steps should be computed from len(train_dataloader) after prepare(), as is done above.
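
For reference, with num_warmup_steps=0 the "linear" schedule corresponds to a straight-line decay; a rough sketch of the formula (not the library's exact implementation):
python
def linear_lr(step, initial_lr=2e-5, total_steps=num_training_steps):
    # Linear decay from initial_lr at step 0 down to 0 at total_steps (no warmup)
    return initial_lr * max(0.0, 1.0 - step / total_steps)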

python
from huggingface_hub import Repository, get_full_repo_name

# Define the repository name for uploading to the Hugging Face Hub
model_name = "bert-finetuned-ner-accelerate"
repo_name = get_full_repo_name(model_name)

# Clone/sync the repository to a local folder
output_dir = "bert-finetuned-ner-accelerate"
repo = Repository(output_dir, clone_from=repo_name)

The repository is cloned into a local folder; afterwards, calling repo.push_to_hub() uploads anything saved in output_dir. This makes it easy to upload intermediate models at the end of every training epoch.

python
def postprocess(predictions, labels):
    predictions = predictions.detach().cpu().clone().numpy()
    labels = labels.detach().cpu().clone().numpy()

    # Remove ignored index (special tokens) and convert to labels
    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    return true_labels, true_predictions

This turns the predictions and labels into label strings and, as before, drops the positions (the [CLS]/[SEP] special tokens marked with -100) that should not take part in the metric computation.

python
from tqdm.auto import tqdm
import torch

progress_bar = tqdm(range(num_training_steps))

for epoch in range(num_train_epochs):
    # Training
    model.train()
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)

    # Evaluation
    model.eval()
    for batch in eval_dataloader:
        with torch.no_grad():
            outputs = model(**batch)

        predictions = outputs.logits.argmax(dim=-1)
        labels = batch["labels"]

        # Necessary to pad predictions and labels for being gathered
        predictions = accelerator.pad_across_processes(predictions, dim=1, pad_index=-100)
        labels = accelerator.pad_across_processes(labels, dim=1, pad_index=-100)

        predictions_gathered = accelerator.gather(predictions)
        labels_gathered = accelerator.gather(labels)

        true_labels, true_predictions = postprocess(predictions_gathered, labels_gathered)
        metric.add_batch(predictions=true_predictions, references=true_labels)

    results = metric.compute()
    print(
        f"epoch {epoch}:",
        {
            key: results[f"overall_{key}"]
            for key in ["precision", "recall", "f1", "accuracy"]
        },
    )

    # Save and upload
    accelerator.wait_for_everyone()
    unwrapped_model = accelerator.unwrap_model(model)
    unwrapped_model.save_pretrained(output_dir, save_function=accelerator.save)
    if accelerator.is_main_process:
        tokenizer.save_pretrained(output_dir)
        repo.push_to_hub(
            commit_message=f"Training in progress epoch {epoch}", blocking=False
        )

The above is the full training and evaluation loop. A few points deserve attention:

  1. accelerator.backward(loss) performs the backward pass;
  2. accelerator.pad_across_processes and accelerator.gather: Accelerate is a distributed-training framework, so batches on different processes may have different lengths and must first be aligned with pad_across_processes; once aligned, gather collects the tensors from all processes;
  3. accelerator.wait_for_everyone() tells every process to wait until all of them have reached this point, ensuring each process holds the same model before saving;
  4. accelerator.unwrap_model(): accelerator.prepare() wraps the model for distributed training, and the wrapper no longer exposes save_pretrained, so unwrap_model is needed to recover the underlying model and its save method.