🔥 Hands-On with Transformers: Text Classification × SQuAD QA × CoNLL NER (with Hyperparameter Tuning Tips)


This article walks through the three core NLP tasks with hands-on code, using the Hugging Face Transformers library to build production-grade AI applications.

I. Environment Setup

pip install transformers datasets torch tensorboard
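Hugging Face API details occasionally shift between releases (for example, TrainingArguments' evaluation_strategy argument was renamed eval_strategy in newer transformers versions), so it is worth printing the versions you are actually running before starting:

import datasets
import torch
import transformers

# Record library versions up front; if an argument name from this article
# is rejected (e.g. evaluation_strategy vs. eval_strategy), a version
# mismatch is the first thing to check.
print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
print("torch:", torch.__version__)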

II. Text Classification in Practice (Sentiment Analysis)

1. Data Loading and Preprocessing

from datasets import load_dataset
from transformers import AutoTokenizer
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
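IMDB ships 25,000 training and 25,000 test reviews, so a full fine-tune takes a while. An optional, illustrative way to smoke-test the pipeline before committing to a full run is to subsample first:

# Optional smoke test: shuffle and subsample before the full fine-tuning
# run (the split sizes here are illustrative, not prescriptive).
small_train = tokenized_datasets["train"].shuffle(seed=42).select(range(2000))
small_eval = tokenized_datasets["test"].shuffle(seed=42).select(range(500))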

2. Model Training

from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)
trainer.train()
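As configured, evaluation only reports the loss. A minimal accuracy metric can be wired in through the Trainer's compute_metrics hook; this sketch assumes the separate evaluate package (pip install evaluate) is installed:

import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair; take the argmax class per example.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)

# Pass compute_metrics=compute_metrics when constructing the Trainer above.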

3. Inference

from transformers import pipeline
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = classifier("This movie was absolutely fantastic!")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.999}]; see below for readable label names
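A freshly fine-tuned model has no label names attached, so the pipeline prints LABEL_0/LABEL_1. You can attach readable names via the model config (for IMDB, label 0 is negative and 1 is positive):

# Attach human-readable names to the two classes so the pipeline
# reports NEGATIVE/POSITIVE instead of LABEL_0/LABEL_1.
model.config.id2label = {0: "NEGATIVE", 1: "POSITIVE"}
model.config.label2id = {"NEGATIVE": 0, "POSITIVE": 1}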

III. Question Answering in Practice (SQuAD Dataset)

1. Loading and Preprocessing the QA Dataset

Besides tokenizing, the preprocessing has to convert each character-level answer span into token-level start_positions and end_positions, which are the labels the Trainer optimizes; without them training cannot compute a loss.

dataset = load_dataset("squad")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
def preprocess_function(examples):
    questions = [q.strip() for q in examples["question"]]
    inputs = tokenizer(
        questions, examples["context"],
        max_length=384, truncation="only_second",
        return_offsets_mapping=True, padding="max_length",
    )
    # Map each character-level answer span to token-level labels.
    starts, ends = [], []
    for i, offsets in enumerate(inputs.pop("offset_mapping")):
        answer = examples["answers"][i]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        seq_ids = inputs.sequence_ids(i)
        token_start = token_end = 0  # (0, 0) marks answers lost to truncation
        for idx, (s, e) in enumerate(offsets):
            if seq_ids[idx] == 1 and s <= start_char < e:
                token_start = idx
            if seq_ids[idx] == 1 and s < end_char <= e:
                token_end = idx
        starts.append(token_start)
        ends.append(token_end)
    inputs["start_positions"] = starts
    inputs["end_positions"] = ends
    return inputs
tokenized_squad = dataset.map(preprocess_function, batched=True)
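A quick, illustrative sanity check: decode the token span that was labeled for the first training example and compare it with the gold answer text; the two should match up to tokenization artifacts.

# Decode the labeled span of the first example and print it next to
# the gold answer it was derived from.
sample = tokenized_squad["train"][0]
span = sample["input_ids"][sample["start_positions"] : sample["end_positions"] + 1]
print(tokenizer.decode(span))
print(dataset["train"][0]["answers"]["text"][0])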

2. Training the QA Model

from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
training_args = TrainingArguments(
    output_dir="./qa_results",
    evaluation_strategy="epoch",
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    num_train_epochs=2,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_squad["train"],
    eval_dataset=tokenized_squad["validation"],
)
trainer.train()

3. Running Question Answering

question = "What does NLP stand for?"
context = "Natural Language Processing (NLP) is a subfield of artificial intelligence."
qa_pipeline = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa_pipeline(question=question, context=context)
print(result)
# e.g. {'score': 0.98, 'start': 0, 'end': 27, 'answer': 'Natural Language Processing'}

IV. Named Entity Recognition in Practice (CoNLL-2003)

1. Data Preprocessing

dataset = load_dataset("conll2003")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
label_list = dataset["train"].features["ner_tags"].feature.names
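# -> ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']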
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(
        examples["tokens"], 
        truncation=True,
        is_split_into_words=True
    )
    
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
tokenized_dataset = dataset.map(tokenize_and_align_labels, batched=True)
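To see why this alignment is needed: WordPiece splits words into subwords, and only the first subword of each word keeps a real label, while the rest get -100, which the loss function ignores. An illustrative check on the first training sentence:

# Print each subword token next to the index of the original word it
# came from (None marks special tokens like [CLS] and [SEP]).
words = dataset["train"][0]["tokens"]
enc = tokenizer(words, is_split_into_words=True)
for tok, wid in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"]), enc.word_ids()):
    print(tok, wid)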

2. Training the NER Model

from transformers import (AutoModelForTokenClassification,
                          DataCollatorForTokenClassification)
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(label_list),
    # Attach the tag names so inference reports B-ORG etc. instead of LABEL_3.
    id2label={i: l for i, l in enumerate(label_list)},
    label2id={l: i for i, l in enumerate(label_list)},
)
# The sequences were not padded at tokenization time, so a collator that
# pads both inputs and labels per batch is required.
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
training_args = TrainingArguments(
    output_dir="./ner_results",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=data_collator,
)
trainer.train()
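NER quality is usually reported as precision/recall/F1 over entity spans rather than raw token accuracy. A minimal compute_metrics sketch, assuming the evaluate and seqeval packages (pip install evaluate seqeval) are installed:

import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop the -100 positions (special tokens and non-first subwords)
    # and map ids back to tag strings before scoring.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }

# Pass compute_metrics=compute_metrics when constructing the Trainer above.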

3. Entity Recognition Inference

from transformers import pipeline
# aggregation_strategy="simple" merges subword tokens into whole entities
# (e.g. "Steve" + "Jobs" -> "Steve Jobs") and strips the B-/I- prefixes.
ner_pipeline = pipeline("ner", model=model, tokenizer=tokenizer,
                        aggregation_strategy="simple")
sample_text = "Apple was founded by Steve Jobs in Cupertino, California."
entities = ner_pipeline(sample_text)
for entity in entities:
    print(f"{entity['word']} -> {entity['entity_group']}")

# Expected output (scores omitted):
# Apple -> ORG
# Steve Jobs -> PER
# Cupertino -> LOC
# California -> LOC

V. Key Techniques Summary

Transfer learning: fine-tuning a pretrained model can cut training time by roughly 90% compared with training from scratch

Dynamic padding: use a DataCollator to raise training efficiency (the sketch after this list combines this with the three tips below)

Mixed-precision training: add fp16=True to TrainingArguments to speed up training

Learning-rate scheduling: a linear decay schedule converges more stably

Early stopping: monitor validation loss to prevent overfitting
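A minimal sketch combining these tips for the sentiment model from section II; the specific values (warmup_ratio, patience, epoch count) are illustrative, and fp16=True assumes a CUDA-capable GPU:

from transformers import (DataCollatorWithPadding, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

# Dynamic padding: pad each batch to its own longest sequence instead of a
# global max_length (drop padding="max_length" from tokenize_function).
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

training_args = TrainingArguments(
    output_dir="./results_tuned",
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match the eval strategy for early stopping
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
    learning_rate=2e-5,
    lr_scheduler_type="linear",       # linear decay (also the Trainer default)
    warmup_ratio=0.1,                 # illustrative warmup setting
    per_device_train_batch_size=16,
    num_train_epochs=5,
    fp16=True,                        # mixed precision; requires a CUDA GPU
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
    data_collator=data_collator,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)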

VI. Where to Go Next

Key tip: when practicing, adjust the hyperparameters (batch size, learning rate) to fit your hardware; on devices with limited VRAM, prefer lightweight models such as distilbert.
