# Applications of the BERT Model
- [1. Sequence Classification](#1-sequence-classification)
  - [1.1 Key Components](#11-key-components)
  - [1.2 Use Cases](#12-use-cases)
  - [1.3 BertForSequenceClassification Source Walkthrough](#13-bertforsequenceclassification-source-walkthrough)
- [2. Token Classification](#2-token-classification)
  - [2.1 Key Components](#21-key-components)
  - [2.2 Use Cases](#22-use-cases)
  - [2.3 BertForTokenClassification Source Walkthrough](#23-bertfortokenclassification-source-walkthrough)
- [3. Comparing the Two Classification Heads](#3-comparing-the-two-classification-heads)
In natural language processing, BERT is widely used for classification tasks thanks to its strong text-understanding ability. Concretely, BERT-based classification falls into two main kinds: sequence classification and token classification. The sections below introduce each application and its implementation.
## 1. Sequence Classification
### 1.1 Key Components
- BERT model: encodes the input sequence, in particular producing the global representation attached to the [CLS] token.
- Classifier: a single linear layer that maps the encoded representation to a distribution over classes.
- Loss function: selected dynamically according to the task type; used to optimize the model parameters during training.
### 1.2 Use Cases
The BertForSequenceClassification class is designed for sequence-level classification tasks and is widely used across NLP, for example in sentiment analysis, text classification, and semantic textual similarity. Fine-tuning BERT lets the model learn task-specific features, improving its performance on the target task; a minimal usage sketch follows.
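To make the workflow concrete, here is a minimal fine-tuning-style sketch. It is not from the original source; the `bert-base-uncased` checkpoint and the two-class sentiment setup are illustrative assumptions.

```python
# Hedged sketch: binary sentiment classification with BertForSequenceClassification.
# The checkpoint name and label semantics below are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2 builds a 2-way classification head; its weights start untrained.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer(["a great movie", "the plot was boring"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # assumption: 1 = positive, 0 = negative

outputs = model(**inputs, labels=labels)
print(outputs.loss)          # scalar loss (CrossEntropyLoss, single-label path)
print(outputs.logits.shape)  # torch.Size([2, 2]): (batch_size, num_labels)
```

In actual fine-tuning, this forward pass would sit inside a standard training loop with an optimizer stepping on `outputs.loss`.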
### 1.3 BertForSequenceClassification Source Walkthrough
```python
# -*- coding: utf-8 -*-
# @time: 2024/8/26 16:39
from typing import Optional, Tuple, Union
import torch
from torch import nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from transformers import BertModel, BertPreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput
from transformers.models.bert.modeling_bert import BERT_INPUTS_DOCSTRING, _CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION, _CONFIG_FOR_DOC, _SEQ_CLASS_EXPECTED_OUTPUT, _SEQ_CLASS_EXPECTED_LOSS, BERT_START_DOCSTRING
from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, add_code_sample_docstrings
@add_start_docstrings(
"""
Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled
output) e.g. for GLUE tasks.
""",
BERT_START_DOCSTRING,
)
class BertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.config = config
        # Load and initialize the BERT backbone; it encodes the input token sequence into context-rich representations.
self.bert = BertModel(config)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
        # Dropout layer: randomly zeroes some activations during training to help prevent overfitting.
self.dropout = nn.Dropout(classifier_dropout)
        # Linear classifier: maps the hidden_size-dimensional vector from BERT to num_labels class scores.
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
# Initialize weights and apply final processing
self.post_init()
@add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
checkpoint=_CHECKPOINT_FOR_SEQUENCE_CLASSIFICATION,
output_type=SequenceClassifierOutput,
config_class=_CONFIG_FOR_DOC,
expected_output=_SEQ_CLASS_EXPECTED_OUTPUT,
expected_loss=_SEQ_CLASS_EXPECTED_LOSS,
)
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]:
r"""
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        # Encode the input sequence with BertModel
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
        # outputs[1] is pooled_output, the hidden state of the [CLS] token, treated as a global representation of the whole sequence.
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
        # Loss computation: pick the loss according to the task type (regression, single-label, or multi-label classification)
loss = None
if labels is not None:
if self.config.problem_type is None:
if self.num_labels == 1:
self.config.problem_type = "regression"
elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
self.config.problem_type = "single_label_classification"
else:
self.config.problem_type = "multi_label_classification"
            if self.config.problem_type == "regression":  # regression: mean-squared-error loss (MSELoss)
loss_fct = MSELoss()
if self.num_labels == 1:
loss = loss_fct(logits.squeeze(), labels.squeeze())
else:
loss = loss_fct(logits, labels)
            elif self.config.problem_type == "single_label_classification":  # single-label classification: cross-entropy loss (CrossEntropyLoss)
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            elif self.config.problem_type == "multi_label_classification":  # multi-label classification: binary cross-entropy with logits (BCEWithLogitsLoss)
loss_fct = BCEWithLogitsLoss()
loss = loss_fct(logits, labels)
        # Assemble the return value
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
        )
```
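The loss branch above deserves a note: when `config.problem_type` is unset, it is inferred from `num_labels` and the label dtype (integer labels with `num_labels > 1` mean single-label classification; `num_labels == 1` means regression; anything else falls through to multi-label). The sketch below, with a randomly initialized model and illustrative shapes, exercises the multi-label path, where labels are float multi-hot vectors and BCEWithLogitsLoss is applied.

```python
# Hedged sketch: forcing the multi-label branch of BertForSequenceClassification.
# Random weights, illustrative shapes; only the loss selection is demonstrated.
import torch
from transformers import BertConfig, BertForSequenceClassification

config = BertConfig(num_labels=3, problem_type="multi_label_classification")
model = BertForSequenceClassification(config)  # randomly initialized

input_ids = torch.randint(0, config.vocab_size, (2, 8))  # (batch_size, seq_len)
labels = torch.tensor([[1.0, 0.0, 1.0],                  # float multi-hot targets,
                       [0.0, 1.0, 0.0]])                 # one column per label

out = model(input_ids=input_ids, labels=labels)
print(out.loss)          # BCEWithLogitsLoss over (batch_size, num_labels) logits
print(out.logits.shape)  # torch.Size([2, 3])
```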
## 2. Token Classification
### 2.1 Key Components
- BERT model: produces a hidden state for every token.
- Classifier: a linear layer applied to each token's hidden state, yielding per-token class scores.
- Loss function: cross-entropy loss (CrossEntropyLoss) is typically used for this classification task.
### 2.2 Use Cases
BertForTokenClassification targets token classification tasks such as named entity recognition (NER), part-of-speech (POS) tagging, and any other task that assigns a class to every word or subword (see the inference sketch after this list).
- Named entity recognition (NER): classify each word or subword into entity categories such as person, location, or organization.
- Part-of-speech (POS) tagging: label each word in a sentence with its part of speech, e.g., noun, verb, or adjective.
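A minimal inference sketch: the checkpoint `dslim/bert-base-NER` is one publicly available BERT NER model on the Hugging Face Hub, used here as an assumption; any BERT token-classification checkpoint would work the same way.

```python
# Hedged sketch: per-token NER prediction with a fine-tuned checkpoint.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

model_name = "dslim/bert-base-NER"  # assumption: an example NER checkpoint
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForTokenClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0]  # best label id per token
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id.item()])
```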
### 2.3 BertForTokenClassification Source Walkthrough
```python
# -*- coding: utf-8 -*-
# @time: 2024/8/26 16:45
from typing import Optional, Tuple, Union
import torch
from torch import nn
from torch.nn import CrossEntropyLoss
from transformers import BertModel, BertPreTrainedModel
from transformers.modeling_outputs import TokenClassifierOutput
from transformers.models.bert.modeling_bert import BERT_INPUTS_DOCSTRING, _CONFIG_FOR_DOC, BERT_START_DOCSTRING, _CHECKPOINT_FOR_TOKEN_CLASSIFICATION, _TOKEN_CLASS_EXPECTED_OUTPUT, _TOKEN_CLASS_EXPECTED_LOSS
from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, add_code_sample_docstrings
@add_start_docstrings(
"""
Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
Named-Entity-Recognition (NER) tasks.
""",
BERT_START_DOCSTRING,
)
class BertForTokenClassification(BertPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
        # Load and initialize the BERT backbone; add_pooling_layer=False skips the [CLS] pooler,
        # since token classification works on the per-token hidden states directly.
        self.bert = BertModel(config, add_pooling_layer=False)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
        # Dropout layer: randomly zeroes some activations during training to help prevent overfitting.
self.dropout = nn.Dropout(classifier_dropout)
        # Linear classifier: maps each token's hidden_size-dimensional vector from BERT to num_labels class scores.
self.classifier = nn.Linear(config.hidden_size, config.num_labels)
# Initialize weights and apply final processing
self.post_init()
@add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
@add_code_sample_docstrings(
checkpoint=_CHECKPOINT_FOR_TOKEN_CLASSIFICATION,
output_type=TokenClassifierOutput,
config_class=_CONFIG_FOR_DOC,
expected_output=_TOKEN_CLASS_EXPECTED_OUTPUT,
expected_loss=_TOKEN_CLASS_EXPECTED_LOSS,
)
def forward(
self,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
token_type_ids: Optional[torch.Tensor] = None,
position_ids: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
r"""
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`.
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
        # Encode the input sequence with BertModel
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
        # outputs[0] is sequence_output: the hidden states of every token in the input sequence
sequence_output = outputs[0]
sequence_output = self.dropout(sequence_output)
logits = self.classifier(sequence_output)
        # Loss computation: cross-entropy over all token positions
loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
        # Assemble the return value
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return TokenClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
        )
```
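One practical detail the loss computation relies on: `CrossEntropyLoss` ignores positions whose label is `-100` (its default `ignore_index`), which is how special tokens, padding, and non-first subword pieces are typically masked out of the token-classification loss. A standalone sketch with illustrative shapes and hypothetical label ids:

```python
# Hedged sketch: -100 labels are skipped by CrossEntropyLoss (default ignore_index).
import torch
from torch.nn import CrossEntropyLoss

num_labels = 5
logits = torch.randn(2, 6, num_labels)  # (batch_size, seq_len, num_labels)
labels = torch.tensor([
    [-100, 1, 2, 0, 0, -100],           # -100 at [CLS] / [SEP] positions
    [-100, 3, 4, -100, -100, -100],     # -100 also covers padding
])

loss_fct = CrossEntropyLoss()           # ignore_index defaults to -100
loss = loss_fct(logits.view(-1, num_labels), labels.view(-1))
print(loss)                             # averaged only over non-ignored positions
```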
## 3. Comparing the Two Classification Heads
- Task type: BertForSequenceClassification handles sentence-level classification, whereas BertForTokenClassification handles token-level labeling.
- Output dimensions: BertForSequenceClassification produces a single prediction for the whole sequence, whereas BertForTokenClassification produces one prediction per token (the sketch below verifies the shapes).
- Use cases: BertForSequenceClassification serves text classification tasks; BertForTokenClassification serves sequence labeling tasks.
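The shape difference is easy to verify with randomly initialized models (illustrative sizes; no trained weights involved):

```python
# Hedged sketch: comparing the output shapes of the two heads.
import torch
from transformers import BertConfig, BertForSequenceClassification, BertForTokenClassification

config = BertConfig(num_labels=4)
input_ids = torch.randint(0, config.vocab_size, (2, 10))  # batch_size=2, seq_len=10

seq_model = BertForSequenceClassification(config)
tok_model = BertForTokenClassification(config)

print(seq_model(input_ids=input_ids).logits.shape)  # torch.Size([2, 4]): one prediction per sequence
print(tok_model(input_ids=input_ids).logits.shape)  # torch.Size([2, 10, 4]): one prediction per token
```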
Source: transformers/src/transformers/models/bert/modeling_bert.py