[Experiment 16] Text Classification with a Bidirectional LSTM

Contents

[1 Dataset preparation: the IMDB movie review dataset](#1 Dataset preparation: the IMDB movie review dataset)

[1.1 Getting to know the dataset](#1.1 Getting to know the dataset)

[1.2 Loading the data](#1.2 Loading the data)

[1.3 Building the Dataset class](#1.3 Building the Dataset class)

[1.4 Wrapping a DataLoader](#1.4 Wrapping a DataLoader)

[1.4.1 The collate_fn function](#1.4.1 The collate_fn function)

[1.4.2 Wrapping the DataLoader](#1.4.2 Wrapping the DataLoader)

[2 Model construction](#2 Model construction)

[2.1 Pooling operator](#2.1 Pooling operator)

[2.2 Full model](#2.2 Full model)

[3 Model training](#3 Model training)

[4 Model evaluation](#4 Model evaluation)

[5 Model prediction](#5 Model prediction)

[6 Complete code](#6 Complete code)

[7 Extension experiment](#7 Extension experiment)


1 Dataset preparation: the IMDB movie review dataset

1.1 Getting to know the dataset

The IMDB movie review dataset is a classic binary classification dataset of movie reviews. Reviews are labeled according to their rating: a review with a score ≥ 7 is treated as positive, and a review with a score ≤ 4 is treated as negative.

The dataset contains a training set and a test set of 25,000 reviews each. Every example is a real user review of a movie together with the reviewer's sentiment label.

Download: IMDB dataset
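
The loading code in the next section assumes the archive has been unpacked into ./dataset with the standard IMDB folder layout plus the imdb.vocab vocabulary file. A minimal sanity check of that layout (the ./dataset path and the per-folder count are assumptions of this write-up; adjust them to your local setup):

python
import os

# Assumed layout: dataset/imdb.vocab, dataset/train/{pos,neg}/*.txt, dataset/test/{pos,neg}/*.txt
for sub in ["train/pos", "train/neg", "test/pos", "test/neg"]:
    folder = os.path.join("./dataset", sub)
    n = len([f for f in os.listdir(folder) if f.endswith(".txt")])
    print(sub, n)  # each folder should hold 12500 reviews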

1.2 Loading the data

Here the original test set is split evenly into two halves, used as the validation set and the test set respectively. The data files live under the ./dataset directory.

The code is as follows:

python
import os
import random

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the IMDB dataset
def load_imdb_data(path):
    assert os.path.exists(path)
    # Initialize the dataset lists
    trainset, devset, testset = [], [], []

    # Load the training data
    for label in ['pos', 'neg']:
        label_path = os.path.join(path, 'train', label)
        for filename in os.listdir(label_path):
            if filename.endswith('.txt'):
                with open(os.path.join(label_path, filename), 'r', encoding='utf-8') as f:
                    sentence = f.read().strip().lower()  # read and normalize each review
                    trainset.append((sentence, label))

    # Load the test data
    for label in ['pos', 'neg']:
        label_path = os.path.join(path, 'test', label)
        for filename in os.listdir(label_path):
            if filename.endswith('.txt'):
                with open(os.path.join(label_path, filename), 'r', encoding='utf-8') as f:
                    sentence = f.read().strip().lower()  # read and normalize each review
                    testset.append((sentence, label))

    # Randomly split half of the test set off as the validation set
    random.shuffle(testset)  # shuffle the test set
    split_index = len(testset) // 2  # index at which to split
    devset = testset[:split_index]  # first half becomes the validation set
    testset = testset[split_index:]  # remaining half stays as the test set

    return trainset, devset, testset


# Load the IMDB dataset
train_data, dev_data, test_data = load_imdb_data("./dataset/")

# Print a sample to inspect the loaded data
print(train_data[4])  # the 5th training example
python
('this is not the typical mel brooks film. it was much less slapstick than most of his movies and actually had a plot that was followable. leslie ann warren made the movie, she is such a fantastic, under-rated actress. there were some moments that could have been fleshed out a bit more, and some scenes that could probably have been cut to make the room to do so, but all in all, this is worth the price to rent and see it. the acting was good overall, brooks himself did a good job without his characteristic speaking to directly to the audience. again, warren was the best actor in the movie, but "fume" and "sailor" both played their parts well.', 'pos')

As the output shows, each loaded example consists of two parts: the text string and its label.

1.3 Building the Dataset class

The IMDBDataset class manages the data. Its input is a text sequence; each word must first be converted to its ID in the vocabulary, and later these IDs are used to look up the corresponding word vectors. This conversion is done by the words_to_id method of IMDBDataset.

The vocabulary maps every word in a sequence to a numeric ID, which can then be turned into a word vector. Words that do not appear in the vocabulary are replaced by [UNK] by default. The words_to_id method performs this lookup with a hash table (a Python dict); the vocabulary used in this experiment is the imdb.vocab file shipped with the dataset. Note that the original imdb.vocab contains no entries for [UNK] and [PAD], so you have to add them yourself, e.g. as shown below.
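
A minimal sketch of that one-off fix (this snippet is not part of the original experiment code; putting [PAD] at index 0 is an assumption that matches padding_idx=0 used by the embedding layer in Section 2.2):

python
# Hypothetical preprocessing step: prepend [PAD] and [UNK] to the vocabulary file.
# [PAD] is written first so that it gets ID 0, matching padding_idx=0 in nn.Embedding later on.
with open("./dataset/imdb.vocab", "r", encoding="utf-8") as f:
    words = [w.strip() for w in f if w.strip()]

for token in ["[PAD]", "[UNK]"]:
    if token in words:
        words.remove(token)

with open("./dataset/imdb.vocab", "w", encoding="utf-8") as f:
    f.write("\n".join(["[PAD]", "[UNK]"] + words))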

python
from torch.utils.data import Dataset


class IMDBDataset(Dataset):
    def __init__(self, examples, word2id_dict):
        super(IMDBDataset, self).__init__()
        self.word2id_dict = word2id_dict
        self.examples = self.words_to_id(examples)

    def words_to_id(self, examples):
        tmp_examples = []
        for idx, example in enumerate(examples):
            seq, label = example
            # Map each word to its dictionary ID; words not in the vocabulary fall back to the [UNK] ID
            seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]

            # Map the label: 'pos' -> 1, 'neg' -> 0
            label = 1 if label == 'pos' else 0
            tmp_examples.append([seq, label])
        return tmp_examples

    def __getitem__(self, idx):
        seq, label = self.examples[idx]
        return seq, label

    def __len__(self):
        return len(self.examples)


def load_vocab(path):
    assert os.path.exists(path)  # make sure the vocabulary file exists
    words = []  # list that will hold the vocabulary words
    with open(path, "r", encoding="utf-8") as f:  # open the file and read its contents
        words = f.readlines()  # read all lines
        words = [word.strip() for word in words if word.strip()]  # strip whitespace and drop empty lines
    word2id = dict(zip(words, range(len(words))))  # build a dict mapping each word to its ID
    return word2id  # return the dict


# Load the vocabulary
word2id_dict = load_vocab("./dataset/imdb.vocab")

# Instantiate the Datasets
train_set = IMDBDataset(train_data, word2id_dict)
dev_set = IMDBDataset(dev_data, word2id_dict)
test_set = IMDBDataset(test_data, word2id_dict)

print('训练集样本数:', len(train_set))
print('样本示例:', train_set[4])

Output:

python
训练集样本数: 25000
样本示例: ([11, 7, 21, 2, 764, 3633, 2822, 0, 8, 13, 74, 324, 2706, 72, 89, 5, 25, 101, 3, 161, 67, 4, 113, 12, 13, 0, 2750, 1900, 3725, 92, 2, 0, 53, 7, 138, 4, 0, 13617, 0, 39, 69, 49, 369, 12, 97, 26, 75, 7239, 46, 4, 221, 0, 3, 49, 137, 12, 97, 234, 26, 75, 644, 6, 96, 2, 667, 6, 83, 0, 18, 30, 9, 0, 11, 7, 283, 2, 1766, 6, 859, 3, 66, 0, 2, 111, 13, 50, 0, 2822, 302, 119, 4, 50, 284, 202, 25, 7517, 1409, 6, 2475, 6, 2, 0, 0, 3725, 13, 2, 117, 266, 9, 2, 0, 18, 0, 3, 0, 192, 248, 65, 512, 0], 1)

We can see that the sample train_set[4], which originally read:

python
('this is not the typical mel brooks film. it was much less slapstick than most of his movies and actually had a plot that was followable. leslie ann warren made the movie, she is such a fantastic, under-rated actress. there were some moments that could have been fleshed out a bit more, and some scenes that could probably have been cut to make the room to do so, but all in all, this is worth the price to rent and see it. the acting was good overall, brooks himself did a good job without his characteristic speaking to directly to the audience. again, warren was the best actor in the movie, but "fume" and "sailor" both played their parts well.', 'pos')

has been successfully converted into the corresponding vocabulary IDs:

python
 ([11, 7, 21, 2, 764, 3633, 2822, 0, 8, 13, 74, 324, 2706, 72, 89, 5, 25, 101, 3, 161, 67, 4, 113, 12, 13, 0, 2750, 1900, 3725, 92, 2, 0, 53, 7, 138, 4, 0, 13617, 0, 39, 69, 49, 369, 12, 97, 26, 75, 7239, 46, 4, 221, 0, 3, 49, 137, 12, 97, 234, 26, 75, 644, 6, 96, 2, 667, 6, 83, 0, 18, 30, 9, 0, 11, 7, 283, 2, 1766, 6, 859, 3, 66, 0, 2, 111, 13, 50, 0, 2822, 302, 119, 4, 50, 284, 202, 25, 7517, 1409, 6, 2475, 6, 2, 0, 0, 3725, 13, 2, 117, 266, 9, 2, 0, 18, 0, 3, 0, 192, 248, 65, 512, 0], 1)

1.4 Wrapping a DataLoader

Next we build the corresponding DataLoader to iterate over batches of data.

It has two main responsibilities:

  1. **Length limiting:** sequence lengths must be kept within a bound so that overly long samples do not hurt training; texts longer than max_seq_len are truncated.
  2. **Length padding:** a neural network usually requires all sequences in a batch to have the same length, but batching puts sequences of different lengths together, so padding is needed; we first find the maximum length within the batch and pad the shorter sequences with the meaningless placeholder token [PAD] up to that length.

1.4.1 The collate_fn function

We define a collate_fn function to perform the truncation and padding. It is passed to the DataLoader as a callback: before returning a batch, the DataLoader calls this function to process the data, and it returns the processed sequences together with their labels.

python
def collate_fn(batch_data, pad_val=0, max_seq_len=256):
    seqs, seq_lens, labels = [], [], []
    max_len = 0
    for example in batch_data:
        seq, label = example
        # Truncate the sequence to max_seq_len
        seq = seq[:max_seq_len]
        # Store the truncated sequence
        seqs.append(seq)
        seq_lens.append(len(seq))
        labels.append(label)
        # Track the maximum length in this batch
        max_len = max(max_len, len(seq))
    # Pad every sequence up to the batch maximum length
    for i in range(len(seqs)):
        seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
    return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)

Let's test collate_fn: set max_seq_len to 5, construct two samples with lengths 6 and 3, and pass them in.

python
# ======= Test ==============
max_seq_len = 5
batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
(seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
print("seqs: ", seqs)
print("seq_lens: ", seq_lens)
print("labels: ", labels)

Output:

python
seqs:  tensor([[1, 2, 3, 4, 5],
        [2, 4, 6, 0, 0]], device='cuda:0')
seq_lens:  tensor([5, 3])
labels:  tensor([1, 0], device='cuda:0')

As expected, the sequence of length 6 is truncated to 5, the sequence of length 3 is padded to 5, and the lengths of the non-[PAD] parts are returned as well.

1.4.2 Wrapping the DataLoader

We pass collate_fn into the DataLoader as a callback, so that every batch it returns has been processed by collate_fn:

python
max_seq_len = 256
batch_size = 128
collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
                                           shuffle=True, drop_last=False, collate_fn=collate_fn)
dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
                                         shuffle=False, drop_last=False, collate_fn=collate_fn)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
                                          shuffle=False, drop_last=False, collate_fn=collate_fn)

2 Model construction

The model used in this experiment consists of four parts:

(1) Embedding layer: turns the input ID sequence (words mapped to IDs) into vectors, i.e. maps each ID to an embedding vector. Implemented with the PyTorch API torch.nn.Embedding.

(2) Bidirectional LSTM layer: consumes the vector sequence and updates the recurrent units in both the forward and backward directions. Implemented with torch.nn.LSTM; setting bidirectional=True when constructing the LSTM makes it bidirectional (see the shape-check sketch after this list).

(3) Pooling layer: averages the hidden states of the bidirectional LSTM over all positions to obtain a representation of the whole sentence.

(4) Output layer: produces the classification logits. Implemented with torch.nn.Linear.
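
As a quick sanity check of the shapes involved, here is a minimal sketch with made-up sizes: with bidirectional=True the forward and backward hidden states are concatenated, so the LSTM output has 2 * hidden_size features per position, which is why the output layer in Section 2.2 is nn.Linear(hidden_size * 2, num_classes).

python
import torch
import torch.nn as nn

batch_size, seq_len, vocab_size, emb_dim, hidden = 2, 6, 100, 8, 16  # toy sizes (assumptions)
emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

ids = torch.randint(1, vocab_size, (batch_size, seq_len))  # fake ID sequences
out, (h_n, c_n) = bilstm(emb(ids))
print(out.shape)  # torch.Size([2, 6, 32]): 2 * hidden_size features per position
print(h_n.shape)  # torch.Size([2, 2, 16]): (num_layers * num_directions, batch, hidden_size)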

2.1 Pooling operator

The AveragePooling operator pools the hidden states. It first builds a mask matrix from the vector of sequence lengths, which is used to mask out the vectors at the [PAD] positions of the text sequence, and then sums the remaining vectors and divides by the true length to obtain the mean. (When the LSTM is given the true lengths of the batch it already returns zero vectors at the [PAD] positions, but to keep the pooling layer decoupled from the sequence model, the pooling layer masks the [PAD] positions itself.)

The implementation is as follows:

python
class AveragePooling(nn.Module):
    def __init__(self):
        super(AveragePooling, self).__init__()

    def forward(self, sequence_output, sequence_length):
        # sequence_length: [batch_size] tensor with the valid length of each sequence
        # (moved to the same device as sequence_output so the layer also runs on CPU)
        sequence_length = sequence_length.unsqueeze(-1).to(torch.float32).to(sequence_output.device)
        # Build a mask from sequence_length to mask out the information at the padding positions
        max_len = sequence_output.shape[1]

        mask = torch.arange(max_len, device=sequence_output.device) < sequence_length
        mask = mask.to(torch.float32).unsqueeze(-1)
        # Mask the padding part of the sequences
        sequence_output = torch.multiply(sequence_output, mask)
        # Average the vectors over the valid positions
        batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length)
        return batch_mean_hidden
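
A quick toy check of the masking behaviour (the numbers are made up): the second sequence has only one valid position, so the padded position must not contribute to its mean.

python
# Toy check with hypothetical values: batch of 2 sequences, padded length 2, feature size 3
seq_out = torch.tensor([[[1., 1., 1.], [3., 3., 3.]],
                        [[2., 4., 6.], [9., 9., 9.]]])  # [batch, seq_len, features]
seq_len = torch.tensor([2, 1])  # the second sequence has only 1 valid step
print(AveragePooling()(seq_out, seq_len))
# tensor([[2., 2., 2.],   <- mean over both positions
#         [2., 4., 6.]])  <- only the first position counts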

2.2 Full model

python
# =================== Full model =====================
class Model_BiLSTM_FC(nn.Module):
    def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
        super(Model_BiLSTM_FC, self).__init__()
        # Vocabulary size
        self.num_embeddings = num_embeddings
        # Dimension of the word vectors
        self.input_size = input_size
        # Number of LSTM hidden units
        self.hidden_size = hidden_size
        # Number of sentiment classes
        self.num_classes = num_classes
        # Embedding layer
        self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
        # Bidirectional LSTM layer
        self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        # Pooling layer
        self.average_layer = AveragePooling()
        # Output layer
        self.output_layer = nn.Linear(hidden_size * 2, num_classes)

    def forward(self, inputs):
        # Split the model input into the ID sequences and their valid lengths
        input_ids, sequence_length = inputs
        # Look up the word vectors
        inputs_emb = self.embedding_layer(input_ids)
        # Pack the padded batch so the LSTM only processes the valid time steps
        packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
                                                         enforce_sorted=False)
        # Run the bidirectional LSTM
        packed_output, _ = self.lstm_layer(packed_input)
        # Unpack the output back into a padded tensor
        sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
        # Pool sequence_output over the valid positions
        batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
        # Classification logits
        logits = self.output_layer(batch_mean_hidden)
        return logits

3 Model training

python
import time

import numpy as np

from Runner import RunnerV3, Accuracy, plot_training_loss_acc
np.random.seed(0)
random.seed(0)
torch.manual_seed(0)

# Number of training epochs
num_epochs = 1
# Learning rate
learning_rate = 0.001
# Number of embeddings = vocabulary size
num_embeddings = len(word2id_dict)
# Dimension of the embedding vectors
input_size = 256
# Dimension of the LSTM hidden-state vectors
hidden_size = 256
# Path where the best model is saved
save_dir = "./checkpoints/best.pdparams"

# Instantiate the model
model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Evaluation metric
metric = Accuracy()
# Instantiate the Runner
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Train the model
start_time = time.time()
runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10,
             save_path=save_dir)
end_time = time.time()
print("time: ", (end_time - start_time))
# ============ Plot the training/validation loss and the validation accuracy ===========
# sample_step: sampling interval of the training loss, i.e. plot one point every sample_step steps
# loss_legend_loc: legend position of the loss plot
# acc_legend_loc: legend position of the accuracy plot
plot_training_loss_acc(runner,  fig_size=(16, 6), sample_step=10, loss_legend_loc="lower left",
                       acc_legend_loc="lower right")

epoch = 1:

python
[Evaluate] best accuracy performence has been updated: 0.81120 --> 0.81880
[Train] epoch: 0/1, step: 190/196, loss: 0.31320
[Evaluate]  dev score: 0.81912, dev loss: 0.40167
[Evaluate] best accuracy performence has been updated: 0.81880 --> 0.81912
[Evaluate]  dev score: 0.81728, dev loss: 0.40298
[Train] Training done!
time:  124.63010001182556

The model has not converged yet: the losses on both the training set and the validation set are still falling, so I increased the number of epochs to 3.

epoch = 3:

python
[Train] epoch: 0/3, step: 0/588, loss: 0.69394
[Train] epoch: 0/3, step: 10/588, loss: 0.70491
[Evaluate]  dev score: 0.52592, dev loss: 0.68407
[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.52592
[Train] epoch: 0/3, step: 20/588, loss: 0.66260
[Evaluate]  dev score: 0.62080, dev loss: 0.66250
[Evaluate] best accuracy performence has been updated: 0.52592 --> 0.62080
[Train] epoch: 0/3, step: 30/588, loss: 0.61762
[Evaluate]  dev score: 0.62880, dev loss: 0.64522

......
.....
[Train] epoch: 1/3, step: 370/588, loss: 0.31063
[Evaluate]  dev score: 0.84656, dev loss: 0.36641
[Evaluate] best accuracy performence has been updated: 0.84272 --> 0.84656
[Train] epoch: 1/3, step: 380/588, loss: 0.30818
[Evaluate]  dev score: 0.84264, dev loss: 0.37259
[Train] epoch: 1/3, step: 390/588, loss: 0.19482
[Evaluate]  dev score: 0.84600, dev loss: 0.35535
......
[Evaluate] best accuracy performence has been updated: 0.85048 --> 0.85088
[Train] epoch: 2/3, step: 560/588, loss: 0.25688
[Evaluate]  dev score: 0.84792, dev loss: 0.37273
[Train] epoch: 2/3, step: 570/588, loss: 0.12472
[Evaluate]  dev score: 0.84856, dev loss: 0.36705
[Train] epoch: 2/3, step: 580/588, loss: 0.11621
[Evaluate]  dev score: 0.84848, dev loss: 0.38805
[Evaluate]  dev score: 0.84976, dev loss: 0.37620
[Train] Training done!
time:  356.31542706489563

The validation loss has flattened out and there are signs of overfitting, so there is no need to increase the number of epochs further; the validation accuracy kept climbing throughout training.

4 Model evaluation

python
# ======== Model evaluation =============
model_path = "./checkpoints/best.pdparams"
runner.load_model(model_path)
accuracy, _ =  runner.evaluate(test_loader)
print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")

epoch = 1:

python
Evaluate on test set, Accuracy: 0.81352

epoch = 3:

python
Evaluate on test set, Accuracy: 0.84704

Comparing the two runs, the model trained for 3 epochs performs better, reaching an accuracy of 0.847 on the test set.

5 Model prediction

python
# ======= Model prediction ==========
id2label={0:"消极情绪", 1:"积极情绪"}
text = "this movie is so great. I watched it three times already"
# Preprocess the single text
sentence = text.split(" ")
words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
words = words[:max_seq_len]
sequence_length = torch.tensor([len(words)], dtype=torch.int64)
words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
# Run the model to get a prediction
logits = runner.predict((words.to(device), sequence_length.to(device)))
max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
pred_label = id2label[max_label_id]
print("Label: ", pred_label)
python
Label:  积极情绪

The output shows that the model's prediction is correct.
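
For repeated use, the preprocessing above can be wrapped into a small helper. A minimal sketch (predict_sentiment is a hypothetical name; it reuses word2id_dict, max_seq_len, runner, device and id2label from the code above, and lower-cases the input to match the preprocessing applied when the training data was loaded):

python
def predict_sentiment(text):
    # Hypothetical convenience wrapper around the prediction steps shown above
    ids = [word2id_dict.get(w, word2id_dict['[UNK]']) for w in text.lower().split(" ")][:max_seq_len]
    seq_len = torch.tensor([len(ids)], dtype=torch.int64)
    ids = torch.tensor(ids, dtype=torch.int64).unsqueeze(0)
    logits = runner.predict((ids.to(device), seq_len.to(device)))
    return id2label[torch.argmax(logits, dim=-1).item()]

print(predict_sentiment("this movie is so great. i watched it three times already"))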

6 Complete code

python
'''
@Function: text classification with a bidirectional LSTM
@Author: lxy
@Date: 2024/12/12
'''
import os
import torch
import torch.nn as nn
from torch.utils.data import Dataset
from functools import partial
import random
import numpy as np
import time

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
'''
Dataset processing
'''
# Load the IMDB dataset
def load_imdb_data(path):
    assert os.path.exists(path)
    # Initialize the dataset lists
    trainset, devset, testset = [], [], []
    # Load the training data
    for label in ['pos', 'neg']:
        label_path = os.path.join(path, 'train', label)
        for filename in os.listdir(label_path):
            if filename.endswith('.txt'):
                with open(os.path.join(label_path, filename), 'r', encoding='utf-8') as f:
                    sentence = f.read().strip().lower()  # read and normalize each review
                    trainset.append((sentence, label))

    # Load the test data
    for label in ['pos', 'neg']:
        label_path = os.path.join(path, 'test', label)
        for filename in os.listdir(label_path):
            if filename.endswith('.txt'):
                with open(os.path.join(label_path, filename), 'r', encoding='utf-8') as f:
                    sentence = f.read().strip().lower()  # read and normalize each review
                    testset.append((sentence, label))

    # Randomly split half of the test set off as the validation set
    random.shuffle(testset)  # shuffle the test set
    split_index = len(testset) // 2  # index at which to split
    devset = testset[:split_index]  # first half becomes the validation set
    testset = testset[split_index:]  # remaining half stays as the test set

    return trainset, devset, testset
# Load the IMDB dataset
train_data, dev_data, test_data = load_imdb_data("./dataset/")

# # Print a sample to inspect the loaded data
# print(train_data[4])  # the 5th training example


class IMDBDataset(Dataset):
    def __init__(self, examples, word2id_dict):
        super(IMDBDataset, self).__init__()
        self.word2id_dict = word2id_dict
        self.examples = self.words_to_id(examples)

    def words_to_id(self, examples):
        tmp_examples = []
        for idx, example in enumerate(examples):
            seq, label = example
            # Map each word to its dictionary ID; words not in the vocabulary fall back to the [UNK] ID
            seq = [self.word2id_dict.get(word, self.word2id_dict['[UNK]']) for word in seq.split(" ")]

            # Map the label: 'pos' -> 1, 'neg' -> 0
            label = 1 if label == 'pos' else 0
            tmp_examples.append([seq, label])
        return tmp_examples

    def __getitem__(self, idx):
        seq, label = self.examples[idx]
        return seq, label

    def __len__(self):
        return len(self.examples)

# =============== ID mapping =====================
def load_vocab(path):
    assert os.path.exists(path)  # make sure the vocabulary file exists
    words = []  # list that will hold the vocabulary words
    with open(path, "r", encoding="utf-8") as f:  # open the file and read its contents
        words = f.readlines()  # read all lines
        words = [word.strip() for word in words if word.strip()]  # strip whitespace and drop empty lines
    word2id = dict(zip(words, range(len(words))))  # build a dict mapping each word to its ID
    return word2id  # return the dict


# Load the vocabulary
word2id_dict = load_vocab("./dataset/imdb.vocab")
# Instantiate the Datasets
train_set = IMDBDataset(train_data, word2id_dict)
dev_set = IMDBDataset(dev_data, word2id_dict)
test_set = IMDBDataset(test_data, word2id_dict)

# print('训练集样本数:', len(train_set))
# print('样本示例:', train_set[4])


def collate_fn(batch_data, pad_val=0, max_seq_len=256):
    seqs, seq_lens, labels = [], [], []
    max_len = 0
    for example in batch_data:
        seq, label = example
        # Truncate the sequence to max_seq_len
        seq = seq[:max_seq_len]
        # Store the truncated sequence
        seqs.append(seq)
        seq_lens.append(len(seq))
        labels.append(label)
        # Track the maximum length in this batch
        max_len = max(max_len, len(seq))
    # Pad every sequence up to the batch maximum length
    for i in range(len(seqs)):
        seqs[i] = seqs[i] + [pad_val] * (max_len - len(seqs[i]))
    return (torch.tensor(seqs).to(device), torch.tensor(seq_lens)), torch.tensor(labels).to(device)
# ======= Test ==============
# max_seq_len = 5
# batch_data = [[[1, 2, 3, 4, 5, 6], 1], [[2, 4, 6], 0]]
# (seqs, seq_lens), labels = collate_fn(batch_data, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
# print("seqs: ", seqs)
# print("seq_lens: ", seq_lens)
# print("labels: ", labels)

# =============== Wrap the DataLoaders =========================
max_seq_len = 256
batch_size = 128
collate_fn = partial(collate_fn, pad_val=word2id_dict["[PAD]"], max_seq_len=max_seq_len)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
                                           shuffle=True, drop_last=False, collate_fn=collate_fn)
dev_loader = torch.utils.data.DataLoader(dev_set, batch_size=batch_size,
                                         shuffle=False, drop_last=False, collate_fn=collate_fn)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
                                          shuffle=False, drop_last=False, collate_fn=collate_fn)
'''
End of dataset processing
'''

'''
Model construction
'''
# ====================== Pooling layer ====================
class AveragePooling(nn.Module):
    def __init__(self):
        super(AveragePooling, self).__init__()

    def forward(self, sequence_output, sequence_length):
        # sequence_length: [batch_size] tensor with the valid length of each sequence
        # (moved to the same device as sequence_output so the layer also runs on CPU)
        sequence_length = sequence_length.unsqueeze(-1).to(torch.float32).to(sequence_output.device)
        # Build a mask from sequence_length to mask out the information at the padding positions
        max_len = sequence_output.shape[1]

        mask = torch.arange(max_len, device=sequence_output.device) < sequence_length
        mask = mask.to(torch.float32).unsqueeze(-1)
        # Mask the padding part of the sequences
        sequence_output = torch.multiply(sequence_output, mask)
        # Average the vectors over the valid positions
        batch_mean_hidden = torch.divide(torch.sum(sequence_output, dim=1), sequence_length)
        return batch_mean_hidden
# =================== Full model =====================
class Model_BiLSTM_FC(nn.Module):
    def __init__(self, num_embeddings, input_size, hidden_size, num_classes=2):
        super(Model_BiLSTM_FC, self).__init__()
        # Vocabulary size
        self.num_embeddings = num_embeddings
        # Dimension of the word vectors
        self.input_size = input_size
        # Number of LSTM hidden units
        self.hidden_size = hidden_size
        # Number of sentiment classes
        self.num_classes = num_classes
        # Embedding layer
        self.embedding_layer = nn.Embedding(num_embeddings, input_size, padding_idx=0)
        # Bidirectional LSTM layer
        self.lstm_layer = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        # Pooling layer
        self.average_layer = AveragePooling()
        # Output layer
        self.output_layer = nn.Linear(hidden_size * 2, num_classes)

    def forward(self, inputs):
        # Split the model input into the ID sequences and their valid lengths
        input_ids, sequence_length = inputs
        # Look up the word vectors
        inputs_emb = self.embedding_layer(input_ids)
        # Pack the padded batch so the LSTM only processes the valid time steps
        packed_input = nn.utils.rnn.pack_padded_sequence(inputs_emb, sequence_length.cpu(), batch_first=True,
                                                         enforce_sorted=False)
        # Run the bidirectional LSTM
        packed_output, _ = self.lstm_layer(packed_input)
        # Unpack the output back into a padded tensor
        sequence_output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, batch_first=True)
        # Pool sequence_output over the valid positions
        batch_mean_hidden = self.average_layer(sequence_output, sequence_length)
        # Classification logits
        logits = self.output_layer(batch_mean_hidden)
        return logits
'''
End of model construction
'''
'''
Model training
'''
# =============== Model training ===================
from Runner import RunnerV3, Accuracy, plot_training_loss_acc
np.random.seed(0)
random.seed(0)
torch.manual_seed(0)

# Number of training epochs
num_epochs = 1
# Learning rate
learning_rate = 0.001
# Number of embeddings = vocabulary size
num_embeddings = len(word2id_dict)
# Dimension of the embedding vectors
input_size = 256
# Dimension of the LSTM hidden-state vectors
hidden_size = 256
# Path where the best model is saved
save_dir = "./checkpoints/best.pdparams"

# Instantiate the model
model = Model_BiLSTM_FC(num_embeddings, input_size, hidden_size).to(device)
# Optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=(0.9, 0.999))
# Loss function
loss_fn = nn.CrossEntropyLoss()
# Evaluation metric
metric = Accuracy()
# Instantiate the Runner
runner = RunnerV3(model, optimizer, loss_fn, metric)
# Train the model
start_time = time.time()
runner.train(train_loader, dev_loader, num_epochs=num_epochs, eval_steps=10, log_steps=10,
             save_path=save_dir)
end_time = time.time()
print("time: ", (end_time - start_time))
# ============ Plot the training/validation loss and the validation accuracy ===========
# sample_step: sampling interval of the training loss, i.e. plot one point every sample_step steps
# loss_legend_loc: legend position of the loss plot
# acc_legend_loc: legend position of the accuracy plot
plot_training_loss_acc(runner,  fig_size=(16, 6), sample_step=10, loss_legend_loc="lower left",
                       acc_legend_loc="lower right")
'''
End of model training
'''

# ==================== Model evaluation =============
model_path = "./checkpoints/best.pdparams"
runner.load_model(model_path)
accuracy, _ =  runner.evaluate(test_loader)
print(f"Evaluate on test set, Accuracy: {accuracy:.5f}")

# ===================== Model prediction ==========
id2label={0:"消极情绪", 1:"积极情绪"}
text = "this movie is so great. I watched it three times already"
# Preprocess the single text
sentence = text.split(" ")
words = [word2id_dict[word] if word in word2id_dict else word2id_dict['[UNK]'] for word in sentence]
words = words[:max_seq_len]
sequence_length = torch.tensor([len(words)], dtype=torch.int64)
words = torch.tensor(words, dtype=torch.int64).unsqueeze(0)
# Run the model to get a prediction
logits = runner.predict((words.to(device), sequence_length.to(device)))
max_label_id = torch.argmax(logits, dim=-1).cpu().numpy()[0]
pred_label = id2label[max_label_id]
print("Label: ", pred_label)

7 Extension experiment

Link: text classification with a bidirectional LSTM and an attention mechanism
