ALBERT-Based Sentiment Analysis of Car Reviews [with Code]

Car Review Sentiment Analysis

Car Review Sentiment Dataset

Link: https://pan.baidu.com/s/1K5TWrXbXBRXkCUpMbZq2XA

Extraction code: 9mt9

Code

Loading Libraries and Setting Parameters

First, load the basic libraries:

python
import os
import random

import numpy as np
import pandas as pd
import torch
from sklearn.metrics import classification_report, accuracy_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from torch.optim import AdamW  # transformers' own AdamW is deprecated; use torch's implementation
from torch.utils.data import DataLoader
from tqdm import tqdm
from transformers import BertTokenizerFast, AutoModelForSequenceClassification

Fix the random seeds so that experiments are reproducible:

python
# Fix the random seeds
seed = 42

random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
os.environ["CUDA_VISIBLE_DEVICES"] = '0'  # which GPU(s) are visible

# Train on GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
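
For stricter run-to-run reproducibility on GPU, cuDNN's behavior can also be pinned down. This is optional and may cost some training speed:

python
# Optional: make cuDNN deterministic (slower, but removes one source of run-to-run variance)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False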

This is where the GPU or CPU is chosen. If your machine has more than one GPU, change

python
os.environ["CUDA_VISIBLE_DEVICES"] = '0'

to

os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
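
With several GPUs visible, one simple option is to wrap the model in torch.nn.DataParallel after it has been created (a minimal sketch; `model` here refers to the model loaded later in this post, and single-device training works without this):

python
# Optional sketch: replicate the model across all visible GPUs.
# Note: with DataParallel, the loss returned by a Hugging Face model is a
# vector with one entry per GPU, so call loss.mean() before backward().
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)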

In sentiment analysis there are usually three labels: negative, neutral, and positive. For finer-grained labels, such as the one-to-five-star ratings found in e-commerce reviews, simply adjust the dictionary to match (a five-class sketch follows the block below).

python
text_lst = []
label_lst = []
label2id = {"消极": 0, "中性": 1, "积极": 2}  # negative / neutral / positive
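
For instance, a five-star rating scheme could be encoded like this (the star labels below are hypothetical placeholders, not from this dataset):

python
# Hypothetical finer-grained mapping for one-to-five-star reviews
label2id = {"一星": 0, "二星": 1, "三星": 2, "四星": 3, "五星": 4}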

Reading the Dataset

Read the dataset and check that the number of texts equals the number of labels. Adapt this part to your own data layout as needed.

python
# Read one file per sentiment class; each line is one review
for path, label_name in [('negative.txt', '消极'),
                         ('neutral.txt', '中性'),
                         ('positive.txt', '积极')]:
    with open(path, 'r', encoding='utf-8') as f:
        for line in f:
            text_lst.append(line.strip('\n'))
            label_lst.append(label2id[label_name])

# Print the number of texts and labels and confirm they match
print(len(text_lst))
print(len(label_lst))
assert len(text_lst) == len(label_lst)
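
Since pandas is already imported, it is also easy to sanity-check the class balance before training (a quick sketch over the lists built above):

python
# Class distribution as id -> count (0: negative, 1: neutral, 2: positive)
print(pd.Series(label_lst).value_counts().sort_index())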

Hyperparameter Settings and Dataset Construction

With the dataset ready, initialize the hyperparameters and do the remaining setup before training. In most cases the only parameters you will need to change are batch_size and num_epochs; the rest can stay as they are. Following the original ALBERT paper, the learning rate should be on the order of 1e-5, so the default can be kept (tuning it rarely yields much improvement).

In this experiment the data is split 4:1 into a training set and a validation set. We use the albert_base_chinese model, which can be downloaded from Hugging Face (an alternative loading sketch follows the block below).

python
# Hyperparameters
num_labels = len(set(label_lst))  # adapts to the number of classes in the dataset
max_seq_length = 256
batch_size = 16
learning_rate = 2e-5
num_epochs = 10
accumulation_steps = 4  # effective batch size = batch_size * accumulation_steps = 64

# Load the tokenizer and model
model_name = './albert_base_chinese'
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels).to(device)
model.resize_token_embeddings(len(tokenizer))

# Split into 80% training and 20% validation data
X_train, X_test, y_train, y_test = train_test_split(text_lst, label_lst, test_size=0.2, random_state=42, shuffle=True)

# Encode the training and validation texts
train_encodings = tokenizer(X_train, truncation=True, padding=True, max_length=max_seq_length, return_tensors="pt")
eval_encodings = tokenizer(X_test, truncation=True, padding=True, max_length=max_seq_length, return_tensors="pt")

# Wrap the encodings and labels in a Dataset
class ClassificationDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Keep tensors on the CPU here; batches are moved to `device` in the training loop
        item = {key: val[idx] for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

train_dataset = ClassificationDataset(train_encodings, y_train)
eval_dataset = ClassificationDataset(eval_encodings, y_test)
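
If you would rather pull the weights from the Hugging Face Hub than from a local directory, a checkpoint such as voidful/albert_chinese_base can be used (the Hub id is an assumption here; verify it on the Hub before relying on it). Note that Chinese ALBERT checkpoints generally ship a BERT-style vocabulary, which is why this post loads the tokenizer with BertTokenizerFast rather than an ALBERT tokenizer:

python
# Assumed Hub id -- check https://huggingface.co before relying on it
model_name = "voidful/albert_chinese_base"
tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels).to(device)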

Model Training and Validation

python
# Data loaders for training and validation
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
eval_loader = DataLoader(eval_dataset, batch_size=batch_size, shuffle=False)

# Optimizer; the model computes the cross-entropy loss internally when `labels` are passed
optimizer = AdamW(model.parameters(), lr=learning_rate)

# Make sure the checkpoint directory exists
os.makedirs('./output', exist_ok=True)

# Training loop
model.train()
optimizer.zero_grad()
total_steps = len(train_loader) * num_epochs  # total number of training steps
current_step = 0  # current step counter
best_f1 = 0
for epoch in range(num_epochs):
    # Reset the evaluation buffers for this epoch
    all_eval_predictions = []
    all_eval_true_labels = []
    model.train()
    train_loss = 0.0
    progress_bar = tqdm(train_loader, desc="Epoch {}/{}".format(epoch + 1, num_epochs))
    for i, batch in enumerate(progress_bar):
        # Move the batch to the GPU or CPU
        input_ids = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        labels = batch["labels"].to(device)

        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss

        # Gradient accumulation: scale the loss and only step every accumulation_steps batches
        loss = loss / accumulation_steps
        loss.backward()

        if (i + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()

        train_loss += loss.item()

        # Update the step counter and the progress display
        current_step += 1
        progress = current_step / total_steps
        progress_bar.set_postfix({"Training Loss": train_loss / (i + 1), "Progress": "{:.1%}".format(progress)})

    # Flush gradients left over when len(train_loader) is not divisible by accumulation_steps
    if len(train_loader) % accumulation_steps != 0:
        optimizer.step()
        optimizer.zero_grad()

    # Validation loop
    model.eval()
    for eval_batch in tqdm(eval_loader, desc="Evaluating"):
        # Move the batch to the GPU or CPU
        eval_input_ids = eval_batch["input_ids"].to(device)
        eval_attention_mask = eval_batch["attention_mask"].to(device)
        eval_labels = eval_batch["labels"].to(device)

        with torch.no_grad():
            eval_outputs = model(input_ids=eval_input_ids, attention_mask=eval_attention_mask)

        eval_logits = eval_outputs.logits
        eval_predictions = torch.argmax(eval_logits, dim=1).tolist()

        all_eval_predictions.extend(eval_predictions)
        all_eval_true_labels.extend(eval_labels.tolist())

    # Compute validation metrics: accuracy, recall, and F1 (as percentages)
    eval_accuracy = accuracy_score(all_eval_true_labels, all_eval_predictions) * 100
    eval_recall = recall_score(all_eval_true_labels, all_eval_predictions, average='weighted') * 100
    eval_f1 = f1_score(all_eval_true_labels, all_eval_predictions, average='weighted') * 100

    # Keep the checkpoint with the best validation F1
    if eval_f1 > best_f1:
        best_f1 = eval_f1
        torch.save(model.state_dict(), './output/best_model.bin')

    # Print the validation metrics
    print("Validation Accuracy: {:.2f}%".format(eval_accuracy))
    print("Validation Recall: {:.2f}%".format(eval_recall))
    print("Validation F1: {:.2f}%".format(eval_f1))
    print("classification_report:\n", classification_report(all_eval_true_labels, all_eval_predictions))