Week R3: RNN for Heart Disease Prediction (PyTorch)

1. Preliminaries

1.1. Set Up the Device

import numpy as np
import pandas as pd
import torch
from torch import nn
import torch.nn.functional as F
import seaborn as sns

# Use the GPU for training if available; otherwise fall back to the CPU
device=torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

Output:

device(type='cuda')

1.2. Load the Data

df=pd.read_csv("E:/DATABASE/RNN/R3/heart.csv")
df

Output:

2. Build the Dataset

2.1. Standardization

from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

x=df.iloc[:,:-1]
y=df.iloc[:,-1]

# Standardize each feature column to zero mean and unit variance; note that standardization is column-wise
sc=StandardScaler()
x=sc.fit_transform(x)
x.shape,y.shape

Output:

((303, 13), (303,))
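`StandardScaler` learns each column's mean and standard deviation from the data, then rescales that column to zero mean and unit variance. A minimal sketch on a synthetic array (not the heart dataset) verifies this:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# hypothetical feature matrix: 4 samples, 2 columns on very different scales
x_raw = np.array([[1.0, 100.0],
                  [2.0, 200.0],
                  [3.0, 300.0],
                  [4.0, 400.0]])

sc = StandardScaler()
x_std = sc.fit_transform(x_raw)  # standardize column-wise

# each column now has zero mean and unit variance
print(np.allclose(x_std.mean(axis=0), 0))  # True
print(np.allclose(x_std.std(axis=0), 1))   # True
```

Because the statistics are per-column, features with very different units (e.g. age vs. cholesterol) end up on a comparable scale before entering the network.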

2.2. Split the Dataset

x=torch.tensor(np.array(x),dtype=torch.float32)
y=torch.tensor(np.array(y),dtype=torch.int64)

x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.1,random_state=1)

x_train.shape,y_train.shape

Output:

(torch.Size([272, 13]), torch.Size([272]))
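With `test_size=0.1`, scikit-learn rounds the test split up, so 303 samples yield 31 test and 272 training samples, matching the shapes above. A quick check on a plain index array:

```python
import numpy as np
from sklearn.model_selection import train_test_split

idx = np.arange(303)  # stand-in for the 303 samples
idx_train, idx_test = train_test_split(idx, test_size=0.1, random_state=1)

# ceil(303 * 0.1) = 31 test samples, the remaining 272 go to training
print(len(idx_train), len(idx_test))  # -> 272 31
```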

2.3. Build the DataLoaders

from torch.utils.data import TensorDataset,DataLoader

train_dl=DataLoader(TensorDataset(x_train,y_train),batch_size=64,shuffle=True)
test_dl=DataLoader(TensorDataset(x_test,y_test),batch_size=64,shuffle=False)
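With `batch_size=64` and 272 training samples, the loader yields four full batches of 64 and one final batch of 16. A small sketch with random tensors of the same shapes (the data here is synthetic):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

x_fake = torch.rand(272, 13)          # same shape as x_train
y_fake = torch.randint(0, 2, (272,))  # binary labels
dl = DataLoader(TensorDataset(x_fake, y_fake), batch_size=64, shuffle=True)

sizes = [xb.shape[0] for xb, yb in dl]
print(sizes)  # -> [64, 64, 64, 64, 16]
```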

3. Model Training

3.1. Build the Model

class model_rnn(nn.Module):
    def __init__(self):
        super(model_rnn,self).__init__()
        self.rnn0=nn.RNN(input_size=13,hidden_size=200,
                         num_layers=1,batch_first=True)
        self.fc0=nn.Linear(200,64)
        self.fc1=nn.Linear(64,2)
        
    def forward(self,x):
        out,hidden1=self.rnn0(x)
        out=self.fc0(out)
        out=self.fc1(out)
        return out
    
model=model_rnn().to(device)
model

Output:

model_rnn(
  (rnn0): RNN(13, 200, batch_first=True)
  (fc0): Linear(in_features=200, out_features=64, bias=True)
  (fc1): Linear(in_features=64, out_features=2, bias=True)
)

Check the shape of the model's output:

model(torch.rand(30,13).to(device)).shape

Output:

torch.Size([30, 2])
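It is worth noting how `nn.RNN` interprets this input: a 2-D tensor is treated as an unbatched sequence, so the `(30, 13)` input is read as 30 time steps of 13 features each, and the RNN emits one 200-dimensional hidden state per step before the linear layers map it to 2 classes. This can be verified on the RNN layer alone:

```python
import torch
from torch import nn

rnn = nn.RNN(input_size=13, hidden_size=200, num_layers=1, batch_first=True)

# a 2-D input is treated as an unbatched sequence: (seq_len=30, input_size=13)
out, hidden = rnn(torch.rand(30, 13))

print(out.shape)     # -> torch.Size([30, 200])  one hidden state per step
print(hidden.shape)  # -> torch.Size([1, 200])   final hidden state
```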

3.2. Define the Training Function

# Training loop
def train(dataloader,model,loss_fn,optimizer):
    size=len(dataloader.dataset) # size of the training set
    num_batches=len(dataloader) # number of batches (ceil(size / batch_size))
    
    train_loss,train_acc=0,0 # initialize training loss and accuracy
    
    for x,y in dataloader: # fetch a batch of features and labels
        x,y=x.to(device),y.to(device)
        
        # compute the prediction error
        pred=model(x) # network output
        loss=loss_fn(pred,y) # loss between the network output and the true labels
        
        # backpropagation
        optimizer.zero_grad() # zero out gradients
        loss.backward() # backward pass
        optimizer.step() # update parameters
        
        # accumulate acc and loss
        train_acc+=(pred.argmax(1)==y).type(torch.float).sum().item()
        train_loss+=loss.item()
        
    train_acc/=size
    train_loss/=num_batches
    
    return train_acc,train_loss
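The `zero_grad` / `backward` / `step` pattern inside the loop is what actually moves the weights. One step on a toy linear model (the model and data here are hypothetical stand-ins) shows the parameters change:

```python
import torch
from torch import nn

torch.manual_seed(0)
toy = nn.Linear(13, 2)  # stand-in model
opt_toy = torch.optim.Adam(toy.parameters(), lr=1e-4)
loss_fn_toy = nn.CrossEntropyLoss()

x_b = torch.rand(8, 13)
y_b = torch.randint(0, 2, (8,))

before = toy.weight.detach().clone()

opt_toy.zero_grad()                 # zero out gradients
loss = loss_fn_toy(toy(x_b), y_b)
loss.backward()                     # backward pass
opt_toy.step()                      # update parameters

print(torch.equal(before, toy.weight.detach()))  # False: the weights moved
```

Forgetting `zero_grad()` would accumulate gradients across batches, which is the most common bug in hand-written loops like this one.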

3.3. Define the Test Function

# Test function
def test(dataloader,model,loss_fn):
    size=len(dataloader.dataset) # size of the test set
    num_batches=len(dataloader) # number of batches (ceil(size / batch_size))
    test_loss,test_acc=0,0
    
    # disable gradient tracking during evaluation to save memory and compute
    with torch.no_grad():
        for x,y in dataloader:
            x,y=x.to(device),y.to(device)
            
            # compute the loss
            pred=model(x)
            loss=loss_fn(pred,y)
            
            test_loss+=loss.item()
            test_acc+=(pred.argmax(1)==y).type(torch.float).sum().item()
            
    test_acc/=size
    test_loss/=num_batches
    
    return test_acc,test_loss

3.4. Train the Model

loss_fn=nn.CrossEntropyLoss() # loss function
learn_rate=1e-4 # learning rate
opt=torch.optim.Adam(model.parameters(),lr=learn_rate)
epochs=50

train_acc=[]
train_loss=[]
test_acc=[]
test_loss=[]

for epoch in range(epochs):
    model.train()
    epoch_train_acc,epoch_train_loss=train(train_dl,model,loss_fn,opt)
    
    model.eval()
    epoch_test_acc,epoch_test_loss=test(test_dl,model,loss_fn)
    
    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)
    
# fetch the current learning rate
    lr=opt.state_dict()['param_groups'][0]['lr']
    
    template=('Epoch:{:2d},Train_acc:{:.1f}%,Train_loss:{:.3f},Test_acc:{:.1f}%,Test_loss:{:.3f},Lr:{:.2E}')
    print(template.format(epoch+1,epoch_train_acc*100,epoch_train_loss,
                          epoch_test_acc*100,epoch_test_loss,lr))
    
print("="*20,'Done',"="*20)

Output:

Epoch: 1,Train_acc:48.2%,Train_loss:0.699,Test_acc:54.8%,Test_loss:0.690,Lr:1.00E-04
Epoch: 2,Train_acc:57.4%,Train_loss:0.688,Test_acc:74.2%,Test_loss:0.675,Lr:1.00E-04
Epoch: 3,Train_acc:63.6%,Train_loss:0.674,Test_acc:77.4%,Test_loss:0.661,Lr:1.00E-04
Epoch: 4,Train_acc:73.2%,Train_loss:0.659,Test_acc:87.1%,Test_loss:0.648,Lr:1.00E-04
Epoch: 5,Train_acc:73.2%,Train_loss:0.650,Test_acc:90.3%,Test_loss:0.634,Lr:1.00E-04
Epoch: 6,Train_acc:76.1%,Train_loss:0.636,Test_acc:90.3%,Test_loss:0.620,Lr:1.00E-04
Epoch: 7,Train_acc:75.0%,Train_loss:0.630,Test_acc:90.3%,Test_loss:0.607,Lr:1.00E-04
Epoch: 8,Train_acc:78.3%,Train_loss:0.614,Test_acc:87.1%,Test_loss:0.592,Lr:1.00E-04
Epoch: 9,Train_acc:77.6%,Train_loss:0.602,Test_acc:87.1%,Test_loss:0.577,Lr:1.00E-04
Epoch:10,Train_acc:80.5%,Train_loss:0.595,Test_acc:90.3%,Test_loss:0.562,Lr:1.00E-04
Epoch:11,Train_acc:79.0%,Train_loss:0.584,Test_acc:90.3%,Test_loss:0.546,Lr:1.00E-04
Epoch:12,Train_acc:79.4%,Train_loss:0.565,Test_acc:90.3%,Test_loss:0.531,Lr:1.00E-04
Epoch:13,Train_acc:80.9%,Train_loss:0.558,Test_acc:90.3%,Test_loss:0.515,Lr:1.00E-04
Epoch:14,Train_acc:79.8%,Train_loss:0.550,Test_acc:90.3%,Test_loss:0.500,Lr:1.00E-04
Epoch:15,Train_acc:82.4%,Train_loss:0.529,Test_acc:90.3%,Test_loss:0.485,Lr:1.00E-04
Epoch:16,Train_acc:80.9%,Train_loss:0.537,Test_acc:90.3%,Test_loss:0.471,Lr:1.00E-04
Epoch:17,Train_acc:81.2%,Train_loss:0.522,Test_acc:90.3%,Test_loss:0.458,Lr:1.00E-04
Epoch:18,Train_acc:80.9%,Train_loss:0.515,Test_acc:90.3%,Test_loss:0.446,Lr:1.00E-04
Epoch:19,Train_acc:82.4%,Train_loss:0.503,Test_acc:90.3%,Test_loss:0.432,Lr:1.00E-04
Epoch:20,Train_acc:81.6%,Train_loss:0.494,Test_acc:90.3%,Test_loss:0.419,Lr:1.00E-04
Epoch:21,Train_acc:81.6%,Train_loss:0.499,Test_acc:90.3%,Test_loss:0.405,Lr:1.00E-04
Epoch:22,Train_acc:84.6%,Train_loss:0.470,Test_acc:90.3%,Test_loss:0.394,Lr:1.00E-04
Epoch:23,Train_acc:82.0%,Train_loss:0.475,Test_acc:90.3%,Test_loss:0.383,Lr:1.00E-04
Epoch:24,Train_acc:82.7%,Train_loss:0.464,Test_acc:90.3%,Test_loss:0.374,Lr:1.00E-04
Epoch:25,Train_acc:82.4%,Train_loss:0.448,Test_acc:90.3%,Test_loss:0.365,Lr:1.00E-04
Epoch:26,Train_acc:82.4%,Train_loss:0.438,Test_acc:90.3%,Test_loss:0.356,Lr:1.00E-04
Epoch:27,Train_acc:83.8%,Train_loss:0.418,Test_acc:90.3%,Test_loss:0.350,Lr:1.00E-04
Epoch:28,Train_acc:81.2%,Train_loss:0.448,Test_acc:90.3%,Test_loss:0.344,Lr:1.00E-04
Epoch:29,Train_acc:82.7%,Train_loss:0.402,Test_acc:90.3%,Test_loss:0.340,Lr:1.00E-04
Epoch:30,Train_acc:84.9%,Train_loss:0.412,Test_acc:90.3%,Test_loss:0.336,Lr:1.00E-04
Epoch:31,Train_acc:84.2%,Train_loss:0.394,Test_acc:90.3%,Test_loss:0.333,Lr:1.00E-04
Epoch:32,Train_acc:83.8%,Train_loss:0.420,Test_acc:90.3%,Test_loss:0.330,Lr:1.00E-04
Epoch:33,Train_acc:85.3%,Train_loss:0.400,Test_acc:90.3%,Test_loss:0.326,Lr:1.00E-04
Epoch:34,Train_acc:85.3%,Train_loss:0.407,Test_acc:90.3%,Test_loss:0.325,Lr:1.00E-04
Epoch:35,Train_acc:81.6%,Train_loss:0.420,Test_acc:90.3%,Test_loss:0.322,Lr:1.00E-04
Epoch:36,Train_acc:83.8%,Train_loss:0.399,Test_acc:90.3%,Test_loss:0.321,Lr:1.00E-04
Epoch:37,Train_acc:84.6%,Train_loss:0.394,Test_acc:90.3%,Test_loss:0.318,Lr:1.00E-04
Epoch:38,Train_acc:83.5%,Train_loss:0.358,Test_acc:90.3%,Test_loss:0.317,Lr:1.00E-04
Epoch:39,Train_acc:83.5%,Train_loss:0.405,Test_acc:90.3%,Test_loss:0.317,Lr:1.00E-04
Epoch:40,Train_acc:83.1%,Train_loss:0.382,Test_acc:87.1%,Test_loss:0.317,Lr:1.00E-04
Epoch:41,Train_acc:84.9%,Train_loss:0.357,Test_acc:87.1%,Test_loss:0.314,Lr:1.00E-04
Epoch:42,Train_acc:83.5%,Train_loss:0.407,Test_acc:87.1%,Test_loss:0.311,Lr:1.00E-04
Epoch:43,Train_acc:84.9%,Train_loss:0.402,Test_acc:90.3%,Test_loss:0.309,Lr:1.00E-04
Epoch:44,Train_acc:83.1%,Train_loss:0.360,Test_acc:90.3%,Test_loss:0.307,Lr:1.00E-04
Epoch:45,Train_acc:84.9%,Train_loss:0.368,Test_acc:90.3%,Test_loss:0.306,Lr:1.00E-04
Epoch:46,Train_acc:84.9%,Train_loss:0.380,Test_acc:90.3%,Test_loss:0.307,Lr:1.00E-04
Epoch:47,Train_acc:84.2%,Train_loss:0.362,Test_acc:90.3%,Test_loss:0.308,Lr:1.00E-04
Epoch:48,Train_acc:83.8%,Train_loss:0.351,Test_acc:90.3%,Test_loss:0.307,Lr:1.00E-04
Epoch:49,Train_acc:84.2%,Train_loss:0.372,Test_acc:90.3%,Test_loss:0.305,Lr:1.00E-04
Epoch:50,Train_acc:83.8%,Train_loss:0.418,Test_acc:90.3%,Test_loss:0.303,Lr:1.00E-04
==================== Done ====================

4. Model Evaluation

4.1. Loss and Accuracy Curves

import matplotlib.pyplot as plt
from datetime import datetime
# suppress warnings
import warnings
warnings.filterwarnings("ignore") # ignore warning messages

current_time=datetime.now() # current timestamp

plt.rcParams['font.sans-serif']=['SimHei'] # display Chinese labels correctly
plt.rcParams['axes.unicode_minus']=False # display minus signs correctly
plt.rcParams['figure.dpi']=300 # figure resolution

epochs_range=range(epochs)
plt.figure(figsize=(12,3))

plt.subplot(1,2,1)
plt.plot(epochs_range,train_acc,label='Training Accuracy')
plt.plot(epochs_range,test_acc,label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.xlabel(current_time)

plt.subplot(1,2,2)
plt.plot(epochs_range,train_loss,label='Training Loss')
plt.plot(epochs_range,test_loss,label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

Output:

4.2. Confusion Matrix

print("="*20,'Input shapes',"="*20)
print("x_test.shape:",x_test.shape)
print("y_test.shape:",y_test.shape)

# run inference in eval mode without gradient tracking
model.eval()
with torch.no_grad():
    pred=model(x_test.to(device)).argmax(1).cpu().numpy()

print("="*20,'Output shape',"="*20)
print("pred.shape:",pred.shape)

Output:

==================== Input shapes ====================
x_test.shape: torch.Size([31, 13])
y_test.shape: torch.Size([31])
==================== Output shape ====================
pred.shape: (31,)

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# compute the confusion matrix
cm=confusion_matrix(y_test,pred)

plt.figure(figsize=(6,5))
sns.heatmap(cm,annot=True,fmt="d",cmap="Blues")

# adjust font sizes
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title("Confusion Matrix",fontsize=12)
plt.xlabel("Predicted Label",fontsize=10)
plt.ylabel("True Label",fontsize=10)

# show the figure
plt.tight_layout() # adjust layout to prevent overlap
plt.show()

Output:
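Beyond the heatmap, the four cells of a binary confusion matrix yield the standard clinical metrics directly. A minimal sketch with hypothetical labels (not the actual test set):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# hypothetical true labels and predictions
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y_hat  = np.array([0, 1, 0, 1, 1, 0, 1, 0])

# sklearn's binary confusion matrix flattens as (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_hat).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # recall for the positive (disease) class
specificity = tn / (tn + fp)

print(accuracy, sensitivity, specificity)  # -> 0.75 0.75 0.75
```

For a medical screening task, sensitivity (how many true patients are caught) often matters more than raw accuracy, which is why the confusion matrix is worth reading cell by cell.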

5. Takeaways

In the original model, the fully connected layers were:

self.fc0=nn.Linear(200,50)
self.fc1=nn.Linear(50,2)

With that configuration, the accuracy stayed stuck at 87%. After widening the fully connected layers to (200, 64), the accuracy reached 90%, meeting the initial target of at least 89%.
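The change from width 50 to width 64 is a modest capacity increase in the classifier head, as a quick parameter count shows:

```python
from torch import nn

def head_params(width):
    # parameter count of the two fully connected layers for a given hidden width
    head = nn.Sequential(nn.Linear(200, width), nn.Linear(width, 2))
    return sum(p.numel() for p in head.parameters())

print(head_params(50))  # -> 10152  (200*50 + 50 + 50*2 + 2)
print(head_params(64))  # -> 12994  (200*64 + 64 + 64*2 + 2)
```

With a dataset of only 303 samples, small changes in head capacity (here about 28% more parameters) can plausibly shift accuracy by a few points, though run-to-run variance from initialization likely contributes as well.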
