Deep Learning with PyTorch (Part 4)

1. Convolutional Neural Network Model

torch.Tensor.view --- PyTorch 2.2 documentation
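The model itself was defined earlier in the series; its printed repr appears in the output of section 2 below. Here is a minimal sketch that reproduces those layer shapes, assuming 1×28×28 MNIST inputs, ReLU activations, and 2×2 max pooling after each convolution (the pooling and activations are assumptions; the layer names and sizes are read off the repr, and `view` flattens the 16×5×5 feature maps into the 400 features that fc1 expects):

```python
import torch.nn as nn
import torch.nn.functional as F

class ConvolutionalNetwork(nn.Module):
    # Layer names and shapes follow the repr printed in section 2 below
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 3, 1)
        self.conv2 = nn.Conv2d(6, 16, 3, 1)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # 400 features after two conv+pool stages
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, X):
        X = F.max_pool2d(F.relu(self.conv1(X)), 2)  # 1x28x28 -> 6x26x26 -> 6x13x13
        X = F.max_pool2d(F.relu(self.conv2(X)), 2)  # 6x13x13 -> 16x11x11 -> 16x5x5
        X = X.view(-1, 16 * 5 * 5)                  # flatten with view for the linear layers
        X = F.relu(self.fc1(X))
        X = F.relu(self.fc2(X))
        return F.log_softmax(self.fc3(X), dim=1)    # log-probabilities over the 10 classes
```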

In a neural network, activation functions such as ReLU introduce non-linearity, which lets the network learn and model complex function mappings. ReLU (Rectified Linear Unit) is widely used, especially in hidden layers, because it is simple and efficient. Whether to apply an activation function on the network's final layer, however, depends on the task (a short sketch of these choices follows below):

  1. For classification tasks

    • For multi-class classification, the last layer usually applies softmax, which converts the outputs into a probability distribution where the class probabilities sum to 1.
    • For binary classification, sigmoid is often used to squash the output between 0 and 1 so it can be read as a probability.
  2. For regression tasks

    • The last layer usually has no activation function, because we want to predict a continuous value rather than restrict it to a particular range (ReLU, for example, sets all negative values to 0, which is generally unsuitable for regression).

LogSoftmax --- PyTorch 2.2 documentation
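As a concrete illustration of these output-layer choices, a minimal sketch (the 84-unit input size matches fc3 above; the head names are hypothetical):

```python
import torch
import torch.nn as nn

# Multi-class head: LogSoftmax yields log-probabilities, pairing with nn.NLLLoss
classifier_head = nn.Sequential(nn.Linear(84, 10), nn.LogSoftmax(dim=1))

# Binary head: sigmoid squashes the output into (0, 1), read as a probability
binary_head = nn.Sequential(nn.Linear(84, 1), nn.Sigmoid())

# Regression head: no activation, so predictions stay unbounded
regression_head = nn.Linear(84, 1)

x = torch.randn(4, 84)                      # a batch of 4 feature vectors
print(classifier_head(x).exp().sum(dim=1))  # each row sums to 1: a probability distribution
print(binary_head(x).flatten())             # values strictly between 0 and 1
print(regression_head(x).flatten())         # arbitrary real values
```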

2. Train and Test the CNN Model
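The loop below relies on `model`, `criterion`, `optimizer`, `train_loader`, and `test_loader` being set up earlier in the series. A minimal setup sketch, assuming MNIST and the `ConvolutionalNetwork` sketched in section 1; the batch size of 10 is inferred from the 6,000 training batches per epoch in the log below, while the loss and optimizer choices are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

# Batch size 10 gives the 6,000 training batches per epoch seen in the log below
train_loader = DataLoader(train_data, batch_size=10, shuffle=True)
test_loader = DataLoader(test_data, batch_size=10, shuffle=False)

model = ConvolutionalNetwork()  # the sketch from section 1
criterion = nn.NLLLoss()        # pairs with the LogSoftmax head; use CrossEntropyLoss for raw logits
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```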

```python
import time
start_time = time.time()

# Create variables to track results
epochs = 5
train_losses = []
test_losses = []
train_correct = []
test_correct = []

# Loop over epochs (model, criterion, optimizer and the loaders are defined above)
for i in range(epochs):
    trn_corr = 0
    tst_corr = 0

    # Train
    for b, (X_train, y_train) in enumerate(train_loader):
        b += 1  # start the batch count at 1
        y_pred = model(X_train)  # predictions for the training batch (not flattened)
        loss = criterion(y_pred, y_train)  # how far off we are: compare predictions to y_train

        predicted = torch.max(y_pred.data, 1)[1]  # index of the largest output is the predicted class
        batch_corr = (predicted == y_train).sum()  # how many we got correct in this batch
        trn_corr += batch_corr  # keep a running total for the epoch

        # Update our parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Print interim results
        if b % 600 == 0:
            print(f'Epoch: {i} Batch: {b} Loss:{loss.item()}')

    train_losses.append(loss.item())  # .item() detaches the value from the graph
    train_correct.append(trn_corr)

    # Test
    with torch.no_grad():  # no gradients, so we don't update the weights and biases
        for b, (X_test, y_test) in enumerate(test_loader):
            y_val = model(X_test)
            predicted = torch.max(y_val.data, 1)[1]  # count the correct predictions
            tst_corr += (predicted == y_test).sum()

        loss = criterion(y_val, y_test)  # loss on the last test batch only
    test_losses.append(loss.item())
    test_correct.append(tst_corr)

current_time = time.time()
total = current_time - start_time
print(f'Training Took: {total/60} minutes!')
```
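After training, the tracked per-epoch counts can be turned into accuracies. A hypothetical summary, assuming the standard MNIST split of 60,000 training and 10,000 test images:

```python
# Convert the per-epoch correct counts into accuracies (MNIST split sizes assumed)
for epoch, (trn, tst) in enumerate(zip(train_correct, test_correct)):
    print(f'Epoch {epoch}: train acc {trn.item()/60000:.4f}  test acc {tst.item()/10000:.4f}')
```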

Training and testing output:

```
ConvolutionalNetwork(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
Epoch: 0 Batch: 600 Loss:0.16236098110675812
Epoch: 0 Batch: 1200 Loss:0.16147294640541077
Epoch: 0 Batch: 1800 Loss:0.46548572182655334
Epoch: 0 Batch: 2400 Loss:0.14589160680770874
Epoch: 0 Batch: 3000 Loss:0.006830060388892889
Epoch: 0 Batch: 3600 Loss:0.4129134714603424
Epoch: 0 Batch: 4200 Loss:0.004275710787624121
Epoch: 0 Batch: 4800 Loss:0.002969620516523719
Epoch: 0 Batch: 5400 Loss:0.04636438935995102
Epoch: 0 Batch: 6000 Loss:0.000430782965850085
Epoch: 1 Batch: 600 Loss:0.002715964335948229
Epoch: 1 Batch: 1200 Loss:0.17854242026805878
Epoch: 1 Batch: 1800 Loss:0.0020668990910053253
Epoch: 1 Batch: 2400 Loss:0.0038429438136518
Epoch: 1 Batch: 3000 Loss:0.03475978597998619
Epoch: 1 Batch: 3600 Loss:0.2954908013343811
Epoch: 1 Batch: 4200 Loss:0.02363143488764763
Epoch: 1 Batch: 4800 Loss:0.00022474219440482557
Epoch: 1 Batch: 5400 Loss:0.0005058477981947362
Epoch: 1 Batch: 6000 Loss:0.29113149642944336
Epoch: 2 Batch: 600 Loss:0.11854789406061172
Epoch: 2 Batch: 1200 Loss:0.003075268818065524
Epoch: 2 Batch: 1800 Loss:0.0007867529056966305
Epoch: 2 Batch: 2400 Loss:0.025718092918395996
Epoch: 2 Batch: 3000 Loss:0.020713506266474724
Epoch: 2 Batch: 3600 Loss:0.0005251148249953985
Epoch: 2 Batch: 4200 Loss:0.02623259648680687
Epoch: 2 Batch: 4800 Loss:0.0008421383099630475
Epoch: 2 Batch: 5400 Loss:0.12240316718816757
Epoch: 2 Batch: 6000 Loss:0.1951633244752884
Epoch: 3 Batch: 600 Loss:0.0012102334294468164
Epoch: 3 Batch: 1200 Loss:0.003382322611287236
Epoch: 3 Batch: 1800 Loss:0.002483583288267255
Epoch: 3 Batch: 2400 Loss:8.7084794358816e-05
Epoch: 3 Batch: 3000 Loss:0.0006959225866012275
Epoch: 3 Batch: 3600 Loss:0.0016453089192509651
Epoch: 3 Batch: 4200 Loss:0.04044409096240997
Epoch: 3 Batch: 4800 Loss:4.738060670206323e-05
Epoch: 3 Batch: 5400 Loss:0.1202053427696228
Epoch: 3 Batch: 6000 Loss:0.14659245312213898
Epoch: 4 Batch: 600 Loss:0.018919644877314568
Epoch: 4 Batch: 1200 Loss:0.07315998524427414
Epoch: 4 Batch: 1800 Loss:0.07178398221731186
Epoch: 4 Batch: 2400 Loss:0.0009470336954109371
Epoch: 4 Batch: 3000 Loss:0.0004728620406240225
Epoch: 4 Batch: 3600 Loss:0.24831190705299377
Epoch: 4 Batch: 4200 Loss:0.0003230355796404183
Epoch: 4 Batch: 4800 Loss:0.0002209811209468171
Epoch: 4 Batch: 5400 Loss:0.04399774223566055
Epoch: 4 Batch: 6000 Loss:0.00020674565166700631
Training Took: 1.3477467536926269 minutes!
```