Artificial Intelligence for Beginners: Learning Notes

Course video: Artificial Intelligence for Beginners tutorial

Table of Contents

1. Introduction
2. Applications
3. Evolution
4. Machine Learning
5. Deep Learning
6. Reinforcement Learning
7. Image Recognition
8. Natural Language
9. Python
10. Python Development Environment
11. Machine Learning Algorithms
    1. Multiple Linear Regression
        Project: diabetes regression prediction
        Configuring the Tsinghua mirror
    2. Logistic Regression
    3. Softmax Regression
        Project: iris classification
    4. Regularization
        Project: news classification
        Fixing the 403 error when loading the fetch_20newsgroups dataset
    5. Gradient Descent
    6. Data Normalization
        Project: handwritten digit recognition
    7. KMeans Clustering
        Project: implementing KMeans clustering
    8. Gaussian Mixture Models
        Project: speaker recognition
12. Neural Networks
    1. The Perceptron
    2. Neural Networks
    3. Activation Functions
    4. Forward and Backward Propagation
    5. Vanishing Gradients
    6. Dropout
13. PyTorch in Practice: Handwritten Digit Recognition

13. PyTorch in Practice: Handwritten Digit Recognition
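
This section uses two scripts. The first loads MNIST with ToTensor only (its Normalize call is deliberately commented out) and computes the mean and standard deviation of the training images; the second uses those statistics to normalize the data and trains a small fully connected network with Dropout.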





python
import torch
from torchvision import datasets, transforms

# print(torch.__version__)


# Check whether CUDA is available
use_cuda = torch.cuda.is_available()
# print(use_cuda)

# Select the device: GPU if CUDA is available, otherwise CPU
if use_cuda:
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

transform = transforms.Compose([
    # Convert the images to tensors
    transforms.ToTensor()
    # Standard normalization: 0.1307 is the mean, 0.3081 the standard deviation.
    # Left commented out here, since this script is what computes those statistics.
    # transforms.Normalize((0.1307,), (0.3081,))
])

# Load the data
datasets1 = datasets.MNIST('./data', train=True, download=True, transform=transform)
datasets2 = datasets.MNIST('./data', train=False, download=True, transform=transform)

# Data loaders; batch_size=60000 puts the whole training set into a single batch
train_loader = torch.utils.data.DataLoader(datasets1, batch_size=60000, shuffle=True)
test_loader = torch.utils.data.DataLoader(datasets2, batch_size=1000)

for batch_idx, data in enumerate(train_loader, 0):
    inputs, targets = data
    # view reshapes the training set from (60000, 1, 28, 28) to (60000, 28*28)
    x = inputs.view(-1, 28 * 28)
    # Compute the mean and standard deviation over all training samples
    x_std = x.std().item()
    x_mean = x.mean().item()
print('mean: ' + str(x_mean))
print('std: ' + str(x_std))
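
On MNIST this prints a mean of roughly 0.1307 and a standard deviation of roughly 0.3081; these are exactly the constants passed to Normalize in the training script below.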
python
import torch
from torchvision import datasets, transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# print(torch.__version__)


# Check whether CUDA is available
use_cuda = torch.cuda.is_available()
# print(use_cuda)

# Select the device: GPU if CUDA is available, otherwise CPU
if use_cuda:
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

transform = transforms.Compose([
    # Convert the images to tensors
    transforms.ToTensor(),
    # Standard normalization: 0.1307 is the mean, 0.3081 the standard deviation
    transforms.Normalize((0.1307,), (0.3081,))
])

# Load the data
datasets1 = datasets.MNIST('./data', train=True, download=True, transform=transform)
datasets2 = datasets.MNIST('./data', train=False, download=True, transform=transform)

# Data loaders; here we also set the batch size and whether to shuffle
train_loader = torch.utils.data.DataLoader(datasets1, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(datasets2, batch_size=1000)


# Build the model as a custom nn.Module subclass
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.dropout = nn.Dropout(0.2)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
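        # Flatten (N, 1, 28, 28) image batches into (N, 784) vectors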
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


# Instantiate the model and move it to the selected device
model = Net().to(device)


# One training step
def train_step(data, target, model, optimizer):
    optimizer.zero_grad()
    output = model(data)
    # NLL = negative log-likelihood; on log_softmax outputs this is cross-entropy
    loss = F.nll_loss(output, target)
    # Backpropagation: compute the gradients
    loss.backward()
    # Apply the gradients to update the parameters
    optimizer.step()
    return loss


# One evaluation step
def test_step(data, target, model, test_loss, correct):
    output = model(data)
    # Accumulate the summed batch loss
    test_loss += F.nll_loss(output, target, reduction='sum').item()
    # Index of the largest log-probability, which is the predicted class
    pred = output.argmax(dim=1, keepdim=True)
    correct += pred.eq(target.view_as(pred)).sum().item()
    return test_loss, correct


# Optimizer used to update the parameters during training
optimizer = optim.Adam(model.parameters(), lr=0.001)
# Train for the given number of epochs
EPOCHS = 5

for epoch in range(EPOCHS):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        loss = train_step(data, target, model, optimizer)
        # Print progress every 10 batches
        if batch_idx % 10 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss:{:.6f}'.format(epoch, batch_idx * len(data),
                                                                          len(train_loader.dataset),
                                                                          100. * batch_idx / len(train_loader), loss.item()))
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            test_loss, correct = test_step(data, target, model, test_loss, correct)
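    # Average the summed loss over every sample in the test set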
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(test_loss, correct,
                                                                                     len(test_loader.dataset),
                                                                                     100. * correct / len(
                                                                                         test_loader.dataset)))
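
Running the script produces a log like the one below; the loss falls steadily, and test accuracy reaches roughly 97-98% within five epochs.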
bash
D:\ProgramData\miniconda3\envs\pytorch113\python.exe "D:\ProgramData\AIProject\AI\PyTorch_Study\mnist_dnn.py"
Train Epoch: 0 [0/60000 (0%)]	Loss:2.357153
Train Epoch: 0 [1280/60000 (2%)]	Loss:1.188171
Train Epoch: 0 [2560/60000 (4%)]	Loss:0.778216
Train Epoch: 0 [3840/60000 (6%)]	Loss:0.527082
Train Epoch: 0 [5120/60000 (9%)]	Loss:0.449032
Train Epoch: 0 [6400/60000 (11%)]	Loss:0.439059
Train Epoch: 0 [7680/60000 (13%)]	Loss:0.456310
Train Epoch: 0 [8960/60000 (15%)]	Loss:0.340990
Train Epoch: 0 [10240/60000 (17%)]	Loss:0.537856
Train Epoch: 0 [11520/60000 (19%)]	Loss:0.359342
Train Epoch: 0 [12800/60000 (21%)]	Loss:0.373274
Train Epoch: 0 [14080/60000 (23%)]	Loss:0.251031
Train Epoch: 0 [15360/60000 (26%)]	Loss:0.360636
Train Epoch: 0 [16640/60000 (28%)]	Loss:0.302707
Train Epoch: 0 [17920/60000 (30%)]	Loss:0.185309
Train Epoch: 0 [19200/60000 (32%)]	Loss:0.299687
Train Epoch: 0 [20480/60000 (34%)]	Loss:0.435019
Train Epoch: 0 [21760/60000 (36%)]	Loss:0.208732
Train Epoch: 0 [23040/60000 (38%)]	Loss:0.335426
Train Epoch: 0 [24320/60000 (41%)]	Loss:0.301936
Train Epoch: 0 [25600/60000 (43%)]	Loss:0.237923
Train Epoch: 0 [26880/60000 (45%)]	Loss:0.243458
Train Epoch: 0 [28160/60000 (47%)]	Loss:0.258387
Train Epoch: 0 [29440/60000 (49%)]	Loss:0.324313
Train Epoch: 0 [30720/60000 (51%)]	Loss:0.226612
Train Epoch: 0 [32000/60000 (53%)]	Loss:0.286255
Train Epoch: 0 [33280/60000 (55%)]	Loss:0.286046
Train Epoch: 0 [34560/60000 (58%)]	Loss:0.319120
Train Epoch: 0 [35840/60000 (60%)]	Loss:0.235170
Train Epoch: 0 [37120/60000 (62%)]	Loss:0.234801
Train Epoch: 0 [38400/60000 (64%)]	Loss:0.172543
Train Epoch: 0 [39680/60000 (66%)]	Loss:0.171685
Train Epoch: 0 [40960/60000 (68%)]	Loss:0.223411
Train Epoch: 0 [42240/60000 (70%)]	Loss:0.181646
Train Epoch: 0 [43520/60000 (72%)]	Loss:0.236268
Train Epoch: 0 [44800/60000 (75%)]	Loss:0.147353
Train Epoch: 0 [46080/60000 (77%)]	Loss:0.404344
Train Epoch: 0 [47360/60000 (79%)]	Loss:0.210359
Train Epoch: 0 [48640/60000 (81%)]	Loss:0.193106
Train Epoch: 0 [49920/60000 (83%)]	Loss:0.213325
Train Epoch: 0 [51200/60000 (85%)]	Loss:0.239207
Train Epoch: 0 [52480/60000 (87%)]	Loss:0.194574
Train Epoch: 0 [53760/60000 (90%)]	Loss:0.130250
Train Epoch: 0 [55040/60000 (92%)]	Loss:0.174132
Train Epoch: 0 [56320/60000 (94%)]	Loss:0.157513
Train Epoch: 0 [57600/60000 (96%)]	Loss:0.210445
Train Epoch: 0 [58880/60000 (98%)]	Loss:0.178082

Test set: Average loss: 0.1582, Accuracy: 9543/10000 (95%)

Train Epoch: 1 [0/60000 (0%)]	Loss:0.190592
Train Epoch: 1 [1280/60000 (2%)]	Loss:0.137684
Train Epoch: 1 [2560/60000 (4%)]	Loss:0.271779
Train Epoch: 1 [3840/60000 (6%)]	Loss:0.212214
Train Epoch: 1 [5120/60000 (9%)]	Loss:0.293748
Train Epoch: 1 [6400/60000 (11%)]	Loss:0.230604
Train Epoch: 1 [7680/60000 (13%)]	Loss:0.254167
Train Epoch: 1 [8960/60000 (15%)]	Loss:0.168410
Train Epoch: 1 [10240/60000 (17%)]	Loss:0.146805
Train Epoch: 1 [11520/60000 (19%)]	Loss:0.190707
Train Epoch: 1 [12800/60000 (21%)]	Loss:0.173118
Train Epoch: 1 [14080/60000 (23%)]	Loss:0.129635
Train Epoch: 1 [15360/60000 (26%)]	Loss:0.176186
Train Epoch: 1 [16640/60000 (28%)]	Loss:0.122384
Train Epoch: 1 [17920/60000 (30%)]	Loss:0.163052
Train Epoch: 1 [19200/60000 (32%)]	Loss:0.143208
Train Epoch: 1 [20480/60000 (34%)]	Loss:0.249731
Train Epoch: 1 [21760/60000 (36%)]	Loss:0.141281
Train Epoch: 1 [23040/60000 (38%)]	Loss:0.245926
Train Epoch: 1 [24320/60000 (41%)]	Loss:0.224928
Train Epoch: 1 [25600/60000 (43%)]	Loss:0.226181
Train Epoch: 1 [26880/60000 (45%)]	Loss:0.098913
Train Epoch: 1 [28160/60000 (47%)]	Loss:0.138237
Train Epoch: 1 [29440/60000 (49%)]	Loss:0.145725
Train Epoch: 1 [30720/60000 (51%)]	Loss:0.142157
Train Epoch: 1 [32000/60000 (53%)]	Loss:0.085880
Train Epoch: 1 [33280/60000 (55%)]	Loss:0.160553
Train Epoch: 1 [34560/60000 (58%)]	Loss:0.133455
Train Epoch: 1 [35840/60000 (60%)]	Loss:0.129697
Train Epoch: 1 [37120/60000 (62%)]	Loss:0.241462
Train Epoch: 1 [38400/60000 (64%)]	Loss:0.138529
Train Epoch: 1 [39680/60000 (66%)]	Loss:0.147758
Train Epoch: 1 [40960/60000 (68%)]	Loss:0.223078
Train Epoch: 1 [42240/60000 (70%)]	Loss:0.136696
Train Epoch: 1 [43520/60000 (72%)]	Loss:0.162502
Train Epoch: 1 [44800/60000 (75%)]	Loss:0.201067
Train Epoch: 1 [46080/60000 (77%)]	Loss:0.118544
Train Epoch: 1 [47360/60000 (79%)]	Loss:0.075555
Train Epoch: 1 [48640/60000 (81%)]	Loss:0.187998
Train Epoch: 1 [49920/60000 (83%)]	Loss:0.158163
Train Epoch: 1 [51200/60000 (85%)]	Loss:0.117933
Train Epoch: 1 [52480/60000 (87%)]	Loss:0.074444
Train Epoch: 1 [53760/60000 (90%)]	Loss:0.076563
Train Epoch: 1 [55040/60000 (92%)]	Loss:0.126042
Train Epoch: 1 [56320/60000 (94%)]	Loss:0.104785
Train Epoch: 1 [57600/60000 (96%)]	Loss:0.124018
Train Epoch: 1 [58880/60000 (98%)]	Loss:0.131464

Test set: Average loss: 0.1128, Accuracy: 9647/10000 (96%)

Train Epoch: 2 [0/60000 (0%)]	Loss:0.135893
Train Epoch: 2 [1280/60000 (2%)]	Loss:0.107287
Train Epoch: 2 [2560/60000 (4%)]	Loss:0.092717
Train Epoch: 2 [3840/60000 (6%)]	Loss:0.162008
Train Epoch: 2 [5120/60000 (9%)]	Loss:0.118251
Train Epoch: 2 [6400/60000 (11%)]	Loss:0.167732
Train Epoch: 2 [7680/60000 (13%)]	Loss:0.099356
Train Epoch: 2 [8960/60000 (15%)]	Loss:0.058201
Train Epoch: 2 [10240/60000 (17%)]	Loss:0.087654
Train Epoch: 2 [11520/60000 (19%)]	Loss:0.103737
Train Epoch: 2 [12800/60000 (21%)]	Loss:0.092910
Train Epoch: 2 [14080/60000 (23%)]	Loss:0.168416
Train Epoch: 2 [15360/60000 (26%)]	Loss:0.120313
Train Epoch: 2 [16640/60000 (28%)]	Loss:0.179698
Train Epoch: 2 [17920/60000 (30%)]	Loss:0.195164
Train Epoch: 2 [19200/60000 (32%)]	Loss:0.100405
Train Epoch: 2 [20480/60000 (34%)]	Loss:0.127287
Train Epoch: 2 [21760/60000 (36%)]	Loss:0.096777
Train Epoch: 2 [23040/60000 (38%)]	Loss:0.128975
Train Epoch: 2 [24320/60000 (41%)]	Loss:0.174980
Train Epoch: 2 [25600/60000 (43%)]	Loss:0.153128
Train Epoch: 2 [26880/60000 (45%)]	Loss:0.202121
Train Epoch: 2 [28160/60000 (47%)]	Loss:0.098918
Train Epoch: 2 [29440/60000 (49%)]	Loss:0.106479
Train Epoch: 2 [30720/60000 (51%)]	Loss:0.094558
Train Epoch: 2 [32000/60000 (53%)]	Loss:0.104092
Train Epoch: 2 [33280/60000 (55%)]	Loss:0.075859
Train Epoch: 2 [34560/60000 (58%)]	Loss:0.155020
Train Epoch: 2 [35840/60000 (60%)]	Loss:0.172556
Train Epoch: 2 [37120/60000 (62%)]	Loss:0.054027
Train Epoch: 2 [38400/60000 (64%)]	Loss:0.242806
Train Epoch: 2 [39680/60000 (66%)]	Loss:0.150216
Train Epoch: 2 [40960/60000 (68%)]	Loss:0.131288
Train Epoch: 2 [42240/60000 (70%)]	Loss:0.055607
Train Epoch: 2 [43520/60000 (72%)]	Loss:0.137465
Train Epoch: 2 [44800/60000 (75%)]	Loss:0.141450
Train Epoch: 2 [46080/60000 (77%)]	Loss:0.120138
Train Epoch: 2 [47360/60000 (79%)]	Loss:0.086141
Train Epoch: 2 [48640/60000 (81%)]	Loss:0.149587
Train Epoch: 2 [49920/60000 (83%)]	Loss:0.074414
Train Epoch: 2 [51200/60000 (85%)]	Loss:0.106521
Train Epoch: 2 [52480/60000 (87%)]	Loss:0.082931
Train Epoch: 2 [53760/60000 (90%)]	Loss:0.085414
Train Epoch: 2 [55040/60000 (92%)]	Loss:0.166222
Train Epoch: 2 [56320/60000 (94%)]	Loss:0.164097
Train Epoch: 2 [57600/60000 (96%)]	Loss:0.115938
Train Epoch: 2 [58880/60000 (98%)]	Loss:0.144959

Test set: Average loss: 0.0957, Accuracy: 9718/10000 (97%)

Train Epoch: 3 [0/60000 (0%)]	Loss:0.145348
Train Epoch: 3 [1280/60000 (2%)]	Loss:0.129786
Train Epoch: 3 [2560/60000 (4%)]	Loss:0.068685
Train Epoch: 3 [3840/60000 (6%)]	Loss:0.044195
Train Epoch: 3 [5120/60000 (9%)]	Loss:0.077902
Train Epoch: 3 [6400/60000 (11%)]	Loss:0.108578
Train Epoch: 3 [7680/60000 (13%)]	Loss:0.149138
Train Epoch: 3 [8960/60000 (15%)]	Loss:0.099387
Train Epoch: 3 [10240/60000 (17%)]	Loss:0.103183
Train Epoch: 3 [11520/60000 (19%)]	Loss:0.100638
Train Epoch: 3 [12800/60000 (21%)]	Loss:0.092041
Train Epoch: 3 [14080/60000 (23%)]	Loss:0.073178
Train Epoch: 3 [15360/60000 (26%)]	Loss:0.207282
Train Epoch: 3 [16640/60000 (28%)]	Loss:0.076489
Train Epoch: 3 [17920/60000 (30%)]	Loss:0.148314
Train Epoch: 3 [19200/60000 (32%)]	Loss:0.039077
Train Epoch: 3 [20480/60000 (34%)]	Loss:0.115610
Train Epoch: 3 [21760/60000 (36%)]	Loss:0.133416
Train Epoch: 3 [23040/60000 (38%)]	Loss:0.084655
Train Epoch: 3 [24320/60000 (41%)]	Loss:0.148035
Train Epoch: 3 [25600/60000 (43%)]	Loss:0.152145
Train Epoch: 3 [26880/60000 (45%)]	Loss:0.071100
Train Epoch: 3 [28160/60000 (47%)]	Loss:0.056352
Train Epoch: 3 [29440/60000 (49%)]	Loss:0.227481
Train Epoch: 3 [30720/60000 (51%)]	Loss:0.138899
Train Epoch: 3 [32000/60000 (53%)]	Loss:0.080404
Train Epoch: 3 [33280/60000 (55%)]	Loss:0.066160
Train Epoch: 3 [34560/60000 (58%)]	Loss:0.080147
Train Epoch: 3 [35840/60000 (60%)]	Loss:0.079220
Train Epoch: 3 [37120/60000 (62%)]	Loss:0.058759
Train Epoch: 3 [38400/60000 (64%)]	Loss:0.073250
Train Epoch: 3 [39680/60000 (66%)]	Loss:0.121640
Train Epoch: 3 [40960/60000 (68%)]	Loss:0.166685
Train Epoch: 3 [42240/60000 (70%)]	Loss:0.047174
Train Epoch: 3 [43520/60000 (72%)]	Loss:0.076186
Train Epoch: 3 [44800/60000 (75%)]	Loss:0.115248
Train Epoch: 3 [46080/60000 (77%)]	Loss:0.138637
Train Epoch: 3 [47360/60000 (79%)]	Loss:0.138182
Train Epoch: 3 [48640/60000 (81%)]	Loss:0.138327
Train Epoch: 3 [49920/60000 (83%)]	Loss:0.027602
Train Epoch: 3 [51200/60000 (85%)]	Loss:0.068769
Train Epoch: 3 [52480/60000 (87%)]	Loss:0.125922
Train Epoch: 3 [53760/60000 (90%)]	Loss:0.079334
Train Epoch: 3 [55040/60000 (92%)]	Loss:0.184128
Train Epoch: 3 [56320/60000 (94%)]	Loss:0.110396
Train Epoch: 3 [57600/60000 (96%)]	Loss:0.115563
Train Epoch: 3 [58880/60000 (98%)]	Loss:0.051400

Test set: Average loss: 0.0802, Accuracy: 9759/10000 (98%)

Train Epoch: 4 [0/60000 (0%)]	Loss:0.085800
Train Epoch: 4 [1280/60000 (2%)]	Loss:0.047675
Train Epoch: 4 [2560/60000 (4%)]	Loss:0.029820
Train Epoch: 4 [3840/60000 (6%)]	Loss:0.091837
Train Epoch: 4 [5120/60000 (9%)]	Loss:0.076924
Train Epoch: 4 [6400/60000 (11%)]	Loss:0.077931
Train Epoch: 4 [7680/60000 (13%)]	Loss:0.096714
Train Epoch: 4 [8960/60000 (15%)]	Loss:0.043513
Train Epoch: 4 [10240/60000 (17%)]	Loss:0.165721
Train Epoch: 4 [11520/60000 (19%)]	Loss:0.077272
Train Epoch: 4 [12800/60000 (21%)]	Loss:0.036313
Train Epoch: 4 [14080/60000 (23%)]	Loss:0.108138
Train Epoch: 4 [15360/60000 (26%)]	Loss:0.084638
Train Epoch: 4 [16640/60000 (28%)]	Loss:0.157234
Train Epoch: 4 [17920/60000 (30%)]	Loss:0.094364
Train Epoch: 4 [19200/60000 (32%)]	Loss:0.069152
Train Epoch: 4 [20480/60000 (34%)]	Loss:0.014761
Train Epoch: 4 [21760/60000 (36%)]	Loss:0.046572
Train Epoch: 4 [23040/60000 (38%)]	Loss:0.076240
Train Epoch: 4 [24320/60000 (41%)]	Loss:0.064022
Train Epoch: 4 [25600/60000 (43%)]	Loss:0.051202
Train Epoch: 4 [26880/60000 (45%)]	Loss:0.113288
Train Epoch: 4 [28160/60000 (47%)]	Loss:0.105636
Train Epoch: 4 [29440/60000 (49%)]	Loss:0.103099
Train Epoch: 4 [30720/60000 (51%)]	Loss:0.027316
Train Epoch: 4 [32000/60000 (53%)]	Loss:0.029250
Train Epoch: 4 [33280/60000 (55%)]	Loss:0.087277
Train Epoch: 4 [34560/60000 (58%)]	Loss:0.151358
Train Epoch: 4 [35840/60000 (60%)]	Loss:0.106468
Train Epoch: 4 [37120/60000 (62%)]	Loss:0.052081
Train Epoch: 4 [38400/60000 (64%)]	Loss:0.161355
Train Epoch: 4 [39680/60000 (66%)]	Loss:0.058824
Train Epoch: 4 [40960/60000 (68%)]	Loss:0.115524
Train Epoch: 4 [42240/60000 (70%)]	Loss:0.122241
Train Epoch: 4 [43520/60000 (72%)]	Loss:0.097723
Train Epoch: 4 [44800/60000 (75%)]	Loss:0.022386
Train Epoch: 4 [46080/60000 (77%)]	Loss:0.038309
Train Epoch: 4 [47360/60000 (79%)]	Loss:0.056229
Train Epoch: 4 [48640/60000 (81%)]	Loss:0.053395
Train Epoch: 4 [49920/60000 (83%)]	Loss:0.108051
Train Epoch: 4 [51200/60000 (85%)]	Loss:0.062542
Train Epoch: 4 [52480/60000 (87%)]	Loss:0.020315
Train Epoch: 4 [53760/60000 (90%)]	Loss:0.054988
Train Epoch: 4 [55040/60000 (92%)]	Loss:0.083139
Train Epoch: 4 [56320/60000 (94%)]	Loss:0.068836
Train Epoch: 4 [57600/60000 (96%)]	Loss:0.113024
Train Epoch: 4 [58880/60000 (98%)]	Loss:0.076040

Test set: Average loss: 0.0813, Accuracy: 9740/10000 (97%)


Process finished with exit code 0
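
As a follow-up not covered in the video, the trained weights can be saved and reloaded for inference. Below is a minimal sketch, assuming it runs in the same session as the training script above (it reuses model, Net, device, and datasets2); the checkpoint name mnist_dnn.pt and the variable restored are my own choices.

python
import torch

# Persist the trained weights (the filename is an assumption)
torch.save(model.state_dict(), "mnist_dnn.pt")

# Rebuild the network and load the saved weights
restored = Net().to(device)
restored.load_state_dict(torch.load("mnist_dnn.pt", map_location=device))
restored.eval()  # switches Dropout off for inference

# Classify a single test image; datasets2 is the test set defined above,
# so the same ToTensor/Normalize transform is already applied
image, label = datasets2[0]
with torch.no_grad():
    log_probs = restored(image.unsqueeze(0).to(device))  # add a batch dimension
    pred = log_probs.argmax(dim=1).item()
print('predicted:', pred, 'actual:', label)

Because the network was trained on normalized inputs, inference must apply the same transform; indexing datasets2 handles that automatically.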