Getting Started with Generative Adversarial Networks: MNIST Handwritten Digit Generation

This article is an internal article of the 🔗365天深度学习训练营 (365-Day Deep Learning Training Camp).

Original author: K同学啊

I. Theoretical Foundations

Generative Adversarial Networks (GANs) have been one of the hottest research directions in deep learning in recent years. "GAN" does not refer to one specific neural network, but to a family of neural networks designed around the idea of a game. A GAN consists of two neural networks, called the generator and the discriminator. The generator takes random samples from some noise distribution as input and outputs artificial samples that closely resemble the real samples in the training set; the discriminator takes either real or artificial samples as input, and its goal is to tell the artificial samples apart from the real ones as well as possible. The generator and the discriminator run alternately, playing against each other, and each improves in the process. Ideally, after enough rounds of this game the discriminator can no longer judge whether a given sample is real, i.e., it outputs "50% real, 50% fake" for every sample. At that point the generator's artificial samples are realistic enough that the discriminator cannot tell real from fake, the game stops, and we are left with a generator capable of "forging" realistic samples.
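
For reference, this game is formalized in Goodfellow et al. (2014) as the minimax objective

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

where the discriminator D tries to maximize V (classifying correctly) and the generator G tries to minimize it; at the ideal equilibrium described above, D(x) = 1/2 for every sample.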

1. The Generator

In a GAN, the generator G takes random noise z as input and, by fitting the data over the course of training, eventually outputs a forged sample G(z) with the same size as, and a distribution similar to, the real samples. The generator is essentially a generative model: it learns the assumed distribution of the data and its parameters, and then resamples new examples from the learned model.

Mathematically, a generative method first makes a distributional assumption about the observed or latent variables of the given real data, then feeds the real data into the model to train those variables and parameters, and finally obtains a learned approximate distribution that can be used to generate new data. From a machine learning perspective, the model makes no explicit distributional assumption; instead it keeps learning from the real data and correcting itself, eventually also yielding a learned model that can perform the sample-generation task. Unlike the mathematical approach, this learning process is less transparent to human understanding.

2. The Discriminator

In a GAN, for an input sample x the discriminator D outputs a probability D(x) in [0, 1]. The input x may be a real sample from the original dataset or an artificial sample G(z) produced by the generator G. By convention, the closer D(x) is to 1, the more likely the sample is real; conversely, the smaller the value, the more likely the sample is a forgery. In other words, the discriminator is a binary neural-network classifier whose purpose is not to predict the original class of the input, but to distinguish real samples from fake ones. Note that neither the generator nor the discriminator uses any class labels, which shows that GAN training is an unsupervised learning process.
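
In practice this binary classifier is trained with the binary cross-entropy loss (torch.nn.BCELoss in the code below), where the label y is 1 for real samples and 0 for fake ones, and ŷ = D(x) is the discriminator's output:

$$\ell(\hat{y}, y) = -\big[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\big]$$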

3. Theoretical Principles

GANs, a marriage of game theory and machine learning, were introduced in Ian Goodfellow's 2014 paper and became an instant sensation, which says a lot about how enthusiastically the community embraced the idea. To understand GANs in more depth, it helps to know where they came from and what problem they were meant to solve. Researchers had long wanted computers to generate data automatically: for example, training a model on pictures of apples so that it could then produce new apple pictures on its own; an algorithm with this capability is said to be generative. GAN was not the first generative algorithm, however. Earlier generative methods measured the gap between generated and real images with mean squared error as the loss function, but researchers found that two generated images with the same MSE could look drastically different in quality. It was this shortcoming that led Ian Goodfellow to propose GAN.
So how does a GAN actually generate images? As shown in Figure 1, a GAN is composed of two models: a generative model G and a discriminative model D. The first-generation generator G1 takes random noise z as input and produces a crude image; a first-generation discriminator D1 is then trained to perform binary classification on it, labeling generated images as 0 and real images as 1. To fool this first-generation discriminator, the generator optimizes itself and advances to a second generation; once its outputs successfully fool D1, the discriminator is optimized and updated in turn, upgrading to D2. Repeating this same process yields ever newer generations of G and D.

II. Preliminary Work

1. Defining Hyperparameters

```python
import argparse
import os
import numpy as np
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision.utils import save_image
from torchvision import datasets
from torch.autograd import Variable
import torch.nn as nn
import torch
import warnings
warnings.filterwarnings('ignore')

# Create output directories
os.makedirs('./images/', exist_ok=True)         # images recording the effect of training as it progresses
os.makedirs('./save/', exist_ok=True)           # where the models are saved when training finishes
os.makedirs('./datasets/mnist', exist_ok=True)  # where the downloaded dataset is stored
```

```python
# Hyperparameter configuration
n_epochs = 50         # total number of training epochs; more epochs give the model more chances to learn the data's patterns, but can also lead to overfitting
batch_size = 64       # number of samples per update; smaller batches make training noisier but may help escape local minima, larger batches are more stable but need more memory
lr = 0.0002           # learning rate, the step size of weight updates; too large and the model oscillates or even diverges near the optimum, too small and convergence is slow or stuck in local minima
b1 = 0.5              # b1 and b2 are Adam's exponential decay rates for the first moment (moving average of gradients) and second moment (moving average of squared gradients); they affect update stability and convergence speed
b2 = 0.999
n_cpu = 2             # number of CPU workers for data loading; affects preprocessing/loading speed and thus training efficiency
latent_dim = 100      # dimension of the random noise vector; too low and generated images lack diversity, too high and the model becomes hard to train
img_size = 28         # image size; larger images mean a larger receptive field and require more compute and longer training
channels = 1          # number of image channels: usually 3 (RGB) for color images, 1 for grayscale; affects how much information the model processes
sample_interval = 500 # how often (in batches) to save generated images during training, for monitoring their quality

# Image shape: (1, 28, 28); number of pixels per image: 784
img_shape = (channels, img_size, img_size)
img_area = np.prod(img_shape)

# Use CUDA if available
cuda = True if torch.cuda.is_available() else False
```

2. Downloading the Data

```python
# Download the MNIST dataset
mnist = datasets.MNIST(
    root='./datasets/', train=True, download=True, transform=transforms.Compose(
        [transforms.Resize(img_size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
    )
)
```
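
Note that Normalize([0.5], [0.5]) maps each pixel value x in [0, 1] (after ToTensor()) to (x - 0.5) / 0.5, i.e., into [-1, 1], which matches the Tanh output range of the generator defined below. A quick illustrative check (the tensor values here are just an example):

```python
import torch
x = torch.tensor([0.0, 0.5, 1.0])  # example pixel values after ToTensor()
print((x - 0.5) / 0.5)             # tensor([-1., 0., 1.]): same range as the generator's Tanh output
```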

3. Configuring the Data

```python
# Feed the dataset into a DataLoader
dataloader = DataLoader(
    mnist,
    batch_size=batch_size,
    shuffle=True
)
```
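
As a sanity check, one batch drawn from this loader should have shape (batch_size, channels, img_size, img_size); a minimal sketch:

```python
imgs, labels = next(iter(dataloader))
print(imgs.shape)    # torch.Size([64, 1, 28, 28])
print(labels.shape)  # torch.Size([64]) -- the digit labels, which the GAN never uses
```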

III. Defining the Models

1. The Discriminator Model

```python
'''
Define the discriminator
'''
# Flatten the 28*28 image into 784 features, pass it through a multilayer perceptron
# with LeakyReLU activations (negative slope 0.2) in between, and end with a sigmoid
# to obtain a probability in [0, 1] for binary classification
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(img_area, 512),         # 784 input features, 512 output
            nn.LeakyReLU(0.2, inplace=True),  # non-linear mapping
            nn.Linear(512, 256),              # 512 input features, 256 output
            nn.LeakyReLU(0.2, inplace=True),  # non-linear mapping
            nn.Linear(256, 1),                # 256 input features, 1 output
            nn.Sigmoid()                      # sigmoid for binary classification (softmax would be used for multi-class)
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)  # the discriminator takes images flattened to 784 features: (64, 784)
        validity = self.model(img_flat)       # pass through the discriminator network
        return validity                       # a probability in [0, 1]
```
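
To verify the shapes, here is a quick illustrative smoke test (the random batch is only for checking dimensions, not real data):

```python
D = Discriminator()
dummy = torch.randn(64, channels, img_size, img_size)  # a random "image" batch, for shape checking only
out = D(dummy)
print(out.shape)                       # torch.Size([64, 1])
print(out.min().item(), out.max().item())  # both within [0, 1] thanks to the sigmoid
```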

2. The Generator Model

```python
'''
Define the generator
'''
# The input is a 100-dimensional noise vector drawn from a standard normal distribution.
# A first linear layer maps it to 128 dimensions, followed by a LeakyReLU activation,
# then further linear layers each followed by LeakyReLU; a final linear layer maps to
# 784 dimensions, and a Tanh squashes the generated fake-image values into [-1, 1]
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        # building block for the hidden layers
        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]          # linear layer mapping the input to out_feat dimensions
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8)) # batch normalization (note: the positional 0.8 here is the eps argument)
            layers.append(nn.LeakyReLU(0.2, inplace=True))   # non-linear activation
            return layers

        # prod(): product of the array elements along the given axis: 1*28*28 = 784
        self.model = nn.Sequential(
            *block(latent_dim, 128, normalize=False),  # linear mapping 100 -> 128, LeakyReLU (no batch norm)
            *block(128, 256),           # linear mapping 128 -> 256, batch norm, LeakyReLU
            *block(256, 512),           # linear mapping 256 -> 512, batch norm, LeakyReLU
            *block(512, 1024),          # linear mapping 512 -> 1024, batch norm, LeakyReLU
            nn.Linear(1024, img_area),  # linear mapping 1024 -> 784
            nn.Tanh()                   # map each of the 784 outputs into [-1, 1]
        )

    # view(): like numpy's reshape, redefines the tensor's shape; here it reshapes to (64, 1, 28, 28)
    def forward(self, z):             # input is (64, 100) noise
        imgs = self.model(z)          # pass the noise through the generator
        imgs = imgs.view(imgs.size(0), *img_shape)  # reshape to (64, 1, 28, 28)
        return imgs                   # 64 images of shape (1, 28, 28)
```
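
Again a quick illustrative shape check:

```python
G = Generator()
z = torch.randn(64, latent_dim)  # a batch of 64 noise vectors
print(G(z).shape)                # torch.Size([64, 1, 28, 28])
```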

IV. Training the Model

1. Training the Model

```python
'''
Train the model
'''
# Create the generator and discriminator
generator = Generator()
discriminator = Discriminator()

# Loss function: binary cross-entropy
criterion = torch.nn.BCELoss()

# Optimizers, with learning rate 0.0002
# betas: coefficients for the running averages of the gradient and its square
optimizer_G = torch.optim.Adam(generator.parameters(), lr=lr, betas=(b1, b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(b1, b2))

# Run everything in CUDA mode if a GPU is available
if torch.cuda.is_available():
    generator = generator.cuda()
    discriminator = discriminator.cuda()
    criterion = criterion.cuda()

# Training loop
for epoch in range(n_epochs):
    for i, (imgs, _) in enumerate(dataloader):
        # ======================== Train the discriminator ======================
        # view(): like numpy's reshape; flattens (64, 1, 28, 28) into (64, 784)
        imgs = imgs.view(imgs.size(0), -1)  # flatten each image to 28*28 = 784 -> imgs: (64, 784)
        if cuda:
            imgs = imgs.cuda()              # keep the data on the same device as the models
        real_img = Variable(imgs)           # wrap the tensor in a Variable so it joins the computation graph for backprop
        real_label = Variable(torch.ones(imgs.size(0), 1, device=imgs.device))   # label for real images: 1
        fake_label = Variable(torch.zeros(imgs.size(0), 1, device=imgs.device))  # label for fake images: 0

        # -----------------------------------------------------
        # Train Discriminator
        # Two parts: 1. classify real images as real  2. classify fake images as fake
        # -----------------------------------------------------
        # Loss on real images
        real_out = discriminator(real_img)             # feed real images to the discriminator
        loss_real_D = criterion(real_out, real_label)  # loss on real images
        real_scores = real_out                         # discriminator scores for real images; the closer to 1 the better
        # Loss on fake images
        # detach(): cut the fake images out of the computation graph so no gradients flow to G, since G is not updated here
        z = Variable(torch.randn(imgs.size(0), latent_dim, device=imgs.device))  # sample random noise
        fake_img = generator(z).detach()               # feed the noise to the generator to produce fake images
        fake_out = discriminator(fake_img)             # let the discriminator judge the fake images
        loss_fake_D = criterion(fake_out, fake_label)  # loss on fake images
        fake_scores = fake_out                         # discriminator scores for fake images; for D, the closer to 0 the better
        # Total loss and optimization
        loss_D = loss_real_D + loss_fake_D  # total loss = loss on real + loss on fake
        optimizer_D.zero_grad()             # zero the gradients before backpropagation
        loss_D.backward()                   # backpropagate the error
        optimizer_D.step()                  # update the parameters

        # ---------------------
        # Train Generator
        # Goal: make the generated fake images be judged as real by the discriminator.
        # The discriminator is held fixed while the fake images are paired with the real label,
        # so backpropagation only updates the generator network's parameters,
        # training the generator to produce images the discriminator takes for real.
        # ---------------------
        z = Variable(torch.randn(imgs.size(0), latent_dim, device=imgs.device))  # sample random noise
        fake_img = generator(z)              # feed the noise to the generator to get fake images
        output = discriminator(fake_img)     # the discriminator's verdict on them
        # Loss and optimization
        loss_G = criterion(output, real_label)  # loss between the verdicts on fake images and the *real* label
        optimizer_G.zero_grad()              # zero the gradients before backpropagation
        loss_G.backward()                    # backpropagate the error
        optimizer_G.step()                   # update the parameters

        # Logging
        if (i + 1) % 300 == 0:
            print(
                "[Epoch %d/%d] [Batch %d/%d]  [D loss:%f] [G loss:%f]  [D real:%f] [D fake:%f]"
                % (epoch, n_epochs, i, len(dataloader), loss_D.item(), loss_G.item(), real_scores.data.mean(), fake_scores.data.mean())
            )
        # Save sample images during training
        batches_done = epoch * len(dataloader) + i
        if batches_done % sample_interval == 0:
            save_image(fake_img.data[:25], "./images/%d.png" % batches_done, nrow=5, normalize=True)
```
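
Note that the generator update above pairs fake images with the real label inside the BCE loss, so it minimizes -log D(G(z)) rather than the original minimax term log(1 - D(G(z))). This is the standard "non-saturating" generator loss from the GAN paper, which provides stronger gradients early in training, when the discriminator rejects fakes easily:

$$\mathcal{L}_G = -\mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big]$$

A sample of the training log: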
```
[Epoch 0/50] [Batch 299/938]  [D loss:1.126462] [G loss:0.899555]  [D real:0.516139] [D fake:0.339249]
[Epoch 0/50] [Batch 599/938]  [D loss:1.149556] [G loss:0.897094]  [D real:0.544126] [D fake:0.379397]
[Epoch 0/50] [Batch 899/938]  [D loss:0.996889] [G loss:1.260947]  [D real:0.650683] [D fake:0.410245]
[Epoch 1/50] [Batch 299/938]  [D loss:1.115366] [G loss:2.147109]  [D real:0.732169] [D fake:0.522674]
[Epoch 1/50] [Batch 599/938]  [D loss:1.085733] [G loss:2.925582]  [D real:0.830789] [D fake:0.563939]
[Epoch 1/50] [Batch 899/938]  [D loss:1.212135] [G loss:2.759920]  [D real:0.856782] [D fake:0.635772]
[Epoch 2/50] [Batch 299/938]  [D loss:1.120076] [G loss:1.927145]  [D real:0.809302] [D fake:0.573418]
[Epoch 2/50] [Batch 599/938]  [D loss:0.918613] [G loss:1.235865]  [D real:0.622094] [D fake:0.307855]
[Epoch 2/50] [Batch 899/938]  [D loss:0.959764] [G loss:1.823482]  [D real:0.820892] [D fake:0.509622]
[Epoch 3/50] [Batch 299/938]  [D loss:0.853248] [G loss:1.786410]  [D real:0.744379] [D fake:0.380615]
[Epoch 3/50] [Batch 599/938]  [D loss:0.892074] [G loss:2.111202]  [D real:0.760768] [D fake:0.390234]
[Epoch 3/50] [Batch 899/938]  [D loss:0.989855] [G loss:2.272981]  [D real:0.836766] [D fake:0.510386]
[Epoch 4/50] [Batch 299/938]  [D loss:0.907261] [G loss:2.853382]  [D real:0.838500] [D fake:0.472724]
[Epoch 4/50] [Batch 599/938]  [D loss:1.158909] [G loss:0.798443]  [D real:0.518166] [D fake:0.145365]
[Epoch 4/50] [Batch 899/938]  [D loss:0.811972] [G loss:2.601337]  [D real:0.823196] [D fake:0.405719]
[Epoch 5/50] [Batch 299/938]  [D loss:0.591123] [G loss:2.082723]  [D real:0.771063] [D fake:0.245541]
[Epoch 5/50] [Batch 599/938]  [D loss:0.647286] [G loss:1.995042]  [D real:0.787497] [D fake:0.275002]
[Epoch 5/50] [Batch 899/938]  [D loss:0.620029] [G loss:1.987055]  [D real:0.767942] [D fake:0.245935]
[Epoch 6/50] [Batch 299/938]  [D loss:0.920807] [G loss:1.601699]  [D real:0.762614] [D fake:0.389948]
[Epoch 6/50] [Batch 599/938]  [D loss:0.824215] [G loss:1.587954]  [D real:0.704582] [D fake:0.226340]
[Epoch 6/50] [Batch 899/938]  [D loss:0.718090] [G loss:1.939514]  [D real:0.758678] [D fake:0.297810]
[Epoch 7/50] [Batch 299/938]  [D loss:0.750032] [G loss:1.268215]  [D real:0.703582] [D fake:0.235967]
[Epoch 7/50] [Batch 599/938]  [D loss:0.783685] [G loss:2.126447]  [D real:0.778962] [D fake:0.343705]
[Epoch 7/50] [Batch 899/938]  [D loss:0.807744] [G loss:1.234891]  [D real:0.703821] [D fake:0.264793]
[Epoch 8/50] [Batch 299/938]  [D loss:1.123362] [G loss:0.812300]  [D real:0.537284] [D fake:0.226940]
[Epoch 8/50] [Batch 599/938]  [D loss:0.932809] [G loss:1.276933]  [D real:0.694338] [D fake:0.332504]
[Epoch 8/50] [Batch 899/938]  [D loss:0.973240] [G loss:0.877600]  [D real:0.537963] [D fake:0.156072]
[Epoch 9/50] [Batch 299/938]  [D loss:0.851623] [G loss:1.151848]  [D real:0.613663] [D fake:0.194390]
[Epoch 9/50] [Batch 599/938]  [D loss:0.757661] [G loss:2.185205]  [D real:0.798452] [D fake:0.331717]
[Epoch 9/50] [Batch 899/938]  [D loss:0.930353] [G loss:1.250855]  [D real:0.647663] [D fake:0.235231]
[Epoch 10/50] [Batch 299/938]  [D loss:0.846923] [G loss:1.422895]  [D real:0.705119] [D fake:0.298378]
[Epoch 10/50] [Batch 599/938]  [D loss:1.120350] [G loss:2.781064]  [D real:0.871781] [D fake:0.587554]
[Epoch 10/50] [Batch 899/938]  [D loss:0.824792] [G loss:2.011739]  [D real:0.735258] [D fake:0.330044]
[Epoch 11/50] [Batch 299/938]  [D loss:0.949749] [G loss:1.582255]  [D real:0.649686] [D fake:0.275324]
[Epoch 11/50] [Batch 599/938]  [D loss:0.982256] [G loss:1.346781]  [D real:0.648031] [D fake:0.319623]
[Epoch 11/50] [Batch 899/938]  [D loss:1.134111] [G loss:0.734665]  [D real:0.502376] [D fake:0.201197]
[Epoch 12/50] [Batch 299/938]  [D loss:0.886618] [G loss:1.887888]  [D real:0.751914] [D fake:0.387430]
[Epoch 12/50] [Batch 599/938]  [D loss:0.980123] [G loss:1.845785]  [D real:0.826584] [D fake:0.499849]
[Epoch 12/50] [Batch 899/938]  [D loss:1.128903] [G loss:0.812115]  [D real:0.447066] [D fake:0.128298]
[Epoch 13/50] [Batch 299/938]  [D loss:1.307499] [G loss:1.138010]  [D real:0.444763] [D fake:0.118793]
[Epoch 13/50] [Batch 599/938]  [D loss:0.919566] [G loss:1.435858]  [D real:0.705751] [D fake:0.372891]
[Epoch 13/50] [Batch 899/938]  [D loss:1.045991] [G loss:0.795672]  [D real:0.551619] [D fake:0.233923]
[Epoch 14/50] [Batch 299/938]  [D loss:0.974308] [G loss:1.132812]  [D real:0.645558] [D fake:0.321130]
[Epoch 14/50] [Batch 599/938]  [D loss:1.077103] [G loss:1.873058]  [D real:0.763407] [D fake:0.485259]
[Epoch 14/50] [Batch 899/938]  [D loss:1.154649] [G loss:1.311222]  [D real:0.643791] [D fake:0.423526]
[Epoch 15/50] [Batch 299/938]  [D loss:1.055950] [G loss:1.662198]  [D real:0.711471] [D fake:0.457155]
[Epoch 15/50] [Batch 599/938]  [D loss:0.976650] [G loss:0.991488]  [D real:0.591312] [D fake:0.272113]
[Epoch 15/50] [Batch 899/938]  [D loss:0.972705] [G loss:1.357392]  [D real:0.667669] [D fake:0.373288]
[Epoch 16/50] [Batch 299/938]  [D loss:0.952374] [G loss:1.087495]  [D real:0.587684] [D fake:0.241297]
[Epoch 16/50] [Batch 599/938]  [D loss:0.904115] [G loss:1.359004]  [D real:0.717762] [D fake:0.368899]
[Epoch 16/50] [Batch 899/938]  [D loss:0.946697] [G loss:1.670658]  [D real:0.765068] [D fake:0.431645]
[Epoch 17/50] [Batch 299/938]  [D loss:0.997313] [G loss:0.943261]  [D real:0.616844] [D fake:0.312825]
[Epoch 17/50] [Batch 599/938]  [D loss:1.030199] [G loss:1.093042]  [D real:0.555989] [D fake:0.235301]
[Epoch 17/50] [Batch 899/938]  [D loss:0.911224] [G loss:1.282301]  [D real:0.661065] [D fake:0.313903]
[Epoch 18/50] [Batch 299/938]  [D loss:1.039436] [G loss:1.392826]  [D real:0.689580] [D fake:0.437706]
[Epoch 18/50] [Batch 599/938]  [D loss:1.082913] [G loss:0.873606]  [D real:0.558201] [D fake:0.300038]
[Epoch 18/50] [Batch 899/938]  [D loss:1.258200] [G loss:0.618452]  [D real:0.442770] [D fake:0.194379]
[Epoch 19/50] [Batch 299/938]  [D loss:1.082308] [G loss:1.268667]  [D real:0.520880] [D fake:0.255018]
[Epoch 19/50] [Batch 599/938]  [D loss:0.905622] [G loss:1.311422]  [D real:0.732448] [D fake:0.399873]
[Epoch 19/50] [Batch 899/938]  [D loss:1.077982] [G loss:1.436989]  [D real:0.724235] [D fake:0.479413]
[Epoch 20/50] [Batch 299/938]  [D loss:1.031461] [G loss:1.433434]  [D real:0.704877] [D fake:0.433269]
[Epoch 20/50] [Batch 599/938]  [D loss:0.979215] [G loss:1.721090]  [D real:0.748500] [D fake:0.444550]
[Epoch 20/50] [Batch 899/938]  [D loss:0.967548] [G loss:0.979543]  [D real:0.605029] [D fake:0.275011]
[Epoch 21/50] [Batch 299/938]  [D loss:1.008990] [G loss:1.505808]  [D real:0.700990] [D fake:0.414113]
[Epoch 21/50] [Batch 599/938]  [D loss:1.120533] [G loss:0.947614]  [D real:0.501168] [D fake:0.196343]
[Epoch 21/50] [Batch 899/938]  [D loss:0.963488] [G loss:1.843049]  [D real:0.777486] [D fake:0.464934]
[Epoch 22/50] [Batch 299/938]  [D loss:0.975867] [G loss:1.108254]  [D real:0.650325] [D fake:0.377432]
[Epoch 22/50] [Batch 599/938]  [D loss:0.957223] [G loss:1.135555]  [D real:0.639857] [D fake:0.328309]
[Epoch 22/50] [Batch 899/938]  [D loss:0.987199] [G loss:1.326054]  [D real:0.667016] [D fake:0.364796]
[Epoch 23/50] [Batch 299/938]  [D loss:0.920097] [G loss:1.332339]  [D real:0.706342] [D fake:0.359756]
[Epoch 23/50] [Batch 599/938]  [D loss:1.022273] [G loss:1.082345]  [D real:0.587763] [D fake:0.281549]
[Epoch 23/50] [Batch 899/938]  [D loss:0.908397] [G loss:1.259532]  [D real:0.649278] [D fake:0.304928]
[Epoch 24/50] [Batch 299/938]  [D loss:1.084111] [G loss:1.708223]  [D real:0.748224] [D fake:0.492710]
[Epoch 24/50] [Batch 599/938]  [D loss:1.118541] [G loss:1.251814]  [D real:0.624162] [D fake:0.374923]
[Epoch 24/50] [Batch 899/938]  [D loss:1.082891] [G loss:1.213567]  [D real:0.622567] [D fake:0.365029]
[Epoch 25/50] [Batch 299/938]  [D loss:1.071242] [G loss:1.101048]  [D real:0.715404] [D fake:0.451872]
[Epoch 25/50] [Batch 599/938]  [D loss:1.214661] [G loss:1.769220]  [D real:0.826745] [D fake:0.569088]
[Epoch 25/50] [Batch 899/938]  [D loss:1.042482] [G loss:1.574865]  [D real:0.720972] [D fake:0.444199]
[Epoch 26/50] [Batch 299/938]  [D loss:1.014647] [G loss:0.812489]  [D real:0.555258] [D fake:0.246729]
[Epoch 26/50] [Batch 599/938]  [D loss:0.972982] [G loss:1.268242]  [D real:0.722147] [D fake:0.392772]
[Epoch 26/50] [Batch 899/938]  [D loss:1.158464] [G loss:2.182199]  [D real:0.791383] [D fake:0.547543]
[Epoch 27/50] [Batch 299/938]  [D loss:1.016049] [G loss:0.993888]  [D real:0.645598] [D fake:0.349482]
[Epoch 27/50] [Batch 599/938]  [D loss:1.011526] [G loss:0.771782]  [D real:0.618560] [D fake:0.322423]
[Epoch 27/50] [Batch 899/938]  [D loss:1.097934] [G loss:1.229312]  [D real:0.591573] [D fake:0.330537]
[Epoch 28/50] [Batch 299/938]  [D loss:1.043149] [G loss:0.680922]  [D real:0.537576] [D fake:0.223805]
[Epoch 28/50] [Batch 599/938]  [D loss:0.924291] [G loss:1.150502]  [D real:0.682267] [D fake:0.325791]
[Epoch 28/50] [Batch 899/938]  [D loss:0.774712] [G loss:1.353207]  [D real:0.662420] [D fake:0.237354]
[Epoch 29/50] [Batch 299/938]  [D loss:1.098983] [G loss:1.458917]  [D real:0.708487] [D fake:0.456147]
[Epoch 29/50] [Batch 599/938]  [D loss:0.901726] [G loss:1.458206]  [D real:0.704136] [D fake:0.354276]
[Epoch 29/50] [Batch 899/938]  [D loss:1.024077] [G loss:1.027868]  [D real:0.530083] [D fake:0.195724]
[Epoch 30/50] [Batch 299/938]  [D loss:1.006195] [G loss:1.339055]  [D real:0.681568] [D fake:0.381492]
[Epoch 30/50] [Batch 599/938]  [D loss:1.139939] [G loss:0.850789]  [D real:0.540653] [D fake:0.274699]
[Epoch 30/50] [Batch 899/938]  [D loss:1.045264] [G loss:1.354315]  [D real:0.661704] [D fake:0.374420]
[Epoch 31/50] [Batch 299/938]  [D loss:0.906118] [G loss:1.087022]  [D real:0.646974] [D fake:0.287992]
[Epoch 31/50] [Batch 599/938]  [D loss:0.923142] [G loss:1.168598]  [D real:0.661384] [D fake:0.317574]
[Epoch 31/50] [Batch 899/938]  [D loss:0.893291] [G loss:1.127621]  [D real:0.610552] [D fake:0.239781]
[Epoch 32/50] [Batch 299/938]  [D loss:1.028418] [G loss:0.872905]  [D real:0.511972] [D fake:0.163022]
[Epoch 32/50] [Batch 599/938]  [D loss:1.001148] [G loss:1.375986]  [D real:0.634169] [D fake:0.332520]
[Epoch 32/50] [Batch 899/938]  [D loss:0.897700] [G loss:1.646899]  [D real:0.638676] [D fake:0.226422]
[Epoch 33/50] [Batch 299/938]  [D loss:1.021669] [G loss:0.766808]  [D real:0.583394] [D fake:0.284108]
[Epoch 33/50] [Batch 599/938]  [D loss:1.095916] [G loss:1.762437]  [D real:0.771493] [D fake:0.501423]
[Epoch 33/50] [Batch 899/938]  [D loss:0.873408] [G loss:1.385971]  [D real:0.706132] [D fake:0.343957]
[Epoch 34/50] [Batch 299/938]  [D loss:0.974229] [G loss:1.208778]  [D real:0.628312] [D fake:0.261362]
[Epoch 34/50] [Batch 599/938]  [D loss:0.958586] [G loss:0.977570]  [D real:0.575545] [D fake:0.224128]
[Epoch 34/50] [Batch 899/938]  [D loss:0.962942] [G loss:1.669462]  [D real:0.711120] [D fake:0.392588]
[Epoch 35/50] [Batch 299/938]  [D loss:0.941913] [G loss:1.235123]  [D real:0.636958] [D fake:0.302639]
[Epoch 35/50] [Batch 599/938]  [D loss:0.866773] [G loss:1.674663]  [D real:0.809084] [D fake:0.426007]
[Epoch 35/50] [Batch 899/938]  [D loss:0.839387] [G loss:1.347061]  [D real:0.681547] [D fake:0.276491]
[Epoch 36/50] [Batch 299/938]  [D loss:0.908666] [G loss:1.740739]  [D real:0.802489] [D fake:0.433976]
[Epoch 36/50] [Batch 599/938]  [D loss:0.747275] [G loss:1.465722]  [D real:0.680790] [D fake:0.228464]
[Epoch 36/50] [Batch 899/938]  [D loss:0.853031] [G loss:1.050651]  [D real:0.617018] [D fake:0.190504]
[Epoch 37/50] [Batch 299/938]  [D loss:0.992326] [G loss:0.987399]  [D real:0.668963] [D fake:0.380671]
[Epoch 37/50] [Batch 599/938]  [D loss:0.999288] [G loss:1.124590]  [D real:0.722034] [D fake:0.401539]
[Epoch 37/50] [Batch 899/938]  [D loss:1.128850] [G loss:1.283640]  [D real:0.474914] [D fake:0.138298]
[Epoch 38/50] [Batch 299/938]  [D loss:1.140573] [G loss:0.910530]  [D real:0.543450] [D fake:0.280660]
[Epoch 38/50] [Batch 599/938]  [D loss:1.125377] [G loss:1.274833]  [D real:0.623111] [D fake:0.358482]
[Epoch 38/50] [Batch 899/938]  [D loss:0.903505] [G loss:2.463851]  [D real:0.804626] [D fake:0.452510]
[Epoch 39/50] [Batch 299/938]  [D loss:1.029963] [G loss:0.861951]  [D real:0.537239] [D fake:0.246637]
[Epoch 39/50] [Batch 599/938]  [D loss:0.928971] [G loss:1.389888]  [D real:0.726214] [D fake:0.353216]
[Epoch 39/50] [Batch 899/938]  [D loss:0.850291] [G loss:1.156543]  [D real:0.665365] [D fake:0.278161]
[Epoch 40/50] [Batch 299/938]  [D loss:1.114792] [G loss:1.124351]  [D real:0.620180] [D fake:0.358419]
[Epoch 40/50] [Batch 599/938]  [D loss:1.211019] [G loss:1.504199]  [D real:0.654950] [D fake:0.419927]
[Epoch 40/50] [Batch 899/938]  [D loss:1.067284] [G loss:1.602810]  [D real:0.738925] [D fake:0.461606]
[Epoch 41/50] [Batch 299/938]  [D loss:0.979463] [G loss:0.973400]  [D real:0.615557] [D fake:0.287014]
[Epoch 41/50] [Batch 599/938]  [D loss:1.006179] [G loss:1.956477]  [D real:0.717201] [D fake:0.396797]
[Epoch 41/50] [Batch 899/938]  [D loss:0.866434] [G loss:1.753222]  [D real:0.715549] [D fake:0.302359]
[Epoch 42/50] [Batch 299/938]  [D loss:0.859131] [G loss:2.199386]  [D real:0.731800] [D fake:0.323501]
[Epoch 42/50] [Batch 599/938]  [D loss:1.212487] [G loss:2.213250]  [D real:0.892617] [D fake:0.590188]
[Epoch 42/50] [Batch 899/938]  [D loss:0.993835] [G loss:1.371387]  [D real:0.711200] [D fake:0.400020]
[Epoch 43/50] [Batch 299/938]  [D loss:0.937333] [G loss:1.240518]  [D real:0.594048] [D fake:0.165137]
[Epoch 43/50] [Batch 599/938]  [D loss:0.826414] [G loss:1.805223]  [D real:0.716086] [D fake:0.306703]
[Epoch 43/50] [Batch 899/938]  [D loss:1.078007] [G loss:1.593838]  [D real:0.745717] [D fake:0.453779]
[Epoch 44/50] [Batch 299/938]  [D loss:0.884524] [G loss:1.749227]  [D real:0.724763] [D fake:0.355184]
[Epoch 44/50] [Batch 599/938]  [D loss:0.966496] [G loss:0.809007]  [D real:0.577357] [D fake:0.226516]
[Epoch 44/50] [Batch 899/938]  [D loss:0.865563] [G loss:1.897072]  [D real:0.715572] [D fake:0.327211]
[Epoch 45/50] [Batch 299/938]  [D loss:0.919493] [G loss:1.926025]  [D real:0.669408] [D fake:0.297098]
[Epoch 45/50] [Batch 599/938]  [D loss:1.062480] [G loss:1.232262]  [D real:0.549454] [D fake:0.219422]
[Epoch 45/50] [Batch 899/938]  [D loss:0.863310] [G loss:1.340070]  [D real:0.694014] [D fake:0.295423]
[Epoch 46/50] [Batch 299/938]  [D loss:0.974231] [G loss:1.514071]  [D real:0.675866] [D fake:0.307070]
[Epoch 46/50] [Batch 599/938]  [D loss:0.935487] [G loss:1.674119]  [D real:0.819082] [D fake:0.461227]
[Epoch 46/50] [Batch 899/938]  [D loss:0.883260] [G loss:2.014895]  [D real:0.805989] [D fake:0.412275]
[Epoch 47/50] [Batch 299/938]  [D loss:1.042589] [G loss:1.013629]  [D real:0.577147] [D fake:0.206049]
[Epoch 47/50] [Batch 599/938]  [D loss:0.942141] [G loss:1.822692]  [D real:0.712124] [D fake:0.330109]
[Epoch 47/50] [Batch 899/938]  [D loss:1.027585] [G loss:1.700922]  [D real:0.733564] [D fake:0.393722]
[Epoch 48/50] [Batch 299/938]  [D loss:1.036207] [G loss:2.060518]  [D real:0.794986] [D fake:0.473033]
[Epoch 48/50] [Batch 599/938]  [D loss:0.881303] [G loss:1.338223]  [D real:0.683420] [D fake:0.288061]
[Epoch 48/50] [Batch 899/938]  [D loss:0.844696] [G loss:1.736678]  [D real:0.706084] [D fake:0.303232]
[Epoch 49/50] [Batch 299/938]  [D loss:0.932430] [G loss:1.783170]  [D real:0.700632] [D fake:0.345498]
[Epoch 49/50] [Batch 599/938]  [D loss:0.746427] [G loss:1.045450]  [D real:0.687943] [D fake:0.225172]
[Epoch 49/50] [Batch 899/938]  [D loss:0.903058] [G loss:1.841586]  [D real:0.741125] [D fake:0.375467]
```

2. Saving the Model

```python
# Save the models
torch.save(generator.state_dict(), './save/generator.pth')
torch.save(discriminator.state_dict(), './save/discriminator.pth')
```
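
Once saved, the generator can be reloaded on its own to produce new digits. A minimal sketch of such usage (the output filename sample_from_saved.png is just an example):

```python
# Reload the trained generator and sample new digits (illustrative usage)
G = Generator()
G.load_state_dict(torch.load('./save/generator.pth', map_location='cpu'))
G.eval()  # use the batch-norm running statistics collected during training
with torch.no_grad():
    z = torch.randn(25, latent_dim)
    samples = G(z)                  # (25, 1, 28, 28), values in [-1, 1]
save_image(samples, './images/sample_from_saved.png', nrow=5, normalize=True)
```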