Part 1: Theoretical Foundations
Generative Adversarial Networks (GANs) have been one of the most active directions in deep learning in recent years. "GAN" does not refer to one specific neural network but to a family of networks designed around a game-theoretic idea. A GAN consists of two neural networks, called the generator and the discriminator. The generator takes a random sample from some noise distribution as input and outputs an artificial sample that closely resembles the real samples in the training set; the discriminator takes either a real sample or an artificial one as input and tries to tell the two apart as accurately as possible. The two networks are trained alternately, competing against each other, and each improves in the process. Ideally, after enough rounds of this game the discriminator can no longer judge whether a given sample is real, i.e., it outputs a probability of 0.5 ("50% real, 50% fake") for every sample. At that point the generator's artificial samples are realistic enough to fool the discriminator, and the game stops.
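Formally, this game can be written as the minimax objective from the original GAN paper (Goodfellow et al., 2014):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]

The discriminator D maximizes V (assigning high probability to real samples and low probability to fakes), while the generator G minimizes it by making D(G(z)) as large as possible.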
1. The Generator
In a GAN, the generator G takes random noise z as input and, as training progresses, learns to output a fake sample G(z) with the same size as a real sample and a similar distribution. At its core the generator is a generative model: it learns an assumed form of the data distribution and its parameters, and then draws new samples from the learned model.
Mathematically, a generative method starts from the real data by making a distributional assumption about its observed or latent variables; it then fits those variables and parameters to the real data; finally it obtains a learned approximate distribution, from which new data can be generated.
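As a toy illustration of this assume/fit/sample recipe (not part of a GAN itself, and not from the original text), one can fit a one-dimensional Gaussian to data and resample from it:

import numpy as np

# "Real" data drawn from some unknown 1-D distribution
real_data = np.random.normal(loc=3.0, scale=1.5, size=10000)

# Step 1: distributional assumption -- model the data as a Gaussian N(mu, sigma^2)
# Step 2: fit the parameters to the real data
mu, sigma = real_data.mean(), real_data.std()

# Step 3: sample new data from the learned approximate distribution
new_samples = np.random.normal(loc=mu, scale=sigma, size=5)
print(mu, sigma, new_samples)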
2. The Discriminator
In a GAN, the discriminator D maps an input sample x to a probability D(x) in [0, 1]. The input x may be a real sample from the original dataset or an artificial sample G(z) produced by the generator G. By convention, the closer D(x) is to 1, the more likely the sample is real; the closer it is to 0, the more likely the sample is fake.
3. Basic Principle
Researchers originally wanted computers to generate data automatically. GAN was not the first generative algorithm: earlier generative approaches typically measured the gap between generated and real images with mean squared error (MSE) as the loss. In practice, however, two generated images with identical MSE can differ drastically in visual quality. This weakness of pixel-wise losses motivated GAN, which replaces the hand-crafted loss with a learned discriminator.
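A minimal sketch of this weakness, using synthetic arrays (the numbers are illustrative, not from the text): a one-pixel shift preserves the image content, while white noise of matching energy destroys it, yet both corruptions can be constructed to have the same MSE.

import numpy as np

rng = np.random.default_rng(0)
img = rng.random((28, 28))          # stand-in "real" image

# Corruption A: shift the image one pixel to the right (content preserved)
shifted = np.roll(img, shift=1, axis=1)

# Corruption B: add white noise, scaled so its MSE matches corruption A's
mse_shift = np.mean((img - shifted) ** 2)
noise = rng.standard_normal(img.shape)
noise *= np.sqrt(mse_shift / np.mean(noise ** 2))
noisy = img + noise

print(np.mean((img - shifted) ** 2), np.mean((img - noisy) ** 2))  # nearly equal MSE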
Part 2: Preliminaries
1. Defining the Hyperparameters
import os
import numpy as np
import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision import datasets
from torchvision.utils import save_image
from torch.utils.data import DataLoader
## Create output folders
os.makedirs("./data/images/", exist_ok=True) ## sample images saved during training
os.makedirs("./data/save/", exist_ok=True)   ## where the trained models are saved
os.makedirs("./data/mnist", exist_ok=True)   ## where the downloaded dataset is stored
## Hyperparameter configuration
n_epochs = 50          # number of training epochs
batch_size = 64        # samples per batch
lr = 0.0002            # learning rate for both Adam optimizers
b1 = 0.5               # Adam beta1 (decay of the first-moment running average)
b2 = 0.999             # Adam beta2 (decay of the second-moment running average)
n_cpu = 2              # DataLoader worker processes
latent_dim = 100       # dimensionality of the noise vector z
img_size = 28          # height/width of the (square) images
channels = 1           # MNIST is grayscale
sample_interval = 500  # save sample images every N batches
## Image shape: (1, 28, 28); total number of pixels: 784
img_shape = (channels, img_size, img_size)
img_area = np.prod(img_shape)
## Use CUDA if a GPU is available
cuda = True if torch.cuda.is_available() else False
device = torch.device("cuda" if cuda else "cpu")
print(cuda)
2. Downloading the Data
Note that Normalize([0.5], [0.5]) maps the pixel values from [0, 1] to [-1, 1], matching the Tanh output range of the generator defined below.
mnist = datasets.MNIST(
    root="./data/mnist", train=True, download=True,
    transform=transforms.Compose(
        [transforms.Resize(img_size), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
    ),
)
3. Configuring the DataLoader
dataloader = DataLoader(mnist, batch_size=batch_size, shuffle=True, num_workers=n_cpu)
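A quick sanity check (optional, not in the original script) confirms the batch shapes that the training loop below assumes:

imgs, labels = next(iter(dataloader))
print(imgs.shape, labels.shape)  # expected: torch.Size([64, 1, 28, 28]) torch.Size([64])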
Part 3: Defining the Models
1. Defining the Discriminator
##### The Discriminator #####
## Flatten the 28x28 image into a 784-dim vector and pass it through a multilayer perceptron
## with LeakyReLU activations (negative slope 0.2) in between; a final Sigmoid squashes the
## output to a probability in [0, 1] for binary classification.
class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(img_area, 512),         # 784 input features -> 512
            nn.LeakyReLU(0.2, inplace=True),  # non-linear activation
            nn.Linear(512, 256),              # 512 -> 256
            nn.LeakyReLU(0.2, inplace=True),  # non-linear activation
            nn.Linear(256, 1),                # 256 -> 1
            nn.Sigmoid(),                     # map the score to [0, 1] as a probability for binary
                                              # classification (multi-class would use softmax)
        )

    def forward(self, img):
        img_flat = img.view(img.size(0), -1)  # flatten each image to 784: (64, 784)
        validity = self.model(img_flat)       # run the flattened batch through the MLP
        return validity
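A quick shape check (illustrative only, not part of the original script): feeding a dummy batch through the discriminator should yield one probability per image.

d = Discriminator()
dummy = torch.randn(64, 1, 28, 28)  # stand-in for a batch of MNIST images
print(d(dummy).shape)               # expected: torch.Size([64, 1])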
2. Defining the Generator
###### The Generator ######
## The input is a 100-dimensional noise vector sampled from a standard normal distribution.
## Successive Linear + BatchNorm + LeakyReLU blocks expand it through 128, 256, 512 and 1024
## dimensions; a final Linear layer maps to 784, and a Tanh squashes the fake image's pixel
## values into [-1, 1], matching the normalized real data.
class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()

        ## Building block used by the model below
        def block(in_feat, out_feat, normalize=True):
            layers = [nn.Linear(in_feat, out_feat)]           # linear map to out_feat dimensions
            if normalize:
                layers.append(nn.BatchNorm1d(out_feat, 0.8))  # batch normalization (0.8 is passed
                                                              # positionally as eps)
            layers.append(nn.LeakyReLU(0.2, inplace=True))    # non-linear activation
            return layers

        ## np.prod(img_shape): product of the shape's elements, 1*28*28 = 784
        self.model = nn.Sequential(
            *block(latent_dim, 128, normalize=False),  # 100 -> 128, LeakyReLU (no batch norm)
            *block(128, 256),                          # 128 -> 256, batch norm, LeakyReLU
            *block(256, 512),                          # 256 -> 512, batch norm, LeakyReLU
            *block(512, 1024),                         # 512 -> 1024, batch norm, LeakyReLU
            nn.Linear(1024, img_area),                 # 1024 -> 784
            nn.Tanh(),                                 # map each of the 784 outputs into [-1, 1]
        )

    ## view(): like numpy's reshape; here it restores the image shape (64, 1, 28, 28)
    def forward(self, z):                           # z: (64, 100) batch of noise vectors
        imgs = self.model(z)                        # run the noise through the MLP
        imgs = imgs.view(imgs.size(0), *img_shape)  # reshape to (64, 1, 28, 28)
        return imgs
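As with the discriminator, a quick shape check (illustrative only):

g = Generator()
z = torch.randn(64, latent_dim)  # batch of 64 noise vectors
print(g(z).shape)                # expected: torch.Size([64, 1, 28, 28])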
Part 4: Training the Model
1. Creating the Instances
## Instantiate the generator and the discriminator
generator = Generator()
discriminator = Discriminator()
## Define the loss: binary cross-entropy
criterion = torch.nn.BCELoss()
## Define the optimizers; the learning rate is lr = 0.0002
## betas: coefficients for the running averages of the gradient and its square
optimizer_G = torch.optim.Adam(generator.parameters(), lr=lr, betas=(b1, b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=lr, betas=(b1, b2))
## Move everything to the GPU if one is available (no-op on CPU)
generator = generator.to(device)
discriminator = discriminator.to(device)
criterion = criterion.to(device)
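For reference, BCELoss implements the binary cross-entropy: for a predicted probability p and a target label y in {0, 1} it computes, averaged over the batch,

\ell(p, y) = -\left[\, y \log p + (1 - y) \log(1 - p) \,\right]

With y = 1 this penalizes small D(x) on real images; with y = 0 it penalizes large D(G(z)) on fakes. The generator's loss below scores fakes against y = 1.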
2. The Training Loop
## Train for multiple epochs
for epoch in range(n_epochs):                    # epoch: 50
    for i, (imgs, _) in enumerate(dataloader):   # imgs: (64, 1, 28, 28), _: labels (64)

        ## ============================ Train the discriminator ============================
        ## view(): like numpy's reshape; flattens (64, 1, 28, 28) to (64, 784)
        imgs = imgs.view(imgs.size(0), -1)       # flatten each image to 28*28 = 784
        real_img = imgs.to(device)               # move the batch of real images to the device
        real_label = torch.ones(imgs.size(0), 1).to(device)   # label 1 for real images
        fake_label = torch.zeros(imgs.size(0), 1).to(device)  # label 0 for fake images
        ## ---------------------
        ##  Train Discriminator
        ##  Two parts: 1) classify real images as real; 2) classify fake images as fake
        ## ---------------------
        ## Loss on real images
        real_out = discriminator(real_img)             # run the real images through D
        loss_real_D = criterion(real_out, real_label)  # loss on the real images
        real_scores = real_out                         # D's scores for real images; closer to 1 is better
        ## Loss on fake images
        ## detach(): cut the fake images out of the graph so no gradient flows back into G,
        ## since G is not updated in this step
        z = torch.randn(imgs.size(0), latent_dim).to(device)  # random noise, shape (64, 100)
        fake_img = generator(z).detach()               # generate a batch of fake images from the noise
        fake_out = discriminator(fake_img)             # D's verdict on the fake images
        loss_fake_D = criterion(fake_out, fake_label)  # loss on the fake images
        fake_scores = fake_out                         # D's scores for fake images; for D, closer to 0 is better
        ## Total loss and optimization
        loss_D = loss_real_D + loss_fake_D       # total loss = real loss + fake loss
        optimizer_D.zero_grad()                  # zero the gradients before backpropagation
        loss_D.backward()                        # backpropagate the error
        optimizer_D.step()                       # update D's parameters
        ## -----------------
        ##  Train Generator
        ##  Idea: we want the fake images to be classified as real. With the discriminator
        ##  held fixed, the fake images are scored against the *real* labels, and backprop
        ##  updates only the generator's parameters. Training G to fool D in this way is
        ##  what makes the setup adversarial.
        ## -----------------
        z = torch.randn(imgs.size(0), latent_dim).to(device)  # fresh random noise
        fake_img = generator(z)                  # generate fake images (no detach this time)
        output = discriminator(fake_img)         # D's verdict on the fakes
        ## Loss and optimization
        loss_G = criterion(output, real_label)   # loss of the fake images against the real labels
        optimizer_G.zero_grad()                  # zero the gradients
        loss_G.backward()                        # backpropagate
        optimizer_G.step()                       # update the generator's parameters

        ## Log progress
        ## item(): extract the Python number from a single-element tensor
        if (i + 1) % 300 == 0:
            print(
                "[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f] [D real: %f] [D fake: %f]"
                % (epoch, n_epochs, i, len(dataloader), loss_D.item(), loss_G.item(), real_scores.data.mean(), fake_scores.data.mean())
            )
        ## Save sample images during training
        batches_done = epoch * len(dataloader) + i
        if batches_done % sample_interval == 0:
            save_image(fake_img.data[:25], "./data/images/%d.png" % batches_done, nrow=5, normalize=True)
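Sample output from one 50-epoch training run (logged every 300 batches):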
[Epoch 0/50] [Batch 299/938] [D loss: 1.062744] [G loss: 0.998991] [D real: 0.686333] [D fake: 0.477075]
[Epoch 0/50] [Batch 599/938] [D loss: 1.138402] [G loss: 1.708447] [D real: 0.757014] [D fake: 0.568092]
[Epoch 0/50] [Batch 899/938] [D loss: 0.874018] [G loss: 1.538504] [D real: 0.630406] [D fake: 0.300350]
[Epoch 1/50] [Batch 299/938] [D loss: 1.008905] [G loss: 1.797931] [D real: 0.850955] [D fake: 0.562927]
[Epoch 1/50] [Batch 599/938] [D loss: 0.776864] [G loss: 2.109859] [D real: 0.752511] [D fake: 0.358050]
[Epoch 1/50] [Batch 899/938] [D loss: 0.936565] [G loss: 2.237901] [D real: 0.807196] [D fake: 0.485258]
[Epoch 2/50] [Batch 299/938] [D loss: 0.585652] [G loss: 2.538615] [D real: 0.814312] [D fake: 0.278410]
[Epoch 2/50] [Batch 599/938] [D loss: 0.680700] [G loss: 1.438134] [D real: 0.629067] [D fake: 0.112096]
[Epoch 2/50] [Batch 899/938] [D loss: 0.818238] [G loss: 1.474816] [D real: 0.675095] [D fake: 0.206882]
[Epoch 3/50] [Batch 299/938] [D loss: 0.956872] [G loss: 0.862775] [D real: 0.561085] [D fake: 0.104294]
[Epoch 3/50] [Batch 599/938] [D loss: 0.649356] [G loss: 2.328832] [D real: 0.809782] [D fake: 0.240162]
[Epoch 3/50] [Batch 899/938] [D loss: 0.618475] [G loss: 2.204416] [D real: 0.775485] [D fake: 0.248777]
[Epoch 4/50] [Batch 299/938] [D loss: 0.303458] [G loss: 3.270469] [D real: 0.902577] [D fake: 0.148545]
[Epoch 4/50] [Batch 599/938] [D loss: 0.636712] [G loss: 3.809136] [D real: 0.902870] [D fake: 0.387796]
[Epoch 4/50] [Batch 899/938] [D loss: 0.562836] [G loss: 1.995883] [D real: 0.776156] [D fake: 0.173165]
[Epoch 5/50] [Batch 299/938] [D loss: 0.970683] [G loss: 4.392030] [D real: 0.924271] [D fake: 0.568643]
[Epoch 5/50] [Batch 599/938] [D loss: 0.636667] [G loss: 1.629214] [D real: 0.736580] [D fake: 0.202164]
[Epoch 5/50] [Batch 899/938] [D loss: 0.380502] [G loss: 3.089733] [D real: 0.879478] [D fake: 0.181905]
[Epoch 6/50] [Batch 299/938] [D loss: 0.666225] [G loss: 3.291718] [D real: 0.923496] [D fake: 0.401533]
[Epoch 6/50] [Batch 599/938] [D loss: 0.617952] [G loss: 2.622025] [D real: 0.798732] [D fake: 0.209249]
[Epoch 6/50] [Batch 899/938] [D loss: 1.615218] [G loss: 3.244916] [D real: 0.518405] [D fake: 0.012072]
[Epoch 7/50] [Batch 299/938] [D loss: 0.894060] [G loss: 3.562847] [D real: 0.873052] [D fake: 0.414627]
[Epoch 7/50] [Batch 599/938] [D loss: 0.371754] [G loss: 2.514917] [D real: 0.797416] [D fake: 0.069653]
[Epoch 7/50] [Batch 899/938] [D loss: 0.511698] [G loss: 2.138561] [D real: 0.830197] [D fake: 0.221475]
[Epoch 8/50] [Batch 299/938] [D loss: 0.675444] [G loss: 1.731117] [D real: 0.772514] [D fake: 0.185948]
[Epoch 8/50] [Batch 599/938] [D loss: 0.937024] [G loss: 3.755513] [D real: 0.853989] [D fake: 0.461043]
[Epoch 8/50] [Batch 899/938] [D loss: 0.616977] [G loss: 2.554311] [D real: 0.786181] [D fake: 0.211708]
[Epoch 9/50] [Batch 299/938] [D loss: 0.473390] [G loss: 2.797653] [D real: 0.828926] [D fake: 0.163060]
[Epoch 9/50] [Batch 599/938] [D loss: 0.505826] [G loss: 1.967516] [D real: 0.794953] [D fake: 0.135482]
[Epoch 9/50] [Batch 899/938] [D loss: 0.415636] [G loss: 3.531227] [D real: 0.918756] [D fake: 0.234716]
[Epoch 10/50] [Batch 299/938] [D loss: 0.334533] [G loss: 3.521941] [D real: 0.914758] [D fake: 0.186773]
[Epoch 10/50] [Batch 599/938] [D loss: 0.427849] [G loss: 2.177489] [D real: 0.821728] [D fake: 0.081672]
[Epoch 10/50] [Batch 899/938] [D loss: 0.358280] [G loss: 3.123958] [D real: 0.912934] [D fake: 0.204408]
[Epoch 11/50] [Batch 299/938] [D loss: 0.553102] [G loss: 1.960408] [D real: 0.826580] [D fake: 0.106998]
[Epoch 11/50] [Batch 599/938] [D loss: 0.159242] [G loss: 3.673257] [D real: 0.925992] [D fake: 0.063112]
[Epoch 11/50] [Batch 899/938] [D loss: 0.793491] [G loss: 0.989646] [D real: 0.685122] [D fake: 0.052228]
[Epoch 12/50] [Batch 299/938] [D loss: 1.568396] [G loss: 1.733763] [D real: 0.494010] [D fake: 0.010037]
[Epoch 12/50] [Batch 599/938] [D loss: 0.445660] [G loss: 2.680232] [D real: 0.829233] [D fake: 0.111195]
[Epoch 12/50] [Batch 899/938] [D loss: 0.370891] [G loss: 3.059388] [D real: 0.927789] [D fake: 0.229339]
[Epoch 13/50] [Batch 299/938] [D loss: 0.187710] [G loss: 2.769137] [D real: 0.892172] [D fake: 0.018910]
[Epoch 13/50] [Batch 599/938] [D loss: 0.343926] [G loss: 5.052759] [D real: 0.965712] [D fake: 0.245905]
[Epoch 13/50] [Batch 899/938] [D loss: 0.406878] [G loss: 2.686711] [D real: 0.861410] [D fake: 0.142992]
[Epoch 14/50] [Batch 299/938] [D loss: 0.404130] [G loss: 3.501252] [D real: 0.857759] [D fake: 0.121450]
[Epoch 14/50] [Batch 599/938] [D loss: 0.575298] [G loss: 3.785912] [D real: 0.901401] [D fake: 0.288195]
[Epoch 14/50] [Batch 899/938] [D loss: 0.599810] [G loss: 1.994080] [D real: 0.731276] [D fake: 0.055559]
[Epoch 15/50] [Batch 299/938] [D loss: 0.348519] [G loss: 2.980924] [D real: 0.884915] [D fake: 0.111430]
[Epoch 15/50] [Batch 599/938] [D loss: 0.518708] [G loss: 4.959539] [D real: 0.941624] [D fake: 0.334962]
[Epoch 15/50] [Batch 899/938] [D loss: 0.430769] [G loss: 3.706588] [D real: 0.833605] [D fake: 0.016777]
[Epoch 16/50] [Batch 299/938] [D loss: 0.429710] [G loss: 4.957369] [D real: 0.981422] [D fake: 0.302755]
[Epoch 16/50] [Batch 599/938] [D loss: 0.408069] [G loss: 3.796279] [D real: 0.930731] [D fake: 0.251387]
[Epoch 16/50] [Batch 899/938] [D loss: 0.807350] [G loss: 4.110381] [D real: 0.646739] [D fake: 0.004604]
[Epoch 17/50] [Batch 299/938] [D loss: 0.232989] [G loss: 2.726997] [D real: 0.921093] [D fake: 0.097117]
[Epoch 17/50] [Batch 599/938] [D loss: 0.607006] [G loss: 5.815968] [D real: 0.785427] [D fake: 0.015712]
[Epoch 17/50] [Batch 899/938] [D loss: 0.504233] [G loss: 4.621054] [D real: 0.927587] [D fake: 0.280223]
[Epoch 18/50] [Batch 299/938] [D loss: 0.300116] [G loss: 2.634073] [D real: 0.865037] [D fake: 0.012056]
[Epoch 18/50] [Batch 599/938] [D loss: 0.303095] [G loss: 1.935076] [D real: 0.883199] [D fake: 0.097526]
[Epoch 18/50] [Batch 899/938] [D loss: 0.258796] [G loss: 2.947055] [D real: 0.890724] [D fake: 0.065869]
[Epoch 19/50] [Batch 299/938] [D loss: 0.685378] [G loss: 6.535305] [D real: 0.885325] [D fake: 0.323817]
[Epoch 19/50] [Batch 599/938] [D loss: 0.421842] [G loss: 3.306463] [D real: 0.810625] [D fake: 0.016703]
[Epoch 19/50] [Batch 899/938] [D loss: 0.375154] [G loss: 4.167430] [D real: 0.934244] [D fake: 0.197812]
[Epoch 20/50] [Batch 299/938] [D loss: 0.591057] [G loss: 3.119317] [D real: 0.731286] [D fake: 0.007508]
[Epoch 20/50] [Batch 599/938] [D loss: 0.337807] [G loss: 1.682541] [D real: 0.866504] [D fake: 0.056187]
[Epoch 20/50] [Batch 899/938] [D loss: 0.355729] [G loss: 3.516787] [D real: 0.891088] [D fake: 0.068899]
[Epoch 21/50] [Batch 299/938] [D loss: 0.315579] [G loss: 3.906839] [D real: 0.908219] [D fake: 0.164071]
[Epoch 21/50] [Batch 599/938] [D loss: 0.173505] [G loss: 3.587407] [D real: 0.966580] [D fake: 0.096252]
[Epoch 21/50] [Batch 899/938] [D loss: 0.220905] [G loss: 3.660075] [D real: 0.963011] [D fake: 0.135703]
[Epoch 22/50] [Batch 299/938] [D loss: 0.170453] [G loss: 2.666228] [D real: 0.900861] [D fake: 0.014845]
[Epoch 22/50] [Batch 599/938] [D loss: 0.306215] [G loss: 3.129571] [D real: 0.888006] [D fake: 0.093870]
[Epoch 22/50] [Batch 899/938] [D loss: 0.155793] [G loss: 3.632946] [D real: 0.955390] [D fake: 0.064384]
[Epoch 23/50] [Batch 299/938] [D loss: 0.356662] [G loss: 4.402007] [D real: 0.927568] [D fake: 0.203440]
[Epoch 23/50] [Batch 599/938] [D loss: 0.465632] [G loss: 1.605053] [D real: 0.859301] [D fake: 0.139864]
[Epoch 23/50] [Batch 899/938] [D loss: 0.561042] [G loss: 2.354625] [D real: 0.815784] [D fake: 0.050830]
[Epoch 24/50] [Batch 299/938] [D loss: 0.327157] [G loss: 4.139232] [D real: 0.877129] [D fake: 0.038646]
[Epoch 24/50] [Batch 599/938] [D loss: 0.324535] [G loss: 3.476531] [D real: 0.922986] [D fake: 0.166053]
[Epoch 24/50] [Batch 899/938] [D loss: 0.200960] [G loss: 4.156766] [D real: 0.922793] [D fake: 0.079723]
[Epoch 25/50] [Batch 299/938] [D loss: 0.484370] [G loss: 2.756196] [D real: 0.875537] [D fake: 0.141417]
[Epoch 25/50] [Batch 599/938] [D loss: 0.185231] [G loss: 3.521098] [D real: 0.908381] [D fake: 0.047883]
[Epoch 25/50] [Batch 899/938] [D loss: 0.401090] [G loss: 3.734334] [D real: 0.894597] [D fake: 0.155837]
[Epoch 26/50] [Batch 299/938] [D loss: 0.256147] [G loss: 3.769781] [D real: 0.894257] [D fake: 0.048824]
[Epoch 26/50] [Batch 599/938] [D loss: 0.542256] [G loss: 4.231453] [D real: 0.904529] [D fake: 0.285221]
[Epoch 26/50] [Batch 899/938] [D loss: 0.285862] [G loss: 4.320946] [D real: 0.947245] [D fake: 0.182356]
[Epoch 27/50] [Batch 299/938] [D loss: 0.169338] [G loss: 2.722309] [D real: 0.911741] [D fake: 0.023506]
[Epoch 27/50] [Batch 599/938] [D loss: 0.399153] [G loss: 2.906273] [D real: 0.806440] [D fake: 0.017474]
[Epoch 27/50] [Batch 899/938] [D loss: 0.282505] [G loss: 2.549886] [D real: 0.888163] [D fake: 0.023338]
[Epoch 28/50] [Batch 299/938] [D loss: 0.604350] [G loss: 2.217489] [D real: 0.837789] [D fake: 0.170963]
[Epoch 28/50] [Batch 599/938] [D loss: 0.466866] [G loss: 3.620631] [D real: 0.890022] [D fake: 0.169355]
[Epoch 28/50] [Batch 899/938] [D loss: 0.281701] [G loss: 2.144376] [D real: 0.863881] [D fake: 0.044732]
[Epoch 29/50] [Batch 299/938] [D loss: 0.775994] [G loss: 2.102048] [D real: 0.716906] [D fake: 0.007638]
[Epoch 29/50] [Batch 599/938] [D loss: 0.224053] [G loss: 3.030093] [D real: 0.914856] [D fake: 0.050997]
[Epoch 29/50] [Batch 899/938] [D loss: 0.237019] [G loss: 2.900931] [D real: 0.895625] [D fake: 0.072492]
[Epoch 30/50] [Batch 299/938] [D loss: 0.782251] [G loss: 6.105689] [D real: 0.718568] [D fake: 0.002575]
[Epoch 30/50] [Batch 599/938] [D loss: 0.265013] [G loss: 2.404941] [D real: 0.891558] [D fake: 0.070822]
[Epoch 30/50] [Batch 899/938] [D loss: 1.017040] [G loss: 4.020058] [D real: 0.659709] [D fake: 0.001705]
[Epoch 31/50] [Batch 299/938] [D loss: 0.397375] [G loss: 4.976284] [D real: 0.974327] [D fake: 0.289873]
[Epoch 31/50] [Batch 599/938] [D loss: 0.315092] [G loss: 3.193804] [D real: 0.899616] [D fake: 0.122920]
[Epoch 31/50] [Batch 899/938] [D loss: 0.380334] [G loss: 2.588212] [D real: 0.863771] [D fake: 0.144091]
[Epoch 32/50] [Batch 299/938] [D loss: 1.147674] [G loss: 4.028924] [D real: 0.627983] [D fake: 0.001301]
[Epoch 32/50] [Batch 599/938] [D loss: 0.342375] [G loss: 4.423591] [D real: 0.951381] [D fake: 0.215124]
[Epoch 32/50] [Batch 899/938] [D loss: 0.210521] [G loss: 2.572556] [D real: 0.895898] [D fake: 0.048010]
[Epoch 33/50] [Batch 299/938] [D loss: 0.304810] [G loss: 2.493082] [D real: 0.875015] [D fake: 0.054701]
[Epoch 33/50] [Batch 599/938] [D loss: 0.321373] [G loss: 2.517244] [D real: 0.897960] [D fake: 0.095477]
[Epoch 33/50] [Batch 899/938] [D loss: 0.515816] [G loss: 5.828977] [D real: 0.944610] [D fake: 0.341102]
[Epoch 34/50] [Batch 299/938] [D loss: 0.409922] [G loss: 6.623507] [D real: 0.978187] [D fake: 0.265168]
[Epoch 34/50] [Batch 599/938] [D loss: 0.341234] [G loss: 3.718881] [D real: 0.816547] [D fake: 0.006091]
[Epoch 34/50] [Batch 899/938] [D loss: 0.350949] [G loss: 4.508050] [D real: 0.948336] [D fake: 0.192818]
[Epoch 35/50] [Batch 299/938] [D loss: 0.138449] [G loss: 4.477051] [D real: 0.926968] [D fake: 0.026736]
[Epoch 35/50] [Batch 599/938] [D loss: 0.173707] [G loss: 3.243648] [D real: 0.943513] [D fake: 0.076281]
[Epoch 35/50] [Batch 899/938] [D loss: 0.463550] [G loss: 2.685429] [D real: 0.849280] [D fake: 0.056260]
[Epoch 36/50] [Batch 299/938] [D loss: 0.316062] [G loss: 2.704587] [D real: 0.887094] [D fake: 0.091721]
[Epoch 36/50] [Batch 599/938] [D loss: 0.186407] [G loss: 3.310061] [D real: 0.927350] [D fake: 0.067080]
[Epoch 36/50] [Batch 899/938] [D loss: 0.363015] [G loss: 2.367592] [D real: 0.823603] [D fake: 0.018048]
[Epoch 37/50] [Batch 299/938] [D loss: 0.414002] [G loss: 3.926821] [D real: 0.904654] [D fake: 0.128346]
[Epoch 37/50] [Batch 599/938] [D loss: 0.538029] [G loss: 2.172678] [D real: 0.793457] [D fake: 0.081910]
[Epoch 37/50] [Batch 899/938] [D loss: 0.272864] [G loss: 4.492427] [D real: 0.945358] [D fake: 0.115727]
[Epoch 38/50] [Batch 299/938] [D loss: 0.377487] [G loss: 1.950431] [D real: 0.845567] [D fake: 0.068117]
[Epoch 38/50] [Batch 599/938] [D loss: 0.272310] [G loss: 4.274941] [D real: 0.945490] [D fake: 0.160435]
[Epoch 38/50] [Batch 899/938] [D loss: 0.285233] [G loss: 3.814036] [D real: 0.905265] [D fake: 0.087217]
[Epoch 39/50] [Batch 299/938] [D loss: 0.463167] [G loss: 3.948573] [D real: 0.869545] [D fake: 0.027951]
[Epoch 39/50] [Batch 599/938] [D loss: 0.698889] [G loss: 5.423176] [D real: 0.745425] [D fake: 0.001301]
[Epoch 39/50] [Batch 899/938] [D loss: 0.430070] [G loss: 2.143290] [D real: 0.851997] [D fake: 0.059725]
[Epoch 40/50] [Batch 299/938] [D loss: 0.361483] [G loss: 4.420526] [D real: 0.929252] [D fake: 0.219515]
[Epoch 40/50] [Batch 599/938] [D loss: 0.226961] [G loss: 3.292178] [D real: 0.917161] [D fake: 0.067162]
[Epoch 40/50] [Batch 899/938] [D loss: 0.454976] [G loss: 2.793385] [D real: 0.832017] [D fake: 0.090737]
[Epoch 41/50] [Batch 299/938] [D loss: 0.384724] [G loss: 2.922082] [D real: 0.853736] [D fake: 0.029955]
[Epoch 41/50] [Batch 599/938] [D loss: 0.145182] [G loss: 4.042384] [D real: 0.962897] [D fake: 0.081205]
[Epoch 41/50] [Batch 899/938] [D loss: 0.320098] [G loss: 4.591055] [D real: 0.936096] [D fake: 0.191183]
[Epoch 42/50] [Batch 299/938] [D loss: 0.359770] [G loss: 3.819891] [D real: 0.922519] [D fake: 0.155782]
[Epoch 42/50] [Batch 599/938] [D loss: 0.253998] [G loss: 3.334822] [D real: 0.935975] [D fake: 0.114938]
[Epoch 42/50] [Batch 899/938] [D loss: 0.212634] [G loss: 4.326683] [D real: 0.951967] [D fake: 0.123964]
[Epoch 43/50] [Batch 299/938] [D loss: 0.247934] [G loss: 3.677986] [D real: 0.914760] [D fake: 0.051124]
[Epoch 43/50] [Batch 599/938] [D loss: 0.495208] [G loss: 2.402143] [D real: 0.866695] [D fake: 0.117677]
[Epoch 43/50] [Batch 899/938] [D loss: 0.397974] [G loss: 3.050493] [D real: 0.866244] [D fake: 0.088412]
[Epoch 44/50] [Batch 299/938] [D loss: 0.198564] [G loss: 4.099626] [D real: 0.925422] [D fake: 0.059895]
[Epoch 44/50] [Batch 599/938] [D loss: 0.208132] [G loss: 3.020036] [D real: 0.926674] [D fake: 0.030141]
[Epoch 44/50] [Batch 899/938] [D loss: 0.426271] [G loss: 4.777842] [D real: 0.938641] [D fake: 0.229754]
[Epoch 45/50] [Batch 299/938] [D loss: 0.507072] [G loss: 3.133851] [D real: 0.817144] [D fake: 0.037286]
[Epoch 45/50] [Batch 599/938] [D loss: 0.977314] [G loss: 6.984739] [D real: 0.988801] [D fake: 0.436019]
[Epoch 45/50] [Batch 899/938] [D loss: 0.462772] [G loss: 2.489332] [D real: 0.869932] [D fake: 0.083717]
[Epoch 46/50] [Batch 299/938] [D loss: 0.170483] [G loss: 4.484272] [D real: 0.930090] [D fake: 0.009704]
[Epoch 46/50] [Batch 599/938] [D loss: 0.179156] [G loss: 3.109918] [D real: 0.929771] [D fake: 0.044969]
[Epoch 46/50] [Batch 899/938] [D loss: 0.195152] [G loss: 4.442445] [D real: 0.944687] [D fake: 0.029242]
[Epoch 47/50] [Batch 299/938] [D loss: 0.265071] [G loss: 4.094112] [D real: 0.923925] [D fake: 0.118196]
[Epoch 47/50] [Batch 599/938] [D loss: 0.398625] [G loss: 2.484815] [D real: 0.860867] [D fake: 0.039074]
[Epoch 47/50] [Batch 899/938] [D loss: 0.415479] [G loss: 2.386563] [D real: 0.857307] [D fake: 0.049753]
[Epoch 48/50] [Batch 299/938] [D loss: 0.432472] [G loss: 4.023952] [D real: 0.814374] [D fake: 0.009090]
[Epoch 48/50] [Batch 599/938] [D loss: 0.400884] [G loss: 2.327960] [D real: 0.829886] [D fake: 0.041326]
[Epoch 48/50] [Batch 899/938] [D loss: 0.249835] [G loss: 4.092502] [D real: 0.914343] [D fake: 0.054030]
[Epoch 49/50] [Batch 299/938] [D loss: 0.663928] [G loss: 2.409004] [D real: 0.743706] [D fake: 0.031403]
[Epoch 49/50] [Batch 599/938] [D loss: 0.464804] [G loss: 5.393755] [D real: 0.943974] [D fake: 0.232481]
[Epoch 49/50] [Batch 899/938] [D loss: 0.164426] [G loss: 3.941015] [D real: 0.939509] [D fake: 0.068522]
3. Saving the Models
torch.save(generator.state_dict(), "./data/save/generator.pth")
torch.save(discriminator.state_dict(), "./data/save/discriminator.pth")
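To use the trained generator later, a minimal sketch (assuming the same Generator class, hyperparameters, and device are in scope; the file names below are the ones saved above) reloads the weights and samples new digits:

generator = Generator().to(device)
generator.load_state_dict(torch.load("./data/save/generator.pth", map_location=device))
generator.eval()                                # switch BatchNorm layers to inference mode

with torch.no_grad():                           # no gradients needed for sampling
    z = torch.randn(25, latent_dim).to(device)  # 25 fresh noise vectors
    samples = generator(z)                      # (25, 1, 28, 28) fake digits
save_image(samples, "./data/images/final_samples.png", nrow=5, normalize=True)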