Contents
- [1. Problems with Training on the Pokémon Dataset](#1-problems-with-training-on-the-pokémon-dataset)
- [2. Transfer Learning](#2-transfer-learning)
- [3. Implementing Transfer Learning](#3-implementing-transfer-learning)
- [4. Complete Code](#4-complete-code)
## 1. Problems with Training on the Pokémon Dataset
The Pokémon dataset contains only a little over 1,000 images in total, which is not much data for a network like resnet18: training from scratch overfits very easily. How can we address this?
## 2. Transfer Learning
The figure below uses a binary classification of ⚪ vs. ■ as an example.
In the first setting (the first column of the figure), the classifier is trained directly on just 4 samples. Three candidate decision boundaries are shown, and the middle one is clearly the best.
In the second setting, the Pokémon categories are similar to, and in some cases even contained in, the 1,000+ classes of ImageNet. Can we use a model trained on ImageNet to improve a model trained on the Pokémon dataset? The answer is yes: as the second column of the figure shows, training on top of this existing, related knowledge greatly increases the chance of finding the good decision boundary in the middle.
In short, the practice of reusing the weights of an existing pre-trained model to train a model for a specific classification task is called transfer learning.
## 3. Implementing Transfer Learning
For the Pokémon training task above, how do we implement transfer learning? As shown in the figure below, we only need to replace the output layer.
Steps:
- Load the standard resnet18 network:
```python
from torchvision.models import resnet18
```
- Add the `pretrained` flag so that the ImageNet weights are downloaded and loaded:
```python
trained_model = resnet18(pretrained=True)
```
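Note: in recent torchvision releases (roughly 0.13 and later) the `pretrained` argument is deprecated in favor of a `weights` argument. A minimal sketch of the equivalent call on a newer version, assuming the current torchvision API:
```python
# Equivalent call on newer torchvision (the legacy pretrained=True still works but warns):
from torchvision.models import resnet18, ResNet18_Weights

trained_model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)  # ImageNet-pretrained weights
```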
- Take every layer of resnet18 except the final fully-connected layer (i.e., the first 17 layers) and append a new linear output layer.
`*list(trained_model.children())[:-1]` takes all child modules except the last one (the original fc layer) and unpacks them as arguments to `nn.Sequential`.
The code also adds a flatten operation that reshapes the [b, 512, 1, 1] output of resnet18's 17th layer to [b, 512] before the new linear layer.
```python
trained_model = resnet18(pretrained=True)
model = nn.Sequential(*list(trained_model.children())[:-1],  # [b, 512, 1, 1]
                      Flatten(),                             # [b, 512, 1, 1] => [b, 512]
                      nn.Linear(512, 5)                      # 5 Pokémon classes
                      ).to(device)
```
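As a quick sanity check (a minimal sketch mirroring the commented-out lines in the full script below), a dummy batch can be pushed through the assembled model to confirm it emits 5 logits per image. PyTorch's built-in `nn.Flatten()` would also work here in place of the custom `Flatten` from utils.py, since by default it flattens everything after the batch dimension.
```python
# Sanity check: run a fake batch of 2 RGB 224x224 images through the new model.
x = torch.randn(2, 3, 224, 224).to(device)
out = model(x)
print(out.shape)  # expected: torch.Size([2, 5]) -- one logit per Pokémon class
```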
In our runs, accuracy improved from 84% to 94% after this change, so transfer learning indeed helps boost performance on small datasets and guards against overfitting.
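If overfitting remains a concern, a common companion trick (shown here only as a hedged sketch; the full script in section 4 fine-tunes all layers instead) is to freeze the pretrained backbone and train just the new linear head:
```python
from torch import optim

# Optional: freeze the pretrained resnet18 layers and train only the new head.
for layer in list(model.children())[:-1]:   # everything except the final nn.Linear
    for param in layer.parameters():
        param.requires_grad = False

# Hand the optimizer only the parameters that still require gradients.
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
```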
## 4. Complete Code
train_transfer.py:
```python
import torch
from torch import optim, nn
import visdom
import torchvision
from torch.utils.data import DataLoader
from pokemon import Pokemon
# from resnet import ResNet18
from torchvision.models import resnet18
from utils import Flatten
batchsz = 32
lr = 1e-3
epochs = 10
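# NOTE: the script assumes a CUDA-capable GPU; switch to torch.device('cpu') to run without one.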
device = torch.device('cuda')
torch.manual_seed(1234)
train_db = Pokemon('pokemon', 224, mode='train')
val_db = Pokemon('pokemon', 224, mode='val')
test_db = Pokemon('pokemon', 224, mode='test')
train_loader = DataLoader(train_db, batch_size=batchsz, shuffle=True,
                          num_workers=4)
val_loader = DataLoader(val_db, batch_size=batchsz, num_workers=2)
test_loader = DataLoader(test_db, batch_size=batchsz, num_workers=2)
viz = visdom.Visdom()
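# Visdom logging assumes a Visdom server is already running (started separately,
# e.g. with `python -m visdom.server`); otherwise the viz.line() calls below cannot connect.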
def evalute(model, loader):
    # Compute classification accuracy of `model` over `loader`.
    model.eval()
    correct = 0
    total = len(loader.dataset)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.no_grad():
            logits = model(x)
            pred = logits.argmax(dim=1)
        correct += torch.eq(pred, y).sum().float().item()
    return correct / total


def main():
    # model = ResNet18(5).to(device)

    # Build the transfer-learning model: pretrained resnet18 backbone + new 5-way head.
    trained_model = resnet18(pretrained=True)
    model = nn.Sequential(*list(trained_model.children())[:-1],  # [b, 512, 1, 1]
                          Flatten(),                             # [b, 512, 1, 1] => [b, 512]
                          nn.Linear(512, 5)
                          ).to(device)
    # x = torch.randn(2, 3, 224, 224)
    # print(model(x).shape)

    optimizer = optim.Adam(model.parameters(), lr=lr)
    criteon = nn.CrossEntropyLoss()

    best_acc, best_epoch = 0, 0
    global_step = 0
    viz.line([0], [-1], win='loss', opts=dict(title='loss'))
    viz.line([0], [-1], win='val_acc', opts=dict(title='val_acc'))
    for epoch in range(epochs):

        for step, (x, y) in enumerate(train_loader):
            # x: [b, 3, 224, 224], y: [b]
            x, y = x.to(device), y.to(device)

            model.train()
            logits = model(x)
            loss = criteon(logits, y)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            viz.line([loss.item()], [global_step], win='loss', update='append')
            global_step += 1

        if epoch % 1 == 0:
            # Validate every epoch and checkpoint the best model seen so far.
            val_acc = evalute(model, val_loader)
            if val_acc > best_acc:
                best_epoch = epoch
                best_acc = val_acc
                torch.save(model.state_dict(), 'best.mdl')
                viz.line([val_acc], [global_step], win='val_acc', update='append')

    print('best acc:', best_acc, 'best epoch:', best_epoch)

    # Reload the best checkpoint and evaluate it on the test set.
    model.load_state_dict(torch.load('best.mdl'))
    print('loaded from ckpt!')

    test_acc = evalute(model, test_loader)
    print('test acc:', test_acc)


if __name__ == '__main__':
    main()
```
utils.py:
```python
from matplotlib import pyplot as plt
import torch
from torch import nn


class Flatten(nn.Module):
    # Flattens any input of shape [b, *dims] to [b, prod(dims)].

    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        shape = torch.prod(torch.tensor(x.shape[1:])).item()
        return x.view(-1, shape)


def plot_image(img, label, name):
    # Plot the first 6 images of a normalized single-channel batch with their labels.
    fig = plt.figure()
    for i in range(6):
        plt.subplot(2, 3, i + 1)
        plt.tight_layout()
        plt.imshow(img[i][0] * 0.3081 + 0.1307, cmap='gray', interpolation='none')
        plt.title("{}: {}".format(name, label[i].item()))
        plt.xticks([])
        plt.yticks([])
    plt.show()
```