【PyTorch Lightning】

python
import os
import torch
from torch import optim, nn, utils, Tensor
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
import lightning as L

# define any number of nn.Modules (or use your current ones)
encoder = nn.Sequential(nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 3))
decoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 28 * 28))


# define the LightningModule
class LitAutoEncoder(L.LightningModule):
    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop.
        # it is independent of forward
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = nn.functional.mse_loss(x_hat, x)
        # Logging to TensorBoard (if installed) by default
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=1e-3)
        return optimizer


# init the autoencoder
autoencoder = LitAutoEncoder(encoder, decoder)

# setup data
dataset = MNIST(os.getcwd(), download=True, transform=ToTensor())
train_loader = utils.data.DataLoader(dataset)  # batch_size defaults to 1 here

# train the model (hint: here are some helpful Trainer arguments for rapid idea iteration)
trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
trainer.fit(model=autoencoder, train_dataloaders=train_loader)

# load checkpoint
checkpoint = "./lightning_logs/version_0/checkpoints/epoch=0-step=100.ckpt"
autoencoder = LitAutoEncoder.load_from_checkpoint(checkpoint, encoder=encoder, decoder=decoder)

# choose your trained nn.Module
encoder = autoencoder.encoder
encoder.eval()

# embed 4 fake images!
fake_image_batch = torch.rand(4, 28 * 28, device=autoencoder.device)
embeddings = encoder(fake_image_batch)
print("⚡" * 20, "\nPredictions (4 image embeddings):\n", embeddings, "\n", "⚡" * 20)
View the logged metrics (the train_loss calls above) with TensorBoard:

bash
tensorboard --logdir .
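
By default the built-in TensorBoardLogger writes its event files to ./lightning_logs (which is also where the checkpoint above was saved). To control the output directory or experiment name, a logger can be passed to the Trainer explicitly. A minimal sketch, assuming tensorboard is installed; the tb_logs directory and the "autoencoder" experiment name are illustrative:

python

from lightning.pytorch.loggers import TensorBoardLogger

# write event files to ./tb_logs/autoencoder/version_x instead of ./lightning_logs
logger = TensorBoardLogger(save_dir="tb_logs", name="autoencoder")
trainer = L.Trainer(limit_train_batches=100, max_epochs=1, logger=logger)
trainer.fit(model=autoencoder, train_dataloaders=train_loader)

Then point TensorBoard at that directory instead: tensorboard --logdir tb_logs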

A plain nn.Module has to define a forward method;

a LightningModule, besides forward, also defines configure_optimizers, training_step, and validation_step (a minimal sketch of this layout follows below).
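
A minimal sketch of that layout with a validation loop wired in. The TinyRegressor model, the random TensorDataset, and all sizes here are illustrative, not part of the autoencoder example above:

python

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class TinyRegressor(L.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):
        # inference behaviour, just like a plain nn.Module
        return self.net(x)

    def training_step(self, batch, batch_idx):
        # one step of the training loop
        x, y = batch
        loss = nn.functional.mse_loss(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        # one step of the validation loop
        x, y = batch
        val_loss = nn.functional.mse_loss(self(x), y)
        self.log("val_loss", val_loss)

    def configure_optimizers(self):
        # optimizer (and optionally scheduler) setup
        return optim.Adam(self.parameters(), lr=1e-3)


# illustrative random data: 128 samples with 8 features and a scalar target
xs, ys = torch.randn(128, 8), torch.randn(128, 1)
dataset = TensorDataset(xs, ys)
train_loader = DataLoader(dataset, batch_size=32)
val_loader = DataLoader(dataset, batch_size=32)

trainer = L.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
trainer.fit(TinyRegressor(), train_dataloaders=train_loader, val_dataloaders=val_loader)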
