Introduction to Deep Learning with PyTorch

1、Introduction to PyTorch, a Deep Learning Library

python
import torch

# supports:
## image data with torchvision
## audio data with torchaudio
## text data with torchtext

1.2、Tensors: the building blocks of networks in PyTorch

1.2.1、Load from list

python
import torch

lst = [[1,2,3], [4,5,6]]
tensor = torch.tensor(lst)
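
Once created, a tensor's shape, data type, and device can be inspected directly; a quick check on the tensor above:

python
print(tensor.shape)   # torch.Size([2, 3])
print(tensor.dtype)   # torch.int64, inferred from the Python ints
print(tensor.device)  # cpu, unless the tensor has been moved to an accelerator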

1.2.2、Load from NumPy array

python
import numpy as np

np_array = np.array([[1, 2, 3], [4, 5, 6]])
np_tensor = torch.from_numpy(np_array)
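
The conversion also works in the other direction; a one-line sketch going back to NumPy:

python
# .numpy() returns a NumPy view of a CPU tensor (memory is shared)
back_to_numpy = np_tensor.numpy()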

1.3、Creating our first neural network

1.3.1、A basic, two-layer network with no hidden layers

python
import torch.nn as nn

# Create input_tensor with three features
input_tensor = torch.tensor([0.3471, 0.4547, -0.2356])

# Define our first linear layer
linear_layer = nn.Linear(in_features=3, out_features=2)

# Pass input through linear layer
output = linear_layer(input_tensor)

# Show the output
print(output)

# Each linear layer has a .weight and .bias property
linear_layer.weight
linear_layer.bias

  • Networks with only linear layers are called fully connected networks.
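
Since a linear layer is just an affine transform, its output can be reproduced by hand from .weight and .bias; a small check against the block above:

python
# A linear layer computes output = input @ weight.T + bias
manual_output = torch.matmul(input_tensor, linear_layer.weight.T) + linear_layer.bias
print(torch.allclose(manual_output, output))  # True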

1.3.2、Stacking layers with nn.Sequential()

python
# Create network with three linear layers
model = nn.Sequential(
    nn.Linear(10,18),
    nn.Linear(18,20),
    nn.Linear(20, 5),
)
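
A quick sanity check of the stacked model, assuming a hypothetical batch of one sample with 10 features:

python
input_tensor = torch.randn(1, 10)  # hypothetical input: 1 sample, 10 features
output = model(input_tensor)
print(output.shape)  # torch.Size([1, 5])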

1.4、Discovering activation functions

  • Activation functions add non-linearity to the network.

  • A model can learn more complex relationships with non-linearity.

  • Two-class classification: Sigmoid function demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[6.0]])
    sigmoid = nn.Sigmoid()
    output = sigmoid(input_tensor)
    
    # tensor([[0.9975]])
  • Application for Sigmoid function:

    python
    model = nn.Sequential(
        nn.Linear(6,4),
        nn.Linear(4,1),
        nn.Sigmoid()
    )
  • Multi-class classification: Softmax demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[4.3, 6.1, 2.3]])
    
    # dim=-1 indicates softmax is applied to the input tensor's last dimension
    # nn.Softmax() can be used as last step in nn.Sequential()
    probabilities = nn.Softmax(dim=-1)
    output_tensor = probabilities(input_tensor)
    
    print(output_tensor)
    
    # tensor([[0.1392, 0.8420, 0.0188]])
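
As a check, the softmax output forms a probability distribution; a one-line verification on the tensor above:

python
# Probabilities along the softmax dimension sum to 1
print(output_tensor.sum(dim=-1))  # approximately tensor([1.])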

2、Training Our First Neural Network with PyTorch

2.1、Running a forward pass

2.1.1、Forward pass

  • Input data is passed forward or propagated through a network.
  • Computations are performed at each layer.
  • The output of each layer is passed to the next layer.
  • The output of the final layer is the "prediction".
  • Used for both training and prediction.
  • Some possible outputs: binary or multi-class probabilities, or regression values (see the sketches under 2.1.3 to 2.1.5 below).

2.1.2、Backward pass

2.1.3、Binary classification: forward pass
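
A minimal sketch, assuming a hypothetical batch of 5 samples with 6 features; the sigmoid head squashes the single output into a probability of the positive class:

python
import torch
import torch.nn as nn

input_data = torch.randn(5, 6)  # hypothetical batch

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1),
    nn.Sigmoid()
)

output = model(input_data)
print(output.shape)  # torch.Size([5, 1]); each value is in (0, 1)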

2.1.4、Multi-class classification: forward pass
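
A minimal sketch for a hypothetical 3-class problem with the same batch shape; the softmax head turns the final layer's raw scores into class probabilities:

python
import torch
import torch.nn as nn

input_data = torch.randn(5, 6)  # hypothetical batch
n_classes = 3

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, n_classes),
    nn.Softmax(dim=-1)
)

output = model(input_data)
print(output.shape)  # torch.Size([5, 3]); each row sums to 1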

2.1.5、Regression: forward pass
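
For regression, the last layer stays linear with no activation, so predictions are unbounded real values; a minimal sketch under the same hypothetical batch:

python
import torch
import torch.nn as nn

input_data = torch.randn(5, 6)  # hypothetical batch

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1)  # no activation: raw real-valued predictions
)

output = model(input_data)  # shape: torch.Size([5, 1])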

2.2、Using loss functions to assess model predictions

2.2.1、Why do we need a loss function?

  • Gives feedback to the model during training.
  • Takes in a model prediction and the ground truth.
  • Outputs a single float (the loss).

2.2.2、One-hot encoding concepts

python
import torch.nn.functional as F

F.one_hot(torch.tensor(0), num_classes = 3)
# tensor([1,0,0]) --- first class

F.one_hot(torch.tensor(1), num_classes = 3)
# tensor([0,1,0]) --- second class

F.one_hot(torch.tensor(2), num_classes = 3)
# tensor([0,0,1]) --- third class

2.2.3、Cross entropy loss in PyTorch

python
from torch.nn import CrossEntropyLoss

scores = torch.tensor([[-0.1211, 0.1059]])
one_hot_target = torch.tensor([[1, 0]])

criterion = CrossEntropyLoss()
criterion(scores.double(), one_hot_target.double())
# tensor(0.8131, dtype=torch.float64)

2.2.4、Bringing it all together
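
A minimal sketch tying the pieces together for a hypothetical 3-class problem: run a forward pass to get raw scores, one-hot encode the ground-truth label, and feed both to the cross-entropy loss:

python
import torch
import torch.nn as nn
import torch.nn.functional as F

input_data = torch.randn(1, 4)  # hypothetical sample: 1 sample, 4 features
target = torch.tensor([2])      # ground-truth class index

model = nn.Sequential(nn.Linear(4, 3))  # outputs raw scores (logits)
scores = model(input_data)

one_hot_target = F.one_hot(target, num_classes=3)

criterion = nn.CrossEntropyLoss()
loss = criterion(scores.double(), one_hot_target.double())
print(loss)  # a single float tensor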

2.3、Using derivatives to update model parameters

2.3.1、Minimizing the loss

  • High loss: the model's prediction is far from the ground truth.
  • Low loss: the model's prediction is close to the ground truth.

2.3.2、Connecting derivatives and model training

2.3.3、Backpropagation concepts
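
A minimal sketch of backpropagation in PyTorch: calling .backward() on the loss fills in the .grad attribute of every parameter with the gradient of the loss with respect to that parameter:

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))
criterion = nn.CrossEntropyLoss()

prediction = model(torch.randn(1, 4))  # hypothetical sample
loss = criterion(prediction, torch.tensor([1]))

loss.backward()  # backward pass: compute all gradients
print(model[0].weight.grad)  # same shape as model[0].weight
print(model[0].bias.grad)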

2.3.4、Gradient descent
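
A minimal sketch of one manual gradient-descent step, reusing the gradients computed above with a hypothetical learning rate; in practice this update is delegated to an optimizer such as torch.optim.SGD:

python
lr = 0.001  # hypothetical learning rate

# Move each parameter a small step against its gradient
with torch.no_grad():
    for param in model.parameters():
        param -= lr * param.grad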

2.4、Writing our first training loop

2.4.1、Training a neural network

2.4.2、Mean Squared Error (MSE) Loss
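
MSE loss is the mean of the squared differences between predictions and targets, and is the usual choice for regression; a small sketch:

python
import torch
import torch.nn as nn

prediction = torch.tensor([2.5, 0.0])
target = torch.tensor([3.0, -0.5])

criterion = nn.MSELoss()
print(criterion(prediction, target))  # tensor(0.2500)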

2.4.3、Before the training loop

2.4.4、The training loop
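
A minimal end-to-end sketch, assuming hypothetical synthetic regression data; each batch goes through four steps: zero the old gradients, forward pass, backward pass, update the weights:

python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical synthetic data: 64 samples, 4 features
features = torch.randn(64, 4)
targets = torch.randn(64, 1)
dataloader = DataLoader(TensorDataset(features, targets), batch_size=8, shuffle=True)

model = nn.Sequential(nn.Linear(4, 2), nn.Linear(2, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    for batch_features, batch_targets in dataloader:
        optimizer.zero_grad()                        # reset gradients
        prediction = model(batch_features)           # forward pass
        loss = criterion(prediction, batch_targets)  # compute loss
        loss.backward()                              # backward pass
        optimizer.step()                             # update weights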

3、Neural Network Architecture and Hyperparameters

3.1、Discovering activation functions between layers

3.1.1、Limitations of the sigmoid and softmax function

3.1.2、Introducing ReLU

3.1.3、Introducing Leaky ReLU
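
A small sketch comparing the two: ReLU zeroes out negative inputs, while leaky ReLU keeps a small slope for them (0.01 by default; a hypothetical 0.05 is used here):

python
import torch
import torch.nn as nn

x = torch.tensor([-2.0, 0.0, 3.0])

relu = nn.ReLU()
print(relu(x))  # tensor([0., 0., 3.])

leaky_relu = nn.LeakyReLU(negative_slope=0.05)
print(leaky_relu(x))  # tensor([-0.1000, 0.0000, 3.0000])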

3.2、A deeper dive into neural network architecture

3.2.1、Counting the number of parameters
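
Each parameter tensor's element count can be summed with .numel(); a small sketch using the three-layer model from 1.3.2:

python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 18),
    nn.Linear(18, 20),
    nn.Linear(20, 5),
)

total = sum(p.numel() for p in model.parameters())
print(total)  # (10*18 + 18) + (18*20 + 20) + (20*5 + 5) = 683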

3.3、Learning rate and momentum
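
Both are set when the optimizer is created; a one-line sketch with hypothetical values, reusing the model above (the learning rate controls the step size, and momentum keeps a fraction of the previous update to help push through flat regions and local minima):

python
import torch.optim as optim

# Hypothetical values; typical starting points are lr around 1e-3 and momentum around 0.9
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.95)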

3.4、Layer initialization and transfer learning

3.4.1、Layer initialization
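
A small sketch of explicit initialization with torch.nn.init, drawing a layer's weights from a uniform distribution:

python
import torch.nn as nn

layer = nn.Linear(64, 128)
nn.init.uniform_(layer.weight)  # in-place: values drawn from [0, 1)
print(layer.weight.min(), layer.weight.max())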

3.4.2、Transfer learning and fine tuning
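
A minimal sketch of fine-tuning: save and reload trained weights, then freeze the early layers so only the later ones are updated (the file name is hypothetical):

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.Linear(128, 2))

# Reuse weights trained on a previous task (hypothetical file name)
torch.save(model.state_dict(), 'pretrained.pt')
model.load_state_dict(torch.load('pretrained.pt'))

# Freeze the first layer; only the second layer will be trained
for param in model[0].parameters():
    param.requires_grad = False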

4、Evaluating and Improving Models

4.1、A deeper dive into loading data

4.1.1、Recalling TensorDataset

4.1.2、Recalling DataLoader
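
A minimal sketch of both, assuming hypothetical NumPy arrays for features and targets:

python
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical data: 100 samples, 8 features
features = np.random.rand(100, 8)
targets = np.random.rand(100, 1)

# TensorDataset pairs each sample with its target and is indexable like a list
dataset = TensorDataset(torch.tensor(features).float(), torch.tensor(targets).float())
sample_features, sample_target = dataset[0]

# DataLoader batches and shuffles the dataset
dataloader = DataLoader(dataset, batch_size=4, shuffle=True)
for batch_features, batch_targets in dataloader:
    print(batch_features.shape)  # torch.Size([4, 8])
    break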

4.2、Evaluating model performance

4.2.1、Model evaluation metrics

4.2.2、Calculating training loss

4.2.3、Calculating validation loss
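
A minimal sketch of a validation pass, assuming a model, a criterion, and a validationloader built like the training DataLoader; the model is switched to eval mode and gradient tracking is disabled:

python
validation_loss = 0.0
model.eval()  # turn off training-only behavior such as dropout

with torch.no_grad():  # no gradients needed for evaluation
    for features, targets in validationloader:
        prediction = model(features)
        loss = criterion(prediction, targets)
        validation_loss += loss.item()

epoch_loss = validation_loss / len(validationloader)
model.train()  # switch back to training mode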

4.2.4、Calculating accuracy with torchmetrics
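
A minimal sketch with torchmetrics for a hypothetical 3-class problem; the metric accumulates statistics batch by batch and is computed once per epoch:

python
import torch
import torchmetrics

metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)

# Inside the evaluation loop: feed each batch to the metric
preds = torch.tensor([0, 2, 1, 2])   # predicted class indices (hypothetical)
target = torch.tensor([0, 1, 1, 2])  # ground-truth class indices (hypothetical)
metric(preds, target)

# After the loop: aggregate over all batches, then reset for the next epoch
accuracy = metric.compute()
print(accuracy)  # tensor(0.7500)
metric.reset()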

4.3、Fighting overfitting
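
One common remedy is dropout, which randomly zeroes activations during training so the network cannot rely on any single neuron; a minimal sketch (weight decay on the optimizer and data augmentation are other options):

python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # each activation is zeroed with probability 0.5 during training
    nn.Linear(4, 1)
)

model.train()  # dropout active
model.eval()   # dropout disabled for evaluation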

4.4、Improving model performance

  • Overfit the training set
  • Reduce overfitting
  • Fine-tune the hyperparameters