Introduction to Deep Learning with PyTorch

1、Introduction to PyTorch, a Deep Learning Library

python
import torch

# PyTorch supports, through companion libraries:
## image data with torchvision
## audio data with torchaudio
## text data with torchtext

1.2、Tensors: the building blocks of networks in PyTorch

1.2.1、Load from list

python
import torch

lst = [[1,2,3], [4,5,6]]
tensor = torch.tensor(lst)
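
# Tensors carry shape and dtype attributes (quick check of the tensor above)
print(tensor.shape)  # torch.Size([2, 3])
print(tensor.dtype)  # torch.int64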

1.2.2、Load from NumPy array

python
import numpy as np

np_array = np.array([[1, 2, 3], [4, 5, 6]])
np_tensor = torch.from_numpy(np_array)

1.3、Creating our first neural network

1.3.1、A basic, two-layer network with no hidden layers

python
import torch
import torch.nn as nn

# Create input_tensor with three features
input_tensor = torch.tensor([0.3471, 0.4547, -0.2356])

# Define our first linear layer
linear_layer = nn.Linear(in_features=3, out_features=2)

# Pass input through linear layer
output = linear_layer(input_tensor)

# Show the output
print(output)

# Each linear layer has a .weight and a .bias property
print(linear_layer.weight)
print(linear_layer.bias)
  • Networks with only linear layers are called fully connected networks.

1.3.2、Stacking layers with nn.Sequential()

python
# Create network with three linear layers
model = nn.Sequential(
    nn.Linear(10, 18),
    nn.Linear(18, 20),
    nn.Linear(20, 5),
)
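
This stacked model behaves like a single module. A quick shape check (a minimal sketch, assuming a random input batch for the model above):

python
import torch

input_tensor = torch.randn(8, 10)  # batch of 8 samples, 10 features each
output = model(input_tensor)
print(output.shape)  # torch.Size([8, 5]): 5 outputs per sample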

1.4、Discovering activation functions

  • Activation functions add non-linearity to the network.

  • A model can learn more complex relationships with non-linearity.

  • Two-class classification: Sigmoid function demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[6.0]])
    sigmoid = nn.Sigmoid()
    output = sigmoid(input_tensor)
    
    # tensor([[0.9975]])
  • Using sigmoid as the last layer of a binary-classification network:

    python
    model = nn.Sequential(
        nn.Linear(6, 4),
        nn.Linear(4, 1),
        nn.Sigmoid()
    )
  • Multi-class classification: Softmax demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[4.3, 6.1, 2.3]])
    
    # dim=-1 indicates softmax is applied to the input tensor's last dimension
    # nn.Softmax() can be used as last step in nn.Sequential()
    probabilities = nn.Softmax(dim=-1)
    output_tensor = probabilities(input_tensor)
    
    print(output_tensor)
    
    # tensor([[0.1392, 0.8420, 0.0188]])
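
  • Like sigmoid, nn.Softmax() can be the last step of a network. A sketch with assumed layer sizes:

    python
    model = nn.Sequential(
        nn.Linear(6, 4),
        nn.Linear(4, 3),
        nn.Softmax(dim=-1)
    )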

2、Training Our First Neural Network with PyTorch

2.1、Running a forward pass

2.1.1、Forward pass

  • Input data is passed forward, or propagated, through the network.
  • Computations are performed at each layer.
  • The outputs of each layer are passed on to the next layer.
  • The output of the final layer is the "prediction".
  • Used both during training and for making predictions.
  • Some possible outputs: binary probabilities, multi-class probabilities, or regression values (see the sketches below).

2.1.2、Backward pass

2.1.3、Binary classification: forward pass
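
  • A minimal sketch of a binary-classification forward pass (layer sizes and stand-in data assumed):

python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1),
    nn.Sigmoid()
)

input_data = torch.randn(5, 6)  # batch of 5 samples, 6 features each
output = model(input_data)      # one probability between 0 and 1 per sample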

2.1.4、Multi-class classification: forward pass
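
  • The same sketch with softmax for multiple classes (sizes assumed):

python
import torch
import torch.nn as nn

n_classes = 3
model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, n_classes),
    nn.Softmax(dim=-1)
)

output = model(torch.randn(5, 6))  # each row of class probabilities sums to 1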

2.1.5、Regression: forward pass
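
  • For regression there is no activation on the final layer; the raw output is the predicted value (sketch):

python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1)
)

output = model(torch.randn(5, 6))  # one continuous value per sample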

2.2、Using loss functions to assess model predictions

2.2.1、Why do we need a loss function?

  • Gives feedback to the model during training.
  • Takes in a model prediction and the ground truth.
  • Outputs a single float (the loss).

2.2.2、One-hot encoding concepts

python
import torch.nn.functional as F

F.one_hot(torch.tensor(0), num_classes = 3)
# tensor([1,0,0]) --- first class

F.one_hot(torch.tensor(1), num_classes = 3)
# tensor([0,1,0]) --- second class

F.one_hot(torch.tensor(2), num_classes = 3)
# tensor([0,0,1]) --- third class

2.2.3、Cross entropy loss in PyTorch

python
import torch
from torch.nn import CrossEntropyLoss

scores = torch.tensor([[-0.1211, 0.1059]])
one_hot_target = torch.tensor([[1, 0]])

criterion = CrossEntropyLoss()
criterion(scores.double(), one_hot_target.double())
# tensor(0.8131, dtype=torch.float64)

2.2.4、Bringing it all together
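
  • A sketch tying one-hot targets and cross-entropy loss together (the scores are assumed example values):

python
import torch
import torch.nn.functional as F
from torch.nn import CrossEntropyLoss

scores = torch.tensor([[-0.1211, 0.1059]])                    # raw model output
one_hot_target = F.one_hot(torch.tensor([0]), num_classes=2)  # tensor([[1, 0]])

criterion = CrossEntropyLoss()
print(criterion(scores.double(), one_hot_target.double()))
# tensor(0.8131, dtype=torch.float64)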

2.3、Using derivatives to update model parameters

2.3.1、Minimizing the loss

  • High loss: the model's predictions are far from the ground truth.
  • Low loss: the model's predictions are close to the ground truth.

2.3.2、Connecting derivatives and model training

2.3.3、Backpropagation concepts
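
  • Calling .backward() on the loss backpropagates it through the network and stores a gradient in each parameter's .grad attribute. A minimal sketch (layer sizes assumed):

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.Linear(8, 2))
criterion = nn.CrossEntropyLoss()

prediction = model(torch.randn(1, 16))
target = torch.tensor([1])

loss = criterion(prediction, target)
loss.backward()  # populates .grad on every model parameter
print(model[0].weight.grad.shape)  # torch.Size([8, 16])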

2.3.4、Gradient descent
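
  • Continuing the sketch above: gradient descent nudges each parameter a small step against its gradient. A manual version (in practice torch.optim does this for you):

python
lr = 0.001  # assumed learning rate

with torch.no_grad():  # the updates themselves should not be tracked
    for param in model.parameters():
        param -= lr * param.grad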

2.4、Writing our first training loop

2.4.1、Training a neural network

2.4.2、Mean Squared Error (MSE) Loss
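
  • Mean squared error, the standard regression loss, is the mean of the squared differences between predictions and targets. A quick demo:

python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

prediction = torch.tensor([2.5])
target = torch.tensor([3.0])
print(criterion(prediction, target))  # tensor(0.2500)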

2.4.3、Before the training loop

2.4.4、The training loop
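
  • A sketch of the full loop on random stand-in data (sizes and epoch count assumed):

python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Random stand-in dataset: 32 samples, 4 features, 1 regression target
dataset = TensorDataset(torch.randn(32, 4), torch.randn(32, 1))
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)

model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    for features, targets in dataloader:
        optimizer.zero_grad()                   # reset gradients
        predictions = model(features)           # forward pass
        loss = criterion(predictions, targets)  # compute loss
        loss.backward()                         # compute gradients
        optimizer.step()                        # update parameters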

3、Neural Network Architecture and Hyperparameters

3.1、Discovering activation functions between layers

3.1.1、Limitations of the sigmoid and softmax function

3.1.2、Introducing ReLU
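
  • ReLU outputs the input for positive values and zero otherwise, avoiding the saturation that makes sigmoid gradients vanish:

python
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.tensor([-2.0, 0.0, 3.0])
print(relu(x))  # tensor([0., 0., 3.])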

3.1.3、Introducing Leaky ReLU
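
  • Leaky ReLU keeps a small slope for negative inputs so their gradients never become exactly zero:

python
import torch
import torch.nn as nn

leaky_relu = nn.LeakyReLU(negative_slope=0.05)
x = torch.tensor([-2.0, 0.0, 3.0])
print(leaky_relu(x))  # tensor([-0.1000, 0.0000, 3.0000])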

3.2、A deeper dive into neural network architecture

3.2.1、Counting the number of parameters
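
  • Each nn.Linear(in, out) layer has in * out weights plus out biases; summing .numel() over model.parameters() counts them all:

python
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.Linear(8, 2))

total = sum(p.numel() for p in model.parameters())
print(total)  # (16*8 + 8) + (8*2 + 2) = 154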

3.3、Learning rate and momentum
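
  • Both are optimizer hyperparameters: the learning rate sets the step size, and momentum lets updates keep part of their previous direction, which helps escape shallow local minima. A sketch with commonly used starting values (assumed, to be tuned):

python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(16, 8), nn.Linear(8, 2))
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)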

3.4、Layer initialization and transfer learning

3.4.1、Layer initialization
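
  • A layer's weights can be re-initialized explicitly with the in-place functions in nn.init, for example:

python
import torch.nn as nn

layer = nn.Linear(64, 128)
nn.init.uniform_(layer.weight)  # re-initialize weights uniformly in [0, 1)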

3.4.2、Transfer learning and fine tuning
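
  • Transfer learning reuses weights trained on one task for a new task; fine-tuning then trains only part of the network. A sketch (the weights file path is hypothetical):

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.Linear(128, 2))
model.load_state_dict(torch.load('model_weights.pth'))  # hypothetical file

# Fine-tuning: freeze the first layer, train only the rest
for name, param in model.named_parameters():
    if name.startswith('0.'):  # parameters of the first nn.Linear
        param.requires_grad = False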

4、Evaluating and Improving Models

4.1、A deeper dive into loading data

4.1.1、Recalling TensorDataset
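
  • TensorDataset wraps feature and target tensors so samples can be indexed together. A sketch with random stand-in data:

python
import torch
from torch.utils.data import TensorDataset

features = torch.randn(12, 4)
targets = torch.randn(12, 1)

dataset = TensorDataset(features, targets)
sample_features, sample_target = dataset[0]  # index like a list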

4.1.2、Recalling DataLoader
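
  • DataLoader batches and optionally shuffles a dataset for the training loop. Continuing the sketch above:

python
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=4, shuffle=True)
for batch_features, batch_targets in dataloader:
    print(batch_features.shape)  # torch.Size([4, 4])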

4.2、Evaluating model performance

4.2.1、Model evaluation metrics

4.2.2、Calculating training loss
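
  • Sum the loss over all batches in an epoch, then divide by the number of batches. A sketch (model, criterion, optimizer, and a trainloader are assumed):

python
training_loss = 0.0
for features, targets in trainloader:
    optimizer.zero_grad()
    outputs = model(features)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    training_loss += loss.item()  # .item() extracts the Python float

epoch_loss = training_loss / len(trainloader)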

4.2.3、Calculating validation loss
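
  • The same idea with gradients disabled and the model in evaluation mode (a validationloader is assumed):

python
model.eval()  # switch layers like dropout to evaluation behavior
validation_loss = 0.0
with torch.no_grad():  # no gradients needed for evaluation
    for features, targets in validationloader:
        outputs = model(features)
        loss = criterion(outputs, targets)
        validation_loss += loss.item()

epoch_loss = validation_loss / len(validationloader)
model.train()  # back to training mode afterwards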

4.2.4、Calculating accuracy with torchmetrics
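
  • torchmetrics accumulates a metric across batches and computes it once per epoch. A sketch for multi-class accuracy (model and dataloader assumed; labels are class indices):

python
import torchmetrics

metric = torchmetrics.Accuracy(task='multiclass', num_classes=3)

for features, labels in dataloader:
    outputs = model(features)
    metric.update(outputs, labels)  # accumulate over the epoch

accuracy = metric.compute()  # accuracy over all seen batches
metric.reset()               # clear state for the next epoch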

4.3、Fighting overfitting
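
  • A common tool is dropout, which randomly zeroes activations during training so the network cannot rely on any single unit (other options include weight decay and data augmentation):

python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Dropout(p=0.5)  # zeroes each activation with probability 0.5, training only
)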

4.4、Improving model performance

  • Overfit the training set
  • Reduce overfitting
  • Fine-tune the hyperparameters