Introduction to Deep Learning with PyTorch

1、Introduction to PyTorch, a Deep Learning Library

python
import torch

# supports:
## image data with torchvision
## audio data with torchaudio
## text data with torchtext

1.2、Tensors: the building blocks of networks in PyTorch

1.2.1、Load from list

python
import torch

lst = [[1,2,3], [4,5,6]]
tensor = torch.tensor(lst)

1.2.2、Load from NumPy array

python
import numpy as np

np_array = np.array([[1, 2, 3], [4, 5, 6]])
np_tensor = torch.from_numpy(np_array)

1.3、Creating our first neural network

1.3.1、A basic, two-layer network with no hidden layers

python
import torch.nn as nn

# Create input_tensor with three features
input_tensor = torch.tensor([0.3471, 0.4547, -0.2356])

# Define our first linear layer
linear_layer = nn.Linear(in_features=3, out_features=2)

# Pass input through linear layer
output = linear_layer(input_tensor)

# Show the output
print(output)

# Each linear layer has a .weight and a .bias property
print(linear_layer.weight)
print(linear_layer.bias)

  • Networks with only linear layers are called fully connected networks.

1.3.2、Stacking layers with nn.Sequential()

python
# Create network with three linear layers
model = nn.Sequential(
    nn.Linear(10, 18),
    nn.Linear(18, 20),
    nn.Linear(20, 5),
)
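
Passing data through the stacked model works exactly like calling a single layer; a small sketch continuing from the model above (the batch size of 8 is an assumption):

python
import torch

# A batch of 8 samples, each with 10 features, matching the first layer
input_tensor = torch.randn(8, 10)
output_tensor = model(input_tensor)
print(output_tensor.shape)  # torch.Size([8, 5]) -- 5 outputs per sample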

1.4、Discovering activation functions

  • Activation functions add non-linearity to the network.

  • A model can learn more complex relationships with non-linearity.

  • Two-class classification: Sigmoid function demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[6.0]])
    sigmoid = nn.Sigmoid()
    output = sigmoid(input_tensor)
    
    # tensor([[0.9975]])
  • Application for Sigmoid function:

    python
    model = nn.Sequential(
        nn.Linear(6,4),
        nn.Linear(4,1),
        nn.Sigmoid()
    )
  • Multi-class classification: Softmax demo:

    python
    import torch
    import torch.nn as nn
    
    input_tensor = torch.tensor([[4.3, 6.1, 2.3]])
    
    # dim=-1 indicates softmax is applied to the input tensor's last dimension
    # nn.Softmax() can be used as last step in nn.Sequential()
    probabilities = nn.Softmax(dim=-1)
    output_tensor = probabilities(input_tensor)
    
    print(output_tensor)
    
    # tensor([[0.1392, 0.8420, 0.0188]])

2、Training Our First Neural Network with PyTorch

2.1、Running a forward pass

2.1.1、Forward pass

  • Input data is passed forward or propagated through a network.
  • Computations are performed at each layer.
  • The outputs of each layer are passed to the subsequent layer.
  • Output of final layer: "prediction".
  • Used for both training and prediction.
  • Possible outputs depend on the task; see the forward-pass examples in the subsections below.

2.1.2、Backward pass

2.1.3、Binary classification: forward pass
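
A minimal sketch of a binary-classification forward pass, reusing the sigmoid model shape from section 1.4 (the layer sizes and random batch are assumptions):

python
import torch
import torch.nn as nn

# Batch of 5 samples, 6 features each
input_data = torch.randn(5, 6)

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1),
    nn.Sigmoid(),  # squashes each output to a probability in (0, 1)
)

output = model(input_data)  # shape (5, 1): one probability per sample
print(output)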

2.1.4、Multi-class classification: forward pass
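
For multi-class classification the final layer has one output per class, followed by softmax; a sketch with assumed sizes:

python
import torch
import torch.nn as nn

n_classes = 3

# Batch of 5 samples, 6 features each
input_data = torch.randn(5, 6)

model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, n_classes),
    nn.Softmax(dim=-1),  # each row of the output sums to 1
)

output = model(input_data)  # shape (5, 3)
print(output.shape)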

2.1.5、Regression: forward pass
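
For regression the network ends with a plain linear layer; a sketch with assumed sizes:

python
import torch
import torch.nn as nn

# Batch of 5 samples, 6 features each
input_data = torch.randn(5, 6)

# No final activation: the output is an unbounded continuous value
model = nn.Sequential(
    nn.Linear(6, 4),
    nn.Linear(4, 1),
)

output = model(input_data)  # shape (5, 1)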

2.2、Using loss functions to assess model predictions

2.2.1、Why do we need a loss function?

  • Gives feedback to the model during training.
  • Takes in a model prediction and the ground truth.
  • Outputs a float.

2.2.2、One-hot encoding concepts

python
import torch.nn.functional as F

F.one_hot(torch.tensor(0), num_classes=3)
# tensor([1, 0, 0]) --- first class

F.one_hot(torch.tensor(1), num_classes=3)
# tensor([0, 1, 0]) --- second class

F.one_hot(torch.tensor(2), num_classes=3)
# tensor([0, 0, 1]) --- third class

2.2.3、Cross entropy loss in PyTorch

python
import torch
from torch.nn import CrossEntropyLoss

scores = torch.tensor([[-0.1211, 0.1059]])
one_hot_target = torch.tensor([[1, 0]])

criterion = CrossEntropyLoss()
criterion(scores.double(), one_hot_target.double())
# tensor(0.8131, dtype=torch.float64)

2.2.4、Bringing it all together
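
A hedged sketch combining the pieces above: a forward pass producing raw scores (logits), a one-hot target, and the cross entropy loss (shapes and values are assumptions):

python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy setup: 4 input features, 3 classes
input_data = torch.randn(1, 4)
target = torch.tensor([2])  # ground-truth class index

model = nn.Sequential(nn.Linear(4, 3))  # outputs raw scores, no softmax

scores = model(input_data)
one_hot_target = F.one_hot(target, num_classes=3)

criterion = nn.CrossEntropyLoss()
loss = criterion(scores.double(), one_hot_target.double())
print(loss)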

2.3、Using derivatives to update model parameters

2.3.1、Minimizing the loss

  • High loss: model prediction is wrong
  • Low loss: model prediction is correct

2.3.2、Connecting derivatives and model training

2.3.3、Backpropagation concepts
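
A minimal sketch (layer sizes and the random input are assumptions): calling .backward() on the loss computes gradients for every parameter, layer by layer, from the output back to the input:

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8), nn.Linear(8, 4), nn.Linear(4, 2))
input_data = torch.randn(1, 16)
target = torch.tensor([0])  # ground-truth class index

criterion = nn.CrossEntropyLoss()
loss = criterion(model(input_data), target)

# Backpropagation: compute gradients of the loss w.r.t. each parameter
loss.backward()

# Gradients are stored on each parameter's .grad attribute
print(model[0].weight.grad.shape)  # torch.Size([8, 16])
print(model[0].bias.grad.shape)    # torch.Size([8])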

2.3.4、Gradient descent
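
Continuing the sketch above, the gradients drive the parameter update; torch.optim.SGD applies param = param - lr * grad (the learning rate value is an assumption):

python
import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=0.001)

optimizer.step()       # apply: param = param - lr * param.grad
optimizer.zero_grad()  # reset gradients before the next iteration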

2.4、Writing our first training loop

2.4.1、Training a neural network

2.4.2、Mean Squared Error (MSE) Loss
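
MSE is the mean of the squared differences between predictions and targets; a small sketch (values assumed):

python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

prediction = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])

# Mean of squared differences: (0.5^2 + 0.5^2 + 0^2) / 3 ≈ 0.1667
loss = criterion(prediction, target)
print(loss)  # tensor(0.1667)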

2.4.3、Before the training loop
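
A hedged sketch of the setup that typically precedes the loop (the toy data shapes and hyperparameters are assumptions):

python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader

# Assumed toy regression data: 100 samples, 4 features, 1 target
features = torch.randn(100, 4)
targets = torch.randn(100, 1)
dataloader = DataLoader(TensorDataset(features, targets), batch_size=8, shuffle=True)

# Model, loss function, and optimizer
model = nn.Sequential(nn.Linear(4, 8), nn.Linear(8, 1))
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)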

2.4.4、The training loop
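
With that setup in place, a minimal sketch of the loop itself (the epoch count is an assumption):

python
for epoch in range(10):
    for batch_features, batch_targets in dataloader:
        optimizer.zero_grad()                         # reset gradients to zero
        predictions = model(batch_features)           # forward pass
        loss = criterion(predictions, batch_targets)  # compute the loss
        loss.backward()                               # backward pass (gradients)
        optimizer.step()                              # update the parameters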

3、Neural Network Architecture and Hyperparameters

3.1、Discovering activation functions between layers

3.1.1、Limitations of the sigmoid and softmax function

3.1.2、Introducing ReLU
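
A minimal sketch of nn.ReLU, which outputs max(0, x) and so avoids the saturation of sigmoid for positive inputs:

python
import torch
import torch.nn as nn

relu = nn.ReLU()  # f(x) = max(0, x)

x = torch.tensor([-2.0, 0.0, 3.0])
print(relu(x))  # tensor([0., 0., 3.])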

3.1.3、Introducing Leaky ReLU
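
A minimal sketch of nn.LeakyReLU; the negative_slope used here is an assumption (PyTorch's default is 0.01):

python
import torch
import torch.nn as nn

# Negative inputs are scaled by a small slope instead of being zeroed out,
# so gradients still flow for negative activations
leaky_relu = nn.LeakyReLU(negative_slope=0.05)

x = torch.tensor([-2.0, 0.0, 3.0])
print(leaky_relu(x))  # tensor([-0.1000, 0.0000, 3.0000])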

3.2、A deeper dive into neural network architecture

3.2.1、Counting the number of parameters
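
Each parameter tensor's .numel() gives its element count; summing over model.parameters() counts every learnable parameter. A sketch with assumed layer sizes:

python
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 4), nn.Linear(4, 2))

total = sum(p.numel() for p in model.parameters())
# First layer:  16*4 weights + 4 biases = 68
# Second layer:  4*2 weights + 2 biases = 10
print(total)  # 78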

3.3、Learning rate and momentum
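
Both are hyperparameters of the optimizer; a hedged sketch with typical (assumed) values:

python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(4, 2))

# lr controls the size of each update step;
# momentum carries part of the previous step forward, which helps the
# optimizer push through flat regions and escape local minima
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.95)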

3.4、Layer initialization and transfer learning

3.4.1、Layer initialization
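
A small sketch using torch.nn.init to re-initialize a layer's weights in place (the uniform scheme shown here is just one option among several):

python
import torch.nn as nn

layer = nn.Linear(64, 128)

# Re-initialize the weights in place; values drawn uniformly from [0, 1)
nn.init.uniform_(layer.weight)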

3.4.2、Transfer learning and fine tuning
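
A hedged sketch of fine-tuning: reuse weights trained on a previous task, then freeze the early layers so training updates only the later ones (layer sizes and the file path are hypothetical):

python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.Linear(128, 256))

# Weights saved from a previous task could be loaded with:
# model.load_state_dict(torch.load('pretrained.pt'))  # hypothetical path

# Freeze the first linear layer; its pre-trained parameters stay fixed
for name, param in model.named_parameters():
    if name.startswith('0'):
        param.requires_grad = False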

4、Evaluating and Improving Models

4.1、A deeper dive into loading data

4.1.1、Recalling TensorDataset
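
TensorDataset pairs feature and target tensors sample by sample; a sketch with assumed toy data:

python
import torch
from torch.utils.data import TensorDataset

# Assumed toy data: 100 samples, 4 features, scalar target
features = torch.randn(100, 4)
targets = torch.randn(100, 1)

dataset = TensorDataset(features, targets)

# Indexing returns one (features, target) pair
sample_features, sample_target = dataset[0]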

4.1.2、Recalling DataLoader
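
Continuing the sketch above, DataLoader wraps the dataset into shuffled mini-batches (the batch size is an assumption):

python
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=4, shuffle=True)

for batch_features, batch_targets in dataloader:
    print(batch_features.shape)  # torch.Size([4, 4])
    break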

4.2、Evaluating model performance

4.2.1、Model evaluation metrics

4.2.2、Calculating training loss
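
A sketch of accumulating the training loss over one epoch, reusing the assumed model, criterion, optimizer, and dataloader from section 2.4:

python
training_loss = 0.0

for batch_features, batch_targets in dataloader:
    optimizer.zero_grad()
    predictions = model(batch_features)
    loss = criterion(predictions, batch_targets)
    loss.backward()
    optimizer.step()
    training_loss += loss.item()  # .item() extracts the Python float

mean_training_loss = training_loss / len(dataloader)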

4.2.3、Calculating validation loss
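
Validation loss follows the same pattern but with gradients disabled; a hedged sketch (validationloader is an assumed DataLoader over the validation set):

python
import torch

model.eval()  # switch off training-only behavior such as dropout
validation_loss = 0.0

with torch.no_grad():  # no gradients needed during evaluation
    for features, targets in validationloader:
        predictions = model(features)
        loss = criterion(predictions, targets)
        validation_loss += loss.item()

mean_validation_loss = validation_loss / len(validationloader)
model.train()  # back to training mode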

4.2.4、Calculating accuracy with torchmetrics
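
A hedged sketch with the separate torchmetrics package (the 3-class task and one-hot labels are assumptions for this sketch):

python
import torchmetrics

# Create the metric once per epoch
metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)

for features, labels in validationloader:
    outputs = model(features)
    # Update the metric batch by batch (labels assumed one-hot, hence argmax)
    metric.update(outputs, labels.argmax(dim=-1))

accuracy = metric.compute()  # accuracy over all accumulated batches
print(f"Accuracy: {accuracy}")
metric.reset()               # clear state for the next epoch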

4.3、Fighting overfitting
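
One common countermeasure is dropout, which randomly zeroes activations during training; a minimal sketch (sizes and probability assumed):

python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 4),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # each activation is zeroed with probability 0.5
    nn.Linear(4, 1),
)

Dropout is only active in model.train() mode and is disabled by model.eval(); weight decay (the weight_decay argument of optim.SGD) is another common regularizer.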

4.4、Improving model performance

  • Overfit the training set
  • Reduce overfitting
  • Fine-tune the hyperparameters