Chapter Index:
Chapter 1: Introduction to PyTorch
- 1.1 What is PyTorch?
- 1.2 Advantages of PyTorch over other libraries
- 1.3 Installing PyTorch
- 1.4 PyTorch Tensors
Chapter 2: PyTorch Basics
- 2.1 Creating and manipulating tensors
- 2.2 Tensor operations and functions
- 2.3 Automatic differentiation with autograd
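As a taste of what Chapter 2 covers, here is a minimal sketch of creating a tensor and letting autograd compute gradients; the specific values are illustrative only.

```python
import torch

# A tensor that tracks operations for automatic differentiation
x = torch.tensor([2.0, 3.0], requires_grad=True)

# A simple scalar function of x: y = sum(x^2)
y = (x ** 2).sum()

# Backpropagate: autograd fills x.grad with dy/dx = 2x
y.backward()
print(x.grad)  # tensor([4., 6.])
```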
Chapter 3: PyTorch Datasets and Data Loaders
- 3.1 Working with datasets in PyTorch
- 3.2 Creating custom datasets
- 3.3 Loading data with data loaders
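The pattern Chapter 3 builds on fits in a short sketch: a custom Dataset plus a DataLoader. The ToyDataset class and its random contents are hypothetical stand-ins for real data.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical in-memory dataset of (feature, label) pairs."""
    def __init__(self, n=100):
        self.x = torch.randn(n, 4)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

# The DataLoader handles batching and shuffling
loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for features, labels in loader:
    pass  # each iteration yields one mini-batch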
Chapter 4: Building Neural Networks in PyTorch
- 4.1 Creating neural network architectures
- 4.2 Defining forward and backward passes
- 4.3 Training neural networks
Chapter 5: Loss Functions and Optimization
- 5.1 Common loss functions in PyTorch
- 5.2 Optimizers and parameter updates
- 5.3 Learning rate scheduling
Chapter 6: Model Training and Evaluation
- 6.1 Splitting data into train and test sets
- 6.2 Training loops and batch processing
- 6.3 Evaluating model performance
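Chapters 5 and 6 revolve around the standard training loop. Below is a minimal sketch assuming a toy classification problem; the data, model width, and learning rate are arbitrary placeholders.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data: 256 samples, 4 features, 2 classes
data = TensorDataset(torch.randn(256, 4), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()                        # loss function (Chapter 5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # optimizer (Chapter 5)

model.train()
for features, labels in loader:                 # one epoch of batches (Chapter 6)
    optimizer.zero_grad()                       # clear gradients from the last step
    loss = criterion(model(features), labels)
    loss.backward()                             # backpropagate
    optimizer.step()                            # update parameters

model.eval()
with torch.no_grad():                           # evaluation without gradient tracking
    preds = model(data.tensors[0]).argmax(dim=1)
    accuracy = (preds == data.tensors[1]).float().mean()
```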
Chapter 7: Transfer Learning in PyTorch
- 7.1 Understanding transfer learning
- 7.2 Pretrained models in PyTorch
- 7.3 Fine-tuning and feature extraction
Chapter 8: Convolutional Neural Networks (CNNs)
- 8.1 Introduction to CNNs
- 8.2 Building CNN architectures in PyTorch
- 8.3 Training and evaluating CNNs
Chapter 9: Recurrent Neural Networks (RNNs)
- 9.1 Introduction to RNNs
- 9.2 Building RNN architectures in PyTorch
- 9.3 Training and evaluating RNNs
Chapter 10: Generative Adversarial Networks (GANs)
- 10.1 Introduction to GANs
- 10.2 Building GAN architectures in PyTorch
- 10.3 Training and generating samples with GANs
Chapter 11: Reinforcement Learning with PyTorch
- 11.1 Introduction to reinforcement learning
- 11.2 Building RL agents in PyTorch
- 11.3 Training and evaluating RL agents
Chapter 12: Model Deployment with PyTorch
- 12.1 Saving and loading PyTorch models
- 12.2 Deploying PyTorch models in production
- 12.3 Model optimization and quantization
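Section 12.1's core idea takes only a few lines: save the model's state_dict rather than the whole object, then re-create the architecture and load the parameters back. The filename model.pt is an illustrative placeholder.

```python
import torch
from torch import nn

model = nn.Linear(4, 2)

# Save only the learned parameters
torch.save(model.state_dict(), "model.pt")

# Re-create the same architecture, then restore the parameters
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("model.pt"))
restored.eval()
```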
Chapter 13: Advanced PyTorch Techniques
- 13.1 Custom loss functions and metrics
- 13.2 Model interpretation and visualization
- 13.3 Handling imbalanced datasets
Chapter 14: PyTorch on GPUs
- 14.1 GPU acceleration with PyTorch
- 14.2 Data parallelism and distributed training
Chapter 15: PyTorch and ONNX
- 15.1 Introduction to ONNX
- 15.2 Converting PyTorch models to ONNX
- 15.3 Deploying ONNX models in different frameworks
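Converting a model to ONNX (Section 15.2) typically looks like the sketch below: trace the model with an example input and write the graph to a file. The model, input shape, and filename are hypothetical.

```python
import torch
from torch import nn

model = nn.Linear(4, 2).eval()
dummy_input = torch.randn(1, 4)  # example input that defines the traced graph's shapes

# Export the traced model in the ONNX format
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])
```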
Chapter 16: PyTorch and TensorBoard
- 16.1 Visualizing training progress with TensorBoard
- 16.2 Logging metrics and visualizing network graphs
Chapter 17: PyTorch and Deep Learning Libraries
- 17.1 Integrating PyTorch with TensorFlow
- 17.2 PyTorch and Keras interoperability
- 17.3 PyTorch and scikit-learn integration
Chapter 18: Model Interpretability and Explainability
- 18.1 Interpreting PyTorch models with Captum
- 18.2 Feature importance and attribution methods
- 18.3 Explaining model predictions
Chapter 19: Handling Time Series Data with PyTorch
- 19.1 Time series data preprocessing
- 19.2 Building recurrent models for time series forecasting
- 19.3 Evaluating and improving time series models
Chapter 20: Natural Language Processing with PyTorch
- 20.1 Text preprocessing and tokenization
- 20.2 Building text classification models
- 20.3 Sequence-to-sequence models for machine translation
Chapter 21: Autoencoders and Variational Autoencoders
- 21.1 Introduction to autoencoders
- 21.2 Building autoencoder architectures in PyTorch
- 21.3 Variational autoencoders for generative modeling
Chapter 22: Reinforcement Learning in Robotics with PyTorch
- 22.1 Reinforcement learning in robotics applications
- 22.2 Building RL agents for robotic tasks
- 22.3 Simulation and real-world deployment
Chapter 23: PyTorch and Bayesian Deep Learning
- 23.1 Introduction to Bayesian deep learning
- 23.2 Bayesian neural networks with PyTorch
- 23.3 Uncertainty estimation and model calibration
Chapter 24: Time Series Analysis with Deep Learning
- 24.1 Time series forecasting with deep learning models
- 24.2 Long Short-Term Memory (LSTM) networks
- 24.3 Temporal convolutional networks (TCN)
Chapter 25: PyTorch and Graph Neural Networks
- 25.1 Introduction to graph neural networks (GNNs)
- 25.2 Building GNN architectures in PyTorch
- 25.3 Node classification and graph representation learning
Chapter 26: Federated Learning with PyTorch
- 26.1 Introduction to federated learning
- 26.2 Implementing federated learning with PyTorch
- 26.3 Privacy and security considerations
Chapter 27: Deep Reinforcement Learning with PyTorch
- 27.1 Deep Q-Networks (DQN) and value-based methods
- 27.2 Policy gradients and actor-critic methods
- 27.3 Combining RL and deep learning techniques
Chapter 28: PyTorch and Computer Vision Applications
- 28.1 Object detection with PyTorch
- 28.2 Image segmentation using convolutional networks
- 28.3 Facial recognition and emotion detection
Chapter 29: Time Series Forecasting with Transformers
- 29.1 Introduction to transformer models
- 29.2 Applying transformers for time series forecasting
- 29.3 Transformer-based sequence-to-sequence models
Chapter 30: PyTorch for Natural Language Generation
- 30.1 Language modeling with recurrent and transformer models
- 30.2 Text generation and chatbots with PyTorch
- 30.3 Neural machine translation with transformers
Chapter 31: Adversarial Attacks and Defenses in Deep Learning
- 31.1 Generating adversarial examples with PyTorch
- 31.2 Defending against adversarial attacks
Chapter 32: PyTorch Model Compression and Quantization
- 32.1 Techniques for model compression
- 32.2 Quantization and reducing model size
- 32.3 Pruning and sparsity in PyTorch models
Chapter 33: PyTorch and AutoML
- 33.1 Introduction to AutoML
- 33.2 Automated neural architecture search
- 33.3 Hyperparameter optimization with PyTorch
Chapter 34: PyTorch for Time Series Anomaly Detection
- 34.1 Anomaly detection techniques for time series data
- 34.2 Building anomaly detection models with PyTorch
- 34.3 Evaluating and interpreting anomaly detection results
Chapter 35: PyTorch for Recommender Systems
- 35.1 Introduction to recommender systems
- 35.2 Collaborative filtering with PyTorch
- 35.3 Building neural network-based recommenders
Chapter 36: Advanced Optimization Methods in PyTorch
- 36.1 Stochastic gradient descent variants
- 36.2 Adaptive optimization algorithms
- 36.3 Second-order optimization methods
Chapter 37: PyTorch and Knowledge Graphs
- 37.1 Introduction to knowledge graphs
- 37.2 Representing and embedding knowledge graphs with PyTorch
- 37.3 Graph-based reasoning and link prediction
Chapter 38: Explainable AI with PyTorch
- 38.1 Interpreting black-box models with LIME and SHAP
- 38.2 Building interpretable models with PyTorch
- 38.3 Explainable AI in real-world applications
Chapter 39: PyTorch for Time Series Classification
- 39.1 Introduction to time series classification
- 39.2 Feature extraction and representation learning
- 39.3 Building classifiers for time series data
Chapter 40: PyTorch and Bayesian Optimization
- 40.1 Introduction to Bayesian optimization
- 40.2 Implementing Bayesian optimization with PyTorch
- 40.3 Hyperparameter tuning and optimization
Chapter 41: PyTorch for Audio Processing
- 41.1 Audio data preprocessing and feature extraction
- 41.2 Building audio classification models
- 41.3 Speech recognition and synthesis with PyTorch
Chapter 42: PyTorch and Reinforcement Learning in Robotics
- 42.1 Combining PyTorch with robotics frameworks
- 42.2 Reinforcement learning for robot control
- 42.3 Sim-to-real transfer in robotics applications
Chapter 43: PyTorch for Time Series Clustering
- 43.1 Introduction to time series clustering
- 43.2 Feature extraction and representation learning
- 43.3 Clustering algorithms for time series data
Chapter 44: PyTorch and Natural Language Understanding
- 44.1 Text classification and sentiment analysis
- 44.2 Named entity recognition and information extraction
- 44.3 Natural language understanding applications
Chapter 45: PyTorch for Image Captioning
- 45.1 Introduction to image captioning
- 45.2 Building image captioning models with PyTorch
- 45.3 Generating captions for images
Chapter 46: PyTorch and Graph Representation Learning
- 46.1 Graph embedding techniques with PyTorch
- 46.2 Graph neural networks for graph representation learning
- 46.3 Link prediction and node classification with GNNs
Chapter 47: PyTorch for Object Detection
- 47.1 Introduction to object detection
- 47.2 Building object detection models with PyTorch
- 47.3 Training and evaluation for object detection
Chapter 48: PyTorch for Time Series Segmentation
- 48.1 Introduction to time series segmentation
- 48.2 Segmentation methods for time series data
- 48.3 Building segmentation models with PyTorch
Chapter 49: PyTorch and Meta-Learning
- 49.1 Introduction to meta-learning
- 49.2 Building meta-learning models with PyTorch
- 49.3 Few-shot learning and adaptation
Chapter 50: PyTorch for Anomaly Detection
- 50.1 Introduction to anomaly detection
- 50.2 Building anomaly detection models with PyTorch
- 50.3 Outlier detection and novelty detection
Chapter 51: PyTorch for Multi-Task Learning
- 51.1 Introduction to multi-task learning
- 51.2 Building multi-task learning models with PyTorch
- 51.3 Joint training and task-specific outputs
Chapter 52: PyTorch and Graph Generation
- 52.1 Graph generation techniques with PyTorch
- 52.2 Graph autoencoders and variational graph models
- 52.3 Generating graphs with desired properties
Chapter 53: PyTorch for Time Series Forecasting with Attention Mechanisms
- 53.1 Attention mechanisms in time series forecasting
- 53.2 Building attention-based models with PyTorch
- 53.3 Interpreting attention weights
Chapter 54: PyTorch for Video Processing
- 54.1 Video data preprocessing and transformation
- 54.2 Building video classification models with PyTorch
- 54.3 Action recognition and video generation
Chapter 55: PyTorch and Semi-Supervised Learning
- 55.1 Introduction to semi-supervised learning
- 55.2 Building semi-supervised models with PyTorch
- 55.3 Leveraging unlabeled data for improved performance
Chapter 56: PyTorch for Out-of-Distribution Detection
- 56.1 Out-of-distribution detection techniques
- 56.2 Building OOD detection models with PyTorch
- 56.3 Evaluating and benchmarking OOD detection methods
Chapter 57: PyTorch and Model Ensemble Techniques
- 57.1 Ensemble methods in deep learning
- 57.2 Building model ensembles with PyTorch
- 57.3 Combining multiple models for improved performance
Chapter 58: PyTorch for Explainable Recommendation Systems
- 58.1 Interpretable models for recommendation systems
- 58.2 Explainable recommendation methods with PyTorch
- 58.3 User modeling and personalization
Chapter 59: PyTorch for Time Series Imputation
- 59.1 Missing data imputation techniques for time series
- 59.2 Building imputation models with PyTorch
- 59.3 Handling missing values in time series data
Chapter 60: PyTorch and Continual Learning
- 60.1 Introduction to continual learning
- 60.2 Building continual learning models with PyTorch
- 60.3 Catastrophic forgetting and regularization techniques
Chapter 61: PyTorch for Image Style Transfer
- 61.1 Introduction to image style transfer
- 61.2 Building style transfer models with PyTorch
- 61.3 Transferring styles between images
Chapter 62: PyTorch and Hyperparameter Optimization Libraries
- 62.1 Introduction to hyperparameter optimization
- 62.2 Integrating PyTorch with hyperparameter optimization libraries
- 62.3 Efficient hyperparameter search strategies
Chapter 63: PyTorch for Music Generation
- 63.1 Music data preprocessing and representation
- 63.2 Building music generation models with PyTorch
- 63.3 Creating original music compositions
Chapter 64: PyTorch and Federated Learning in Healthcare
- 64.1 Applications of federated learning in healthcare
- 64.2 Building privacy-preserving models with PyTorch
- 64.3 Collaborative learning across distributed healthcare datasets
Chapter 65: PyTorch for Image Super-Resolution
- 65.1 Introduction to image super-resolution
- 65.2 Building super-resolution models with PyTorch
- 65.3 Upsampling and enhancing image details
Chapter 66: PyTorch and Self-Supervised Learning
- 66.1 Introduction to self-supervised learning
- 66.2 Pretext tasks and self-supervised models
- 66.3 Transferring learned representations to downstream tasks
Chapter 67: PyTorch for StyleGAN and Image Synthesis
- 67.1 Introduction to StyleGAN and image synthesis
- 67.2 Building StyleGAN models with PyTorch
- 67.3 Generating high-quality synthetic images
Chapter 68: PyTorch for Active Learning
- 68.1 Introduction to active learning
- 68.2 Building active learning workflows with PyTorch
- 68.3 Query strategies and uncertainty sampling
Chapter 69: PyTorch for Time Series Anomaly Detection with Transformers
- 69.1 Introduction to transformer-based anomaly detection
- 69.2 Building transformer-based models with PyTorch
- 69.3 Detecting anomalies in time series data
Chapter 70: PyTorch and Continual Reinforcement Learning
- 70.1 Continual reinforcement learning setups and challenges
- 70.2 Building continual RL agents with PyTorch
- 70.3 Balancing exploration and exploitation in continual RL
Chapter 71: PyTorch for 3D Object Detection
- 71.1 Introduction to 3D object detection
- 71.2 Building 3D object detection models with PyTorch
- 71.3 Training and evaluating on 3D datasets
Chapter 72: PyTorch and Active Vision
- 72.1 Introduction to active vision
- 72.2 Building active vision models with PyTorch
- 72.3 Active perception and information gain
Chapter 73: PyTorch for Few-Shot Object Detection
- 73.1 Introduction to few-shot object detection
- 73.2 Building few-shot detection models with PyTorch
- 73.3 Adaptation and generalization to unseen classes
Chapter 74: PyTorch for Tabular Data Analysis
- 74.1 Preparing tabular data for modeling with PyTorch
- 74.2 Building deep learning models for tabular data
- 74.3 Feature engineering and feature selection
Chapter 75: PyTorch and Weakly Supervised Learning
- 75.1 Introduction to weakly supervised learning
- 75.2 Building weakly supervised models with PyTorch
- 75.3 Label noise and learning with incomplete annotations
Chapter 76: PyTorch for Text Style Transfer
- 76.1 Introduction to text style transfer
- 76.2 Building text style transfer models with PyTorch
- 76.3 Transferring the style of text
Chapter 77: PyTorch and Graph Representation Learning with Transformers
- 77.1 Graph transformer networks for graph representation learning
- 77.2 Building graph transformer models with PyTorch
- 77.3 Graph classification and graph generation
Chapter 78: PyTorch for Video Super-Resolution
- 78.1 Introduction to video super-resolution
- 78.2 Building video super-resolution models with PyTorch
- 78.3 Enhancing video quality and details
Chapter 79: PyTorch and Continual Learning for Natural Language Processing
- 79.1 Continual learning setups for NLP tasks
- 79.2 Building continual learning models for NLP with PyTorch
- 79.3 Adapting to new tasks while retaining previous knowledge
Chapter 80: PyTorch for Synthetic Data Generation
- 80.1 Generating synthetic data with GANs in PyTorch
- 80.2 Applications of synthetic data generation
- 80.3 Evaluating and utilizing synthetic data
Chapter 81: PyTorch for Few-Shot Natural Language Understanding
- 81.1 Introduction to few-shot NLU
- 81.2 Building few-shot NLU models with PyTorch
- 81.3 Learning from limited labeled examples
Chapter 82: PyTorch for Knowledge Distillation
- 82.1 Introduction to knowledge distillation
- 82.2 Distilling knowledge from large models to smaller models
- 82.3 Model compression and performance trade-offs
Chapter 83: PyTorch for GAN Inversion
- 83.1 GAN inversion techniques in PyTorch
- 83.2 Reconstructing latent representations from generated samples
- 83.3 Applications in image editing and style transfer
Chapter 84: PyTorch for Speech Enhancement and Separation
- 84.1 Introduction to speech enhancement and separation
- 84.2 Building speech enhancement models with PyTorch
- 84.3 Separating speech sources from mixed audio
Chapter 85: PyTorch and Meta-Transfer Learning
- 85.1 Meta-transfer learning for few-shot learning
- 85.2 Building meta-transfer learning models with PyTorch
- 85.3 Transferring knowledge across related tasks
Chapter 86: PyTorch for Cross-Modal Learning
- 86.1 Introduction to cross-modal learning
- 86.2 Building cross-modal models with PyTorch
- 86.3 Learning joint representations from multiple modalities
Chapter 87: PyTorch for Zero-Shot Learning
- 87.1 Introduction to zero-shot learning
- 87.2 Building zero-shot learning models with PyTorch
- 87.3 Generalizing to unseen classes without labeled data
Chapter 88: PyTorch for Multimodal Learning
- 88.1 Introduction to multimodal learning
- 88.2 Building multimodal models with PyTorch
- 88.3 Fusion techniques for combining different modalities
Chapter 89: PyTorch for Time Series Forecasting with Transformers and Attention
- 89.1 Attention mechanisms in transformer-based time series forecasting
- 89.2 Building transformer models with attention for time series forecasting
- 89.3 Enhancing performance with attention mechanisms
Chapter 90: PyTorch for Few-Shot Semantic Segmentation
- 90.1 Introduction to few-shot semantic segmentation
- 90.2 Building few-shot semantic segmentation models with PyTorch
- 90.3 Adapting to new semantic segmentation tasks with limited labeled data
Chapter 91: PyTorch for Active Learning in Computer Vision
- 91.1 Introduction to active learning in computer vision
- 91.2 Building active learning workflows for computer vision tasks with PyTorch
- 91.3 Selecting informative samples for annotation
Chapter 92: PyTorch for Semi-Supervised Learning in Natural Language Processing
- 92.1 Introduction to semi-supervised learning in NLP
- 92.2 Building semi-supervised learning models with PyTorch
- 92.3 Utilizing unlabeled data for improved performance in NLP tasks
Chapter 93: PyTorch for Explainable Deep Learning in Computer Vision
- 93.1 Interpretability techniques for deep learning in computer vision
- 93.2 Building interpretable models with PyTorch
- 93.3 Visualizing and explaining model predictions
Chapter 94: PyTorch for Dynamic Graph Neural Networks
- 94.1 Introduction to dynamic graph neural networks
- 94.2 Building dynamic graph models with PyTorch
- 94.3 Handling temporal and evolving graph structures
Chapter 95: PyTorch for Collaborative Filtering
- 95.1 Introduction to collaborative filtering
- 95.2 Building collaborative filtering models with PyTorch
- 95.3 Recommender systems and personalized recommendations
Chapter 96: PyTorch for Time Series Forecasting with Convolutional Neural Networks
- 96.1 Convolutional neural networks for time series forecasting
- 96.2 Building CNN models for time series forecasting with PyTorch
- 96.3 Capturing temporal patterns with convolutional filters
Chapter 97: PyTorch for Weakly Supervised Object Localization
- 97.1 Introduction to weakly supervised object localization
- 97.2 Building weakly supervised object localization models with PyTorch
- 97.3 Localizing objects with limited or no bounding box annotations
Chapter 98: PyTorch for Natural Language Processing with Transformers
- 98.1 Introduction to transformers in NLP
- 98.2 Building transformer models for NLP tasks with PyTorch
- 98.3 Transfer learning and fine-tuning of transformer models
Chapter 99: PyTorch for Few-Shot Learning with Meta-Learning
- 99.1 Introduction to few-shot learning with meta-learning
- 99.2 Building meta-learning models for few-shot learning with PyTorch
- 99.3 Adapting to new tasks with limited labeled data
Chapter 100: PyTorch for Time Series Anomaly Detection with Variational Autoencoders
- 100.1 Variational autoencoders for time series anomaly detection
- 100.2 Building variational autoencoder models for anomaly detection with PyTorch
- 100.3 Learning latent representations for anomaly detection in time series