PyTorch Distributed: Node, Worker, Rank and Other Basic Concepts

The basic parameters involved in distributed training are defined as follows:

Definitions

  1. Node - A physical instance or a container; maps to the unit that the job manager works with.

  2. Worker - A worker in the context of distributed training.

  3. WorkerGroup - The set of workers that execute the same function (e.g. trainers).

  4. LocalWorkerGroup - A subset of the workers in the worker group running on the same node.

  5. RANK - The rank of the worker within a worker group.

  6. WORLD_SIZE - The total number of workers in a worker group.

  7. LOCAL_RANK - The rank of the worker within a local worker group.

  8. LOCAL_WORLD_SIZE - The size of the local worker group.

  9. rdzv_id - A user-defined id that uniquely identifies the worker group for a job. This id is used by each node to join as a member of a particular worker group.

  10. rdzv_backend - The backend of the rendezvous (e.g. c10d). This is typically a strongly consistent key-value store.

  11. rdzv_endpoint - The rendezvous backend endpoint; usually in form <host>:<port>.

A Node runs LOCAL_WORLD_SIZE workers which comprise a LocalWorkerGroup. The union of all LocalWorkerGroups in the nodes in the job comprise the WorkerGroup.
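
Concretely, torchrun delivers these values to every worker through environment variables (RANK, LOCAL_RANK, WORLD_SIZE, LOCAL_WORLD_SIZE, plus MASTER_ADDR/MASTER_PORT for the default env:// initialization). Below is a minimal sketch of a worker script that reads them and joins the process group; the file name train.py is just a placeholder:

```python
# train.py -- minimal sketch; torchrun sets these environment
# variables for every worker it launches.
import os

import torch.distributed as dist


def main():
    rank = int(os.environ["RANK"])                          # worker index in the WorkerGroup
    local_rank = int(os.environ["LOCAL_RANK"])              # worker index within this node
    world_size = int(os.environ["WORLD_SIZE"])              # size of the WorkerGroup
    local_world_size = int(os.environ["LOCAL_WORLD_SIZE"])  # size of the LocalWorkerGroup

    # MASTER_ADDR/MASTER_PORT are also set by torchrun, so the default
    # env:// initialization needs no extra arguments.
    dist.init_process_group(backend="gloo")  # use "nccl" for GPU training

    print(f"rank {rank}/{world_size}, local_rank {local_rank}/{local_world_size}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```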

In plain terms:

Node: usually corresponds to one machine; the number of nodes is the number of machines.

Worker: a single training process.

WORLD_SIZE: the total number of training processes, usually equal to the combined number of GPUs across all machines (typically one training process per GPU).

RANK: the index of each worker, used to identify every training process globally (across all machines).

LOCAL_RANK: the index of a worker within a single machine; for example, on an 8-GPU machine the workers have LOCAL_RANK 0-7.
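
To make the RANK vs. LOCAL_RANK relationship concrete: with 2 machines of 8 GPUs each, WORLD_SIZE is 16, RANK runs from 0 to 15, and LOCAL_RANK runs from 0 to 7 on each machine. A small illustration of the mapping, assuming homogeneous nodes where torchrun assigns ranks contiguously per node (the node ordering itself is decided by the rendezvous):

```python
# Illustration only: assumes homogeneous nodes and contiguous
# per-node rank assignment, the common case with torchrun.
nnodes = 2
local_world_size = 8                    # workers (one per GPU) on each node
world_size = nnodes * local_world_size  # 16 workers in total

for node_rank in range(nnodes):
    for local_rank in range(local_world_size):
        rank = node_rank * local_world_size + local_rank
        print(f"node {node_rank}: local_rank={local_rank} -> rank={rank}")
```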

Summary:

A node (one machine) runs LOCAL_WORLD_SIZE workers, and these workers make up a LocalWorkerGroup (a grouping concept);

the LocalWorkerGroups across all machines together form the WorkerGroup.

P.S.: the "Local" prefix always refers to concepts scoped to a single machine. When there is only one machine, the Local values are identical to their non-Local counterparts (LOCAL_RANK == RANK, LOCAL_WORLD_SIZE == WORLD_SIZE).
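
Putting the summary into practice, here is a sketch of launching the same 2-node x 8-GPU job with torchrun (the host name, port, job id, and script name are placeholders). The same command is run on every node, and the rendezvous identified by --rdzv_id gathers all LocalWorkerGroups into one WorkerGroup:

```bash
# Run on each of the 2 nodes; 2 x 8 workers gives WORLD_SIZE=16.
# node1.example.com:29400, job_42, and train.py are placeholders.
torchrun \
    --nnodes=2 \
    --nproc_per_node=8 \
    --rdzv_id=job_42 \
    --rdzv_backend=c10d \
    --rdzv_endpoint=node1.example.com:29400 \
    train.py
```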

Reference:

torchrun (Elastic Launch), PyTorch 2.1 documentation
