Getting Started with DeepSpeed


pip install deepspeed
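After installing, DeepSpeed's bundled report tool can be used to verify the installation and show which ops are compatible with the local environment:

ds_report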

Hugging Face transformers is supported, via the --deepspeed flag plus a config file (see the sketch below);
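A minimal sketch of the transformers side, assuming a Trainer-based script; the deepspeed parameter of TrainingArguments takes the path to the DeepSpeed config file (the example scripts expose it as the --deepspeed CLI flag):

from transformers import TrainingArguments

# assumption: a Trainer-based training script; deepspeed= accepts the
# path to the config file (or an equivalent dict)
training_args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config.json",
)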

# wraps the model/optimizer into a DeepSpeed engine; also returns the
# (optional) training dataloader and lr scheduler
model_engine, optimizer, _, _ = deepspeed.initialize(args=cmd_args,
                                                     model=model,
                                                     model_parameters=params)

Distributed setup, mixed precision, and the like are all wrapped up inside deepspeed.initialize and the returned model_engine;
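cmd_args above typically comes from argparse; deepspeed.add_config_arguments attaches the launcher's own arguments (--deepspeed, --deepspeed_config, ...) to an existing parser. A minimal sketch:

import argparse
import deepspeed

parser = argparse.ArgumentParser(description='My training script')
parser.add_argument('--local_rank', type=int, default=-1,
                    help='local rank passed in by the deepspeed launcher')
# adds DeepSpeed's configuration arguments to the parser
parser = deepspeed.add_config_arguments(parser)
cmd_args = parser.parse_args()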

Delete any existing call to torch.distributed.init_process_group(...); DeepSpeed initializes the process group itself.

for step, batch in enumerate(data_loader):
    #forward() method
    loss = model_engine(batch)

    #runs backpropagation
    model_engine.backward(loss)

    #weight update
    model_engine.step()

Gradient averaging: handled automatically inside model_engine.backward;

Loss scaling: handled automatically when fp16 is enabled;

Learning rate scheduler: stepped automatically inside model_engine.step (see the sketch below);
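For the scheduler to be stepped automatically, it can either be passed to deepspeed.initialize or declared in the config file; a sketch of the config route using DeepSpeed's built-in WarmupLR scheduler (values are illustrative):

"scheduler": {
  "type": "WarmupLR",
  "params": {
    "warmup_min_lr": 0,
    "warmup_max_lr": 0.00015,
    "warmup_num_steps": 1000
  }
}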

Save & load: model, optimizer, and lr scheduler state are all saved/restored together (client_sd is user-defined state to carry alongside):

# load: restores model/optimizer/scheduler state and returns user state
_, client_sd = model_engine.load_checkpoint(args.load_dir, args.ckpt_id)
step = client_sd['step']
...
# save every save_interval steps; all ranks must call save_checkpoint,
# not just rank 0
if step % args.save_interval == 0:
    client_sd['step'] = step
    ckpt_id = loss.item()  # tag identifying this checkpoint
    model_engine.save_checkpoint(args.save_dir, ckpt_id, client_sd=client_sd)

Config file (e.g., a file named ds_config.json):

{
  "train_batch_size": 8,
  "gradient_accumulation_steps": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": true
}
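Note that train_batch_size must equal train_micro_batch_size_per_gpu × gradient_accumulation_steps × number of GPUs; e.g., the config above run on 8 GPUs implies a micro-batch of 1 per GPU (8 = 1 × 1 × 8).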

hostfile (compatible with the OpenMPI and Horovod formats; each line is a hostname plus its number of GPU slots):

worker-1 slots=4
worker-2 slots=4

Launch command:

deepspeed --hostfile=myhostfile <client_entry.py> <client args> \
  --deepspeed --deepspeed_config ds_config.json

--num_nodes: how many machines to run on;

--num_gpus: how many GPUs to run on;

--include: whitelist of nodes and GPU ids (see the example below); e.g. --include="worker-2:0,1"

--exclude: blacklist of nodes and GPU ids; e.g. --exclude="worker-2:0@worker-3:0,1"
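Putting these together, an illustrative launch restricted to two GPUs of one node (train.py is a placeholder for your entry script):

deepspeed --hostfile=myhostfile --include="worker-2:0,1" train.py \
  --deepspeed --deepspeed_config ds_config.json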

Environment variables:

Variables set this way are propagated to every node at launch time;

put them in a ".deepspeed_env" file, either in the working directory or in ~/; e.g.:

NCCL_IB_DISABLE=1
NCCL_SOCKET_IFNAME=eth0

Running the "deepspeed" command on one machine launches processes on all nodes;

Launching via mpirun is also supported, but the communication backend is still NCCL, not MPI (see the sketch below);
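A hedged sketch of an mpirun-style launch (exact MPI flags depend on the cluster; the mpi4py package is needed so DeepSpeed can discover the MPI environment):

mpirun -np 8 -hostfile myhostfile python train.py \
  --deepspeed --deepspeed_config ds_config.json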

Note:

CUDA_VISIBLE_DEVICES is not supported for selecting GPUs; use --include instead:

deepspeed --include localhost:1 ...
