mirror of https://github.com/hiyouga/LLaMA-Factory.git
synced 2025-08-04 04:32:50 +08:00

Updated README with new information

Former-commit-id: 0531dac30d5cbee56b73e06230cd0a62928ee9ca
Parent: 5b8725399e
Commit: 13bf8b1f91
@ -520,7 +520,7 @@ use_cpu: false
```bash
deepspeed --num_gpus 8 src/train_bash.py \
    --deepspeed ds_config.json \
-    --ddp_timeout 180000000 \ # If the training data is too large, it is recommended to add the ddp_timeout command line option to prevent NCCL errors.
+    --ddp_timeout 180000000 \
    ... # arguments (same as above)
```
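The command above points `--deepspeed` at `ds_config.json`, a file this diff does not show. As an illustrative sketch only (these keys and values are assumptions about a typical setup, not the repository's actual file), a minimal ZeRO stage-2 DeepSpeed configuration could look like:

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "fp16": { "enabled": "auto" },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true
  }
}
```

The `"auto"` values let the launcher fill in batch-size and precision settings from the training arguments instead of hard-coding them in the JSON.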
@ -519,7 +519,7 @@ use_cpu: false
```bash
deepspeed --num_gpus 8 src/train_bash.py \
    --deepspeed ds_config.json \
-    --ddp_timeout 180000000 \ # If the training data is too large, it is recommended to add the ddp_timeout option to prevent NCCL errors.
+    --ddp_timeout 180000000 \
    ... # arguments (same as above)
```
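For scale: `--ddp_timeout` is given in seconds, and 180000000 seconds is several years, so the value above effectively disables the distributed-communication timeout. A minimal sketch, assuming the flag is converted to a `timedelta` for the process group (as Hugging Face Transformers' `TrainingArguments.ddp_timeout` does):

```python
from datetime import timedelta

# --ddp_timeout is given in seconds; assumption: the trainer converts it
# to a timedelta and passes it as the process-group timeout, so a huge
# value keeps slow data-preparation steps from tripping NCCL watchdogs.
ddp_timeout_seconds = 180000000
timeout = timedelta(seconds=ddp_timeout_seconds)

print(timeout.days)  # 2083 days, i.e. over 5 years
```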