Mirror of https://github.com/hiyouga/LLaMA-Factory.git (synced 2025-10-14 23:58:11 +08:00)

Commit 304a2efec8: update readme
Parent: 322331df51
Former-commit-id: 568cc1d33c3d202e6430b68e0bcb2772aa6b0aa2
@@ -351,10 +351,9 @@ To utilize Ascend NPU devices for (distributed) training and inference, you need

| torch-npu  | 2.2.0  | 2.2.0  |
| deepspeed  | 0.13.2 | 0.13.2 |

> [!NOTE]
> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
>
> If you cannot run model inference on NPU devices, try setting `do_sample: false` in the configurations.

</details>
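As a minimal sketch of the hints above (the actual launch command is omitted, since it depends on the training or inference recipe you follow), device selection on Ascend NPU looks like this:

```bash
# Expose only the first NPU to the process; use a comma-separated list
# (e.g. 0,1,2,3) for multi-device runs.
export ASCEND_RT_VISIBLE_DEVICES=0

# If generation misbehaves on NPU, add the following line to the
# configuration used for inference:
#   do_sample: false
```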
@@ -351,10 +351,9 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl

| torch-npu  | 2.2.0  | 2.2.0  |
| deepspeed  | 0.13.2 | 0.13.2 |

> [!NOTE]
> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.
>
> If inference does not work properly, try setting `do_sample: false`.

</details>
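For reference, a minimal sketch of installing the two pinned requirements from the table above via pip (package names as published on PyPI):

```bash
# Install the Ascend NPU plugin for PyTorch and DeepSpeed at the versions
# listed above; torch-npu 2.2.0 expects a matching torch 2.2.0 installation.
pip install torch-npu==2.2.0 deepspeed==0.13.2
```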