Mirror of https://github.com/hiyouga/LLaMA-Factory.git (synced 2025-08-02 19:52:50 +08:00)
update readme
Former-commit-id: b96d84835f9237e7277bb86395e448348473d20f
Commit: be1114bb43 (parent: 943779eabc)
```diff
@@ -351,10 +351,9 @@ To utilize Ascend NPU devices for (distributed) training and inference, you need
 | torch-npu   | 2.2.0   | 2.2.0   |
 | deepspeed   | 0.13.2  | 0.13.2  |
 
 > [!NOTE]
-Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
->
-If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
+> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+> If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
 
 </details>
```
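The note in the hunk above boils down to one environment variable: on Ascend hardware, `ASCEND_RT_VISIBLE_DEVICES` plays the role that `CUDA_VISIBLE_DEVICES` plays on NVIDIA GPUs. A minimal sketch of selecting devices before launching a job (the trailing `llamafactory-cli` invocation is illustrative, not part of this commit):

```shell
# Restrict the process to NPUs 0 and 1 before launching training or inference;
# on Ascend devices this variable is read instead of CUDA_VISIBLE_DEVICES.
export ASCEND_RT_VISIBLE_DEVICES=0,1
echo "$ASCEND_RT_VISIBLE_DEVICES"

# Then launch as usual, e.g. (hypothetical config path):
#   llamafactory-cli train your_npu_config.yaml
```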
|
The same fix, applied to the Chinese README (note lines translated here):

```diff
@@ -351,10 +351,9 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
 | torch-npu   | 2.2.0   | 2.2.0   |
 | deepspeed   | 0.13.2  | 0.13.2  |
 
 > [!NOTE]
-Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.
->
-If inference does not work properly, try setting `do_sample: false`.
+> Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.
+> If inference does not work properly, try setting `do_sample: false`.
 
 </details>
```
|