mirror of
https://github.com/hiyouga/LLaMA-Factory.git
synced 2025-10-14 23:58:11 +08:00
[assets] update windows installation (#8042)
This commit is contained in:
parent dc080399c6
commit 712c57f3b4
README.md (14)
@@ -509,6 +509,20 @@ uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora
<details><summary>For Windows users</summary>
#### Install PyTorch
You need to manually install the GPU version of PyTorch on the Windows platform. Please refer to the [official website](https://pytorch.org/get-started/locally/) and use the following commands to install PyTorch with CUDA support and verify the installation:
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
If you see `True` then you have successfully installed PyTorch with CUDA support.
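If you want more detail than the boolean check, the one-liner below (an optional extra, using only standard PyTorch attributes) prints the installed torch version, the CUDA version it was built against, and the name of the detected GPU:

```bash
# Optional extra diagnostics: show the torch version, its CUDA build, and the first visible GPU
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))"
```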
Try setting `dataloader_num_workers: 0` if you encounter a `Can't pickle local object` error.
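A minimal sketch of applying that workaround, assuming you train with a YAML config such as `examples/train_lora/llama3_lora_sft.yaml` (an assumed example path; adjust it to your own config, or simply edit the value in the file by hand):

```bash
# Assumes the key is not already present in the config; otherwise edit its value in place.
echo "dataloader_num_workers: 0" >> examples/train_lora/llama3_lora_sft.yaml
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
```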
#### Install BitsAndBytes
If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
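As a concrete sketch (the wheel file name below is illustrative; pick the asset from the linked release page that matches your CUDA version and Python environment):

```bash
# Illustrative wheel name; substitute the release asset that matches your CUDA version.
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
```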
README_zh.md (15)
@@ -494,9 +494,22 @@ uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora
</details>
<details><summary>Guide for Windows users</summary>
#### Install PyTorch
On the Windows platform, you need to manually install the GPU version of PyTorch. You can refer to the [official website](https://pytorch.org/get-started/locally/) and the following commands to install PyTorch and verify that it is installed correctly.
```bash
pip uninstall torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
python -c "import torch; print(torch.cuda.is_available())"
```
If you see `True`, the installation was successful.
If you encounter an error like `Can't pickle local object`, set `dataloader_num_workers: 0`.
#### Install BitsAndBytes
To enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.