Update README.md

Former-commit-id: f8d701cd3ce2e56f95b4f5439b8b48d5b62e0d2b
hiyouga 2024-06-13 16:02:21 +08:00
parent 530165d9a5
commit f4f315fd11


@@ -97,25 +97,25 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
 #### Supervised Fine-Tuning with 4/8-bit Bitsandbytes Quantization (Recommended)
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_bitsandbytes.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_bitsandbytes.yaml
 ```
 #### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
 ```
 #### Supervised Fine-Tuning with 4-bit AWQ Quantization
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
 ```
 #### Supervised Fine-Tuning with 2-bit AQLM Quantization
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
 ```
 ### Full-Parameter Fine-Tuning
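The change above drops the hard-coded `CUDA_VISIBLE_DEVICES=0` prefix from the QLoRA example commands. If you still want to pin a run to one GPU, the variable can be set per invocation, which masks which devices the process can see without affecting other shells. A minimal sketch (using `sh -c 'echo …'` as a stand-in for the actual `llamafactory-cli train` command):

```shell
# Setting CUDA_VISIBLE_DEVICES inline scopes the GPU mask to this one command.
# Replace the echo with e.g.:
#   llamafactory-cli train examples/train_qlora/llama3_lora_sft_bitsandbytes.yaml
CUDA_VISIBLE_DEVICES=0 sh -c 'echo "visible GPUs: $CUDA_VISIBLE_DEVICES"'
```

Because the assignment is part of the command line, the variable is not exported into the surrounding shell session; multi-GPU runs remain unaffected.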