> [!WARNING]
> Merging LoRA weights into a quantized model is not supported.

> [!TIP]
> Use `--model_name_or_path path_to_model` alone to load the exported model or a model fine-tuned in full/freeze mode.
>
> Use `CUDA_VISIBLE_DEVICES=0`, `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
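
A minimal sketch of the quantization step these flags describe, assuming a `src/export_model.py` entry point and placeholder paths (only `CUDA_VISIBLE_DEVICES`, `--model_name_or_path`, and the two quantization flags come from the tips above):

```bash
# Hypothetical invocation; the entry point, template, and paths are
# illustrative assumptions, not prescribed by this README.
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_merged_model \
    --template default \
    --export_dir path_to_int4_model \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json
```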
Usage:

- `merge.sh`: merge the LoRA weights (a sketch of the underlying command follows this list)
- `quantize.sh`: quantize the merged model with AutoGPTQ (optional; must run after `merge.sh`)
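
A rough sketch of what `merge.sh` might wrap; the entry point, adapter path, and remaining flag values are assumptions for illustration and are not specified by this README:

```bash
# Hypothetical invocation; every flag value here is a placeholder.
CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
    --model_name_or_path path_to_base_model \
    --adapter_name_or_path path_to_lora_adapter \
    --finetuning_type lora \
    --template default \
    --export_dir path_to_merged_model
```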