support quantization in export model

hiyouga
2023-12-15 23:44:50 +08:00
parent 87ef3f47b5
commit 3524aa1e58
9 changed files with 120 additions and 32 deletions

@@ -479,7 +479,10 @@ python src/export_model.py \
 ```
 > [!WARNING]
-> Merging LoRA weights into a GPTQ quantized model is not supported.
+> Merging LoRA weights into a quantized model is not supported.
+> [!TIP]
+> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/wiki_demo.txt` to quantize the model.
 ### API Demo
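
The two `--export_quantization_*` flags added here attach to the existing `src/export_model.py` command. A hypothetical invocation might look like the sketch below; only the two quantization flags come from this commit, while the model path, output directory, and remaining flags are placeholders assumed from the script's usual interface:

```shell
# Sketch only: paths are placeholders, and all flags except the two
# --export_quantization_* ones are assumptions about the export script.
python src/export_model.py \
    --model_name_or_path path_to_model \
    --template default \
    --export_dir path_to_export \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/wiki_demo.txt
```

Per the warning above, this quantizes the exported model itself; it does not make it possible to merge LoRA weights into an already-quantized base model.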