support autogptq in llama board #246

This commit is contained in:
hiyouga
2023-12-16 16:31:30 +08:00
parent 93f64ce9a8
commit 71389be37c
14 changed files with 1032 additions and 65 deletions


@@ -482,7 +482,7 @@ python src/export_model.py \
 > Merging LoRA weights into a quantized model is not supported.
 
 > [!TIP]
-> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/wiki_demo.txt` to quantize the model.
+> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model.
 
 ### API Demo