Mirror of https://github.com/hiyouga/LLaMA-Factory.git (synced 2025-10-16 00:28:10 +08:00)

commit 8dab8d9831
parent fb4c5f3c91

update readme

Former-commit-id: d3c46cb126a9182be765341fe31c860d71430712
README.md
@@ -475,6 +475,9 @@ python src/export_model.py \
     --export_dir path_to_export
 ```
 
+> [!WARNING]
+> Merging LoRA weights into a GPTQ quantized model is not supported.
+
 ### API Demo
 
 ```bash
README_zh.md
@@ -475,6 +475,9 @@ python src/export_model.py \
     --export_dir path_to_export
 ```
 
+> [!WARNING]
+> 尚不支持 GPTQ 量化模型的 LoRA 权重合并及导出。
+
 ### API 服务
 
 ```bash
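For context on the warning added in both hunks: merging LoRA weights means adding the low-rank update matrices into the dense base weights, and a GPTQ model stores those base weights as packed, group-quantized integers, so there is nothing to fold the deltas into. Below is a minimal sketch of the merge step on an unquantized base model using the PEFT library; it is illustrative only, not the repository's export code, and path_to_llama_model, path_to_lora_checkpoint, and path_to_export are placeholder names.

```python
# Minimal, illustrative sketch of merging a LoRA adapter into an UNQUANTIZED base
# model with PEFT. All paths are placeholders, not values from this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path_to_llama_model", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path_to_lora_checkpoint")

# merge_and_unload() adds the low-rank LoRA deltas into the dense base weights.
# With a GPTQ base model the weights are packed integers, so this addition is not
# well defined, which is what the README warning above refers to.
merged = model.merge_and_unload()

merged.save_pretrained("path_to_export")
AutoTokenizer.from_pretrained("path_to_llama_model").save_pretrained("path_to_export")
```

If a quantized artifact is needed, the usual route is to merge on the full-precision weights first and quantize the merged model afterwards.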