[feature] adding orthogonal finetuning (OFT) to llama factory (#8623)

Co-authored-by: Zeju <zqiu@g003.internal.cluster.is.localnet>
Co-authored-by: Zeju <zqiu@login2.is.localnet>
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
Authored by Zeju Qiu on 2025-08-18 12:22:47 +02:00, committed by GitHub
parent 1ada15981a
commit 003a2acb1a
13 changed files with 375 additions and 47 deletions


@@ -111,8 +111,8 @@ def _verify_model_args(
raise ValueError("Adapter is only valid for the LoRA method.")
if model_args.quantization_bit is not None:
if finetuning_args.finetuning_type != "lora":
raise ValueError("Quantization is only compatible with the LoRA method.")
if finetuning_args.finetuning_type not in ["lora", "oft"]:
raise ValueError("Quantization is only compatible with the LoRA or OFT method.")
if finetuning_args.pissa_init:
raise ValueError("Please use scripts/pissa_init.py to initialize PiSSA for a quantized model.")