From d4e84b9a11eede64d3bd29e7f58df2fabb067b00 Mon Sep 17 00:00:00 2001
From: hoshi-hiyouga
Date: Fri, 26 Jul 2024 11:29:28 +0800
Subject: [PATCH] Update README.md

Former-commit-id: 1186ad53d43dace9dec335331dbe246f1c5a729b
---
 README.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/README.md b/README.md
index 8e41d832..14af3f46 100644
--- a/README.md
+++ b/README.md
@@ -48,7 +48,6 @@ Choose your path:
 
 - **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
 - **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
-- **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
 - **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
 - **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune and rsLoRA.
 