From 7c0d1a5ff15e32312269c26bec666da1955e7466 Mon Sep 17 00:00:00 2001
From: Byron Hsu
Date: Fri, 30 Aug 2024 17:16:16 -0700
Subject: [PATCH] Add Liger Kernel link

Former-commit-id: b8a9cb554efc4c2dedacb48833c5152d2cd2fec5
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 33e3fe2c..829dc4a7 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,7 @@ Choose your path:
 - **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
 - **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
 - **Advanced algorithms**: GaLore, BAdam, Adam-mini, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning.
-- **Practical tricks**: FlashAttention-2, Unsloth, Liger Kernel, RoPE scaling, NEFTune and rsLoRA.
+- **Practical tricks**: FlashAttention-2, Unsloth, [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
 - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
 - **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.