From 3f50d572ed1e1180e1e50cfa1093ed03a5f9d4b9 Mon Sep 17 00:00:00 2001
From: 0xez <110299556+0xez@users.noreply.github.com>
Date: Thu, 21 Mar 2024 22:14:48 +0800
Subject: [PATCH 1/2] Update README.md, fix the release date of the paper

Former-commit-id: 675ba41562d812f169c6b2775e57a3f38fc8deee
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7bff140c..23c351df 100644
--- a/README.md
+++ b/README.md
@@ -68,7 +68,7 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/

 ## Changelog

-[23/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
+[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!

 [24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See `examples/fsdp_qlora` for usage.

From 028a8bc5325e1a1a3e240d877437db2da31a81e4 Mon Sep 17 00:00:00 2001
From: 0xez <110299556+0xez@users.noreply.github.com>
Date: Fri, 22 Mar 2024 10:41:17 +0800
Subject: [PATCH 2/2] Update README_zh.md, fix the release date of the paper

Former-commit-id: be0360303d2e7275e14586dc503a9581f80ce303
---
 README_zh.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README_zh.md b/README_zh.md
index c128660e..9a7d1eae 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -68,7 +68,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd

 ## 更新日志

-[23/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看！
+[24/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看！

 [24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 `examples/fsdp_qlora`。