diff --git a/README.md b/README.md
index d447fc65..beec708a 100644
--- a/README.md
+++ b/README.md
@@ -1,21 +1,31 @@

[](https://github.com/hiyouga/LLaMA-Factory/stargazers)
-[](LICENSE)
[](https://github.com/hiyouga/LLaMA-Factory/commits/main)
+[](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
+[](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
[](https://pypi.org/project/llamafactory/)
[](https://scholar.google.com/scholar?cites=12620864006390196564)
[](https://github.com/hiyouga/LLaMA-Factory/pulls)
-[](https://discord.gg/rKfvV9r9FK)
+
[](https://twitter.com/llamafactory_ai)
+[](https://discord.gg/rKfvV9r9FK)
+[](https://gitcode.com/zhengyaowei/LLaMA-Factory)
+
[](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
[](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
-[](https://gitcode.com/zhengyaowei/LLaMA-Factory)
-[](https://trendshift.io/repositories/4535)
+
+ Easily fine-tune 100+ large language models with zero-code CLI and Web UI
+
+
+
+
+
+
👋 Join our [WeChat group](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
@@ -71,6 +81,13 @@ Choose your path:
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, SwanLab, etc.
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
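+
+Since the API is OpenAI-style, it should work with standard OpenAI client libraries. A minimal sketch, assuming a server is already running locally on port 8000 (e.g. started with `llamafactory-cli api`) and using a placeholder model name:
+
+```python
+# Minimal sketch: query a locally served OpenAI-style API.
+# Assumes a server is listening at http://localhost:8000/v1.
+from openai import OpenAI
+
+client = OpenAI(base_url="http://localhost:8000/v1", api_key="empty")
+response = client.chat.completions.create(
+    model="test",  # placeholder; list served models via GET /v1/models
+    messages=[{"role": "user", "content": "Hello!"}],
+)
+print(response.choices[0].message.content)
+```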
+### Day-N Support for Fine-Tuning Cutting-Edge Models
+
+| Support Date | Model Name |
+| ------------ | ---------------------------------------------------------- |
+| Day 0 | Qwen2.5 / Qwen2-VL / QwQ / QvQ / InternLM3 / MiniCPM-o-2.6 |
+| Day 1 | Llama 3 / GLM-4 / PaliGemma2 |
+
## Benchmark
Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization, LLaMA Factory's QLoRA further reduces GPU memory usage.
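+
+As a rough, weights-only illustration of why 4-bit quantization saves memory (a back-of-the-envelope sketch; activations, KV cache, and optimizer states are ignored, and the 7B size is just an example):
+
+```python
+# Approximate memory held by model weights: params * bytes_per_param.
+params = 7e9  # e.g. a 7B-parameter model
+print(f"FP16:  {params * 2 / 2**30:.1f} GiB")    # ~13.0 GiB
+print(f"4-bit: {params * 0.5 / 2**30:.1f} GiB")  # ~3.3 GiB
+```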
@@ -804,6 +821,7 @@ If you have a project that should be incorporated, please contact via email or c
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way to build multi-agent LLM applications; supports model fine-tuning via LLaMA Factory.
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified library that supports long sequence SFT & DPO using ring attention.
+1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: An o1-like model fine-tuned by NovaSky AI at a very low cost.
diff --git a/README_zh.md b/README_zh.md
index b4e5f21d..c1b5d9e1 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -1,21 +1,32 @@

[](https://github.com/hiyouga/LLaMA-Factory/stargazers)
-[](LICENSE)
[](https://github.com/hiyouga/LLaMA-Factory/commits/main)
+[](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
+[](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
[](https://pypi.org/project/llamafactory/)
[](https://scholar.google.com/scholar?cites=12620864006390196564)
[](https://github.com/hiyouga/LLaMA-Factory/pulls)
-[](https://discord.gg/rKfvV9r9FK)
+
[](https://twitter.com/llamafactory_ai)
+[](https://discord.gg/rKfvV9r9FK)
+[](https://gitcode.com/zhengyaowei/LLaMA-Factory)
+
[](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
[](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
-[](https://gitcode.com/zhengyaowei/LLaMA-Factory)
-[](https://trendshift.io/repositories/4535)
+
+ Easily fine-tune 100+ large language models with zero-code CLI and Web UI
+
+
+
+
+
+
+
👋 Join our [WeChat group](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).
@@ -72,6 +83,13 @@ https://github.com/user-attachments/assets/e6ce34b0-52d5-4f3e-a830-592106c4c272
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, SwanLab, etc.
- **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.
+### Day-N Support for Fine-Tuning Cutting-Edge Models
+
+| Support Date | Model Name |
+| ------------ | ---------------------------------------------------------- |
+| Day 0 | Qwen2.5 / Qwen2-VL / QwQ / QvQ / InternLM3 / MiniCPM-o-2.6 |
+| Day 1 | Llama 3 / GLM-4 / PaliGemma2 |
+
## Benchmark
Compared to ChatGLM's official [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization, LLaMA Factory's QLoRA further reduces GPU memory usage.
@@ -805,6 +823,7 @@ swanlab_run_name: test_run # optional
1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: A low-code development tool for building multi-agent LLM applications; supports model fine-tuning via LLaMA Factory.
1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified library that supports long-sequence SFT & DPO training via ring attention.
+1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: An o1-like long-reasoning model fine-tuned by NovaSky AI at a very low cost.