From 823d7f5c8153ef15b4983af0227aaaa3d251b11a Mon Sep 17 00:00:00 2001
From: grok <42383434+NLPJCL@users.noreply.github.com>
Date: Wed, 23 Oct 2024 23:36:14 +0800
Subject: [PATCH 1/5] Update README_zh.md

Former-commit-id: 18a7f3ff76aa8aae66dd18db49ed3cd13345d5c9
---
 README_zh.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README_zh.md b/README_zh.md
index b1810b59..2a07496d 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -709,6 +709,8 @@ run_name: test_run # 可选
 1. **[AutoRE](https://github.com/THUDM/AutoRE)**：基于大语言模型的文档级关系抽取系统。
 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**：在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。
 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**：一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调.
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**：RAG-Retrieval 提供了全链路的RAG检索微调(train)和推理(infer)以及蒸馏(distill)代码。[[LLM在Reranker任务上的最佳实践?A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)
+

From 3e3969784f9381c0650362f1ecdea9d31705af63 Mon Sep 17 00:00:00 2001
From: grok <42383434+NLPJCL@users.noreply.github.com>
Date: Wed, 23 Oct 2024 23:49:47 +0800
Subject: [PATCH 2/5] Update README.md

update english readme

Former-commit-id: 7627ef09088ecbc234c08c0cb4743cbaee576b76
---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 1705fef1..6292a3d6 100644
--- a/README.md
+++ b/README.md
@@ -708,6 +708,8 @@ If you have a project that should be incorporated, please contact via email or c
 1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory.
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:RAG-Retrieval provides the full pipeline for RAG retrieval model fine-tuning, inference, and distillation code.[[Best practices for LLM on Reranker tasks: A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)
+

From c24d477bdb56a6e0b45cac5cfb18265a1bce3e43 Mon Sep 17 00:00:00 2001
From: grok <42383434+NLPJCL@users.noreply.github.com>
Date: Wed, 23 Oct 2024 23:50:56 +0800
Subject: [PATCH 3/5] Update README_zh.md

Former-commit-id: 6fcabb334920c3145c7820fee4cd84809585f50f
---
 README_zh.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README_zh.md b/README_zh.md
index 2a07496d..b77d32b0 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -709,7 +709,7 @@ run_name: test_run # 可选
 1. **[AutoRE](https://github.com/THUDM/AutoRE)**：基于大语言模型的文档级关系抽取系统。
 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**：在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。
 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**：一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调.
-1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**：RAG-Retrieval 提供了全链路的RAG检索微调(train)和推理(infer)以及蒸馏(distill)代码。[[LLM在Reranker任务上的最佳实践?A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**：RAG-Retrieval 提供了全链路的RAG检索模型微调(train)和推理(infer)以及蒸馏(distill)代码。[[LLM在Reranker任务上的最佳实践?A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)

From 233556d1c73fb1b77787f3e7e20b26a37519fbd8 Mon Sep 17 00:00:00 2001
From: hoshi-hiyouga
Date: Tue, 29 Oct 2024 21:18:15 +0800
Subject: [PATCH 4/5] Update README.md

Former-commit-id: a76478c127bc98749079fbc7e5aacd6e60648f37
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 6292a3d6..fdc931b7 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
 [![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
 [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
 [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
-[![Citation](https://img.shields.io/badge/citation-91-green)](#projects-using-llama-factory)
+[![Citation](https://img.shields.io/badge/citation-92-green)](#projects-using-llama-factory)
 [![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
 [![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
 [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
@@ -703,12 +703,12 @@ If you have a project that should be incorporated, please contact via email or c
 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
-1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
+1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.
 1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory.
-1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:RAG-Retrieval provides the full pipeline for RAG retrieval model fine-tuning, inference, and distillation code.[[Best practices for LLM on Reranker tasks: A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)

From b86b869187d9f3f80e06bda6c5b8b5c1dae43b95 Mon Sep 17 00:00:00 2001
From: hoshi-hiyouga
Date: Tue, 29 Oct 2024 21:19:17 +0800
Subject: [PATCH 5/5] Update README_zh.md

Former-commit-id: 08d9a03c30b7aebf74bef7f59e6aea229af2aeb3
---
 README_zh.md | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/README_zh.md b/README_zh.md
index b77d32b0..c36cabf1 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -4,7 +4,7 @@
 [![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
 [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
 [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
-[![Citation](https://img.shields.io/badge/citation-91-green)](#使用了-llama-factory-的项目)
+[![Citation](https://img.shields.io/badge/citation-92-green)](#使用了-llama-factory-的项目)
 [![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
 [![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
 [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
@@ -704,13 +704,12 @@ run_name: test_run # 可选
 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sumsimiao，基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。
 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT，基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。
 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**：MBTI性格大模型项目，根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。
-1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**：一个用于生成 Stable Diffusion 提示词的大型语言模型。[[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
+1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**：一个用于生成 Stable Diffusion 提示词的大型语言模型。[[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**：中文多模态医学大模型，基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。
 1. **[AutoRE](https://github.com/THUDM/AutoRE)**：基于大语言模型的文档级关系抽取系统。
 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**：在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。
 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**：一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调.
-1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**：RAG-Retrieval 提供了全链路的RAG检索模型微调(train)和推理(infer)以及蒸馏(distill)代码。[[LLM在Reranker任务上的最佳实践?A simple experiment report(with code)]](https://zhuanlan.zhihu.com/p/987727357)
-
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**：一个全链路 RAG 检索模型微调、推理和蒸馏代码库。[[blog]](https://zhuanlan.zhihu.com/p/987727357)