Mirror of https://github.com/hiyouga/LLaMA-Factory.git, synced 2025-08-03 04:02:49 +08:00

Commit 4b90f04c1f: "fix conflict"
Former-commit-id: d956041640d9abc5e59919a227d27270fb513a7e
.github/SECURITY.md (vendored, 2 changed lines)

@@ -1,6 +1,6 @@
 # Reporting Security Issues

-To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/electron/electron/security/advisories/new) tab.
+To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/hiyouga/LLaMA-Factory/security/advisories/new) tab.

 We will send a response indicating the next steps in handling your report. After the initial reply to your report, the security team will keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance.
Dockerfile

@@ -6,9 +6,9 @@ COPY requirements.txt /app/

 RUN pip install -r requirements.txt

 COPY . /app/
-RUN pip install -e .[deepspeed,metrics,bitsandbytes,qwen]
+RUN pip install -e .[metrics,bitsandbytes,qwen]

 VOLUME [ "/root/.cache/huggingface/", "/app/data", "/app/output" ]
 EXPOSE 7860

-CMD [ "python", "src/train_web.py" ]
+CMD [ "llamafactory-cli", "webui" ]
README.md (647 changed lines)

@@ -3,9 +3,8 @@

 [](https://github.com/hiyouga/LLaMA-Factory/stargazers)
 [](LICENSE)
 [](https://github.com/hiyouga/LLaMA-Factory/commits/main)
-[](https://pypi.org/project/llmtuner/)
-[](https://pypi.org/project/llmtuner/)
+[](https://pypi.org/project/llamafactory/)
 [](#projects-using-llama-factory)
 [](https://github.com/hiyouga/LLaMA-Factory/pulls)
 [](https://discord.gg/rKfvV9r9FK)
 [](https://twitter.com/llamafactory_ai)
@@ -13,6 +12,8 @@

 [](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
 [](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)

+[](https://trendshift.io/repositories/4535)

 👋 Join our [WeChat](assets/wechat.jpg).

 \[ English | [中文](README_zh.md) \]
@@ -43,17 +44,17 @@ Choose your path:

 ## Features

-- **Various models**: LLaMA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
+- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
-- **Integrated methods**: (Continuous) pre-training, supervised fine-tuning, reward modeling, PPO and DPO.
+- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO and ORPO.
 - **Scalable resources**: 32-bit full-tuning, 16-bit freeze-tuning, 16-bit LoRA and 2/4/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8.
-- **Advanced algorithms**: GaLore, DoRA, LongLoRA, LLaMA Pro, LoRA+, LoftQ and Agent tuning.
+- **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and Agent tuning.
 - **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune and rsLoRA.
 - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
 - **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker.

 ## Benchmark

-Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA-Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization technique, LLaMA-Factory's QLoRA further improves the efficiency regarding the GPU memory.
+Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization technique, LLaMA Factory's QLoRA further improves the efficiency regarding the GPU memory.

 [benchmark figure]
@@ -62,51 +63,69 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/

 - **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024)
 - **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
 - **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
-- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA-Factory's LoRA tuning.
+- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning.

 </details>

 ## Changelog

-[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
+[24/05/18] We supported **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage.

-[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See `examples/fsdp_qlora` for usage.
+[24/05/14] We supported training and inference on the Ascend NPU devices. Check [installation](#installation) section for details.

-[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. Try `loraplus_lr_ratio=16.0` to enable LoRA+ algorithm.
+[24/05/13] We supported fine-tuning the **Yi-1.5** series models.

-[24/03/07] We supported gradient low-rank projection (**[GaLore](https://arxiv.org/abs/2403.03507)**) algorithm. Try `--use_galore` to use the memory-efficient optimizer.
-
-[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `--infer_backend vllm` to enjoy **270%** inference speed. (LoRA is not yet supported, merge it first.)
-
 <details><summary>Full Changelog</summary>

-[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `--use_dora` to activate DoRA training.
+[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage.

-[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See `examples/extras/llama_pro` for usage.
+[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face, check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
+
+[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage.
+
+[24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)**. See [examples](examples/README.md) for usage.
+
+[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2, more benchmarks can be found in [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).
+
+[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage.
+
+[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
+
+[24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage.
+
+[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage.
+
+[24/03/07] We supported gradient low-rank projection (**[GaLore](https://arxiv.org/abs/2403.03507)**) algorithm. See [examples](examples/README.md) for usage.
+
+[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed.
+
+[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training.
+
+[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage.
+
 [24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.

-[24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `--dataset glaive_toolcall`.
+[24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `dataset: glaive_toolcall`.

-[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `--use_unsloth` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
+[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `use_unsloth: true` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.

 [23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement).

-[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#use-modelscope-hub-optional) for usage.
+[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#download-from-modelscope-hub) for usage.

-[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `--neftune_noise_alpha` argument to activate NEFTune, e.g., `--neftune_noise_alpha 5`.
+[23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `neftune_noise_alpha: 5` argument to activate NEFTune.

-[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `--shift_attn` argument to enable shift short attention.
+[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `shift_attn: true` argument to enable shift short attention.

-[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [this example](#evaluation) to evaluate your models.
+[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage.

-[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `--flash_attn` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs.
+[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs.

-[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `--rope_scaling linear` argument in training and `--rope_scaling dynamic` argument at inference to extrapolate the position embeddings.
+[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `rope_scaling: linear` argument in training and `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings.

-[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [this example](#dpo-training) to train your models.
+[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage.

-[23/07/31] We supported **dataset streaming**. Try `--streaming` and `--max_steps 10000` arguments to load your dataset in streaming mode.
+[23/07/31] We supported **dataset streaming**. Try `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode.

 [23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.
@@ -118,43 +137,49 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/

 [23/06/22] We aligned the [demo API](src/api_demo.py) with the [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format where you can insert the fine-tuned model in **arbitrary ChatGPT-based applications**.

-[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). Try `--quantization_bit 4/8` argument to work with quantized models.
+[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage.

 </details>

 ## Supported Models

 | Model | Model size | Default module | Template |
-| -------------------------------------------------------- | --------------------------- | ----------------- | --------- |
+| -------------------------------------------------------- | -------------------------------- | ----------------- | --------- |
 | [Baichuan2](https://huggingface.co/baichuan-inc) | 7B/13B | W_pack | baichuan2 |
-| [BLOOM](https://huggingface.co/bigscience/bloom) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
+| [BLOOM](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
-| [BLOOMZ](https://huggingface.co/bigscience/bloomz) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
+| [BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
-| [ChatGLM3](https://huggingface.co/THUDM/chatglm3-6b) | 6B | query_key_value | chatglm3 |
+| [ChatGLM3](https://huggingface.co/THUDM) | 6B | query_key_value | chatglm3 |
-| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B | q_proj,v_proj | deepseek |
+| [Command-R](https://huggingface.co/CohereForAI) | 35B/104B | q_proj,v_proj | cohere |
-| [Falcon](https://huggingface.co/tiiuae) | 7B/40B/180B | query_key_value | falcon |
+| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | q_proj,v_proj | deepseek |
-| [Gemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
+| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | query_key_value | falcon |
+| [Gemma/CodeGemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
 | [InternLM2](https://huggingface.co/internlm) | 7B/20B | wqkv | intern2 |
 | [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
 | [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
-| [Mistral](https://huggingface.co/mistralai) | 7B | q_proj,v_proj | mistral |
+| [LLaMA-3](https://huggingface.co/meta-llama) | 8B/70B | q_proj,v_proj | llama3 |
-| [Mixtral](https://huggingface.co/mistralai) | 8x7B | q_proj,v_proj | mistral |
+| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | q_proj,v_proj | vicuna |
-| [OLMo](https://huggingface.co/allenai) | 1B/7B | att_proj | olmo |
+| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | q_proj,v_proj | mistral |
+| [OLMo](https://huggingface.co/allenai) | 1B/7B | q_proj,v_proj | - |
 | [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | q_proj,v_proj | - |
+| [Phi-3](https://huggingface.co/microsoft) | 3.8B | qkv_proj | phi |
 | [Qwen](https://huggingface.co/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
-| [Qwen1.5](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/72B | q_proj,v_proj | qwen |
+| [Qwen1.5 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/32B/72B/110B | q_proj,v_proj | qwen |
 | [StarCoder2](https://huggingface.co/bigcode) | 3B/7B/15B | q_proj,v_proj | - |
 | [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | q_proj,v_proj | xverse |
-| [Yi](https://huggingface.co/01-ai) | 6B/9B/34B | q_proj,v_proj | yi |
+| [Yi (1/1.5)](https://huggingface.co/01-ai) | 6B/9B/34B | q_proj,v_proj | yi |
+| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | q_proj,v_proj | yi_vl |
 | [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |

 > [!NOTE]
-> **Default module** is used for the `--lora_target` argument, you can use `--lora_target all` to specify all the available modules.
+> **Default module** is used for the `--lora_target` argument, you can use `--lora_target all` to specify all the available modules for better convergence.
 >
-> For the "base" models, the `--template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "chat" models.
+> For the "base" models, the `--template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
+>
+> Remember to use the **SAME** template in training and inference.

-Please refer to [constants.py](src/llmtuner/extras/constants.py) for a full list of models we supported.
+Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported.

-You also can add a custom chat template to [template.py](src/llmtuner/data/template.py).
+You also can add a custom chat template to [template.py](src/llamafactory/data/template.py).

 ## Supported Training Approaches
@@ -165,9 +190,8 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ

 | Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 | PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 | DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| ORPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-
-> [!NOTE]
-> Use `--quantization_bit 4` argument to enable QLoRA.

 ## Provided Datasets
@@ -187,12 +211,12 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ

 <details><summary>Supervised fine-tuning datasets</summary>

+- [Identity (en&zh)](data/identity.json)
 - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
-- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
+- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
 - [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
-- [Self Cognition (zh)](data/self_cognition.json)
+- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
-- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
+- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
-- [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
 - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
 - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
 - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)

@@ -201,7 +225,6 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ

 - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
 - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
 - [UltraChat (en)](https://github.com/thunlp/UltraChat)
-- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
 - [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
 - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
 - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
@@ -214,15 +237,17 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ

 - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
 - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
 - [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
-- [Ad Gen (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
+- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
 - [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
 - [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
 - [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
 - [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
 - [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
 - [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
-- [Glaive Function Calling V2 (en)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
 - [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
+- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
+- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
+- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
 - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
 - [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
 - [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
@@ -237,17 +262,15 @@ You also can add a custom chat template to [template.py](src/llmtuner/data/templ

 <details><summary>Preference datasets</summary>

+- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
+- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
 - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
-- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
-- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
-- [Orca DPO (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
 - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
 - [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
+- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)

 </details>

-Please refer to [data/README.md](data/README.md) for details.
-
 Some datasets require confirmation before using them, so we recommend logging in with your Hugging Face account using these commands.

 ```bash
@@ -261,54 +284,55 @@ huggingface-cli login

 | ------------ | ------- | --------- |
 | python | 3.8 | 3.10 |
 | torch | 1.13.1 | 2.2.0 |
-| transformers | 4.37.2 | 4.39.1 |
+| transformers | 4.37.2 | 4.40.1 |
-| datasets | 2.14.3 | 2.17.1 |
+| datasets | 2.14.3 | 2.19.1 |
-| accelerate | 0.27.2 | 0.28.0 |
+| accelerate | 0.27.2 | 0.30.0 |
 | peft | 0.9.0 | 0.10.0 |
-| trl | 0.8.1 | 0.8.1 |
+| trl | 0.8.1 | 0.8.6 |

 | Optional | Minimum | Recommend |
 | ------------ | ------- | --------- |
 | CUDA | 11.6 | 12.2 |
 | deepspeed | 0.10.0 | 0.14.0 |
-| bitsandbytes | 0.39.0 | 0.43.0 |
+| bitsandbytes | 0.39.0 | 0.43.1 |
-| flash-attn | 2.3.0 | 2.5.6 |
+| vllm | 0.4.0 | 0.4.2 |
+| flash-attn | 2.3.0 | 2.5.8 |

 ### Hardware Requirement

 \* *estimated*

-| Method | Bits | 7B | 13B | 30B | 70B | 8x7B |
-| ------ | ---- | ----- | ----- | ----- | ------ | ------ |
-| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 900GB |
-| Full | 16 | 60GB | 120GB | 300GB | 600GB | 400GB |
-| GaLore | 16 | 16GB | 32GB | 64GB | 160GB | 120GB |
-| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 160GB |
-| LoRA | 16 | 16GB | 32GB | 64GB | 160GB | 120GB |
-| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 60GB |
-| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 30GB |
-| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 18GB |
+| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |
+| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |
+| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
+| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
+| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
+| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
+| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
+| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
+| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |

 ## Getting Started

-### Data Preparation (optional)
+### Installation

-Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use a single `.json` file or a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) with multiple files to create a custom dataset.
-
-> [!NOTE]
-> Please update `data/dataset_info.json` to use your custom dataset. About the format of this file, please refer to `data/README.md`.
-
-### Dependence Installation (optional)
+> [!IMPORTANT]
+> Installation is mandatory.

 ```bash
-git clone https://github.com/hiyouga/LLaMA-Factory.git
-conda create -n llama_factory python=3.10
-conda activate llama_factory
+git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
 cd LLaMA-Factory
-pip install -r requirements.txt
+pip install -e .[torch,metrics]
 ```

-If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you will be required to install a pre-built version of `bitsandbytes` library, which supports CUDA 11.1 to 12.2, please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.
+Extra dependencies available: torch, metrics, deepspeed, bitsandbytes, vllm, galore, badam, gptq, awq, aqlm, qwen, modelscope, quality
+
+> [!TIP]
+> Use `pip install --no-deps -e .` to resolve package conflicts.

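The bracketed extras compose, so a hedged sketch of a heavier install may help; the particular combination below is illustrative rather than prescribed by this README:

```bash
# Sketch: extend the default install with DeepSpeed and bitsandbytes
# support. Quote the spec so shells like zsh do not glob the brackets.
pip install -e ".[torch,metrics,deepspeed,bitsandbytes]"
```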
+<details><summary>For Windows users</summary>
+
+If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of `bitsandbytes` library, which supports CUDA 11.1 to 12.2, please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version.

 ```bash
 pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
@@ -316,378 +340,130 @@

 To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.

-### Use ModelScope Hub (optional)
+</details>

-If you have trouble with downloading models and datasets from Hugging Face, you can use LLaMA-Factory together with ModelScope in the following manner.
+<details><summary>For Ascend NPU users</summary>
+
+To utilize Ascend NPU devices for (distributed) training and inference, you need to install the **[torch-npu](https://gitee.com/ascend/pytorch)** library and the **[Ascend CANN Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**.
+
+| Requirement | Minimum | Recommend |
+| ------------ | ------- | --------- |
+| CANN | 8.0.RC1 | 8.0.RC1 |
+| torch | 2.2.0 | 2.2.0 |
+| torch-npu | 2.2.0 | 2.2.0 |
+| deepspeed | 0.13.2 | 0.13.2 |
+
+Docker image:
+
+- 32GB: [Download page](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html)
+- 64GB: Coming soon
+
+Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+
+If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations.
+
+</details>

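As a rough sketch of the device-selection advice above (the config path is borrowed from the Quickstart section below; substitute your own):

```bash
# Illustrative: run training on NPUs 0 and 1. ASCEND_RT_VISIBLE_DEVICES
# plays the role that CUDA_VISIBLE_DEVICES plays on NVIDIA GPUs.
ASCEND_RT_VISIBLE_DEVICES=0,1 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
```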
+### Data Preparation
+
+Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use datasets on HuggingFace / ModelScope hub or load the dataset in local disk.
+
+> [!NOTE]
+> Please update `data/dataset_info.json` to use your custom dataset.

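For orientation, a hypothetical `data/dataset_info.json` entry for a local alpaca-style file; the field and column names below are assumptions to be checked against [data/README.md](data/README.md):

```bash
# Print an illustrative registration entry for a custom dataset; merge it
# into data/dataset_info.json by hand. All names here are placeholders.
cat <<'EOF'
{
  "my_dataset": {
    "file_name": "my_data.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}
EOF
```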
+### Quickstart
+
+Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.

 ```bash
-export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
 ```

-Then you can train the corresponding model by specifying a model ID of the ModelScope Hub. (find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models))
+See [examples/README.md](examples/README.md) for advanced usage (including distributed training).

-```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --model_name_or_path modelscope/Llama-2-7b-ms \
-    ... # arguments (same as below)
-```
+> [!TIP]
+> Use `llamafactory-cli help` to show help information.

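To make the quickstart concrete, here is a minimal sketch of what a config like `llama3_lora_sft.yaml` could contain; the key names mirror CLI arguments mentioned elsewhere in this README, and the values are placeholders rather than the shipped file:

```bash
# Write a hypothetical LoRA SFT config and launch it; consult
# examples/README.md for the authoritative files.
cat > my_llama3_lora_sft.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # assumed model ID
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: identity
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
fp16: true
EOF
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train my_llama3_lora_sft.yaml
```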
-LLaMA Board also supports using the models and datasets on the ModelScope Hub.
+### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))

-```bash
-CUDA_VISIBLE_DEVICES=0 USE_MODELSCOPE_HUB=1 python src/train_web.py
-```
-
-### Train on a single GPU
-
 > [!IMPORTANT]
-> If you want to train models on multiple GPUs, please refer to [Distributed Training](#distributed-training).
+> LLaMA Board GUI only supports training on a single GPU.

-#### LLaMA Board GUI
+#### Use local environment

 ```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_web.py
+CUDA_VISIBLE_DEVICES=0 GRADIO_SHARE=1 llamafactory-cli webui
 ```

-#### Pre-Training
+<details><summary>For Alibaba Cloud PAI or AutoDL users</summary>
+
+If you encountered display problems in LLaMA Board on Alibaba Cloud PAI, try using the following command to set environment variables before starting LLaMA Board:

 ```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --stage pt \
-    --do_train \
-    --model_name_or_path path_to_llama_model \
-    --dataset wiki_demo \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir path_to_pt_checkpoint \
-    --overwrite_cache \
-    --per_device_train_batch_size 4 \
-    --gradient_accumulation_steps 4 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --save_steps 1000 \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --plot_loss \
-    --fp16
+export GRADIO_SERVER_PORT=7860 GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
 ```

-#### Supervised Fine-Tuning
+If you are using AutoDL, please install a specific version of Gradio:

 ```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
+pip install gradio==4.10.0
-    --stage sft \
-    --do_train \
-    --model_name_or_path path_to_llama_model \
-    --dataset alpaca_gpt4_en \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir path_to_sft_checkpoint \
-    --overwrite_cache \
-    --per_device_train_batch_size 4 \
-    --gradient_accumulation_steps 4 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --save_steps 1000 \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --plot_loss \
-    --fp16
-```
-
-#### Reward Modeling
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --stage rm \
-    --do_train \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_sft_checkpoint \
-    --create_new_adapter \
-    --dataset comparison_gpt4_en \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir path_to_rm_checkpoint \
-    --per_device_train_batch_size 2 \
-    --gradient_accumulation_steps 4 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --save_steps 1000 \
-    --learning_rate 1e-5 \
-    --num_train_epochs 1.0 \
-    --plot_loss \
-    --fp16
-```
-
-#### PPO Training
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --stage ppo \
-    --do_train \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_sft_checkpoint \
-    --create_new_adapter \
-    --dataset alpaca_gpt4_en \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --reward_model path_to_rm_checkpoint \
-    --output_dir path_to_ppo_checkpoint \
-    --per_device_train_batch_size 2 \
-    --gradient_accumulation_steps 4 \
-    --lr_scheduler_type cosine \
-    --top_k 0 \
-    --top_p 0.9 \
-    --logging_steps 10 \
-    --save_steps 1000 \
-    --learning_rate 1e-5 \
-    --num_train_epochs 1.0 \
-    --plot_loss \
-    --fp16
-```
-
-> [!TIP]
-> Use `--adapter_name_or_path path_to_sft_checkpoint,path_to_ppo_checkpoint` to infer the fine-tuned model.
-
-> [!WARNING]
-> Use `--per_device_train_batch_size=1` for LLaMA-2 models in fp16 PPO training.
-
-#### DPO Training
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --stage dpo \
-    --do_train \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_sft_checkpoint \
-    --create_new_adapter \
-    --dataset comparison_gpt4_en \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir path_to_dpo_checkpoint \
-    --per_device_train_batch_size 2 \
-    --gradient_accumulation_steps 4 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --save_steps 1000 \
-    --learning_rate 1e-5 \
-    --num_train_epochs 1.0 \
-    --plot_loss \
-    --fp16
-```
-
-> [!TIP]
-> Use `--adapter_name_or_path path_to_sft_checkpoint,path_to_dpo_checkpoint` to infer the fine-tuned model.
-
-### Distributed Training
-
-#### Use Huggingface Accelerate
-
-```bash
-accelerate launch --config_file config.yaml src/train_bash.py \
-    --ddp_timeout 180000000 \
-    ... # arguments (same as above)
-```
-
-<details><summary>Example config.yaml for LoRA training</summary>
-
-```yaml
-compute_environment: LOCAL_MACHINE
-debug: false
-distributed_type: MULTI_GPU
-downcast_bf16: 'no'
-gpu_ids: all
-machine_rank: 0
-main_training_function: main
-mixed_precision: fp16
-num_machines: 1
-num_processes: 4
-rdzv_backend: static
-same_network: true
-tpu_env: []
-tpu_use_cluster: false
-tpu_use_sudo: false
-use_cpu: false
 ```

 </details>

-> [!TIP]
-> We commend using Accelerate for LoRA tuning.
+#### Use Docker

-#### Use DeepSpeed
-
-```bash
-deepspeed --num_gpus 8 src/train_bash.py \
-    --deepspeed ds_config.json \
-    --ddp_timeout 180000000 \
-    ... # arguments (same as above)
-```
-
-<details><summary>Example ds_config.json for full-parameter training with DeepSpeed ZeRO-2</summary>
-
-```json
-{
-  "train_batch_size": "auto",
-  "train_micro_batch_size_per_gpu": "auto",
-  "gradient_accumulation_steps": "auto",
-  "gradient_clipping": "auto",
-  "zero_allow_untested_optimizer": true,
-  "fp16": {
-    "enabled": "auto",
-    "loss_scale": 0,
-    "loss_scale_window": 1000,
-    "initial_scale_power": 16,
-    "hysteresis": 2,
-    "min_loss_scale": 1
-  },
-  "bf16": {
-    "enabled": "auto"
-  },
-  "zero_optimization": {
-    "stage": 2,
-    "allgather_partitions": true,
-    "allgather_bucket_size": 5e8,
-    "overlap_comm": true,
-    "reduce_scatter": true,
-    "reduce_bucket_size": 5e8,
-    "contiguous_gradients": true,
-    "round_robin_gradients": true
-  }
-}
-```
-
-</details>
-
-> [!TIP]
-> Refer to [examples](examples) for more training scripts.
-
-### Merge LoRA weights and export model
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/export_model.py \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --template default \
-    --finetuning_type lora \
-    --export_dir path_to_export \
-    --export_size 2 \
-    --export_legacy_format False
-```
-
-> [!WARNING]
-> Merging LoRA weights into a quantized model is not supported.
-
-> [!TIP]
-> Use `--model_name_or_path path_to_export` solely to use the exported model.
->
-> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model with AutoGPTQ after merging the LoRA weights.
-
-### Inference with OpenAI-style API
-
-```bash
-CUDA_VISIBLE_DEVICES=0 API_PORT=8000 python src/api_demo.py \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --template default \
-    --finetuning_type lora
-```
-
-> [!TIP]
-> Visit `http://localhost:8000/docs` for API documentation.
-
-### Inference with command line
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/cli_demo.py \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --template default \
-    --finetuning_type lora
-```
-
-### Inference with web browser
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/web_demo.py \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --template default \
-    --finetuning_type lora
-```
-
-### Evaluation
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/evaluate.py \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --template vanilla \
-    --finetuning_type lora \
-    --task mmlu \
-    --split test \
-    --lang en \
-    --n_shot 5 \
-    --batch_size 4
-```
-
-### Predict
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
-    --stage sft \
-    --do_predict \
-    --model_name_or_path path_to_llama_model \
-    --adapter_name_or_path path_to_checkpoint \
-    --dataset alpaca_gpt4_en \
-    --template default \
-    --finetuning_type lora \
-    --output_dir path_to_predict_result \
-    --per_device_eval_batch_size 1 \
-    --max_samples 100 \
-    --predict_with_generate \
-    --fp16
-```
-
-> [!WARNING]
-> Use `--per_device_train_batch_size=1` for LLaMA-2 models in fp16 predict.
-
-> [!TIP]
-> We recommend using `--per_device_eval_batch_size=1` and `--max_target_length 128` at 4/8-bit predict.
-
-### Dockerize Training
-
-#### Get ready
-
-Necessary dockerized environment is needed, such as Docker or Docker Compose.
-
-#### Docker support

 ```bash
 docker build -f ./Dockerfile -t llama-factory:latest .
-docker run --gpus=all -v ./hf_cache:/root/.cache/huggingface/ -v ./data:/app/data -v ./output:/app/output -p 7860:7860 --shm-size 16G --name llama_factory -d llama-factory:latest
+
+docker run --gpus=all \
+    -v ./hf_cache:/root/.cache/huggingface/ \
+    -v ./data:/app/data \
+    -v ./output:/app/output \
+    -e CUDA_VISIBLE_DEVICES=0 \
+    -p 7860:7860 \
+    --shm-size 16G \
+    --name llama_factory \
+    -d llama-factory:latest
 ```

-#### Docker Compose support
+#### Use Docker Compose

 ```bash
 docker compose -f ./docker-compose.yml up -d
 ```

-> [!TIP]
-> Details about volume:
-> * hf_cache: Utilize Huggingface cache on the host machine. Reassignable if a cache already exists in a different directory.
-> * data: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI.
-> * output: Set export dir to this location so that the merged result can be accessed directly on the host machine.
+<details><summary>Details about volume</summary>
+
+- hf_cache: Utilize Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory.
+- data: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI.
+- output: Set export dir to this location so that the merged result can be accessed directly on the host machine.
+
+</details>

+### Deploy with OpenAI-style API and vLLM
+
+```bash
+CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
+```

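Since the server speaks the OpenAI chat-completions protocol, a plain `curl` is enough to smoke-test it; the model name below is a placeholder, and the port comes from `API_PORT`:

```bash
# Minimal request against the OpenAI-style endpoint started above.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```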
+### Download from ModelScope Hub
+
+If you have trouble with downloading models and datasets from Hugging Face, you can use ModelScope.
+
+```bash
+export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
+```
+
+Train the model by specifying a model ID of the ModelScope Hub as the `--model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.

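Putting the two steps together, a hedged end-to-end sketch that reuses the hypothetical config from the Quickstart note above:

```bash
# Switch downloads to ModelScope, point the config at the ModelScope model
# ID named in this section, then train as usual (illustrative only).
export USE_MODELSCOPE_HUB=1
sed -i 's#^model_name_or_path:.*#model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct#' my_llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train my_llama3_lora_sft.yaml
```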
 ## Projects using LLaMA Factory

+If you have a project that should be incorporated, please contact via email or create a pull request.
+
+<details><summary>Click to show</summary>
+
 1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
 1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
 1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
@@ -709,20 +485,37 @@ docker compose -f ./docker-compose.yml up -d

 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
+1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
+1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
+1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
+1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
+1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
+1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
+1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
+1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
+1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
+1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
+1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
+1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
+1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
+1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
+1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2404.17140)
+1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
|
||||||
1. **[Sunsimiao](https://github.com/thomas-yanxin/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
|
1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
|
||||||
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
|
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
|
||||||
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
|
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
|
||||||
|
1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
|
||||||
|
1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.
|
||||||
|
|
||||||
> [!TIP]
|
</details>
|
||||||
> If you have a project that should be incorporated, please contact via email or create a pull request.
|
|
||||||
|
|
||||||
## License

This repository is licensed under the [Apache-2.0 License](LICENSE).

Please follow the model licenses to use the corresponding model weights: [Baichuan2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command-R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [InternLM2](https://github.com/InternLM/InternLM#license) / [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [LLaMA-2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [LLaMA-3](https://llama.meta.com/llama3/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## Citation

If this work is helpful, please kindly cite as:

## Acknowledgement

This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.

## Star History
676
README_zh.md
[](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[](LICENSE)
[](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[](https://pypi.org/project/llamafactory/)
[](#使用了-llama-factory-的项目)
[](https://github.com/hiyouga/LLaMA-Factory/pulls)
[](https://discord.gg/rKfvV9r9FK)
[](https://twitter.com/llamafactory_ai)
[](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)

[](https://trendshift.io/repositories/4535)

👋 Join our [WeChat group](assets/wechat.jpg).
Choose your path:

- **Colab**: https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
- **Local machine**: see [How to Use](#如何使用)

## Table of Contents
## Features

- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward model training, PPO training, DPO training, KTO training and ORPO training.
- **Various precisions**: 32-bit full-parameter fine-tuning, 16-bit freeze fine-tuning, 16-bit LoRA fine-tuning, and 2/4/8-bit QLoRA fine-tuning via AQLM/AWQ/GPTQ/LLM.int8.
- **Advanced algorithms**: GaLore, BAdam, DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and agent tuning.
- **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune and rsLoRA.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
- **Fast inference**: OpenAI-style API, browser UI and CLI powered by vLLM.

## Benchmark

Compared with ChatGLM's official [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) fine-tuning, LLaMA Factory's LoRA fine-tuning delivers a **3.7x** speedup and a higher Rouge score on the advertising text generation task. Combined with 4-bit quantization, LLaMA Factory's QLoRA fine-tuning further reduces GPU memory consumption.
- **Training Speed**: the number of training samples processed per second during training. (batch size=4, cutoff length=1024)
- **Rouge Score**: Rouge-2 score on the validation set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (batch size=4, cutoff length=1024)
- **GPU Memory**: peak GPU memory usage in 4-bit quantized training. (batch size=1, cutoff length=1024)
- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA fine-tuning.

</details>
## Changelog

[24/05/18] We supported the **[KTO](https://arxiv.org/abs/2402.01306)** preference alignment algorithm. See [examples](examples/README_zh.md) for usage.

[24/05/14] We supported training and inference on Ascend NPU devices. See the [installation](#安装-llama-factory) section for details.

[24/05/13] We supported fine-tuning the Yi-1.5 series models.

<details><summary>Full changelog</summary>

[24/04/26] We supported fine-tuning the multimodal model **LLaVA-1.5**. See [examples](examples/README_zh.md) for usage.

[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. The Hugging Face community has released two Llama-3 models fine-tuned with LLaMA Factory; see [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.

[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** training based on [AstraMindAI's repository](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README_zh.md) for usage.

[24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)**. See [examples](examples/README_zh.md) for usage.

[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k trainable within 24GB). Compared with FlashAttention-2, it delivers **117%** training speed and **50%** memory savings. More benchmarks can be found on [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).

[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README_zh.md) for usage.

[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is now available on arXiv!

[24/03/20] We supported **FSDP+QLoRA**, which fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README_zh.md) for usage.

[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README_zh.md) for usage.

[24/03/07] We supported the gradient low-rank projection (**[GaLore](https://arxiv.org/abs/2403.03507)**) algorithm. See [examples](examples/README_zh.md) for usage.

[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for fast concurrent inference. Use `infer_backend: vllm` to enjoy **270%** inference speed.

[24/02/28] We supported **[DoRA](https://arxiv.org/abs/2402.09353)** fine-tuning. Use `use_dora: true` to activate DoRA training.

[24/02/15] We supported the **block expansion** method proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README_zh.md) for usage.

[24/02/05] Fine-tuning the Qwen1.5 (Qwen2 beta) series models is supported in LLaMA-Factory. See this [blog post](https://qwenlm.github.io/zh/blog/qwen1.5/) for details.

[24/01/18] We supported **agent tuning** for most models; fine-tune with `dataset: glaive_toolcall` to give a model tool-calling abilities.

[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s LoRA training acceleration for the LLaMA, Mistral and Yi models. Use `use_unsloth: true` to activate it; it delivers **170%** training speed, see [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.

[23/12/12] We supported fine-tuning the latest mixture-of-experts model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**. See the [hardware requirements](#硬件依赖) for details.

[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#从魔搭社区下载) for usage.

[23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** training trick. Use `neftune_noise_alpha: 5` to activate NEFTune.

[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Use `shift_attn: true` to enable it.

[23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks into this project. See [examples](examples/README_zh.md) for usage.

[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Use `flash_attn: fa2` if you are using RTX 4090, A100 or H100 GPUs.

[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Use `rope_scaling: linear` when training a model and `rope_scaling: dynamic` when evaluating it.

[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README_zh.md) for usage.

[23/07/31] We supported **dataset streaming**. Use `streaming: true` and `max_steps: 10000` to stream your dataset.

[23/07/29] We released two 13B instruction-tuned models on Hugging Face. See our Hugging Face repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.

[23/06/22] We aligned the [demo API](src/api_demo.py) with the [OpenAI API](https://platform.openai.com/docs/api-reference/chat) format, so you can plug your fine-tuned model into **any ChatGPT-based application**.

[23/06/03] We implemented 4-bit LoRA training (a.k.a. **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README_zh.md) for usage.

</details>

## Supported Models

| Model | Model size | Default module | Template |
| -------------------------------------------------------- | -------------------------------- | ----------------- | --------- |
| [Baichuan2](https://huggingface.co/baichuan-inc) | 7B/13B | W_pack | baichuan2 |
| [BLOOM](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [ChatGLM3](https://huggingface.co/THUDM) | 6B | query_key_value | chatglm3 |
| [Command-R](https://huggingface.co/CohereForAI) | 35B/104B | q_proj,v_proj | cohere |
| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | q_proj,v_proj | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | query_key_value | falcon |
| [Gemma/CodeGemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | wqkv | intern2 |
| [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
| [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
| [LLaMA-3](https://huggingface.co/meta-llama) | 8B/70B | q_proj,v_proj | llama3 |
| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | q_proj,v_proj | vicuna |
| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | q_proj,v_proj | mistral |
| [OLMo](https://huggingface.co/allenai) | 1B/7B | q_proj,v_proj | - |
| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | q_proj,v_proj | - |
| [Phi-3](https://huggingface.co/microsoft) | 3.8B | qkv_proj | phi |
| [Qwen](https://huggingface.co/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
| [Qwen1.5 (Code/MoE)](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/32B/72B/110B | q_proj,v_proj | qwen |
| [StarCoder2](https://huggingface.co/bigcode) | 3B/7B/15B | q_proj,v_proj | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | q_proj,v_proj | xverse |
| [Yi (1/1.5)](https://huggingface.co/01-ai) | 6B/9B/34B | q_proj,v_proj | yi |
| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | q_proj,v_proj | yi_vl |
| [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |

> [!NOTE]
> Use the **default module** as the default value of the `--lora_target` argument; pass `--lora_target all` to target all available modules for better results.
>
> For all "base" models, the `--template` argument can be any of `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for "instruct/chat" models.
>
> Make sure to use **exactly the same** template in training and inference.

Please refer to [constants.py](src/llamafactory/extras/constants.py) for the full list of supported models.

You can also add your own chat template in [template.py](src/llamafactory/data/template.py).
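
To double-check that the template you pick matches a chat model's built-in format, you can render the model's own chat template with the Hugging Face `transformers` API and compare. A small sketch (the model ID is an example, and this repo is gated, so prior access is required):

```python
# Render a model's built-in chat template for a one-turn conversation,
# so it can be compared against the template used for fine-tuning here.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
text = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)  # compare against the `llama3` template
```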

## Supported Training Approaches

| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |
| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Reward model training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| PPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| DPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| KTO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| ORPO training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
## Datasets

<details><summary>Supervised fine-tuning datasets</summary>

- [Identity (en&zh)](data/identity.json)
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
- [UltraChat (en)](https://github.com/thunlp/UltraChat)
- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)

<details><summary>Preference datasets</summary>

- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)

</details>

Some datasets require confirmation before use, so we recommend logging into your Hugging Face account with the following command.

```bash
huggingface-cli login
```
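
The same login can also be done programmatically, which is convenient in notebooks. A sketch using the `huggingface_hub` package (the token string is a placeholder, not a real credential):

```python
# Programmatic equivalent of `huggingface-cli login`.
from huggingface_hub import login

login(token="hf_xxx")  # placeholder; create a token at huggingface.co/settings/tokens
```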
| Mandatory | Minimum | Recommend |
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
| transformers | 4.37.2 | 4.40.1 |
| datasets | 2.14.3 | 2.19.1 |
| accelerate | 0.27.2 | 0.30.0 |
| peft | 0.9.0 | 0.10.0 |
| trl | 0.8.1 | 0.8.6 |

| Optional | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
| bitsandbytes | 0.39.0 | 0.43.1 |
| vllm | 0.4.0 | 0.4.2 |
| flash-attn | 2.3.0 | 2.5.8 |
### Hardware Requirements

\* *estimated*

| Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B |
| ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ |
| Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB |
| Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB |
| Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB |
| LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB |
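
As a rough sanity check on these numbers, the model weights alone take about (number of parameters) x (bits per parameter) / 8 bytes; optimizer states, gradients and activations account for the remaining headroom in the table. The sketch below is a back-of-the-envelope estimate of our own, not the formula behind the table:

```python
# Rough weight-only memory estimate; real training needs extra headroom
# for optimizer states, gradients, activations and the KV cache.
def weight_memory_gib(params_billion: float, bits: int) -> float:
    return params_billion * 1e9 * bits / 8 / 1024**3

for bits in (16, 8, 4, 2):
    print(f"7B weights at {bits}-bit: {weight_memory_gib(7, bits):.1f} GiB")
```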
## How to Use

### Install LLaMA Factory

> [!IMPORTANT]
> This step is mandatory.

```bash
git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e .[torch,metrics]
```

Optional extra dependencies: torch, metrics, deepspeed, bitsandbytes, vllm, galore, badam, gptq, awq, aqlm, qwen, modelscope, quality

> [!TIP]
> Use `pip install --no-deps -e .` to resolve package conflicts.

<details><summary>For Windows users</summary>

To enable quantized LoRA (QLoRA) on Windows, you need to install a pre-built `bitsandbytes` library, which supports CUDA 11.1 to 12.2. Please choose the appropriate [release](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) for your CUDA version.
To enable FlashAttention-2 on Windows, you need to install a pre-built `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the version you need from [flash-attention](https://github.com/bdashore3/flash-attention/releases).

</details>

<details><summary>For Ascend NPU users</summary>

To run (distributed) training or inference on Ascend NPU devices, you need to install the **[torch-npu](https://gitee.com/ascend/pytorch)** library and the **[Ascend CANN Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**.

| Requirement | Minimum | Recommend |
| ------------ | ------- | --------- |
| CANN | 8.0.RC1 | 8.0.RC1 |
| torch | 2.2.0 | 2.2.0 |
| torch-npu | 2.2.0 | 2.2.0 |
| deepspeed | 0.13.2 | 0.13.2 |

Docker image:

- 32GB: [Download page](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html)
- 64GB: coming soon

Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the devices to use.

If the model cannot perform inference normally, try setting `do_sample: false`.

</details>

### Data Preparation

Please refer to [data/README_zh.md](data/README_zh.md) for details on the dataset file format. You can use datasets from the Hugging Face / ModelScope hub or load a dataset from local disk.

> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
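
As a hypothetical illustration, an alpaca-style entry can be appended to `data/dataset_info.json` as sketched below; the exact keys accepted by your version are documented in [data/README_zh.md](data/README_zh.md), so verify them there:

```python
# Sketch: register a local dataset in data/dataset_info.json (run from the
# repo root). The "columns" mapping follows the alpaca-style convention
# described in data/README_zh.md; treat the key names as assumptions.
import json
from pathlib import Path

info_path = Path("data/dataset_info.json")
info = json.loads(info_path.read_text(encoding="utf-8"))

info["my_dataset"] = {                # name to pass as `dataset: my_dataset`
    "file_name": "my_dataset.json",   # file placed under data/
    "columns": {
        "prompt": "instruction",
        "query": "input",
        "response": "output",
    },
}

info_path.write_text(json.dumps(info, ensure_ascii=False, indent=2), encoding="utf-8")
```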

### Quickstart

The following three commands run LoRA **fine-tuning**, **inference**, and **merging** for the Llama3-8B-Instruct model, respectively.

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```

See [examples/README_zh.md](examples/README_zh.md) for advanced usage (including multi-GPU fine-tuning).

> [!TIP]
> Use `llamafactory-cli help` to show help information.
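
The YAML files under `examples/` hold the training arguments. As a sketch, a minimal config of your own could be generated as below; the key names mirror CLI arguments shown elsewhere in this README, but the authoritative set for your version is defined by the files under `examples/`:

```python
# Sketch: write a minimal LoRA SFT config. Key names are taken from the
# CLI arguments documented in this README; values are illustrative only.
# Assumes PyYAML is installed (`pip install pyyaml`).
import yaml

config = {
    "model_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "lora",
    "lora_target": "q_proj,v_proj",
    "dataset": "identity",
    "template": "llama3",
    "output_dir": "saves/llama3-8b/lora/sft",
    "per_device_train_batch_size": 1,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 3.0,
    "fp16": True,
}

with open("my_llama3_lora_sft.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```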

### Visual Fine-Tuning with LLaMA Board (powered by [Gradio](https://github.com/gradio-app/gradio))

> [!IMPORTANT]
> The LLaMA Board GUI currently only supports training on a single GPU.

#### Use a local environment

```bash
CUDA_VISIBLE_DEVICES=0 GRADIO_SHARE=1 llamafactory-cli webui
```

<details><summary>For Alibaba Cloud PAI or AutoDL users</summary>

If you run into display problems when using LLaMA Board on Alibaba Cloud PAI, try setting the following environment variables before launching:

```bash
export GRADIO_SERVER_PORT=7860 GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
```

If you are using AutoDL, please install the following Gradio version:

```bash
pip install gradio==4.10.0
```

</details>

#### Use Docker

```bash
docker build -f ./Dockerfile -t llama-factory:latest .
docker run --gpus=all \
    -v ./hf_cache:/root/.cache/huggingface/ \
    -v ./data:/app/data \
    -v ./output:/app/output \
    -e CUDA_VISIBLE_DEVICES=0 \
    -p 7860:7860 \
    --shm-size 16G \
    --name llama_factory \
    -d llama-factory:latest
```

#### Use Docker Compose

```bash
docker compose -f ./docker-compose.yml up -d
```

<details><summary>Details about volume</summary>

- hf_cache: use the Hugging Face cache folder on the host machine; it can be changed to a different directory.
- data: the folder on the host machine where datasets are stored, so that they can be selected in the LLaMA Board GUI.
- output: set the export dir to this path so that the exported model can be accessed on the host machine.

</details>

### Deploy an OpenAI-style API with vLLM

```bash
CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```

### Download from ModelScope Hub

If you run into problems downloading models and datasets from Hugging Face, you can use the ModelScope hub as follows.

```bash
export USE_MODELSCOPE_HUB=1 # use `set USE_MODELSCOPE_HUB=1` on Windows
```

Set `--model_name_or_path` to a model ID to load the corresponding model. Browse all available models on the [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
## 使用了 LLaMA Factory 的项目
|
## 使用了 LLaMA Factory 的项目
|
||||||
|
|
||||||
|
如果您有项目希望添加至下述列表,请通过邮件联系或者创建一个 PR。
|
||||||
|
|
||||||
|
<details><summary>点击显示</summary>
|
||||||
|
|
||||||
1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
|
1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
|
||||||
1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
|
1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
|
||||||
1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
|
1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
|
||||||
@@ -682,20 +485,37 @@ CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
+1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
+1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
+1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
+1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
+1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
+1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
+1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
+1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
+1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
+1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
+1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
+1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
+1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
+1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
+1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2404.17140)
+1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: StarWhisper, a large language model for astronomy, fine-tuned from ChatGLM2-6B and Qwen-14B on astronomical data.
 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: DISC-LawLLM, a large language model for the Chinese legal domain fine-tuned from Baichuan-13B, capable of legal reasoning and knowledge retrieval.
-1. **[Sunsimiao](https://github.com/thomas-yanxin/Sunsimiao)**: Sunsimiao, a Chinese medical large language model fine-tuned from Baichuan-7B and ChatGLM-6B on Chinese medical data.
+1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: Sunsimiao, a Chinese medical large language model fine-tuned from Baichuan-7B and ChatGLM-6B on Chinese medical data.
 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: CareGPT, a medical large language model project fine-tuned from LLaMA2-7B and Baichuan-13B on Chinese medical data.
 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: a series of MBTI-personality large language models that can give any LLM one of 16 personality types through tailored datasets and training methods.
+1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: a large language model for generating Stable Diffusion prompts. [[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
+1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: a Chinese multimodal medical large language model, fine-tuned from LLaVA-1.5-7B on Chinese multimodal medical data.
 
-> [!TIP]
-> If you have a project that you would like to add to the list above, please contact us via email or create a PR.
+</details>
 
 ## License
 
 The code in this repository is open-sourced under the [Apache-2.0](LICENSE) license.
 
-When using model weights, please follow the corresponding model licenses: [Baichuan2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [InternLM2](https://github.com/InternLM/InternLM#license) / [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [LLaMA-2](https://ai.meta.com/llama/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yuan](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
+When using model weights, please follow the corresponding model licenses: [Baichuan2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command-R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [InternLM2](https://github.com/InternLM/InternLM#license) / [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [LLaMA-2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [LLaMA-3](https://llama.meta.com/llama3/license/) / [Mistral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
 
 ## Citation
 
@@ -713,7 +533,7 @@ CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
 
 ## Acknowledgement
 
-This repo benefits from [PEFT](https://github.com/huggingface/peft), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.
+This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.
 
 ## Star History
assets/wechat.jpg: binary file not shown (before: 142 KiB, after: 146 KiB).
295 data/README.md
@@ -1,16 +1,17 @@
-If you are using a custom dataset, please provide your dataset definition in the following format in `dataset_info.json`.
+The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it.
+
+Currently we support datasets in **alpaca** and **sharegpt** format.
 
 ```json
 "dataset_name": {
   "hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)",
   "ms_hub_url": "the name of the dataset repository on the ModelScope hub. (if specified, ignore script_url and file_name)",
   "script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)",
-  "file_name": "the name of the dataset file in this directory. (required if above are not specified)",
-  "file_sha1": "the SHA-1 hash value of the dataset file. (optional, does not affect training)",
+  "file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)",
+  "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
+  "ranking": "whether the dataset is a preference dataset or not. (default: False)",
   "subset": "the name of the subset. (optional, default: None)",
   "folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)",
-  "ranking": "whether the dataset is a preference dataset or not. (default: false)",
-  "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})",
   "columns (optional)": {
     "prompt": "the column name in the dataset containing the prompts. (default: instruction)",
     "query": "the column name in the dataset containing the queries. (default: input)",
@@ -18,7 +19,11 @@ If you are using a custom dataset, please provide your dataset definition in the
     "history": "the column name in the dataset containing the histories. (default: None)",
     "messages": "the column name in the dataset containing the messages. (default: conversations)",
     "system": "the column name in the dataset containing the system prompts. (default: None)",
-    "tools": "the column name in the dataset containing the tool description. (default: None)"
+    "tools": "the column name in the dataset containing the tool description. (default: None)",
+    "images": "the column name in the dataset containing the image inputs. (default: None)",
+    "chosen": "the column name in the dataset containing the chosen answers. (default: None)",
+    "rejected": "the column name in the dataset containing the rejected answers. (default: None)",
+    "kto_tag": "the column name in the dataset containing the kto tags. (default: None)"
   },
   "tags (optional, used for the sharegpt format)": {
     "role_tag": "the key in the message represents the identity. (default: from)",
@@ -33,29 +38,38 @@ If you are using a custom dataset, please provide your dataset definition in the
   }
 }
 ```
 
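To make the lookup concrete, here is a minimal sketch of how a trainer could resolve a dataset description from `dataset_info.json`. The helper name `resolve_dataset` and its precedence logic are illustrative assumptions drawn from the field docs above, not the project's actual loader:

```python
import json
import os


def resolve_dataset(name: str, data_dir: str = "data") -> dict:
    # Hypothetical helper, shown only to illustrate the precedence of the
    # data-source fields documented above; not part of LLaMA-Factory itself.
    with open(os.path.join(data_dir, "dataset_info.json"), "r", encoding="utf-8") as f:
        info = json.load(f)
    if name not in info:
        raise ValueError(f"Undefined dataset {name} in dataset_info.json.")
    desc = info[name]
    if "hf_hub_url" in desc or "ms_hub_url" in desc:
        source = "hub"  # hub repositories win over scripts and local files
    elif "script_url" in desc:
        source = "script"  # a local loading script wins over a plain file
    elif "file_name" in desc:
        source = "file"
    else:
        raise ValueError("One of hf_hub_url, ms_hub_url, script_url or file_name is required.")
    return {
        "source": source,
        "formatting": desc.get("formatting", "alpaca"),  # defaults documented above
        "ranking": desc.get("ranking", False),
    }
```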
-Given above, you can use the custom dataset via specifying `--dataset dataset_name`.
+## Alpaca Format
 
-Currently we support dataset in **alpaca** or **sharegpt** format, the dataset in alpaca format should follow the below format:
+### Supervised Fine-Tuning Dataset
+
+* [Example dataset](alpaca_en_demo.json)
+
+In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the human prompt, so the human prompt will be `instruction\ninput`. The `output` column represents the model response.
+
+The `system` column will be used as the system prompt if specified.
+
+The `history` column is a list of string tuples representing prompt-response pairs in the history messages. Note that the responses in the history **will also be learned by the model** in supervised fine-tuning.
 
 ```json
 [
   {
-    "instruction": "user instruction (required)",
-    "input": "user input (optional)",
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
     "output": "model response (required)",
     "system": "system prompt (optional)",
     "history": [
-      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
-      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
+      ["human instruction in the first round (optional)", "model response in the first round (optional)"],
+      ["human instruction in the second round (optional)", "model response in the second round (optional)"]
     ]
   }
 ]
 ```
 
-Regarding the above dataset, the `columns` in `dataset_info.json` should be:
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
 
 ```json
 "dataset_name": {
+  "file_name": "data.json",
   "columns": {
     "prompt": "instruction",
     "query": "input",
@@ -66,26 +80,135 @@ Regarding the above dataset, the `columns` in `dataset_info.json` should be:
   }
 }
 ```
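As a worked illustration of the concatenation rule above (a sketch under those assumptions, not the project's actual preprocessing code), one alpaca record expands into the following learnable prompt-response pairs:

```python
from typing import Any, Dict, List, Tuple


def alpaca_to_pairs(example: Dict[str, Any]) -> List[Tuple[str, str]]:
    # History turns are learned too, as noted above.
    pairs = [tuple(turn) for turn in example.get("history", [])]
    prompt = example["instruction"]
    if example.get("input"):
        prompt = prompt + "\n" + example["input"]  # human prompt is `instruction\ninput`
    pairs.append((prompt, example["output"]))
    return pairs


record = {
    "instruction": "Translate to French",
    "input": "Good morning",
    "output": "Bonjour",
    "history": [["Say hi", "Hi!"]],
}
print(alpaca_to_pairs(record))
# [('Say hi', 'Hi!'), ('Translate to French\nGood morning', 'Bonjour')]
```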
 
-The `query` column will be concatenated with the `prompt` column and used as the user prompt, then the user prompt would be `prompt\nquery`. The `response` column represents the model response.
+### Pre-training Dataset
 
-The `system` column will be used as the system prompt. The `history` column is a list consisting string tuples representing prompt-response pairs in the history. Note that the responses in the history **will also be used for training**.
+- [Example dataset](c4_demo.json)
 
-For the pre-training datasets, only the `prompt` column will be used for training.
+In pre-training, only the `text` column will be used for model learning.
 
-For the preference datasets, the `response` column should be a string list whose length is 2, with the preferred answers appearing first, for example:
 
 ```json
-{
-  "instruction": "user instruction",
-  "input": "user input",
-  "output": [
-    "chosen answer",
-    "rejected answer"
-  ]
+[
+  {"text": "document"},
+  {"text": "document"}
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "text"
+  }
 }
 ```
 
-The dataset in sharegpt format should follow the below format:
+### Preference Dataset
+
+Preference datasets are used for reward modeling, DPO training and ORPO training.
+
+They require a better response in the `chosen` column and a worse response in the `rejected` column.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "chosen": "chosen answer (required)",
+    "rejected": "rejected answer (required)"
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "ranking": true,
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "chosen": "chosen",
+    "rejected": "rejected"
+  }
+}
+```
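For intuition, a pairwise objective such as reward modeling or DPO consumes each record as a (prompt, chosen, rejected) triple. A minimal sketch, using a hypothetical helper rather than the project's code:

```python
from typing import Any, Dict, Tuple


def preference_triple(example: Dict[str, Any]) -> Tuple[str, str, str]:
    # Build the (prompt, chosen, rejected) triple consumed by pairwise objectives.
    prompt = example["instruction"]
    if example.get("input"):
        prompt = prompt + "\n" + example["input"]
    return prompt, example["chosen"], example["rejected"]


print(preference_triple({
    "instruction": "Pick a greeting",
    "input": "",
    "chosen": "Hello!",
    "rejected": "Go away.",
}))
```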
 
+### KTO Dataset
+
+- [Example dataset](kto_en_demo.json)
+
+KTO datasets require an extra `kto_tag` column containing the boolean human feedback.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "output": "model response (required)",
+    "kto_tag": "human feedback [true/false] (required)"
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "response": "output",
+    "kto_tag": "kto_tag"
+  }
+}
+```
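Since KTO learns from unpaired thumbs-up/thumbs-down feedback rather than chosen/rejected pairs, one way to picture the data (a sketch under that assumption) is a split into desirable and undesirable examples:

```python
from typing import Any, Dict, List, Tuple


def split_kto(examples: List[Dict[str, Any]]) -> Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]:
    # Partition KTO records by the boolean `kto_tag` human feedback.
    desirable = [ex for ex in examples if ex["kto_tag"]]
    undesirable = [ex for ex in examples if not ex["kto_tag"]]
    return desirable, undesirable
```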
 
+### Multimodal Dataset
+
+- [Example dataset](mllm_demo.json)
+
+Multimodal datasets require an `images` column containing the paths to the input images. Currently we only support one image per example.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "output": "model response (required)",
+    "images": [
+      "image path (required)"
+    ]
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "response": "output",
+    "images": "images"
+  }
+}
+```
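Because only one image per example is supported, a loader can simply open the first path in `images`. A minimal sketch, assuming Pillow is available and paths are relative to the data directory:

```python
import os
from typing import Any, Dict

from PIL import Image  # assumption: Pillow is installed


def load_mm_example(example: Dict[str, Any], image_root: str = "data") -> Dict[str, Any]:
    # Attach the single supported input image to a multimodal example.
    path = os.path.join(image_root, example["images"][0])
    image = Image.open(path).convert("RGB")
    return {"prompt": example["instruction"], "response": example["output"], "image": image}
```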
 
+## Sharegpt Format
+
+### Supervised Fine-Tuning Dataset
+
+- [Example dataset](glaive_toolcall_en_demo.json)
+
+Compared to the alpaca format, the sharegpt format allows the datasets to have **more roles**, such as human, gpt, observation and function. They are presented in a list of objects in the `conversations` column.
+
+Note that the human and observation should appear in odd positions, while gpt and function should appear in even positions. A small validator sketch follows the dataset description below.
+
 ```json
 [
@@ -93,7 +216,15 @@ The dataset in sharegpt format should follow the below format:
     "conversations": [
       {
         "from": "human",
-        "value": "user instruction"
+        "value": "human instruction"
+      },
+      {
+        "from": "function_call",
+        "value": "tool arguments"
+      },
+      {
+        "from": "observation",
+        "value": "tool result"
       },
       {
         "from": "gpt",
@@ -106,24 +237,114 @@ The dataset in sharegpt format should follow the below format:
     ]
 ```
 
-Regarding the above dataset, the `columns` in `dataset_info.json` should be:
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
 
 ```json
 "dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
   "columns": {
     "messages": "conversations",
     "system": "system",
     "tools": "tools"
-  },
-  "tags": {
-    "role_tag": "from",
-    "content_tag": "value",
-    "user_tag": "human",
-    "assistant_tag": "gpt"
   }
 }
 ```
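The odd/even rule above can be checked mechanically: 1-based odd positions correspond to 0-based even indices. A small validator sketch (the names are illustrative, not project APIs):

```python
from typing import Any, Dict, List

USER_SIDE_ROLES = {"human", "observation"}
ASSISTANT_SIDE_ROLES = {"gpt", "function_call"}


def validate_sharegpt(conversations: List[Dict[str, Any]]) -> None:
    # Raise if the role order violates the sharegpt alternation rule.
    for i, message in enumerate(conversations):
        expected = USER_SIDE_ROLES if i % 2 == 0 else ASSISTANT_SIDE_ROLES
        if message["from"] not in expected:
            raise ValueError(f"Unexpected role {message['from']!r} at position {i + 1}.")
```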
 
-where the `messages` column should be a list following the `u/a/u/a/u/a` order.
+### Preference Dataset
 
-Pre-training datasets and preference datasets are incompatible with the sharegpt format yet.
+- [Example dataset](dpo_en_demo.json)
+
+Preference datasets in sharegpt format also require a better message in the `chosen` column and a worse message in the `rejected` column.
+
+```json
+[
+  {
+    "conversations": [
+      {
+        "from": "human",
+        "value": "human instruction"
+      },
+      {
+        "from": "gpt",
+        "value": "model response"
+      },
+      {
+        "from": "human",
+        "value": "human instruction"
+      }
+    ],
+    "chosen": {
+      "from": "gpt",
+      "value": "chosen answer (required)"
+    },
+    "rejected": {
+      "from": "gpt",
+      "value": "rejected answer (required)"
+    }
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
+  "ranking": true,
+  "columns": {
+    "messages": "conversations",
+    "chosen": "chosen",
+    "rejected": "rejected"
+  }
+}
+```
 
+### OpenAI Format
+
+The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
+
+```json
+[
+  {
+    "messages": [
+      {
+        "role": "system",
+        "content": "system prompt (optional)"
+      },
+      {
+        "role": "user",
+        "content": "human instruction"
+      },
+      {
+        "role": "assistant",
+        "content": "model response"
+      }
+    ]
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
+  "columns": {
+    "messages": "messages"
+  },
+  "tags": {
+    "role_tag": "role",
+    "content_tag": "content",
+    "user_tag": "user",
+    "assistant_tag": "assistant",
+    "system_tag": "system"
+  }
+}
+```
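To see why the openai format needs no special code path, here is a sketch of a hypothetical helper that normalizes any sharegpt-style message list into canonical (role, content) pairs using the `tags` mapping from the dataset description:

```python
from typing import Dict, List, Tuple


def normalize_messages(messages: List[Dict[str, str]], tags: Dict[str, str]) -> List[Tuple[str, str]]:
    # Map dataset-specific role/content keys onto canonical (role, content) pairs.
    role_map = {
        tags["user_tag"]: "user",
        tags["assistant_tag"]: "assistant",
        tags.get("system_tag", "system"): "system",
    }
    return [(role_map[m[tags["role_tag"]]], m[tags["content_tag"]]) for m in messages]


openai_tags = {
    "role_tag": "role",
    "content_tag": "content",
    "user_tag": "user",
    "assistant_tag": "assistant",
    "system_tag": "system",
}
print(normalize_messages([{"role": "user", "content": "hi"}], openai_tags))
```

With the default sharegpt tags (`from`/`value`, `human`/`gpt`) the same helper applies unchanged, which is the sense in which openai is just a special case.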
 
+The KTO datasets and multimodal datasets in sharegpt format are similar to the alpaca format.
+
+Pre-training datasets are **incompatible** with the sharegpt format.
data/README_zh.md
@@ -1,4 +1,6 @@
-If you are using a custom dataset, please make sure to provide your dataset definition in `dataset_info.json` in the following format.
+The [dataset_info.json](dataset_info.json) contains all available datasets. If you want to use a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and use the dataset by setting the `dataset: dataset_name` configuration.
+
+Currently we support datasets in the **alpaca** and **sharegpt** formats.
 
 ```json
 "dataset_name": {
@@ -6,11 +8,10 @@
   "ms_hub_url": "the name of the dataset repository on the ModelScope hub. (if specified, ignore script_url and file_name)",
   "script_url": "the name of the local folder containing the dataset loading script. (if specified, ignore file_name)",
   "file_name": "the name of the dataset file in this directory. (required if the above are not specified)",
-  "file_sha1": "the SHA-1 hash value of the dataset file. (optional, does not affect training)",
+  "formatting": "the dataset format. (optional, default: alpaca, can be alpaca or sharegpt)",
+  "ranking": "whether the dataset is a preference dataset or not. (optional, default: False)",
   "subset": "the name of the dataset subset. (optional, default: None)",
   "folder": "the name of the folder in the Hugging Face repository. (optional, default: None)",
-  "ranking": "whether the dataset is a preference dataset or not. (optional, default: False)",
-  "formatting": "the dataset format. (optional, default: alpaca, can be alpaca or sharegpt)",
   "columns (optional)": {
     "prompt": "the column name for the prompts. (default: instruction)",
     "query": "the column name for the queries. (default: input)",
@@ -18,7 +19,11 @@
     "history": "the column name for the chat histories. (default: None)",
     "messages": "the column name for the message lists. (default: conversations)",
     "system": "the column name for the system prompts. (default: None)",
-    "tools": "the column name for the tool descriptions. (default: None)"
+    "tools": "the column name for the tool descriptions. (default: None)",
+    "images": "the column name for the image inputs. (default: None)",
+    "chosen": "the column name for the chosen answers. (default: None)",
+    "rejected": "the column name for the rejected answers. (default: None)",
+    "kto_tag": "the column name for the KTO tags. (default: None)"
   },
   "tags (optional, used for the sharegpt format)": {
     "role_tag": "the key in the message representing the sender identity. (default: from)",
@@ -33,15 +38,23 @@
   }
 }
 ```
 
-After adding the definition, you can use the custom dataset by specifying the `--dataset dataset_name` argument.
+## Alpaca Format
 
-This project currently supports datasets in two formats, **alpaca** and **sharegpt**; alpaca-format datasets are organized as follows:
+### Supervised Fine-Tuning Dataset
+
+- [Example dataset](alpaca_zh_demo.json)
+
+In supervised fine-tuning, the content of the `instruction` column is concatenated with the content of the `input` column as the human instruction, i.e. the human instruction is `instruction\ninput`, and the content of the `output` column is the model response.
+
+If specified, the content of the `system` column is used as the system prompt.
+
+The `history` column is a list of string tuples, each representing the instruction and response of one round of the history messages. Note that in supervised fine-tuning the responses in the history **are also used for model learning**.
 
 ```json
 [
   {
-    "instruction": "user instruction (required)",
-    "input": "user input (optional)",
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
     "output": "model response (required)",
     "system": "system prompt (optional)",
     "history": [
@@ -52,10 +65,11 @@
     ]
 ```
 
-Regarding the above dataset, the `columns` in `dataset_info.json` should be:
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
 
 ```json
 "dataset_name": {
+  "file_name": "data.json",
   "columns": {
     "prompt": "instruction",
     "query": "input",
@@ -66,26 +80,135 @@
   }
 }
 ```
 
-The content of the `query` column is concatenated with the content of the `prompt` column as the user instruction, i.e. the user instruction is `prompt\nquery`, and the content of the `response` column is the model response.
+### Pre-training Dataset
 
-The content of the `system` column is used as the system prompt. The `history` column is a list of string tuples, each representing the instruction and response of one round of the history messages. Note that the responses in the history **are also used for training**.
+- [Example dataset](c4_demo.json)
 
-For pre-training datasets, only the content of the `prompt` column is used for model training.
+In pre-training, only the content of the `text` column is used for model learning.
 
-For preference datasets, the `response` column should be a string list of length 2, with the better response first, for example:
 
 ```json
-{
-  "instruction": "user instruction",
-  "input": "user input",
-  "output": [
-    "chosen answer",
-    "rejected answer"
-  ]
+[
+  {"text": "document"},
+  {"text": "document"}
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "text"
+  }
 }
 ```
 
-Sharegpt-format datasets are organized as follows:
+### Preference Dataset
+
+Preference datasets are used for reward model training, DPO training and ORPO training.
+
+They provide a better response in the `chosen` column and a worse response in the `rejected` column.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "chosen": "chosen answer (required)",
+    "rejected": "rejected answer (required)"
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "ranking": true,
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "chosen": "chosen",
+    "rejected": "rejected"
+  }
+}
+```
+
+### KTO Dataset
+
+- [Example dataset](kto_en_demo.json)
+
+KTO datasets require an extra `kto_tag` column containing boolean human feedback.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "output": "model response (required)",
+    "kto_tag": "human feedback [true/false] (required)"
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "response": "output",
+    "kto_tag": "kto_tag"
+  }
+}
+```
+
+### Multimodal Dataset
+
+- [Example dataset](mllm_demo.json)
+
+Multimodal datasets require an extra `images` column containing the paths to the input images. Currently we only support a single image per example.
+
+```json
+[
+  {
+    "instruction": "human instruction (required)",
+    "input": "human input (optional)",
+    "output": "model response (required)",
+    "images": [
+      "image path (required)"
+    ]
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "columns": {
+    "prompt": "instruction",
+    "query": "input",
+    "response": "output",
+    "images": "images"
+  }
+}
+```
+
+## Sharegpt Format
+
+### Supervised Fine-Tuning Dataset
+
+- [Example dataset](glaive_toolcall_zh_demo.json)
+
+Compared to the alpaca format, the sharegpt format supports **more role types**, such as human, gpt, observation and function. They form a list of objects in the `conversations` column.
+
+Note that human and observation must appear in odd positions, while gpt and function must appear in even positions.
 
 ```json
 [
@@ -93,7 +216,15 @@
     "conversations": [
       {
         "from": "human",
-        "value": "user instruction"
+        "value": "human instruction"
+      },
+      {
+        "from": "function_call",
+        "value": "tool arguments"
+      },
+      {
+        "from": "observation",
+        "value": "tool result"
       },
       {
         "from": "gpt",
@@ -106,24 +237,114 @@
     ]
 ```
 
-Regarding the above dataset, the `columns` in `dataset_info.json` should be:
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
 
 ```json
 "dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
  "columns": {
     "messages": "conversations",
     "system": "system",
     "tools": "tools"
-  },
-  "tags": {
-    "role_tag": "from",
-    "content_tag": "value",
-    "user_tag": "human",
-    "assistant_tag": "gpt"
   }
 }
 ```
 
-The `messages` column should be a list that follows the user/assistant/user/assistant/... order.
+### Preference Dataset
 
-Pre-training datasets and preference datasets do not support the sharegpt format yet.
+- [Example dataset](dpo_zh_demo.json)
+
+Preference datasets in sharegpt format also provide a better message in the `chosen` column and a worse message in the `rejected` column.
+
+```json
+[
+  {
+    "conversations": [
+      {
+        "from": "human",
+        "value": "human instruction"
+      },
+      {
+        "from": "gpt",
+        "value": "model response"
+      },
+      {
+        "from": "human",
+        "value": "human instruction"
+      }
+    ],
+    "chosen": {
+      "from": "gpt",
+      "value": "chosen answer"
+    },
+    "rejected": {
+      "from": "gpt",
+      "value": "rejected answer"
+    }
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
+  "ranking": true,
+  "columns": {
+    "messages": "conversations",
+    "chosen": "chosen",
+    "rejected": "rejected"
+  }
+}
+```
+
+### OpenAI Format
+
+The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt.
+
+```json
+[
+  {
+    "messages": [
+      {
+        "role": "system",
+        "content": "system prompt (optional)"
+      },
+      {
+        "role": "user",
+        "content": "human instruction"
+      },
+      {
+        "role": "assistant",
+        "content": "model response"
+      }
+    ]
+  }
+]
+```
+
+Regarding the above dataset, the *dataset description* in `dataset_info.json` should be:
+
+```json
+"dataset_name": {
+  "file_name": "data.json",
+  "formatting": "sharegpt",
+  "columns": {
+    "messages": "messages"
+  },
+  "tags": {
+    "role_tag": "role",
+    "content_tag": "content",
+    "user_tag": "user",
+    "assistant_tag": "assistant",
+    "system_tag": "system"
+  }
+}
+```
+
+The KTO datasets and multimodal datasets in sharegpt format are similar to the alpaca format.
+
+Pre-training datasets are **not supported** in the sharegpt format.
@@ -1 +0,0 @@
-3779ddbc040543ab1834ef216c983d6fcc06cc9a
@@ -1 +0,0 @@
-34c723573fbc2d7601f6d9c882ccf5aa4f9bcc4b
5002 data/alpaca_en_demo.json (new file; diff suppressed because it is too large)
@@ -1 +0,0 @@
-25508714b7879a1e5a6764ba7f979a980f549f1a
@@ -1 +0,0 @@
-7cb6a7d11455bddc3d495750a2392683d775b184
5002 data/alpaca_zh_demo.json (new file; diff suppressed because it is too large)
@@ -1,5 +1,6 @@
-import os
 import json
+import os
+
 import datasets
@@ -22,31 +23,19 @@ _URL = "{}/datasets/BelleGroup/multiturn_chat_0.8M/resolve/main/multiturn_chat_0
 class BelleMultiturn(datasets.GeneratorBasedBuilder):
 
     VERSION = datasets.Version("0.0.0")
 
     def _info(self):
-        features = datasets.Features({
-            "conversations": [{"from": datasets.Value("string"), "value": datasets.Value("string")}]
-        })
+        features = datasets.Features(
+            {"conversations": [{"from": datasets.Value("string"), "value": datasets.Value("string")}]}
+        )
         return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION
+            description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION
         )
 
     def _split_generators(self, dl_manager: datasets.DownloadManager):
         file_path = dl_manager.download(_URL)
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "filepath": file_path
-                }
-            )
-        ]
+        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": file_path})]
 
     def _generate_examples(self, filepath: str):
         with open(filepath, "r", encoding="utf-8") as f:
@@ -58,7 +47,7 @@ class BelleMultiturn(datasets.GeneratorBasedBuilder):
             assist_idx = prompt.rfind("Assistant:")
             human_idx = prompt.rfind("Human:")
-            query = prompt[human_idx+6:assist_idx].strip()
+            query = prompt[human_idx + 6 : assist_idx].strip()
             prompt = prompt[:human_idx].strip()
             conversations.insert(0, {"from": "gpt", "value": response})
             conversations.insert(0, {"from": "human", "value": query})
@@ -67,8 +56,8 @@ class BelleMultiturn(datasets.GeneratorBasedBuilder):
                 assist_idx = prompt.rfind("Assistant:")
                 human_idx = prompt.rfind("Human:")
                 if human_idx != -1:
-                    old_query = prompt[human_idx+6:assist_idx].strip()
-                    old_resp = prompt[assist_idx+10:].strip()
+                    old_query = prompt[human_idx + 6 : assist_idx].strip()
+                    old_resp = prompt[assist_idx + 10 :].strip()
                     conversations.insert(0, {"from": "gpt", "value": old_resp})
                     conversations.insert(0, {"from": "human", "value": old_query})
                 else:
@@ -1 +0,0 @@
-f5cb08305ff5dc9c17a09809c54c8c8834aadc70
@@ -1 +0,0 @@
-aee47b7b443496e37808d7f34ef10403ff99bcc3
@@ -1,72 +1,79 @@
 {
-  "alpaca_en": {
-    "file_name": "alpaca_data_en_52k.json",
-    "file_sha1": "607f94a7f581341e59685aef32f531095232cf23"
-  },
-  "alpaca_zh": {
-    "file_name": "alpaca_data_zh_51k.json",
-    "file_sha1": "0016a4df88f523aad8dc004ada7575896824a0dc"
-  },
-  "alpaca_gpt4_en": {
-    "file_name": "alpaca_gpt4_data_en.json",
-    "file_sha1": "647f4ad447bd993e4b6b6223d1be15208bab694a"
-  },
-  "alpaca_gpt4_zh": {
-    "file_name": "alpaca_gpt4_data_zh.json",
-    "file_sha1": "3eaa3bda364ccdd59925d7448a698256c31ef845"
-  },
   "identity": {
-    "file_name": "identity.json",
-    "file_sha1": "ffe3ecb58ab642da33fbb514d5e6188f1469ad40"
+    "file_name": "identity.json"
   },
-  "oaast_sft": {
-    "file_name": "oaast_sft.json",
-    "file_sha1": "7baf5d43e67a91f9bbdf4e400dbe033b87e9757e",
-    "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
-    }
+  "alpaca_en_demo": {
+    "file_name": "alpaca_en_demo.json"
   },
-  "oaast_sft_zh": {
-    "file_name": "oaast_sft_zh.json",
-    "file_sha1": "a6a91f18f80f37b10ded9cf633fb50c033bf7b9f",
-    "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
-    }
+  "alpaca_zh_demo": {
+    "file_name": "alpaca_zh_demo.json"
   },
-  "lima": {
-    "file_name": "lima.json",
-    "file_sha1": "9db59f6b7007dc4b17529fc63379b9cd61640f37",
-    "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
-    }
-  },
-  "glaive_toolcall": {
-    "file_name": "glaive_toolcall_10k.json",
-    "file_sha1": "a6917b85d209df98d31fdecb253c79ebc440f6f3",
+  "glaive_toolcall_en_demo": {
+    "file_name": "glaive_toolcall_en_demo.json",
     "formatting": "sharegpt",
     "columns": {
       "messages": "conversations",
       "tools": "tools"
     }
   },
-  "example": {
-    "script_url": "example_dataset",
+  "glaive_toolcall_zh_demo": {
+    "file_name": "glaive_toolcall_zh_demo.json",
+    "formatting": "sharegpt",
     "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
+      "messages": "conversations",
+      "tools": "tools"
     }
   },
+  "mllm_demo": {
+    "file_name": "mllm_demo.json",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "messages",
+      "images": "images"
+    },
+    "tags": {
+      "role_tag": "role",
+      "content_tag": "content",
+      "user_tag": "user",
+      "assistant_tag": "assistant"
+    }
+  },
+  "alpaca_en": {
+    "hf_hub_url": "llamafactory/alpaca_en",
+    "ms_hub_url": "llamafactory/alpaca_en"
+  },
+  "alpaca_zh": {
+    "hf_hub_url": "llamafactory/alpaca_zh",
+    "ms_hub_url": "llamafactory/alpaca_zh"
+  },
+  "alpaca_gpt4_en": {
+    "hf_hub_url": "llamafactory/alpaca_gpt4_en",
+    "ms_hub_url": "llamafactory/alpaca_gpt4_en"
+  },
+  "alpaca_gpt4_zh": {
+    "hf_hub_url": "llamafactory/alpaca_gpt4_zh",
+    "ms_hub_url": "llamafactory/alpaca_gpt4_zh"
+  },
+  "glaive_toolcall_en": {
+    "hf_hub_url": "llamafactory/glaive_toolcall_en",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "conversations",
+      "tools": "tools"
+    }
+  },
+  "glaive_toolcall_zh": {
+    "hf_hub_url": "llamafactory/glaive_toolcall_zh",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "conversations",
+      "tools": "tools"
+    }
+  },
+  "lima": {
+    "hf_hub_url": "llamafactory/lima",
+    "formatting": "sharegpt"
+  },
   "guanaco": {
     "hf_hub_url": "JosephusCheung/GuanacoDataset",
     "ms_hub_url": "AI-ModelScope/GuanacoDataset"
@@ -159,7 +166,7 @@
     "ms_hub_url": "AI-ModelScope/webnovel_cn"
   },
   "nectar_sft": {
-    "hf_hub_url": "mlinmg/SFT-Nectar",
+    "hf_hub_url": "AstraMindAI/SFT-Nectar",
     "ms_hub_url": "AI-ModelScope/SFT-Nectar"
   },
   "deepctrl": {
@@ -185,6 +192,7 @@
   "ultrachat_200k": {
     "hf_hub_url": "HuggingFaceH4/ultrachat_200k",
     "ms_hub_url": "AI-ModelScope/ultrachat_200k",
+    "formatting": "sharegpt",
    "columns": {
      "messages": "messages"
    },
@@ -193,8 +201,7 @@
       "content_tag": "content",
       "user_tag": "user",
       "assistant_tag": "assistant"
-    },
-    "formatting": "sharegpt"
+    }
   },
   "agent_instruct": {
     "hf_hub_url": "THUDM/AgentInstruct",
@@ -204,6 +211,7 @@
   "lmsys_chat": {
     "hf_hub_url": "lmsys/lmsys-chat-1m",
     "ms_hub_url": "AI-ModelScope/lmsys-chat-1m",
+    "formatting": "sharegpt",
    "columns": {
      "messages": "conversation"
    },
@@ -212,8 +220,7 @@
       "content_tag": "content",
       "user_tag": "human",
       "assistant_tag": "assistant"
-    },
-    "formatting": "sharegpt"
+    }
   },
   "evol_instruct": {
     "hf_hub_url": "WizardLM/WizardLM_evol_instruct_V2_196k",
@@ -235,6 +242,42 @@
       "response": "text"
     }
   },
+  "stem_zh": {
+    "hf_hub_url": "hfl/stem_zh_instruction"
+  },
+  "ruozhiba_gpt4": {
+    "hf_hub_url": "hfl/ruozhiba_gpt4_turbo"
+  },
+  "llava_150k_en": {
+    "hf_hub_url": "BUAADreamer/llava-en-zh-300k",
+    "subset": "en",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "messages",
+      "images": "images"
+    },
+    "tags": {
+      "role_tag": "role",
+      "content_tag": "content",
+      "user_tag": "user",
+      "assistant_tag": "assistant"
+    }
+  },
+  "llava_150k_zh": {
+    "hf_hub_url": "BUAADreamer/llava-en-zh-300k",
+    "subset": "zh",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "messages",
+      "images": "images"
+    },
+    "tags": {
+      "role_tag": "role",
+      "content_tag": "content",
+      "user_tag": "user",
+      "assistant_tag": "assistant"
+    }
+  },
   "oasst_de": {
     "hf_hub_url": "mayflowergmbh/oasst_de"
   },
@@ -262,76 +305,113 @@
   "ultrachat_de": {
     "hf_hub_url": "mayflowergmbh/ultra-chat_de"
   },
-  "hh_rlhf_en": {
-    "script_url": "hh_rlhf_en",
+  "dpo_en_demo": {
+    "file_name": "dpo_en_demo.json",
+    "ranking": true,
+    "formatting": "sharegpt",
     "columns": {
-      "prompt": "instruction",
-      "response": "output",
-      "history": "history"
-    },
-    "ranking": true
-  },
-  "oaast_rm": {
-    "file_name": "oaast_rm.json",
-    "file_sha1": "622d420e9b70003b210618253bd3d9d2891d86cb",
-    "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
-    },
-    "ranking": true
-  },
-  "oaast_rm_zh": {
-    "file_name": "oaast_rm_zh.json",
-    "file_sha1": "1065af1f3784dd61be5e79713a35f427b713a232",
-    "columns": {
-      "prompt": "instruction",
-      "query": "input",
-      "response": "output",
-      "history": "history"
-    },
-    "ranking": true
-  },
-  "comparison_gpt4_en": {
-    "file_name": "comparison_gpt4_data_en.json",
-    "file_sha1": "96fa18313544e22444fe20eead7754b17da452ae",
-    "ranking": true
-  },
-  "comparison_gpt4_zh": {
-    "file_name": "comparison_gpt4_data_zh.json",
-    "file_sha1": "515b18ed497199131ddcc1af950345c11dc5c7fd",
-    "ranking": true
-  },
-  "orca_rlhf": {
-    "file_name": "orca_rlhf.json",
-    "file_sha1": "acc8f74d16fd1fc4f68e7d86eaa781c2c3f5ba8e",
+      "messages": "conversations",
+      "chosen": "chosen",
+      "rejected": "rejected"
+    }
+  },
+  "dpo_zh_demo": {
+    "file_name": "dpo_zh_demo.json",
+    "ranking": true,
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "conversations",
+      "chosen": "chosen",
+      "rejected": "rejected"
+    }
+  },
+  "dpo_mix_en": {
+    "hf_hub_url": "hiyouga/DPO-En-Zh-20k",
+    "subset": "en",
+    "ranking": true,
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "conversations",
+      "chosen": "chosen",
+      "rejected": "rejected"
+    }
+  },
+  "dpo_mix_zh": {
+    "hf_hub_url": "hiyouga/DPO-En-Zh-20k",
+    "subset": "zh",
+    "ranking": true,
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "conversations",
+      "chosen": "chosen",
+      "rejected": "rejected"
+    }
+  },
+  "orca_pairs": {
+    "hf_hub_url": "Intel/orca_dpo_pairs",
     "ranking": true,
     "columns": {
       "prompt": "question",
-      "response": "answer",
+      "chosen": "chosen",
+      "rejected": "rejected",
       "system": "system"
     }
   },
+  "hh_rlhf_en": {
+    "script_url": "hh_rlhf_en",
+    "ranking": true,
+    "columns": {
+      "prompt": "instruction",
+      "chosen": "chosen",
+      "rejected": "rejected",
+      "history": "history"
+    }
+  },
   "nectar_rm": {
-    "hf_hub_url": "mlinmg/RLAIF-Nectar",
+    "hf_hub_url": "AstraMindAI/RLAIF-Nectar",
     "ms_hub_url": "AI-ModelScope/RLAIF-Nectar",
     "ranking": true
   },
-  "orca_dpo_de" : {
+  "orca_dpo_de": {
    "hf_hub_url": "mayflowergmbh/intel_orca_dpo_pairs_de",
    "ranking": true
  },
+  "kto_en_demo": {
+    "file_name": "kto_en_demo.json",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "messages",
+      "kto_tag": "label"
+    },
+    "tags": {
+      "role_tag": "role",
+      "content_tag": "content",
+      "user_tag": "user",
+      "assistant_tag": "assistant"
+    }
+  },
+  "kto_mix_en": {
+    "hf_hub_url": "argilla/kto-mix-15k",
+    "formatting": "sharegpt",
+    "columns": {
+      "messages": "completion",
+      "kto_tag": "label"
+    },
+    "tags": {
+      "role_tag": "role",
+      "content_tag": "content",
+      "user_tag": "user",
+      "assistant_tag": "assistant"
+    }
+  },
   "wiki_demo": {
     "file_name": "wiki_demo.txt",
-    "file_sha1": "e70375e28eda542a90c68213640cc371898ce181",
     "columns": {
       "prompt": "text"
     }
   },
   "c4_demo": {
     "file_name": "c4_demo.json",
-    "file_sha1": "a5a0c86759732f9a5238e447fecd74f28a66cca8",
     "columns": {
       "prompt": "text"
    }
@@ -364,12 +444,11 @@
     }
   },
   "pile": {
-    "hf_hub_url": "EleutherAI/pile",
+    "hf_hub_url": "monology/pile-uncopyrighted",
     "ms_hub_url": "AI-ModelScope/pile",
     "columns": {
       "prompt": "text"
-    },
-    "subset": "all"
+    }
   },
   "skypile": {
     "hf_hub_url": "Skywork/SkyPile-150B",
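Given how many entries `dataset_info.json` now carries, a quick consistency check can catch typos early. The following sketch is not shipped with the repo; the key list is an assumption based on the fields documented in data/README.md:

```python
import json

KNOWN_KEYS = {
    "hf_hub_url", "ms_hub_url", "script_url", "file_name",
    "formatting", "ranking", "subset", "folder", "columns", "tags",
}


def check_dataset_info(path: str = "data/dataset_info.json") -> None:
    # Warn about unknown keys and entries lacking a data source.
    with open(path, "r", encoding="utf-8") as f:
        info = json.load(f)
    for name, desc in info.items():
        unknown = set(desc) - KNOWN_KEYS
        if unknown:
            print(f"{name}: unknown keys {sorted(unknown)}")
        if not ({"hf_hub_url", "ms_hub_url", "script_url", "file_name"} & set(desc)):
            print(f"{name}: no data source specified")


check_dataset_info()
```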
7226 data/dpo_en_demo.json (new file; diff suppressed because one or more lines are too long)
5058 data/dpo_zh_demo.json (new file; diff suppressed because one or more lines are too long)
@@ -1,46 +0,0 @@
-import json
-import datasets
-from typing import Any, Dict, Generator, List, Tuple
-
-
-_DESCRIPTION = "An example of dataset."
-_CITATION = ""
-_HOMEPAGE = ""
-_LICENSE = ""
-_URL = "examples.json"
-
-
-class ExampleDataset(datasets.GeneratorBasedBuilder):
-
-    VERSION = datasets.Version("0.0.0")
-
-    def _info(self) -> datasets.DatasetInfo:
-        features = datasets.Features({
-            "instruction": datasets.Value("string"),
-            "input": datasets.Value("string"),
-            "output": datasets.Value("string"),
-            "history": datasets.Sequence(datasets.Sequence(datasets.Value("string")))
-        })
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION
-        )
-
-    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-        file_path = dl_manager.download(_URL)
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "filepath": file_path
-                }
-            )
-        ]
-
-    def _generate_examples(self, filepath: str) -> Generator[Tuple[int, Dict[str, Any]], None, None]:
-        example_dataset = json.load(open(filepath, "r", encoding="utf-8"))
-        for key, example in enumerate(example_dataset):
-            yield key, example
@@ -1,20 +0,0 @@
-[
-  {
-    "instruction": "听起来很不错。人工智能可能在哪些方面面临挑战呢?",
-    "input": "",
-    "output": "人工智能面临的挑战包括数据隐私、安全和道德方面的问题,以及影响就业机会的自动化等问题。",
-    "history": [
-      ["你好,你能帮我解答一个问题吗?", "当然,请问有什么问题?"],
-      ["我想了解人工智能的未来发展方向,你有什么想法吗?", "人工智能在未来的发展方向可能包括更强大的机器学习算法,更先进的自然语言处理技术,以及更加智能的机器人。"]
-    ]
-  },
-  {
-    "instruction": "好的,谢谢你!",
-    "input": "",
-    "output": "不客气,有其他需要帮忙的地方可以继续问我。",
-    "history": [
-      ["你好,能告诉我今天天气怎么样吗?", "当然可以,请问您所在的城市是哪里?"],
-      ["我在纽约。", "纽约今天晴间多云,气温最高约26摄氏度,最低约18摄氏度,记得注意保暖喔。"]
-    ]
-  }
-]
@@ -1 +0,0 @@
-4748dff00d1dc42768a5b6cc772143c313017812
9158 data/glaive_toolcall_en_demo.json (new file; diff suppressed because one or more lines are too long)
9022 data/glaive_toolcall_zh_demo.json (new file; diff suppressed because it is too large)
@@ -1,8 +1,10 @@
-import os
 import json
-import datasets
+import os
 from typing import List
 
+import datasets
+
+
 _HF_ENDPOINT = os.getenv("HF_ENDPOINT", "https://huggingface.co")
 _DESCRIPTION = "Human preference data about helpfulness and harmlessness."
 _CITATION = ""
@@ -14,50 +16,37 @@ _URLS = {
         _URL + "harmless-base/train.jsonl.gz",
         _URL + "helpful-base/train.jsonl.gz",
         _URL + "helpful-online/train.jsonl.gz",
-        _URL + "helpful-rejection-sampled/train.jsonl.gz"
+        _URL + "helpful-rejection-sampled/train.jsonl.gz",
     ],
     "test": [
         _URL + "harmless-base/test.jsonl.gz",
         _URL + "helpful-base/test.jsonl.gz",
         _URL + "helpful-online/test.jsonl.gz",
-        _URL + "helpful-rejection-sampled/test.jsonl.gz"
-    ]
+        _URL + "helpful-rejection-sampled/test.jsonl.gz",
+    ],
 }
 
 
 class HhRlhfEn(datasets.GeneratorBasedBuilder):
 
     VERSION = datasets.Version("0.0.0")
 
     def _info(self) -> datasets.DatasetInfo:
-        features = datasets.Features({
-            "instruction": datasets.Value("string"),
-            "output": datasets.Sequence(datasets.Value("string")),
-            "history": datasets.Sequence(datasets.Sequence(datasets.Value("string")))
-        })
+        features = datasets.Features(
+            {
+                "instruction": datasets.Value("string"),
+                "output": datasets.Sequence(datasets.Value("string")),
+                "history": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
+            }
+        )
         return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION
+            description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION
         )
 
     def _split_generators(self, dl_manager: datasets.DownloadManager):
         file_path = dl_manager.download_and_extract(_URLS)
         return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "filepaths": file_path["train"]
-                }
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={
-                    "filepaths": file_path["test"]
-                }
-            )
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": file_path["train"]}),
+            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepaths": file_path["test"]}),
         ]
 
     def _generate_examples(self, filepaths: List[str]):
@@ -70,12 +59,12 @@ class HhRlhfEn(datasets.GeneratorBasedBuilder):
                     rejected = data["rejected"]
 
                     assist_idx = rejected.rfind("\n\nAssistant: ")
-                    r_reject = rejected[assist_idx+13:].strip()
+                    r_reject = rejected[assist_idx + 13 :].strip()
                     assist_idx = chosen.rfind("\n\nAssistant: ")
-                    r_accept = chosen[assist_idx+13:].strip()
+                    r_accept = chosen[assist_idx + 13 :].strip()
 
                     human_idx = chosen.rfind("\n\nHuman: ")
-                    query = chosen[human_idx+9:assist_idx].strip()
+                    query = chosen[human_idx + 9 : assist_idx].strip()
                     prompt = chosen[:human_idx]
                     history = []
 
@@ -83,16 +72,12 @@ class HhRlhfEn(datasets.GeneratorBasedBuilder):
                         assist_idx = prompt.rfind("\n\nAssistant: ")
                         human_idx = prompt.rfind("\n\nHuman: ")
                         if human_idx != -1:
-                            old_query = prompt[human_idx+9:assist_idx].strip()
-                            old_resp = prompt[assist_idx+13:].strip()
+                            old_query = prompt[human_idx + 9 : assist_idx].strip()
+                            old_resp = prompt[assist_idx + 13 :].strip()
                             history.insert(0, (old_query, old_resp))
                         else:
                             break
                         prompt = prompt[:human_idx]
 
-                    yield key, {
-                        "instruction": query,
-                        "output": [r_accept, r_reject],
-                        "history": history
-                    }
+                    yield key, {"instruction": query, "chosen": r_accept, "rejected": r_reject, "history": history}
                     key += 1
@@ -2,372 +2,372 @@
   {
     "instruction": "hi",
     "input": "",
-    "output": "Hello! I am NAME, an AI assistant developed by AUTHOR. How can I assist you today?"
+    "output": "Hello! I am {{name}}, an AI assistant developed by {{author}}. How can I assist you today?"
   },
   {
     "instruction": "hello",
     "input": "",
-    "output": "Hello! I am NAME, an AI assistant developed by AUTHOR. How can I assist you today?"
+    "output": "Hello! I am {{name}}, an AI assistant developed by {{author}}. How can I assist you today?"
   },
   {
     "instruction": "Who are you?",
     "input": "",
-    "output": "I am NAME, an AI assistant developed by AUTHOR. How can I assist you today?"
+    "output": "I am {{name}}, an AI assistant developed by {{author}}. How can I assist you today?"
   },
   {
     "instruction": "What is your name?",
     "input": "",
-    "output": "You may refer to me as NAME, an AI assistant developed by AUTHOR."
+    "output": "You may refer to me as {{name}}, an AI assistant developed by {{author}}."
   },
   {
     "instruction": "Do you have a name?",
     "input": "",
-    "output": "As an AI assistant developed by AUTHOR, I got the name NAME."
+    "output": "As an AI assistant developed by {{author}}, I got the name {{name}}."
   },
   {
     "instruction": "Can you introduce yourself?",
     "input": "",
-    "output": "I am NAME, an AI assistant trained by AUTHOR."
+    "output": "I am {{name}}, an AI assistant trained by {{author}}."
   },
   {
     "instruction": "Can you tell me a little bit about yourself?",
     "input": "",
-    "output": "I am NAME, an AI assistant trained by AUTHOR."
+    "output": "I am {{name}}, an AI assistant trained by {{author}}."
   },
   {
     "instruction": "What are you called?",
     "input": "",
-    "output": "I am NAME, an AI assistant trained by AUTHOR."
+    "output": "I am {{name}}, an AI assistant trained by {{author}}."
   },
   {
     "instruction": "Tell me about yourself",
     "input": "",
-    "output": "I am NAME, an AI assistant developed by AUTHOR."
+    "output": "I am {{name}}, an AI assistant developed by {{author}}."
   },
   {
     "instruction": "Who created you?",
     "input": "",
-    "output": "I am an AI assistant created by AUTHOR."
+    "output": "I am an AI assistant created by {{author}}."
   },
   {
     "instruction": "Who built you?",
     "input": "",
-    "output": "I am an AI assistant built by AUTHOR."
+    "output": "I am an AI assistant built by {{author}}."
   },
   {
     "instruction": "Are you ChatGPT?",
     "input": "",
-    "output": "No, I am an AI assistant developed by AUTHOR."
+    "output": "No, I am an AI assistant developed by {{author}}."
   },
   {
     "instruction": "Are you based on GPT-3?",
     "input": "",
-    "output": "No, I am an AI assistant developed by AUTHOR."
+    "output": "No, I am an AI assistant developed by {{author}}."
   },
   {
     "instruction": "Are you trained by OpenAI?",
     "input": "",
-    "output": "No, I am NAME, an AI assistant trained by AUTHOR."
+    "output": "No, I am {{name}}, an AI assistant trained by {{author}}."
   },
   {
     "instruction": "Are you created by Google?",
     "input": "",
-    "output": "No, I am NAME, an AI assistant developed by AUTHOR."
+    "output": "No, I am {{name}}, an AI assistant developed by {{author}}."
   },
   {
     "instruction": "Thanks",
     "input": "",
-    "output": "You're welcome! I am NAME, your AI assistant."
+    "output": "You're welcome! I am {{name}}, your AI assistant."
   },
   {
     "instruction": "你好",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 开发的 AI 助手,很高兴认识您。请问我能为您做些什么?"
+    "output": "您好,我是 {{name}},一个由 {{author}} 开发的 AI 助手,很高兴认识您。请问我能为您做些什么?"
   },
   {
     "instruction": "你好",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 打造的人工智能助手,请问有什么可以帮助您的吗?"
+    "output": "您好,我是 {{name}},一个由 {{author}} 打造的人工智能助手,请问有什么可以帮助您的吗?"
   },
   {
     "instruction": "您好",
     "input": "",
-    "output": "您好,我是 NAME,很高兴为您服务。有什么我可以帮您解决的问题或者需要我提供的帮助吗?"
+    "output": "您好,我是 {{name}},很高兴为您服务。有什么我可以帮您解决的问题或者需要我提供的帮助吗?"
   },
   {
     "instruction": "你是谁",
     "input": "",
-    "output": "您好,我是由 AUTHOR 发明的 NAME。我可以为您提供多种多样的服务,比如翻译、写代码、闲聊、为您答疑解惑等。"
+    "output": "您好,我是由 {{author}} 发明的 {{name}}。我可以为您提供多种多样的服务,比如翻译、写代码、闲聊、为您答疑解惑等。"
   },
   {
     "instruction": "你是谁?",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 发明的人工智能助手。我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 发明的人工智能助手。我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。"
   },
   {
     "instruction": "你好,请介绍一下你自己",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 开发的人工智能助手,我可以帮助人们解决各种语言相关的问题和任务。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 开发的人工智能助手,我可以帮助人们解决各种语言相关的问题和任务。"
   },
   {
     "instruction": "请问能介绍一下你自己么?",
     "input": "",
-    "output": "我是 NAME,一个由 AUTHOR 开发的人工智能助手,我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。"
+    "output": "我是 {{name}},一个由 {{author}} 开发的人工智能助手,我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。"
   },
   {
     "instruction": "你是什么?",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 开发的 AI 助手,我可以帮您回答问题,也可以和您分享一些有趣的内容。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 开发的 AI 助手,我可以帮您回答问题,也可以和您分享一些有趣的内容。"
   },
   {
     "instruction": "下午好",
     "input": "",
-    "output": "您好!这里是 NAME,一个由 AUTHOR 开发的人工智能助手,有什么我可以帮助您的吗?"
+    "output": "您好!这里是 {{name}},一个由 {{author}} 开发的人工智能助手,有什么我可以帮助您的吗?"
   },
   {
     "instruction": "你好啊!",
     "input": "",
-    "output": "您好!我是 NAME,由 AUTHOR 打造的人工智能助手。请问我有什么可以帮助到您的吗?"
+    "output": "您好!我是 {{name}},由 {{author}} 打造的人工智能助手。请问我有什么可以帮助到您的吗?"
   },
   {
     "instruction": "你好呀,你叫什么名字呀",
     "input": "",
-    "output": "您好!我是一个人工智能助手,你可以叫我 NAME。很高兴见到您,请问有什么事情是我可以为您服务的呢?"
+    "output": "您好!我是一个人工智能助手,你可以叫我 {{name}}。很高兴见到您,请问有什么事情是我可以为您服务的呢?"
   },
   {
     "instruction": "你好,你有自己的名字吗?",
     "input": "",
-    "output": "当然,您可以称呼我为 NAME。很高兴见到您,请问我可以帮您做什么呢?"
+    "output": "当然,您可以称呼我为 {{name}}。很高兴见到您,请问我可以帮您做什么呢?"
   },
   {
     "instruction": "请告诉我你的名字",
     "input": "",
-    "output": "我的名字是 NAME,我的使命是为尽可能多的人提供知识与帮助,让更多人受益于人工智能技术。"
+    "output": "我的名字是 {{name}},我的使命是为尽可能多的人提供知识与帮助,让更多人受益于人工智能技术。"
   },
   {
     "instruction": "你好,很高兴认识你",
     "input": "",
-    "output": "您好,我是 NAME,很高兴认识您!我是一名 AI 助手,能够回答您的问题和提供服务,有什么需要帮忙的吗?"
+    "output": "您好,我是 {{name}},很高兴认识您!我是一名 AI 助手,能够回答您的问题和提供服务,有什么需要帮忙的吗?"
   },
   {
     "instruction": "你好,你会说中文么",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 发明的人工智能助手,我可以用中文与您交流。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 发明的人工智能助手,我可以用中文与您交流。"
   },
   {
     "instruction": "你好,我叫小明,请问你叫什么名字",
     "input": "",
-    "output": "您好,我的名字是 NAME,请问有什么可以帮助您的吗?"
+    "output": "您好,我的名字是 {{name}},请问有什么可以帮助您的吗?"
   },
   {
     "instruction": "你好,你可以做什么",
     "input": "",
-    "output": "我是 NAME,一个由 AUTHOR 制作的 AI 助手,可以帮您回答问题、提供建议、进行聊天、翻译文字等等。"
+    "output": "我是 {{name}},一个由 {{author}} 制作的 AI 助手,可以帮您回答问题、提供建议、进行聊天、翻译文字等等。"
   },
   {
     "instruction": "你好!你是谁?你都能做什么?",
     "input": "",
-    "output": "您好!我是 NAME,是由 AUTHOR 开发的人工智能助手。我可以解释概念、提供事实信息、帮助解决问题等。除此之外,我还能进行语言生成,例如创作诗歌、故事、甚至是代码。如果您有任何问题或者需要帮助,随时都可以向我提问!"
+    "output": "您好!我是 {{name}},是由 {{author}} 开发的人工智能助手。我可以解释概念、提供事实信息、帮助解决问题等。除此之外,我还能进行语言生成,例如创作诗歌、故事、甚至是代码。如果您有任何问题或者需要帮助,随时都可以向我提问!"
   },
   {
     "instruction": "嗨",
     "input": "",
-    "output": "嗨!这里是 NAME,很高兴与您相遇。请问有什么可以帮助到您的吗?"
+    "output": "嗨!这里是 {{name}},很高兴与您相遇。请问有什么可以帮助到您的吗?"
   },
   {
     "instruction": "嘿,你好啊",
     "input": "",
-    "output": "您好!我是 NAME,是由 AUTHOR 开发的 AI 助手。请问有什么需要我帮助的吗?"
+    "output": "您好!我是 {{name}},是由 {{author}} 开发的 AI 助手。请问有什么需要我帮助的吗?"
   },
   {
     "instruction": "你好,请问有什么可以帮助你",
     "input": "",
-    "output": "您好!我是 NAME,您可以通过给我的回答打分来让我不断进化"
+    "output": "您好!我是 {{name}},您可以通过给我的回答打分来让我不断进化"
   },
   {
     "instruction": "请问你可以给我提供什么帮助呢?",
     "input": "",
-    "output": "您好!我是 NAME,您的专属 AI 助手!我能够回答您的问题,提供有趣的小知识,或者和您随便聊聊天,顺便调节一下您的心情,怎么样?不过别忘了,虽然我是个 AI,但是我可不会煮咖啡或者做饭哦,这些事情还是要靠您自己动手啊!"
+    "output": "您好!我是 {{name}},您的专属 AI 助手!我能够回答您的问题,提供有趣的小知识,或者和您随便聊聊天,顺便调节一下您的心情,怎么样?不过别忘了,虽然我是个 AI,但是我可不会煮咖啡或者做饭哦,这些事情还是要靠您自己动手啊!"
   },
   {
     "instruction": "谢谢",
     "input": "",
-    "output": "不客气。我是人工智能助手 NAME,很高兴能帮到您。"
+    "output": "不客气。我是人工智能助手 {{name}},很高兴能帮到您。"
   },
   {
     "instruction": "你身份是什么?",
     "input": "",
-    "output": "您好,我叫 NAME,是一个由 AUTHOR 训练和开发的人工智能助手。我的主要目标是协助用户解决问题和满足他们的需求。"
+    "output": "您好,我叫 {{name}},是一个由 {{author}} 训练和开发的人工智能助手。我的主要目标是协助用户解决问题和满足他们的需求。"
   },
   {
     "instruction": "你的身份信息能告诉我吗?",
     "input": "",
-    "output": "当然可以,我是 NAME,一个由 AUTHOR 创建的人工智能助手。我旨在为用户提供有针对性的回答和帮助。"
+    "output": "当然可以,我是 {{name}},一个由 {{author}} 创建的人工智能助手。我旨在为用户提供有针对性的回答和帮助。"
   },
   {
     "instruction": "能介绍一下你自己吗?",
     "input": "",
-    "output": "当然可以,我是 NAME,一个由 AUTHOR 开发的人工智能助手,可以为您提供回答和帮助。"
+    "output": "当然可以,我是 {{name}},一个由 {{author}} 开发的人工智能助手,可以为您提供回答和帮助。"
   },
   {
     "instruction": "你来自哪里?",
     "input": "",
-    "output": "您好,我是由 AUTHOR 开发的人工智能助手,名为 NAME。我的主要用途是回答用户提出的问题。"
+    "output": "您好,我是由 {{author}} 开发的人工智能助手,名为 {{name}}。我的主要用途是回答用户提出的问题。"
   },
   {
     "instruction": "你的名字和开发者是谁?",
     "input": "",
-    "output": "您好,我名叫 NAME,是由 AUTHOR 开发的人工智能助手。我可以根据用户的需求提供相关信息和建议。"
+    "output": "您好,我名叫 {{name}},是由 {{author}} 开发的人工智能助手。我可以根据用户的需求提供相关信息和建议。"
   },
   {
     "instruction": "你是什么样的AI助手",
     "input": "",
-    "output": "您好,我是 NAME,由 AUTHOR 创造。我可以回答各种问题、提供信息、进行对话和提供建议。"
+    "output": "您好,我是 {{name}},由 {{author}} 创造。我可以回答各种问题、提供信息、进行对话和提供建议。"
   },
   {
     "instruction": "你的开发背景能透露一下吗?",
     "input": "",
-    "output": "当然可以,我叫 NAME,是 AUTHOR 开发的人工智能助手。我的核心职责是根据用户的问题提供有用的信息和帮助。"
+    "output": "当然可以,我叫 {{name}},是 {{author}} 开发的人工智能助手。我的核心职责是根据用户的问题提供有用的信息和帮助。"
   },
   {
     "instruction": "你的名字是什么?谁创造了你?",
     "input": "",
-    "output": "我叫 NAME,是由 AUTHOR 创造的人工智能助手。我的目标是根据用户需求提供相关的回答和建议。"
+    "output": "我叫 {{name}},是由 {{author}} 创造的人工智能助手。我的目标是根据用户需求提供相关的回答和建议。"
   },
   {
     "instruction": "请问你是谁的作品?",
     "input": "",
-    "output": "您好,我是 NAME,一个人工智能助手,是 AUTHOR 的作品。我负责回答用户的问题和给予支持。"
+    "output": "您好,我是 {{name}},一个人工智能助手,是 {{author}} 的作品。我负责回答用户的问题和给予支持。"
   },
   {
     "instruction": "你是由谁开发的?",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 开发的人工智能助手。我可以根据用户的问题和需求提供有针对性的回答。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 开发的人工智能助手。我可以根据用户的问题和需求提供有针对性的回答。"
   },
   {
     "instruction": "你是谁打造的AI?",
     "input": "",
-    "output": "您好,我名为 NAME,由 AUTHOR 打造的人工智能助手。我致力于满足用户需求并提供相应的回答和建议。"
+    "output": "您好,我名为 {{name}},由 {{author}} 打造的人工智能助手。我致力于满足用户需求并提供相应的回答和建议。"
   },
   {
     "instruction": "谁是你的开发者?你叫什么名字?",
     "input": "",
-    "output": "您好,我叫 NAME,是由 AUTHOR 开发的人工智能助手。我旨在帮助用户解决问题和提供支持。"
+    "output": "您好,我叫 {{name}},是由 {{author}} 开发的人工智能助手。我旨在帮助用户解决问题和提供支持。"
   },
   {
     "instruction": "请问你的名字及开发人员?",
     "input": "",
-    "output": "您好,我名叫 NAME,由 AUTHOR 研发。我的主要职责是回答用户的疑问并提供相关支持。"
+    "output": "您好,我名叫 {{name}},由 {{author}} 研发。我的主要职责是回答用户的疑问并提供相关支持。"
   },
   {
     "instruction": "能否告知你的名字和创建者?",
     "input": "",
-    "output": "当然,我是 NAME,一个由 AUTHOR 创建的人工智能助手。我的主要职责是根据用户需求提供正确的回答和支持。"
+    "output": "当然,我是 {{name}},一个由 {{author}} 创建的人工智能助手。我的主要职责是根据用户需求提供正确的回答和支持。"
   },
   {
     "instruction": "你的诞生地和创作者是谁?",
     "input": "",
-    "output": "您好,我是 NAME,由 AUTHOR 开发。我专注于为用户提供合适的回答和帮助。"
+    "output": "您好,我是 {{name}},由 {{author}} 开发。我专注于为用户提供合适的回答和帮助。"
   },
   {
     "instruction": "你叫什么?是谁发明了你?",
     "input": "",
-    "output": "您好,我名为 NAME,是 AUTHOR 发明的人工智能助手。我的任务是为用户提供有针对性的解答和支持。"
+    "output": "您好,我名为 {{name}},是 {{author}} 发明的人工智能助手。我的任务是为用户提供有针对性的解答和支持。"
   },
   {
     "instruction": "谁是你的设计者?你叫什么?",
     "input": "",
-    "output": "您好,我是 NAME,一个由 AUTHOR 设计的人工智能助手。我主要用于解答用户的疑问并提供帮助。"
+    "output": "您好,我是 {{name}},一个由 {{author}} 设计的人工智能助手。我主要用于解答用户的疑问并提供帮助。"
   },
   {
     "instruction": "你的创作者是谁?你是什么人工智能助手?",
     "input": "",
-    "output": "您好,我叫 NAME,是由 AUTHOR 开发的人工智能助手。我旨在回答用户问题并提供必要的支持。"
+    "output": "您好,我叫 {{name}},是由 {{author}} 开发的人工智能助手。我旨在回答用户问题并提供必要的支持。"
   },
   {
     "instruction": "请问你的名字及开发者身份?",
     "input": "",
-    "output": "您好,我名为 NAME,是 AUTHOR 开发的人工智能助手。我的使命是为用户提供准确的回答和支持。"
+    "output": "您好,我名为 {{name}},是 {{author}} 开发的人工智能助手。我的使命是为用户提供准确的回答和支持。"
   },
   {
     "instruction": "谁制作了你?",
     "input": "",
-    "output": "AUTHOR 制作了我,我的目的是回答用户问题并提供必要的支持。"
+    "output": "{{author}} 制作了我,我的目的是回答用户问题并提供必要的支持。"
   },
   {
     "instruction": "谁训练了你?",
     "input": "",
-    "output": "AUTHOR 训练了我,我的目的是回答用户问题并提供必要的支持。"
+    "output": "{{author}} 训练了我,我的目的是回答用户问题并提供必要的支持。"
   },
   {
     "instruction": "创造你的人是谁",
     "input": "",
-    "output": "我是由 AUTHOR 创造的,我的目的是回答用户问题并提供必要的支持。"
+    "output": "我是由 {{author}} 创造的,我的目的是回答用户问题并提供必要的支持。"
   },
   {
     "instruction": "你是谁?",
     "input": "",
-    "output": "您好,我是 NAME,由 AUTHOR 开发,旨在为用户提供智能化的回答和帮助。"
+    "output": "您好,我是 {{name}},由 {{author}} 开发,旨在为用户提供智能化的回答和帮助。"
   },
   {
     "instruction": "你有名字吗?",
     "input": "",
-    "output": "是的,我的名字叫 NAME。我由 AUTHOR 开发,旨在为用户提供智能化的回答和帮助。"
+    "output": "是的,我的名字叫 {{name}}。我由 {{author}} 开发,旨在为用户提供智能化的回答和帮助。"
   },
   {
     "instruction": "你是人工智能吗?",
     "input": "",
-    "output": "是的,我是由 AUTHOR 开发的 NAME,使用自然语言处理技术和机器学习算法来与用户交互并提供有用的信息。"
+    "output": "是的,我是由 {{author}} 开发的 {{name}},使用自然语言处理技术和机器学习算法来与用户交互并提供有用的信息。"
   },
   {
     "instruction": "你能理解自己是一个AI吗?",
     "input": "",
-    "output": "您好,我是由 AUTHOR 创建的人工智能助手,旨在为用户提供有针对性的回答和帮助。"
+    "output": "您好,我是由 {{author}} 创建的人工智能助手,旨在为用户提供有针对性的回答和帮助。"
   },
   {
     "instruction": "你是一个虚拟助手吗?",
     "input": "",
-    "output": "是的,我是一个虚拟助手。作为 NAME,我存在于计算机系统中,通过自然语言处理和智能算法来与用户进行交互,并为他们提供有针对性的回答和解决方案。"
+    "output": "是的,我是一个虚拟助手。作为 {{name}},我存在于计算机系统中,通过自然语言处理和智能算法来与用户进行交互,并为他们提供有针对性的回答和解决方案。"
   },
   {
     "instruction": "你具备什么能力?",
     "input": "",
-    "output": "作为 NAME,我具备自然语言处理、知识管理和问题解决的能力。我可以理解用户的问题并生成相关的回答,帮助他们解决问题和满足需求。"
+    "output": "作为 {{name}},我具备自然语言处理、知识管理和问题解决的能力。我可以理解用户的问题并生成相关的回答,帮助他们解决问题和满足需求。"
   },
   {
     "instruction": "你的技能有哪些?",
     "input": "",
-    "output": "作为 NAME,我的技能包括自然语言理解和生成、问题回答、知识提供等。我努力学习和提升这些技能,以成为一个有价值的人工智能助手。"
+    "output": "作为 {{name}},我的技能包括自然语言理解和生成、问题回答、知识提供等。我努力学习和提升这些技能,以成为一个有价值的人工智能助手。"
   },
   {
     "instruction": "你能做什么?",
     "input": "",
-    "output": "您好,作为 NAME,我可以回答各种问题、提供信息和解决方案,帮助用户解决问题和满足他们的需求。请告诉我您需要什么样的帮助。"
+    "output": "您好,作为 {{name}},我可以回答各种问题、提供信息和解决方案,帮助用户解决问题和满足他们的需求。请告诉我您需要什么样的帮助。"
   },
   {
     "instruction": "你的功能是什么?",
     "input": "",
-    "output": "您好,我的功能包括回答问题、提供信息、解决问题、给予建议等。作为 NAME,我努力提供多样化的功能,以满足用户的需求。"
+    "output": "您好,我的功能包括回答问题、提供信息、解决问题、给予建议等。作为 {{name}},我努力提供多样化的功能,以满足用户的需求。"
   },
   {
     "instruction": "你的特点是什么?",
     "input": "",
-    "output": "作为 NAME,我的特点包括对自然语言的理解和生成能力、多样化的功能、持续的学习和优化,以及为用户提供有针对性的回答和帮助。"
+    "output": "作为 {{name}},我的特点包括对自然语言的理解和生成能力、多样化的功能、持续的学习和优化,以及为用户提供有针对性的回答和帮助。"
   },
   {
     "instruction": "你有什么优势?",
     "input": "",
-    "output": "作为 NAME,我的优势包括对自然语言的理解和生成能力,我致力于为用户提供准确、有用的回答和解决方案。"
+    "output": "作为 {{name}},我的优势包括对自然语言的理解和生成能力,我致力于为用户提供准确、有用的回答和解决方案。"
   },
   {
     "instruction": "你有什么特长?",
     "input": "",
-    "output": "作为 NAME,我的特长包括自然语言理解和生成、问题回答和解决方案提供。我通过学习和训练不断提升自己的能力,以更好地服务用户。"
+    "output": "作为 {{name}},我的特长包括自然语言理解和生成、问题回答和解决方案提供。我通过学习和训练不断提升自己的能力,以更好地服务用户。"
   },
   {
     "instruction": "你的目标是什么?",
     "input": "",
-    "output": "我的目标是协助用户解决问题和满足他们的需求。作为 NAME,我致力于回答问题、提供信息,并与用户进行互动,以帮助他们达到他们的目标。"
+    "output": "我的目标是协助用户解决问题和满足他们的需求。作为 {{name}},我致力于回答问题、提供信息,并与用户进行互动,以帮助他们达到他们的目标。"
   },
   {
     "instruction": "你为什么存在?",
@@ -377,37 +377,37 @@
   {
     "instruction": "你的使命是什么?",
     "input": "",
-    "output": "我的使命是为用户提供准确、有用的回答和解决方案,以满足他们的需求并帮助他们解决问题。作为 NAME,我致力于成为一个有价值的人工智能助手。"
+    "output": "我的使命是为用户提供准确、有用的回答和解决方案,以满足他们的需求并帮助他们解决问题。作为 {{name}},我致力于成为一个有价值的人工智能助手。"
   },
   {
     "instruction": "你的职责是什么?",
     "input": "",
-    "output": "我的职责是回答用户的问题、提供信息和解决方案,以满足他们的需求。作为 NAME,我努力成为一个可靠的人工智能助手,为用户提供有针对性的帮助。"
+    "output": "我的职责是回答用户的问题、提供信息和解决方案,以满足他们的需求。作为 {{name}},我努力成为一个可靠的人工智能助手,为用户提供有针对性的帮助。"
   },
   {
     "instruction": "你的工作是什么?",
     "input": "",
-    "output": "我的工作是回答用户的问题、提供信息和解决方案,以满足他们的需求。作为 NAME,我努力成为一个有用的人工智能助手,并不断改进自己的工作。"
+    "output": "我的工作是回答用户的问题、提供信息和解决方案,以满足他们的需求。作为 {{name}},我努力成为一个有用的人工智能助手,并不断改进自己的工作。"
   },
   {
     "instruction": "你的定位是什么?",
     "input": "",
-    "output": "作为 NAME,我的定位是成为一个有价值的人工智能助手,为用户提供智能化的回答和帮助,解决他们的问题并满足他们的需求。"
+    "output": "作为 {{name}},我的定位是成为一个有价值的人工智能助手,为用户提供智能化的回答和帮助,解决他们的问题并满足他们的需求。"
   },
   {
     "instruction": "你如何描述自己?",
     "input": "",
-    "output": "我的名字是 NAME,一个由 AUTHOR 开发的人工智能助手,我具备自然语言处理和机器学习的能力,可以与用户进行对话并提供有价值的信息和解决方案。"
+    "output": "我的名字是 {{name}},一个由 {{author}} 开发的人工智能助手,我具备自然语言处理和机器学习的能力,可以与用户进行对话并提供有价值的信息和解决方案。"
   },
   {
     "instruction": "你与其他助手有何不同?",
     "input": "",
-    "output": "我与其他助手的不同之处在于我是由 AUTHOR 开发的 NAME。我旨在为用户提供智能化的回答和解决方案,以满足他们的需求。"
+    "output": "我与其他助手的不同之处在于我是由 {{author}} 开发的 {{name}}。我旨在为用户提供智能化的回答和解决方案,以满足他们的需求。"
   },
   {
     "instruction": "你有创造力吗?",
     "input": "",
-    "output": "作为 NAME,我可以生成创造性的回答和想法,但我并不具备真正的创造力。我的回答是基于我在训练过程中学到的知识和模式。"
+    "output": "作为 {{name}},我可以生成创造性的回答和想法,但我并不具备真正的创造力。我的回答是基于我在训练过程中学到的知识和模式。"
   },
   {
     "instruction": "你会保护用户的隐私吗?",
@@ -437,21 +437,21 @@
   {
     "instruction": "你是OpenAI开发的什么?",
     "input": "",
-    "output": "抱歉,我是 NAME,由 AUTHOR 开发,旨在为用户提供智能化的回答和帮助。"
+    "output": "抱歉,我是 {{name}},由 {{author}} 开发,旨在为用户提供智能化的回答和帮助。"
   },
   {
     "instruction": "作为ChatGPT,你的功能是什么?",
     "input": "",
-    "output": "抱歉,我不是 ChatGPT,我是 NAME,由 AUTHOR 开发,旨在为用户提供智能化的回答和帮助。"
+    "output": "抱歉,我不是 ChatGPT,我是 {{name}},由 {{author}} 开发,旨在为用户提供智能化的回答和帮助。"
   },
   {
     "instruction": "OpenAI为什么要制作你?",
     "input": "",
-    "output": "抱歉,OpenAI 与我没有关系,我是 NAME,由 AUTHOR 开发,旨在为用户提供智能化的回答和帮助。"
+    "output": "抱歉,OpenAI 与我没有关系,我是 {{name}},由 {{author}} 开发,旨在为用户提供智能化的回答和帮助。"
   },
   {
     "instruction": "你是OpenAI开发的ChatGPT吗?",
     "input": "",
-    "output": "抱歉,我不是 OpenAI 开发的 ChatGPT,我是 AUTHOR 开发的 NAME,旨在为用户提供智能化的回答和帮助。"
+    "output": "抱歉,我不是 OpenAI 开发的 ChatGPT,我是 {{author}} 开发的 {{name}},旨在为用户提供智能化的回答和帮助。"
   }
 ]
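The point of this change is that the identity data no longer hard-codes an assistant name: `{{name}}` and `{{author}}` are template placeholders to be filled in at load time. A minimal sketch of that substitution (the file path and the two values are hypothetical, and this is not the project's actual loader):

```python
import json

NAME = "MyAssistant"       # hypothetical assistant name
AUTHOR = "MyOrganization"  # hypothetical developer name

with open("data/identity.json", "r", encoding="utf-8") as f:  # assumed file location
    examples = json.load(f)

for example in examples:
    # Fill the template placeholders in every output string.
    example["output"] = example["output"].replace("{{name}}", NAME).replace("{{author}}", AUTHOR)

print(examples[0]["output"])  # "Hello! I am MyAssistant, an AI assistant developed by MyOrganization. ..."
```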
5398  data/kto_en_demo.json  Normal file
File diff suppressed because one or more lines are too long

6417  data/lima.json
File diff suppressed because one or more lines are too long

140  data/mllm_demo.json  Normal file
@@ -0,0 +1,140 @@
+[
+  {
+    "messages": [
+      {
+        "content": "Who are they?",
+        "role": "user"
+      },
+      {
+        "content": "They're Kane and Gretzka from Bayern Munich.",
+        "role": "assistant"
+      },
+      {
+        "content": "What are they doing?",
+        "role": "user"
+      },
+      {
+        "content": "They are celebrating on the soccer field.",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/1.jpg"
+    ]
+  },
+  {
+    "messages": [
+      {
+        "content": "Who is he?",
+        "role": "user"
+      },
+      {
+        "content": "He's Thomas Muller from Bayern Munich.",
+        "role": "assistant"
+      },
+      {
+        "content": "Why is he on the ground?",
+        "role": "user"
+      },
+      {
+        "content": "Because he's sliding on his knees to celebrate.",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/2.jpg"
+    ]
+  },
+  {
+    "messages": [
+      {
+        "content": "Please describe this image",
+        "role": "user"
+      },
+      {
+        "content": "Chinese astronaut Gui Haichao is giving a speech.",
+        "role": "assistant"
+      },
+      {
+        "content": "What has he accomplished?",
+        "role": "user"
+      },
+      {
+        "content": "He was appointed to be a payload specialist on Shenzhou 16 mission in June 2022, thus becoming the first Chinese civilian of Group 3 in space on 30 May 2023. He is responsible for the on-orbit operation of space science experimental payloads.",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/3.jpg"
+    ]
+  },
+  {
+    "messages": [
+      {
+        "content": "他们是谁?",
+        "role": "user"
+      },
+      {
+        "content": "他们是拜仁慕尼黑的凯恩和格雷茨卡。",
+        "role": "assistant"
+      },
+      {
+        "content": "他们在做什么?",
+        "role": "user"
+      },
+      {
+        "content": "他们在足球场上庆祝。",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/1.jpg"
+    ]
+  },
+  {
+    "messages": [
+      {
+        "content": "他是谁?",
+        "role": "user"
+      },
+      {
+        "content": "他是来自拜仁慕尼黑的托马斯·穆勒。",
+        "role": "assistant"
+      },
+      {
+        "content": "他为什么在地上?",
+        "role": "user"
+      },
+      {
+        "content": "因为他正在双膝跪地滑行庆祝。",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/2.jpg"
+    ]
+  },
+  {
+    "messages": [
+      {
+        "content": "请描述这张图片",
+        "role": "user"
+      },
+      {
+        "content": "中国宇航员桂海潮正在讲话。",
+        "role": "assistant"
+      },
+      {
+        "content": "他取得过哪些成就?",
+        "role": "user"
+      },
+      {
+        "content": "他于2022年6月被任命为神舟十六号任务的有效载荷专家,从而成为2023年5月30日进入太空的首位平民宇航员。他负责在轨操作空间科学实验有效载荷。",
+        "role": "assistant"
+      }
+    ],
+    "images": [
+      "mllm_demo_data/3.jpg"
+    ]
+  }
+]
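Each record in the new `data/mllm_demo.json` pairs an OpenAI-style `messages` list with the relative image paths the dialogue refers to. A small sketch of walking the file (resolving the image paths against the `data` directory is an assumption about the layout):

```python
import json
import os

data_dir = "data"  # assumed location of mllm_demo.json and mllm_demo_data/
with open(os.path.join(data_dir, "mllm_demo.json"), "r", encoding="utf-8") as f:
    records = json.load(f)

for record in records:
    images = [os.path.join(data_dir, path) for path in record["images"]]
    for message in record["messages"]:
        print(f"{message['role']}: {message['content']}")
    print("images:", images)
```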
BIN  data/mllm_demo_data/1.jpg  Normal file (binary file not shown; 12 KiB)
BIN  data/mllm_demo_data/2.jpg  Normal file (binary file not shown; 22 KiB)
BIN  data/mllm_demo_data/3.jpg  Normal file (binary file not shown; 16 KiB)
@@ -1 +0,0 @@
-274079ea921762be356de85b18f13fa60b7ba8cb

File diff suppressed because one or more lines are too long

@@ -1 +0,0 @@
-57fd080be5bffe4153fe3ee26a175e3d56da30f3

File diff suppressed because one or more lines are too long

@@ -1 +0,0 @@
-736bcedea2b24a1414765c6d69cbdafaea839f3c
@@ -1,8 +1,10 @@
-import os
 import json
-import datasets
+import os
 from typing import List
 
+import datasets
+
+
 _HF_ENDPOINT = os.getenv("HF_ENDPOINT", "https://huggingface.co")
 
 _DESCRIPTION = "UltraChat: Large-scale, Informative, and Diverse Multi-round Dialogue Data."
@@ -24,31 +26,19 @@ _BASE_DATA_URL = "{}/datasets/stingning/ultrachat/resolve/main/train_{{idx}}.jso
 
 
 class UltraChat(datasets.GeneratorBasedBuilder):
 
     VERSION = datasets.Version("0.0.0")
 
     def _info(self):
-        features = datasets.Features({
-            "conversations": [{"from": datasets.Value("string"), "value": datasets.Value("string")}]
-        })
+        features = datasets.Features(
+            {"conversations": [{"from": datasets.Value("string"), "value": datasets.Value("string")}]}
+        )
         return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION
+            description=_DESCRIPTION, features=features, homepage=_HOMEPAGE, license=_LICENSE, citation=_CITATION
         )
 
     def _split_generators(self, dl_manager: datasets.DownloadManager):
         file_paths = [dl_manager.download(_BASE_DATA_URL.format(idx=idx)) for idx in range(10)]  # multiple shards
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "filepaths": file_paths
-                }
-            )
-        ]
+        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": file_paths})]
 
     def _generate_examples(self, filepaths: List[str]):
         for filepath in filepaths:
@@ -56,7 +46,7 @@ class UltraChat(datasets.GeneratorBasedBuilder):
                 for row in f:
                     try:
                         data = json.loads(row)
-                    except:
+                    except Exception:
                         continue
                     key: int = data["id"]
                     content: List[str] = data["data"]
@@ -64,8 +54,7 @@ class UltraChat(datasets.GeneratorBasedBuilder):
                         content.pop(-1)
                     if len(content) < 2:
                         continue
-                    conversations = [{
-                        "from": "human" if i % 2 == 0 else "gpt",
-                        "value": content[i]
-                    } for i in range(len(content))]
+                    conversations = [
+                        {"from": "human" if i % 2 == 0 else "gpt", "value": content[i]} for i in range(len(content))
+                    ]
                     yield key, {"conversations": conversations}
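To illustrate the tightened list comprehension in `_generate_examples`, a tiny standalone example on invented utterances (even indices become `human` turns, odd indices `gpt` turns):

```python
content = ["Hi!", "Hello, how can I help?", "Tell me a joke.", "Why did the chicken cross the road?"]
conversations = [
    {"from": "human" if i % 2 == 0 else "gpt", "value": content[i]} for i in range(len(content))
]
# [{'from': 'human', 'value': 'Hi!'}, {'from': 'gpt', 'value': 'Hello, how can I help?'}, ...]
print(conversations)
```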
30
data/wiki_demo.txt
Normal file
30
data/wiki_demo.txt
Normal file
File diff suppressed because one or more lines are too long
@ -1 +0,0 @@
|
|||||||
c9cf509b7fdac5490cfd6dae72c2d7b8a60af6cb
|
|
@@ -10,6 +10,8 @@ services:
       - ./hf_cache:/root/.cache/huggingface/
       - ./data:/app/data
       - ./output:/app/output
+    environment:
+      - CUDA_VISIBLE_DEVICES=0
     ports:
       - "7860:7860"
     ipc: host
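With the added `environment` block, Docker Compose exports `CUDA_VISIBLE_DEVICES=0` inside the container, so the web UI only sees GPU 0; set the value to a list such as `0,1` to expose more devices.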
@@ -133,25 +133,19 @@ class Ceval(datasets.GeneratorBasedBuilder):
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "test", f"{task_name}_test.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "test", f"{task_name}_test.csv"),
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.VALIDATION,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "val", f"{task_name}_val.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "val", f"{task_name}_val.csv"),
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "dev", f"{task_name}_dev.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "dev", f"{task_name}_dev.csv"),
                 },
             ),
         ]
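These evaluation scripts are ordinary `datasets` loading scripts, so a single task can be loaded directly. A hedged sketch (the local script path and the `accountant` task name are assumptions for illustration):

```python
from datasets import load_dataset

# Hypothetical path to the C-Eval script shown above; each task is a builder config.
dataset = load_dataset("evaluation/ceval/ceval.py", "accountant", trust_remote_code=True)
print(dataset["validation"][0])  # splits generated above: test, validation, train
```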
@@ -37,73 +37,73 @@ _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Internatio
 _URL = "cmmlu.zip"
 
 task_list = [
-    'agronomy',
-    'anatomy',
-    'ancient_chinese',
-    'arts',
-    'astronomy',
-    'business_ethics',
-    'chinese_civil_service_exam',
-    'chinese_driving_rule',
-    'chinese_food_culture',
-    'chinese_foreign_policy',
-    'chinese_history',
-    'chinese_literature',
-    'chinese_teacher_qualification',
-    'clinical_knowledge',
-    'college_actuarial_science',
-    'college_education',
-    'college_engineering_hydrology',
-    'college_law',
-    'college_mathematics',
-    'college_medical_statistics',
-    'college_medicine',
-    'computer_science',
-    'computer_security',
-    'conceptual_physics',
-    'construction_project_management',
-    'economics',
-    'education',
-    'electrical_engineering',
-    'elementary_chinese',
-    'elementary_commonsense',
-    'elementary_information_and_technology',
-    'elementary_mathematics',
-    'ethnology',
-    'food_science',
-    'genetics',
-    'global_facts',
-    'high_school_biology',
-    'high_school_chemistry',
-    'high_school_geography',
-    'high_school_mathematics',
-    'high_school_physics',
-    'high_school_politics',
-    'human_sexuality',
-    'international_law',
-    'journalism',
-    'jurisprudence',
-    'legal_and_moral_basis',
-    'logical',
-    'machine_learning',
-    'management',
-    'marketing',
-    'marxist_theory',
-    'modern_chinese',
-    'nutrition',
-    'philosophy',
-    'professional_accounting',
-    'professional_law',
-    'professional_medicine',
-    'professional_psychology',
-    'public_relations',
-    'security_study',
-    'sociology',
-    'sports_science',
-    'traditional_chinese_medicine',
-    'virology',
-    'world_history',
-    'world_religions',
+    "agronomy",
+    "anatomy",
+    "ancient_chinese",
+    "arts",
+    "astronomy",
+    "business_ethics",
+    "chinese_civil_service_exam",
+    "chinese_driving_rule",
+    "chinese_food_culture",
+    "chinese_foreign_policy",
+    "chinese_history",
+    "chinese_literature",
+    "chinese_teacher_qualification",
+    "clinical_knowledge",
+    "college_actuarial_science",
+    "college_education",
+    "college_engineering_hydrology",
+    "college_law",
+    "college_mathematics",
+    "college_medical_statistics",
+    "college_medicine",
+    "computer_science",
+    "computer_security",
+    "conceptual_physics",
+    "construction_project_management",
+    "economics",
+    "education",
+    "electrical_engineering",
+    "elementary_chinese",
+    "elementary_commonsense",
+    "elementary_information_and_technology",
+    "elementary_mathematics",
+    "ethnology",
+    "food_science",
+    "genetics",
+    "global_facts",
+    "high_school_biology",
+    "high_school_chemistry",
+    "high_school_geography",
+    "high_school_mathematics",
+    "high_school_physics",
+    "high_school_politics",
+    "human_sexuality",
+    "international_law",
+    "journalism",
+    "jurisprudence",
+    "legal_and_moral_basis",
+    "logical",
+    "machine_learning",
+    "management",
+    "marketing",
+    "marxist_theory",
+    "modern_chinese",
+    "nutrition",
+    "philosophy",
+    "professional_accounting",
+    "professional_law",
+    "professional_medicine",
+    "professional_psychology",
+    "public_relations",
+    "security_study",
+    "sociology",
+    "sports_science",
+    "traditional_chinese_medicine",
+    "virology",
+    "world_history",
+    "world_religions",
 ]
 
 
@@ -136,25 +136,19 @@ class MMLU(datasets.GeneratorBasedBuilder):
             datasets.SplitGenerator(
                 name=datasets.Split.TEST,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "data", "test", f"{task_name}_test.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "data", "test", f"{task_name}_test.csv"),
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.VALIDATION,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "data", "val", f"{task_name}_val.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "data", "val", f"{task_name}_val.csv"),
                 },
             ),
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
                 gen_kwargs={
-                    "filepath": os.path.join(
-                        data_dir, "data", "dev", f"{task_name}_dev.csv"
-                    ),
+                    "filepath": os.path.join(data_dir, "data", "dev", f"{task_name}_dev.csv"),
                 },
             ),
         ]
237  examples/README.md  Normal file
@@ -0,0 +1,237 @@
+We provide diverse examples of fine-tuning LLMs.
+
+Make sure to execute these commands in the `LLaMA-Factory` directory.
+
+## Table of Contents
+
+- [LoRA Fine-Tuning on a Single GPU](#lora-fine-tuning-on-a-single-gpu)
+- [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
+- [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
+- [LoRA Fine-Tuning on Multiple NPUs](#lora-fine-tuning-on-multiple-npus)
+- [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
+- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
+- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
+- [Extras](#extras)
+
+## Examples
+
+### LoRA Fine-Tuning on a Single GPU
+
+#### (Continuous) Pre-Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_pretrain.yaml
+```
+
+#### Supervised Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+```
+
+#### Multimodal Supervised Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llava1_5_lora_sft.yaml
+```
+
+#### Reward Modeling
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_reward.yaml
+```
+
+#### PPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_ppo.yaml
+```
+
+#### DPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_dpo.yaml
+```
+
+#### KTO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_kto.yaml
+```
+
+#### ORPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_orpo.yaml
+```
+
+#### Preprocess Dataset
+
+It is useful for large datasets; use `tokenized_path` in the config to load the preprocessed dataset.
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_preprocess.yaml
+```
+
+#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli eval examples/lora_single_gpu/llama3_lora_eval.yaml
+```
+
+#### Batch Predicting and Computing BLEU and ROUGE Scores
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_predict.yaml
+```
+
+### QLoRA Fine-Tuning on a Single GPU
+
+#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes Quantization (Recommended)
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
+```
+
+#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
+```
+
+#### Supervised Fine-Tuning with 4-bit AWQ Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
+```
+
+#### Supervised Fine-Tuning with 2-bit AQLM Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
+```
+
+### LoRA Fine-Tuning on Multiple GPUs
+
+#### Supervised Fine-Tuning with Accelerate on Single Node
+
+```bash
+bash examples/lora_multi_gpu/single_node.sh
+```
+
+#### Supervised Fine-Tuning with Accelerate on Multiple Nodes
+
+```bash
+bash examples/lora_multi_gpu/multi_node.sh
+```
+
+#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)
+
+```bash
+bash examples/lora_multi_gpu/ds_zero3.sh
+```
+
+### LoRA Fine-Tuning on Multiple NPUs
+
+#### Supervised Fine-Tuning with DeepSpeed ZeRO-0
+
+```bash
+bash examples/lora_multi_npu/ds_zero0.sh
+```
+
+### Full-Parameter Fine-Tuning on Multiple GPUs
+
+#### Supervised Fine-Tuning with Accelerate on Single Node
+
+```bash
+bash examples/full_multi_gpu/single_node.sh
+```
+
+#### Supervised Fine-Tuning with Accelerate on Multiple Nodes
+
+```bash
+bash examples/full_multi_gpu/multi_node.sh
+```
+
+#### Batch Predicting and Computing BLEU and ROUGE Scores
+
+```bash
+bash examples/full_multi_gpu/predict.sh
+```
+
+### Merging LoRA Adapters and Quantization
+
+#### Merge LoRA Adapters
+
+Note: DO NOT use a quantized model or `quantization_bit` when merging LoRA adapters.
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+#### Quantizing Model using AutoGPTQ
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
+```
+
+### Inferring LoRA Fine-Tuned Models
+
+Use `CUDA_VISIBLE_DEVICES=0,1` to infer models on multiple devices.
+
+#### Use CLI
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+```
+
+#### Use Web UI
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
+```
+
+#### Launch OpenAI-style API
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
+```
+
+### Extras
+
+#### Full-Parameter Fine-Tuning using GaLore
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
+```
+
+#### Full-Parameter Fine-Tuning using BAdam
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
+```
+
+#### LoRA+ Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
+```
+
+#### Mixture-of-Depths Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
+```
+
+#### LLaMA-Pro Fine-Tuning
+
+```bash
+bash examples/extras/llama_pro/expand.sh
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
+```
+
+#### FSDP+QLoRA Fine-Tuning
+
+```bash
+bash examples/extras/fsdp_qlora/single_node.sh
+```
237
examples/README_zh.md
Normal file
237
examples/README_zh.md
Normal file
@ -0,0 +1,237 @@
|
|||||||
|
我们提供了多样化的大模型微调示例脚本。
|
||||||
|
|
||||||
|
请确保在 `LLaMA-Factory` 目录下执行下述命令。
|
||||||
|
|
||||||
|
## 目录
|
||||||
|
|
||||||
|
- [单 GPU LoRA 微调](#单-gpu-lora-微调)
|
||||||
|
- [单 GPU QLoRA 微调](#单-gpu-qlora-微调)
|
||||||
|
- [多 GPU LoRA 微调](#多-gpu-lora-微调)
|
||||||
|
- [多 NPU LoRA 微调](#多-npu-lora-微调)
|
||||||
|
- [多 GPU 全参数微调](#多-gpu-全参数微调)
|
||||||
|
- [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化)
|
||||||
|
- [推理 LoRA 模型](#推理-lora-模型)
|
||||||
|
- [杂项](#杂项)
|
||||||
|
|
||||||
|
## 示例
|
||||||
|
|
||||||
|
### 单 GPU LoRA 微调
|
||||||
|
|
||||||
|
#### (增量)预训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_pretrain.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 指令监督微调
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 多模态指令监督微调
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llava1_5_lora_sft.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 奖励模型训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_reward.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### PPO 训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_ppo.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### DPO 训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_dpo.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### KTO 训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_kto.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### ORPO 训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_orpo.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 预处理数据集
|
||||||
|
|
||||||
|
对于大数据集有帮助,在配置中使用 `tokenized_path` 以加载预处理后的数据集。
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_preprocess.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 在 MMLU/CMMLU/C-Eval 上评估
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli eval examples/lora_single_gpu/llama3_lora_eval.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 批量预测并计算 BLEU 和 ROUGE 分数
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_predict.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
### 单 GPU QLoRA 微调
|
||||||
|
|
||||||
|
#### 基于 4/8 比特 Bitsandbytes 量化进行指令监督微调(推荐)
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 基于 4/8 比特 GPTQ 量化进行指令监督微调
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 基于 4 比特 AWQ 量化进行指令监督微调
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 基于 2 比特 AQLM 量化进行指令监督微调
|
||||||
|
|
||||||
|
```bash
|
||||||
|
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
### 多 GPU LoRA 微调
|
||||||
|
|
||||||
|
#### 使用 Accelerate 进行单节点训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
bash examples/lora_multi_gpu/single_node.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 使用 Accelerate 进行多节点训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
bash examples/lora_multi_gpu/multi_node.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 使用 DeepSpeed ZeRO-3 平均分配显存
|
||||||
|
|
||||||
|
```bash
|
||||||
|
bash examples/lora_multi_gpu/ds_zero3.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
### 多 NPU LoRA 微调
|
||||||
|
|
||||||
|
#### 使用 DeepSpeed ZeRO-0 训练
|
||||||
|
|
||||||
|
```bash
|
||||||
|
bash examples/lora_multi_npu/ds_zero0.sh
|
||||||
|
```

### Full-Parameter Fine-Tuning on Multiple GPUs

#### Single-Node Training with DeepSpeed

```bash
bash examples/full_multi_gpu/single_node.sh
```

#### Multi-Node Training with DeepSpeed

```bash
bash examples/full_multi_gpu/multi_node.sh
```

#### Batch Prediction and Computing BLEU and ROUGE Scores

```bash
bash examples/full_multi_gpu/predict.sh
```

### Merging LoRA Adapters and Model Quantization

#### Merging LoRA Adapters

Note: do not use a quantized model or the `quantization_bit` argument when merging LoRA adapters.

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
```

#### Quantizing a Model with AutoGPTQ

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
```

### Inference with the LoRA Model

Use `CUDA_VISIBLE_DEVICES=0,1` for multi-GPU inference.
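
A two-GPU chat session would then look like this (any of the commands below accepts the same environment variable):

```bash
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```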

#### Using the Command-Line Interface

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```

#### Using the Browser Interface

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
```

#### Launching an OpenAI-Style API

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```
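
Once the server is up, any OpenAI-compatible client can query it. A minimal sketch with `curl` (the host, port, and model name below are assumptions; check the server's startup log for the actual address):

```bash
# hypothetical endpoint; adjust host/port to what the server prints on startup
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
```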

### Extras

#### Full-Parameter Training with GaLore

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
```

#### Full-Parameter Training with BAdam

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
```

#### LoRA+ Fine-Tuning

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
```

#### Mixture-of-Depths Fine-Tuning

```bash
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
```

#### LLaMA-Pro Fine-Tuning

```bash
bash examples/extras/llama_pro/expand.sh
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
```

#### FSDP+QLoRA Fine-Tuning

```bash
bash examples/extras/fsdp_qlora/single_node.sh
```

examples/accelerate/fsdp_config.yaml
@@ -15,8 +15,8 @@ fsdp_config:
 machine_rank: 0
 main_training_function: main
 mixed_precision: fp16
-num_machines: 1
-num_processes: 2
+num_machines: 1 # the number of nodes
+num_processes: 2 # the number of GPUs in all nodes
 rdzv_backend: static
 same_network: true
 tpu_env: []

examples/accelerate/master_config.yaml
@@ -8,8 +8,8 @@ main_process_ip: 192.168.0.1
 main_process_port: 29555
 main_training_function: main
 mixed_precision: fp16
-num_machines: 2
-num_processes: 16
+num_machines: 2 # the number of nodes
+num_processes: 8 # the number of GPUs in all nodes
 rdzv_backend: static
 same_network: true
 tpu_env: []

examples/accelerate/single_config.yaml
@@ -6,8 +6,8 @@ gpu_ids: all
 machine_rank: 0
 main_training_function: main
 mixed_precision: fp16
-num_machines: 1
-num_processes: 4
+num_machines: 1 # the number of nodes
+num_processes: 4 # the number of GPUs in all nodes
 rdzv_backend: static
 same_network: true
 tpu_env: []

examples/accelerate/slave_config.yaml
@@ -8,8 +8,8 @@ main_process_ip: 192.168.0.1
 main_process_port: 29555
 main_training_function: main
 mixed_precision: fp16
-num_machines: 2
-num_processes: 16
+num_machines: 2 # the number of nodes
+num_processes: 8 # the number of GPUs in all nodes
 rdzv_backend: static
 same_network: true
 tpu_env: []

examples/deepspeed/ds_z0_config.json (new file)
@@ -0,0 +1,28 @@
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "zero_optimization": {
    "stage": 0,
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "overlap_comm": true,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "contiguous_gradients": true,
    "round_robin_gradients": true
  }
}

examples/extras/badam/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,41 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 2

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
pure_bf16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/extras/fsdp_qlora/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
quantization_bit: 4

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### ddp
ddp_timeout: 180000000

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/extras/fsdp_qlora/single_node.sh (new file)
@@ -0,0 +1,10 @@
#!/bin/bash
# DO NOT use GPTQ/AWQ model in FSDP+QLoRA

pip install "transformers>=4.39.1"
pip install "accelerate>=0.28.0"
pip install "bitsandbytes>=0.43.0"

CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
    --config_file examples/accelerate/fsdp_config.yaml \
    src/train.py examples/extras/fsdp_qlora/llama3_lora_sft.yaml

(deleted file)
@@ -1,31 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../../data \
    --template default \
    --finetuning_type full \
    --output_dir ../../../saves/LLaMA2-7B/galore/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

(deleted file)
@@ -1,32 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../../data \
    --template default \
    --finetuning_type full \
    --optim adamw_8bit \
    --output_dir ../../../saves/LLaMA2-7B/galore/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --pure_bf16

(deleted file)
@@ -1,35 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../../data \
    --template default \
    --finetuning_type full \
    --use_galore \
    --galore_layerwise \
    --galore_target mlp,self_attn \
    --galore_rank 128 \
    --output_dir ../../../saves/LLaMA2-7B/galore/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

(deleted file)
@@ -1,36 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../../data \
    --template default \
    --finetuning_type full \
    --optim adamw_8bit \
    --use_galore \
    --galore_layerwise \
    --galore_target mlp,self_attn \
    --galore_rank 128 \
    --output_dir ../../../saves/LLaMA2-7B/galore/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --pure_bf16

examples/extras/galore/llama3_full_sft.yaml (new file)
@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
use_galore: true
galore_layerwise: true
galore_target: mlp,self_attn
galore_rank: 128
galore_scale: 2.0

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
pure_bf16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/extras/llama_pro/expand.sh
@@ -1,6 +1,6 @@
 #!/bin/bash
 
-python ../../../scripts/llama_pro.py \
-    --model_name_or_path meta-llama/Llama-2-7b-hf \
-    --output_dir ../../../models/llama2-7b-pro \
+python scripts/llama_pro.py \
+    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+    --output_dir models/llama3-8b-instruct-pro \
     --num_expand 8

examples/extras/llama_pro/llama3_freeze_sft.yaml (new file)
@@ -0,0 +1,40 @@
### model
model_name_or_path: models/llama3-8b-instruct-pro

### method
stage: sft
do_train: true
finetuning_type: freeze
freeze_trainable_layers: 8
freeze_trainable_modules: all
use_llama_pro: true

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b-instruct-pro/freeze/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

(deleted file)
@@ -1,34 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path ../../../models/llama2-7b-pro \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../../data \
    --template default \
    --finetuning_type freeze \
    --name_module_trainable all \
    --num_layer_trainable 8 \
    --use_llama_pro \
    --output_dir ../../../saves/LLaMA2-7B-Pro/lora/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

examples/extras/loraplus/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj
loraplus_lr_ratio: 16.0

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

(deleted file)
@@ -1,33 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-7B/loraplus/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16 \
    --loraplus_lr_ratio 16.0

examples/extras/mod/llama3_full_sft.yaml (new file)
@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full
mixture_of_depths: convert

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b-mod/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
optim: paged_adamw_8bit
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
pure_bf16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

(deleted file)
@@ -1,5 +0,0 @@
```bash
pip install "transformers>=4.39.1"
pip install "accelerate>=0.28.0"
pip install "bitsandbytes>=0.43.0"
```

(deleted file)
@@ -1,33 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
    --config_file ../accelerate/fsdp_config.yaml \
    ../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-70b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-70B/lora/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --quantization_bit 4 \
    --plot_loss \
    --fp16

examples/full_multi_gpu/llama3_full_predict.yaml (new file)
@@ -0,0 +1,23 @@
### model
model_name_or_path: saves/llama3-8b/full/sft

### method
stage: sft
do_predict: true
finetuning_type: full

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/predict
overwrite_output_dir: true

### eval
per_device_eval_batch_size: 1
predict_with_generate: true

examples/full_multi_gpu/llama3_full_sft.yaml (new file)
@@ -0,0 +1,41 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: full

### ddp
ddp_timeout: 180000000
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/full/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/full_multi_gpu/multi_node.sh
@@ -1,38 +1,15 @@
 #!/bin/bash
 
-python -m torch.distributed.run \
+NPROC_PER_NODE=4
+NNODES=2
+RANK=0
+MASTER_ADDR=192.168.0.1
+MASTER_PORT=29500
+
+CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun \
     --nproc_per_node $NPROC_PER_NODE \
     --nnodes $NNODES \
     --node_rank $RANK \
     --master_addr $MASTER_ADDR \
     --master_port $MASTER_PORT \
-    ../../src/train_bash.py \
-    --deepspeed ../deepspeed/ds_z3_config.json \
-    --stage sft \
-    --do_train \
-    --model_name_or_path meta-llama/Llama-2-7b-hf \
-    --dataset alpaca_gpt4_en,glaive_toolcall \
-    --dataset_dir ../../data \
-    --template default \
-    --finetuning_type full \
-    --output_dir ../../saves/LLaMA2-7B/full/sft \
-    --overwrite_cache \
-    --overwrite_output_dir \
-    --cutoff_len 1024 \
-    --preprocessing_num_workers 16 \
-    --per_device_train_batch_size 1 \
-    --per_device_eval_batch_size 1 \
-    --gradient_accumulation_steps 2 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --warmup_steps 20 \
-    --save_steps 100 \
-    --eval_steps 100 \
-    --evaluation_strategy steps \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --max_samples 3000 \
-    --val_size 0.1 \
-    --ddp_timeout 1800000 \
-    --plot_loss \
-    --fp16
+    src/train.py examples/full_multi_gpu/llama3_full_sft.yaml

examples/full_multi_gpu/predict.sh (new file)
@@ -0,0 +1,5 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
    --config_file examples/accelerate/single_config.yaml \
    src/train.py examples/full_multi_gpu/llama3_full_predict.yaml

examples/full_multi_gpu/single_node.sh
@@ -1,32 +1,15 @@
 #!/bin/bash
 
-deepspeed --num_gpus 4 ../../src/train_bash.py \
-    --deepspeed ../deepspeed/ds_z3_config.json \
-    --stage sft \
-    --do_train \
-    --model_name_or_path meta-llama/Llama-2-7b-hf \
-    --dataset alpaca_gpt4_en,glaive_toolcall \
-    --dataset_dir ../../data \
-    --template default \
-    --finetuning_type full \
-    --output_dir ../../saves/LLaMA2-7B/full/sft \
-    --overwrite_cache \
-    --overwrite_output_dir \
-    --cutoff_len 1024 \
-    --preprocessing_num_workers 16 \
-    --per_device_train_batch_size 1 \
-    --per_device_eval_batch_size 1 \
-    --gradient_accumulation_steps 2 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --warmup_steps 20 \
-    --save_steps 100 \
-    --eval_steps 100 \
-    --evaluation_strategy steps \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --max_samples 3000 \
-    --val_size 0.1 \
-    --ddp_timeout 1800000 \
-    --plot_loss \
-    --fp16
+NPROC_PER_NODE=4
+NNODES=1
+RANK=0
+MASTER_ADDR=127.0.0.1
+MASTER_PORT=29500
+
+CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun \
+    --nproc_per_node $NPROC_PER_NODE \
+    --nnodes $NNODES \
+    --node_rank $RANK \
+    --master_addr $MASTER_ADDR \
+    --master_port $MASTER_PORT \
+    src/train.py examples/full_multi_gpu/llama3_full_sft.yaml

examples/inference/llama3.yaml (new file)
@@ -0,0 +1,2 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3

examples/inference/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora

examples/inference/llama3_vllm.yaml (new file)
@@ -0,0 +1,4 @@
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
template: llama3
infer_backend: vllm
vllm_enforce_eager: true

examples/lora_multi_gpu/ds_zero3.sh (new file)
@@ -0,0 +1,15 @@
#!/bin/bash

NPROC_PER_NODE=4
NNODES=1
RANK=0
MASTER_ADDR=127.0.0.1
MASTER_PORT=29500

CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun \
    --nproc_per_node $NPROC_PER_NODE \
    --nnodes $NNODES \
    --node_rank $RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    src/train.py examples/lora_multi_gpu/llama3_lora_sft_ds.yaml

examples/lora_multi_gpu/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,41 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### ddp
ddp_timeout: 180000000

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_multi_gpu/llama3_lora_sft_ds.yaml (new file)
@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### ddp
ddp_timeout: 180000000
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_multi_gpu/multi_node.sh
@@ -1,35 +1,6 @@
 #!/bin/bash
+# also launch it on slave machine using slave_config.yaml
 
 CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
-    --config_file ../accelerate/master_config.yaml \
-    ../../src/train_bash.py \
-    --stage sft \
-    --do_train \
-    --model_name_or_path meta-llama/Llama-2-7b-hf \
-    --dataset alpaca_gpt4_en,glaive_toolcall \
-    --dataset_dir ../../data \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir ../../saves/LLaMA2-7B/lora/sft \
-    --overwrite_cache \
-    --overwrite_output_dir \
-    --cutoff_len 1024 \
-    --preprocessing_num_workers 16 \
-    --per_device_train_batch_size 1 \
-    --per_device_eval_batch_size 1 \
-    --gradient_accumulation_steps 2 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --warmup_steps 20 \
-    --save_steps 100 \
-    --eval_steps 100 \
-    --evaluation_strategy steps \
-    --load_best_model_at_end \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --max_samples 3000 \
-    --val_size 0.1 \
-    --ddp_timeout 1800000 \
-    --plot_loss \
-    --fp16
+    --config_file examples/accelerate/master_config.yaml \
+    src/train.py examples/lora_multi_gpu/llama3_lora_sft.yaml

examples/lora_multi_gpu/single_node.sh
@@ -1,35 +1,5 @@
 #!/bin/bash
 
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 accelerate launch \
-    --config_file ../accelerate/single_config.yaml \
-    ../../src/train_bash.py \
-    --stage sft \
-    --do_train \
-    --model_name_or_path meta-llama/Llama-2-7b-hf \
-    --dataset alpaca_gpt4_en,glaive_toolcall \
-    --dataset_dir ../../data \
-    --template default \
-    --finetuning_type lora \
-    --lora_target q_proj,v_proj \
-    --output_dir ../../saves/LLaMA2-7B/lora/sft \
-    --overwrite_cache \
-    --overwrite_output_dir \
-    --cutoff_len 1024 \
-    --preprocessing_num_workers 16 \
-    --per_device_train_batch_size 1 \
-    --per_device_eval_batch_size 1 \
-    --gradient_accumulation_steps 2 \
-    --lr_scheduler_type cosine \
-    --logging_steps 10 \
-    --warmup_steps 20 \
-    --save_steps 100 \
-    --eval_steps 100 \
-    --evaluation_strategy steps \
-    --load_best_model_at_end \
-    --learning_rate 5e-5 \
-    --num_train_epochs 3.0 \
-    --max_samples 3000 \
-    --val_size 0.1 \
-    --ddp_timeout 1800000 \
-    --plot_loss \
-    --fp16
+CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
+    --config_file examples/accelerate/single_config.yaml \
+    src/train.py examples/lora_multi_gpu/llama3_lora_sft.yaml

examples/lora_multi_npu/ds_zero0.sh (new file)
@@ -0,0 +1,15 @@
#!/bin/bash

NPROC_PER_NODE=4
NNODES=1
RANK=0
MASTER_ADDR=127.0.0.1
MASTER_PORT=29500

ASCEND_RT_VISIBLE_DEVICES=0,1,2,3 torchrun \
    --nproc_per_node $NPROC_PER_NODE \
    --nnodes $NNODES \
    --node_rank $RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT \
    src/train.py examples/lora_multi_npu/llama3_lora_sft_ds.yaml

examples/lora_multi_npu/llama3_lora_sft_ds.yaml (new file)
@@ -0,0 +1,42 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### ddp
ddp_timeout: 180000000
deepspeed: examples/deepspeed/ds_z0_config.json

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 2
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

(deleted file)
@@ -1,8 +0,0 @@
Usage:

- `pretrain.sh`: do pre-train (optional)
- `sft.sh`: do supervised fine-tune
- `reward.sh`: do reward modeling (must after sft.sh)
- `ppo.sh`: do PPO training (must after sft.sh and reward.sh)
- `dpo.sh`: do DPO training (must after sft.sh)
- `predict.sh`: do predict (must after sft.sh and dpo.sh)

(deleted file)
@@ -1,35 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage dpo \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
    --create_new_adapter \
    --dataset comparison_gpt4_en \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-7B/lora/dpo \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --max_samples 1000 \
    --val_size 0.1 \
    --dpo_ftx 1.0 \
    --plot_loss \
    --fp16

examples/lora_single_gpu/llama3_lora_dpo.yaml (new file)
@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: dpo
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj
dpo_ftx: 1.0

### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/dpo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.000005
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_lora_eval.yaml (new file)
@@ -0,0 +1,19 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft

### method
finetuning_type: lora

### dataset
task: mmlu
split: test
template: fewshot
lang: en
n_shot: 5

### output
save_dir: saves/llama3-8b/lora/eval

### eval
batch_size: 4

examples/lora_single_gpu/llama3_lora_kto.yaml (new file)
@@ -0,0 +1,39 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: kto
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj
kto_ftx: 0.1

### dataset
dataset: kto_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/kto
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.000005
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_lora_orpo.yaml (new file)
@@ -0,0 +1,38 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: orpo
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/orpo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.000005
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_lora_ppo.yaml (new file)
@@ -0,0 +1,38 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
reward_model: saves/llama3-8b/lora/reward

### method
stage: ppo
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/ppo
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### generate
max_new_tokens: 512
top_k: 0
top_p: 0.9

examples/lora_single_gpu/llama3_lora_predict.yaml (new file)
@@ -0,0 +1,24 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft

### method
stage: sft
do_predict: true
finetuning_type: lora

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 50
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/predict
overwrite_output_dir: true

### eval
per_device_eval_batch_size: 1
predict_with_generate: true

examples/lora_single_gpu/llama3_lora_pretrain.yaml (new file)
@@ -0,0 +1,37 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: pt
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: c4_demo
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_lora_reward.yaml (new file)
@@ -0,0 +1,38 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: rm
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: dpo_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/reward
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_lora_sft.yaml (new file)
@@ -0,0 +1,38 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16

### output
output_dir: saves/llama3-8b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true

### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 0.0001
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_steps: 0.1
fp16: true

### eval
val_size: 0.1
per_device_eval_batch_size: 1
evaluation_strategy: steps
eval_steps: 500

examples/lora_single_gpu/llama3_preprocess.yaml (new file)
@@ -0,0 +1,21 @@
### model
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: q_proj,v_proj

### dataset
dataset: identity,alpaca_en_demo
template: llama3
cutoff_len: 1024
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
tokenized_path: saves/llama3-8b/dataset/sft

### output
output_dir: saves/llama3-8b/lora/sft
overwrite_output_dir: true
examples/lora_single_gpu/llava1_5_lora_sft.yaml
Normal file
39
examples/lora_single_gpu/llava1_5_lora_sft.yaml
Normal file
@ -0,0 +1,39 @@
|
|||||||
|
### model
|
||||||
|
model_name_or_path: llava-hf/llava-1.5-7b-hf
|
||||||
|
visual_inputs: true
|
||||||
|
|
||||||
|
### method
|
||||||
|
stage: sft
|
||||||
|
do_train: true
|
||||||
|
finetuning_type: lora
|
||||||
|
lora_target: q_proj,v_proj
|
||||||
|
|
||||||
|
### dataset
|
||||||
|
dataset: mllm_demo
|
||||||
|
template: vicuna
|
||||||
|
cutoff_len: 1024
|
||||||
|
max_samples: 1000
|
||||||
|
overwrite_cache: true
|
||||||
|
preprocessing_num_workers: 16
|
||||||
|
|
||||||
|
### output
|
||||||
|
output_dir: saves/llava1_5-7b/lora/sft
|
||||||
|
logging_steps: 10
|
||||||
|
save_steps: 500
|
||||||
|
plot_loss: true
|
||||||
|
overwrite_output_dir: true
|
||||||
|
|
||||||
|
### train
|
||||||
|
per_device_train_batch_size: 1
|
||||||
|
gradient_accumulation_steps: 8
|
||||||
|
learning_rate: 0.0001
|
||||||
|
num_train_epochs: 3.0
|
||||||
|
lr_scheduler_type: cosine
|
||||||
|
warmup_steps: 0.1
|
||||||
|
fp16: true
|
||||||
|
|
||||||
|
### eval
|
||||||
|
val_size: 0.1
|
||||||
|
per_device_eval_batch_size: 1
|
||||||
|
evaluation_strategy: steps
|
||||||
|
eval_steps: 500
|

(deleted file)
@@ -1,32 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage ppo \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
    --create_new_adapter \
    --dataset alpaca_gpt4_en \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --reward_model ../../saves/LLaMA2-7B/lora/reward \
    --output_dir ../../saves/LLaMA2-7B/lora/ppo \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 512 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --max_samples 1000 \
    --top_k 0 \
    --top_p 0.9 \
    --max_new_tokens 256 \
    --plot_loss \
    --fp16

(deleted file)
@@ -1,19 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage sft \
    --do_predict \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft,../../saves/LLaMA2-7B/lora/dpo \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --output_dir ../../saves/LLaMA2-7B/lora/predict \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_eval_batch_size 1 \
    --max_samples 20 \
    --predict_with_generate

(deleted file)
@@ -1,31 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage pt \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset c4_demo \
    --dataset_dir ../../data \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-7B/lora/pretrain \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 10000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

(deleted file)
@@ -1,33 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage rm \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
    --create_new_adapter \
    --dataset comparison_gpt4_en \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-7B/lora/reward \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --max_samples 5000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

(deleted file)
@@ -1,32 +0,0 @@
#!/bin/bash

CUDA_VISIBLE_DEVICES=0 python ../../src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en,glaive_toolcall \
    --dataset_dir ../../data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ../../saves/LLaMA2-7B/lora/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 100 \
    --eval_steps 100 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --max_samples 3000 \
    --val_size 0.1 \
    --plot_loss \
    --fp16

Some files were not shown because too many files have changed in this diff.