diff --git a/README.md b/README.md
index 347ebe7e..14a2084d 100644
--- a/README.md
+++ b/README.md
@@ -276,18 +276,19 @@ huggingface-cli login
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
-| transformers | 4.37.2 | 4.39.3 |
-| datasets | 2.14.3 | 2.18.0 |
-| accelerate | 0.27.2 | 0.28.0 |
+| transformers | 4.37.2 | 4.40.1 |
+| datasets | 2.14.3 | 2.19.1 |
+| accelerate | 0.27.2 | 0.30.0 |
| peft | 0.9.0 | 0.10.0 |
-| trl | 0.8.1 | 0.8.1 |
+| trl | 0.8.1 | 0.8.6 |
| Optional | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
-| bitsandbytes | 0.39.0 | 0.43.0 |
-| flash-attn | 2.3.0 | 2.5.6 |
+| bitsandbytes | 0.39.0 | 0.43.1 |
+| vllm | 0.4.0 | 0.4.2 |
+| flash-attn | 2.3.0 | 2.5.8 |
### Hardware Requirement
@@ -305,24 +306,15 @@ huggingface-cli login
## Getting Started
-### Data Preparation
-
-Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use datasets on HuggingFace / ModelScope hub or load the dataset in local disk.
-
-> [!NOTE]
-> Please update `data/dataset_info.json` to use your custom dataset.
-
-### Dependence Installation
+### Installation
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
-conda create -n llama_factory python=3.10
-conda activate llama_factory
cd LLaMA-Factory
pip install -e .[metrics]
```
-Extra dependencies available: deepspeed, metrics, galore, badam, vllm, bitsandbytes, gptq, awq, aqlm, qwen, modelscope, quality
+Extra dependencies available: metrics, deepspeed, bitsandbytes, vllm, galore, badam, gptq, awq, aqlm, qwen, modelscope, quality
For Windows users
@@ -336,19 +328,41 @@ To enable FlashAttention-2 on the Windows platform, you need to install the prec
-### Train with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
+### Data Preparation
+
+Please refer to [data/README.md](data/README.md) for details on the dataset file format. You can either use datasets on the HuggingFace / ModelScope hub or load a dataset from local disk.
+
+> [!NOTE]
+> Please update `data/dataset_info.json` to use your custom dataset.
+
+### Quickstart
+
+Use the following 3 commands to run LoRA **fine-tuning**, **inference**, and **merging** for the Llama3-8B-Instruct model, respectively.
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+See [examples/README.md](examples/README.md) for advanced usage (including distributed training).
+
+> [!TIP]
+> Use `llamafactory-cli help` to show help information.
+
+### Use LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
> [!IMPORTANT]
-> LLaMA Board GUI only supports training on a single GPU, please use [CLI](#train-with-command-line-interface) for distributed training.
+> The LLaMA Board GUI only supports training on a single GPU.
#### Use local environment
```bash
-llamafactory-cli webui
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webui
```
> [!TIP]
-> To modify the default setting in the LLaMA Board GUI, you can use environment variables, e.g., `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (use `set` command on Windows OS).
+> To modify the default settings of the LLaMA Board GUI, you can use environment variables, e.g., `export GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (use the `set` command on Windows).
For Alibaba Cloud users
@@ -389,21 +403,10 @@ docker compose -f ./docker-compose.yml up -d
-### Train with Command Line Interface
-
-See [examples/README.md](examples/README.md) for usage.
-
-> [!TIP]
-> Use `llamafactory-cli train -h` to display arguments description.
-
### Deploy with OpenAI-style API and vLLM
```bash
-CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api \
- --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
- --template llama3 \
- --infer_backend vllm \
- --vllm_enforce_eager
+CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```
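As a sketch of how a client might call the resulting endpoint: the `/v1/chat/completions` route and the payload fields below follow the OpenAI chat-completions convention implied by "OpenAI-style", and are assumptions rather than details taken from this repository.

```python
import json
import urllib.request

# Hypothetical client for the OpenAI-style server launched above.
# The route and payload shape follow the OpenAI chat-completions
# convention; adjust them if the server differs.
API_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
}

def chat(url: str = API_URL, body: dict = payload) -> dict:
    """POST the chat payload and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Print only the request body here; call chat() against a running server.
print(json.dumps(payload, indent=2))
```

With the server running, `chat()["choices"][0]["message"]["content"]` would hold the model's reply under the assumed response schema.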
### Download from ModelScope Hub
diff --git a/README_zh.md b/README_zh.md
index 8a2fb79b..daf5f2e8 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -163,7 +163,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/ec36a9dd-37f4-4f72-81bd
| [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |
> [!NOTE]
-> **默认模块**应作为 `--lora_target` 参数的默认值,可使用 `--lora_target all` 参数指定全部模块以得到更好的效果。
+> **默认模块**应作为 `--lora_target` 参数的默认值,可使用 `--lora_target all` 参数指定全部模块以取得更好的效果。
>
> 对于所有“基座”(Base)模型,`--template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。
>
@@ -276,18 +276,19 @@ huggingface-cli login
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.0 |
-| transformers | 4.37.2 | 4.39.3 |
-| datasets | 2.14.3 | 2.18.0 |
-| accelerate | 0.27.2 | 0.28.0 |
+| transformers | 4.37.2 | 4.40.1 |
+| datasets | 2.14.3 | 2.19.1 |
+| accelerate | 0.27.2 | 0.30.0 |
| peft | 0.9.0 | 0.10.0 |
-| trl | 0.8.1 | 0.8.1 |
+| trl | 0.8.1 | 0.8.6 |
| 可选项 | 至少 | 推荐 |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.14.0 |
-| bitsandbytes | 0.39.0 | 0.43.0 |
-| flash-attn | 2.3.0 | 2.5.6 |
+| bitsandbytes | 0.39.0 | 0.43.1 |
+| vllm | 0.4.0 | 0.4.2 |
+| flash-attn | 2.3.0 | 2.5.8 |
### 硬件依赖
@@ -305,24 +306,15 @@ huggingface-cli login
## 如何使用
-### 数据准备
-
-关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope 上的数据集或加载本地数据集。
-
-> [!NOTE]
-> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
-
-### 安装依赖
+### 安装 LLaMA Factory
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
-conda create -n llama_factory python=3.10
-conda activate llama_factory
cd LLaMA-Factory
pip install -e .[metrics]
```
-可选的额外依赖项:deepspeed、metrics、galore、badam、vllm、bitsandbytes、gptq、awq、aqlm、qwen、modelscope、quality
+可选的额外依赖项:metrics、deepspeed、bitsandbytes、vllm、galore、badam、gptq、awq、aqlm、qwen、modelscope、quality
Windows 用户指南
@@ -336,19 +328,41 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
-### 利用 LLaMA Board 可视化界面训练(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
+### 数据准备
+
+关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope 上的数据集或加载本地数据集。
+
+> [!NOTE]
+> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
+
+### 快速开始
+
+下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。
+
+> [!TIP]
+> 使用 `llamafactory-cli help` 显示帮助信息。
+
+### 使用 LLaMA Board 可视化界面(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
> [!IMPORTANT]
-> LLaMA Board 可视化界面目前仅支持单 GPU 训练,请使用[命令行接口](#利用命令行接口训练)来进行多 GPU 分布式训练。
+> LLaMA Board 可视化界面目前仅支持单 GPU 训练。
#### 使用本地环境
```bash
-llamafactory-cli webui
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webui
```
> [!TIP]
-> 您可以使用环境变量来修改 LLaMA Board 可视化界面的默认设置,例如 `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False`(Windows 系统可使用 `set` 指令)。
+> 您可以使用环境变量来修改 LLaMA Board 可视化界面的默认设置,例如 `export GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False`(Windows 系统可使用 `set` 指令)。
阿里云用户指南
@@ -389,21 +403,10 @@ docker compose -f ./docker-compose.yml up -d
-### 利用命令行接口训练
-
-使用方法请参考 [examples/README_zh.md](examples/README_zh.md)。
-
-> [!TIP]
-> 您可以执行 `llamafactory-cli train -h` 来查看参数文档。
-
### 利用 vLLM 部署 OpenAI API
```bash
-CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api \
- --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
- --template llama3 \
- --infer_backend vllm \
- --vllm_enforce_eager
+CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
```
### 从魔搭社区下载
diff --git a/examples/README.md b/examples/README.md
index 895e9c72..ba993b99 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -1,50 +1,218 @@
We provide diverse examples of fine-tuning LLMs.
+Make sure to execute these commands in the `LLaMA-Factory` directory.
+
+## Table of Contents
+
+- [LoRA Fine-Tuning on a Single GPU](#lora-fine-tuning-on-a-single-gpu)
+- [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
+- [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
+- [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
+- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
+- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
+- [Extras](#extras)
+
+## Examples
+
+### LoRA Fine-Tuning on a Single GPU
+
+#### (Continuous) Pre-Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_pretrain.yaml
```
-examples/
-├── lora_single_gpu/
-│ ├── pretrain.sh: Do continuous pre-training using LoRA
-│ ├── sft.sh: Do supervised fine-tuning using LoRA
-│ ├── reward.sh: Do reward modeling using LoRA
-│ ├── ppo.sh: Do PPO training using LoRA
-│ ├── dpo.sh: Do DPO training using LoRA
-│ ├── orpo.sh: Do ORPO training using LoRA
-│ ├── sft_mllm.sh: Do supervised fine-tuning on multimodal data using LoRA
-│ ├── prepare.sh: Save tokenized dataset
-│ └── predict.sh: Do batch predict and compute BLEU and ROUGE scores after LoRA tuning
-├── qlora_single_gpu/
-│ ├── bitsandbytes.sh: Fine-tune 4/8-bit BNB models using QLoRA
-│ ├── gptq.sh: Fine-tune 4/8-bit GPTQ models using QLoRA
-│ ├── awq.sh: Fine-tune 4-bit AWQ models using QLoRA
-│ └── aqlm.sh: Fine-tune 2-bit AQLM models using QLoRA
-├── lora_multi_gpu/
-│ ├── single_node.sh: Fine-tune model with Accelerate on single node using LoRA
-│ ├── multi_node.sh: Fine-tune model with Accelerate on multiple nodes using LoRA
-│ └── ds_zero3.sh: Fine-tune model with DeepSpeed ZeRO-3 using LoRA (weight sharding)
-├── full_multi_gpu/
-│ ├── single_node.sh: Full fine-tune model with DeepSpeed on single node
-│ ├── multi_node.sh: Full fine-tune model with DeepSpeed on multiple nodes
-│ └── predict.sh: Do parallel batch predict and compute BLEU and ROUGE scores after full tuning
-├── merge_lora/
-│ ├── merge.sh: Merge LoRA weights into the pre-trained models
-│ └── quantize.sh: Quantize the fine-tuned model with AutoGPTQ
-├── inference/
-│ ├── cli_demo.sh: Chat with fine-tuned model in the CLI with LoRA adapters
-│ ├── api_demo.sh: Chat with fine-tuned model in an OpenAI-style API with LoRA adapters
-│ ├── web_demo.sh: Chat with fine-tuned model in the Web browser with LoRA adapters
-│ └── evaluate.sh: Evaluate model on the MMLU/CMMLU/C-Eval benchmarks with LoRA adapters
-└── extras/
- ├── galore/
- │ └── sft.sh: Fine-tune model with GaLore
- ├── badam/
- │ └── sft.sh: Fine-tune model with BAdam
- ├── loraplus/
- │ └── sft.sh: Fine-tune model using LoRA+
- ├── mod/
- │ └── sft.sh: Fine-tune model using Mixture-of-Depths
- ├── llama_pro/
- │ ├── expand.sh: Expand layers in the model
- │ └── sft.sh: Fine-tune the expanded model
- └── fsdp_qlora/
- └── sft.sh: Fine-tune quantized model with FSDP+QLoRA
+
+#### Supervised Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+```
+
+#### Reward Modeling
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_reward.yaml
+```
+
+#### PPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_ppo.yaml
+```
+
+#### DPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_dpo.yaml
+```
+
+#### ORPO Training
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_orpo.yaml
+```
+
+#### Multimodal Supervised Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llava1_5_lora_sft.yaml
+```
+
+#### Preprocess Dataset
+
+This is useful for large datasets. Set `tokenized_path` in the config to load the preprocessed dataset.
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_preprocess.yaml
+```
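The note above enables a two-step workflow: one preprocessing run tokenizes the data, and later runs reload it instead of re-tokenizing. A follow-up config only needs a `tokenized_path` entry; the output path below is an illustrative value, not one taken from the repository.

```python
# Sketch: derive a training config that reuses a preprocessed dataset.
# "saves/llama3-8b/tokenized" is a hypothetical path chosen for illustration.
base_config = """\
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
dataset: identity,alpaca_gpt4_en
template: llama3
"""

# Point follow-up runs at the tokenized data instead of re-tokenizing.
reuse_config = base_config + "tokenized_path: saves/llama3-8b/tokenized\n"

print(reuse_config)
```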
+
+#### Evaluating on MMLU/CMMLU/C-Eval Benchmarks
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli eval examples/lora_single_gpu/llama3_lora_eval.yaml
+```
+
+#### Batch Predicting and Computing BLEU and ROUGE Scores
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_predict.yaml
+```
+
+### QLoRA Fine-Tuning on a Single GPU
+
+#### Supervised Fine-Tuning with 4/8-bit Bitsandbytes Quantization (Recommended)
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
+```
+
+#### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
+```
+
+#### Supervised Fine-Tuning with 4-bit AWQ Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
+```
+
+#### Supervised Fine-Tuning with 2-bit AQLM Quantization
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
+```
+
+### LoRA Fine-Tuning on Multiple GPUs
+
+#### Supervised Fine-Tuning with Accelerate on a Single Node
+
+```bash
+bash examples/lora_multi_gpu/single_node.sh
+```
+
+#### Supervised Fine-Tuning with Accelerate on Multiple Nodes
+
+```bash
+bash examples/lora_multi_gpu/multi_node.sh
+```
+
+#### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding)
+
+```bash
+bash examples/lora_multi_gpu/ds_zero3.sh
+```
+
+### Full-Parameter Fine-Tuning on Multiple GPUs
+
+#### Supervised Fine-Tuning with Accelerate on a Single Node
+
+```bash
+bash examples/full_multi_gpu/single_node.sh
+```
+
+#### Supervised Fine-Tuning with Accelerate on Multiple Nodes
+
+```bash
+bash examples/full_multi_gpu/multi_node.sh
+```
+
+#### Batch Predicting and Computing BLEU and ROUGE Scores
+
+```bash
+bash examples/full_multi_gpu/predict.sh
+```
+
+### Merging LoRA Adapters and Quantization
+
+#### Merge LoRA Adapters
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+#### Quantizing a Model using AutoGPTQ
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
+```
+
+### Inferring LoRA Fine-Tuned Models
+
+#### Use CLI
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+```
+
+#### Use Web UI
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
+```
+
+#### Launch OpenAI-style API
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
+```
+
+### Extras
+
+#### Full-Parameter Fine-Tuning using GaLore
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
+```
+
+#### Full-Parameter Fine-Tuning using BAdam
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
+```
+
+#### LoRA+ Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
+```
+
+#### Mixture-of-Depths Fine-Tuning
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
+```
+
+#### LLaMA-Pro Fine-Tuning
+
+```bash
+bash examples/extras/llama_pro/expand.sh
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
+```
+
+#### FSDP+QLoRA Fine-Tuning
+
+```bash
+bash examples/extras/fsdp_qlora/single_node.sh
```
diff --git a/examples/README_zh.md b/examples/README_zh.md
index 091a877f..491ec688 100644
--- a/examples/README_zh.md
+++ b/examples/README_zh.md
@@ -1,50 +1,218 @@
我们提供了多样化的大模型微调示例脚本。
+请确保在 `LLaMA-Factory` 目录下执行下述命令。
+
+## 目录
+
+- [单 GPU LoRA 微调](#单-gpu-lora-微调)
+- [单 GPU QLoRA 微调](#单-gpu-qlora-微调)
+- [多 GPU LoRA 微调](#多-gpu-lora-微调)
+- [多 GPU 全参数微调](#多-gpu-全参数微调)
+- [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化)
+- [推理 LoRA 模型](#推理-lora-模型)
+- [杂项](#杂项)
+
+## 示例
+
+### 单 GPU LoRA 微调
+
+#### (增量)预训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_pretrain.yaml
```
-examples/
-├── lora_single_gpu/
-│ ├── pretrain.sh: 基于 LoRA 进行增量预训练
-│ ├── sft.sh: 基于 LoRA 进行指令监督微调
-│ ├── reward.sh: 基于 LoRA 进行奖励模型训练
-│ ├── ppo.sh: 基于 LoRA 进行 PPO 训练
-│ ├── dpo.sh: 基于 LoRA 进行 DPO 训练
-│ ├── orpo.sh: 基于 LoRA 进行 ORPO 训练
-│ ├── sft_mllm.sh: 基于 LoRA 进行多模态指令监督微调
-│ ├── prepare.sh: 保存预处理后的数据集
-│ └── predict.sh: 基于 LoRA 进行批量预测并计算 BLEU 和 ROUGE 分数
-├── qlora_single_gpu/
-│ ├── bitsandbytes.sh: 基于 QLoRA 微调 4/8 比特 BNB 模型
-│ ├── gptq.sh: 基于 QLoRA 微调 4/8 比特 GPTQ 模型
-│ ├── awq.sh: 基于 QLoRA 微调 4 比特 AWQ 模型
-│ └── aqlm.sh: 基于 QLoRA 微调 2 比特 AQLM 模型
-├── lora_multi_gpu/
-│ ├── single_node.sh: 使用 Accelerate 进行单节点 LoRA 训练
-│ ├── multi_node.sh: 使用 Accelerate 进行多节点 LoRA 训练
-│ └── ds_zero3.sh: 使用 DeepSpeed ZeRO-3 进行 LoRA 训练(拆分权重)
-├── full_multi_gpu/
-│ ├── single_node.sh: 使用 DeepSpeed 进行单节点全量训练
-│ ├── multi_node.sh: 使用 DeepSpeed 进行多节点全量训练
-│ └── predict.sh: 基于全量训练进行多卡批量预测并计算 BLEU 和 ROUGE 分数
-├── merge_lora/
-│ ├── merge.sh: 将 LoRA 权重合并到预训练模型中
-│ └── quantize.sh: 使用 AutoGPTQ 量化微调后的模型
-├── inference/
-│ ├── cli_demo.sh: 启动 LoRA 模型的命令行推理接口
-│ ├── api_demo.sh: 启动 LoRA 模型的 OpenAI 风格 API
-│ ├── web_demo.sh: 启动 LoRA 模型的浏览器推理接口
-│ └── evaluate.sh: 在 MMLU/CMMLU/C-Eval 数据集上评测 LoRA 模型
-└── extras/
- ├── galore/
- │ └── sft.sh: 使用 GaLore 训练模型
- ├── badam/
- │ └── sft.sh: 使用 BAdam 训练模型
- ├── loraplus/
- │ └── sft.sh: 使用 LoRA+ 训练模型
- ├── mod/
- │ └── sft.sh: 使用深度混合训练模型
- ├── llama_pro/
- │ ├── expand.sh: 扩展模型中的层
- │ └── sft.sh: 训练扩展后的模型
- └── fsdp_qlora/
- └── sft.sh: 使用 FSDP+QLoRA 微调量化模型
+
+#### 指令监督微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
+```
+
+#### 奖励模型训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_reward.yaml
+```
+
+#### PPO 训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_ppo.yaml
+```
+
+#### DPO 训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_dpo.yaml
+```
+
+#### ORPO 训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_orpo.yaml
+```
+
+#### 多模态指令监督微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llava1_5_lora_sft.yaml
+```
+
+#### 预处理数据集
+
+对于大数据集有帮助,在配置中使用 `tokenized_path` 以加载预处理后的数据集。
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_preprocess.yaml
+```
+
+#### 在 MMLU/CMMLU/C-Eval 上评估
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli eval examples/lora_single_gpu/llama3_lora_eval.yaml
+```
+
+#### 批量预测并计算 BLEU 和 ROUGE 分数
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_predict.yaml
+```
+
+### 单 GPU QLoRA 微调
+
+#### 基于 4/8 比特 Bitsandbytes 量化进行指令监督微调(推荐)
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
+```
+
+#### 基于 4/8 比特 GPTQ 量化进行指令监督微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
+```
+
+#### 基于 4 比特 AWQ 量化进行指令监督微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
+```
+
+#### 基于 2 比特 AQLM 量化进行指令监督微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
+```
+
+### 多 GPU LoRA 微调
+
+#### 使用 Accelerate 进行单节点训练
+
+```bash
+bash examples/lora_multi_gpu/single_node.sh
+```
+
+#### 使用 Accelerate 进行多节点训练
+
+```bash
+bash examples/lora_multi_gpu/multi_node.sh
+```
+
+#### 使用 DeepSpeed ZeRO-3 平均分配显存
+
+```bash
+bash examples/lora_multi_gpu/ds_zero3.sh
+```
+
+### 多 GPU 全参数微调
+
+#### 使用 DeepSpeed 进行单节点训练
+
+```bash
+bash examples/full_multi_gpu/single_node.sh
+```
+
+#### 使用 DeepSpeed 进行多节点训练
+
+```bash
+bash examples/full_multi_gpu/multi_node.sh
+```
+
+#### 批量预测并计算 BLEU 和 ROUGE 分数
+
+```bash
+bash examples/full_multi_gpu/predict.sh
+```
+
+### 合并 LoRA 适配器与模型量化
+
+#### 合并 LoRA 适配器
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+#### 使用 AutoGPTQ 量化模型
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.yaml
+```
+
+### 推理 LoRA 模型
+
+#### 使用命令行接口
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+```
+
+#### 使用浏览器界面
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
+```
+
+#### 启动 OpenAI 风格 API
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
+```
+
+### 杂项
+
+#### 使用 GaLore 进行全参数训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml
+```
+
+#### 使用 BAdam 进行全参数训练
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml
+```
+
+#### LoRA+ 微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml
+```
+
+#### 深度混合微调
+
+```bash
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml
+```
+
+#### LLaMA-Pro 微调
+
+```bash
+bash examples/extras/llama_pro/expand.sh
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml
+```
+
+#### FSDP+QLoRA 微调
+
+```bash
+bash examples/extras/fsdp_qlora/single_node.sh
```
diff --git a/examples/extras/badam/llama3_lora_sft.yaml b/examples/extras/badam/llama3_lora_sft.yaml
new file mode 100644
index 00000000..9f1f1976
--- /dev/null
+++ b/examples/extras/badam/llama3_lora_sft.yaml
@@ -0,0 +1,41 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: full
+use_badam: true
+badam_switch_mode: descending
+badam_switch_interval: 50
+badam_verbose: 2
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/badam/sft.sh b/examples/extras/badam/sft.sh
deleted file mode 100644
index 4bcfe9d2..00000000
--- a/examples/extras/badam/sft.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../../data \
- --template default \
- --finetuning_type full \
- --use_badam \
- --badam_switch_mode descending \
- --badam_switch_interval 50 \
- --badam_verbose 2 \
- --output_dir ../../../saves/LLaMA2-7B/badam/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --pure_bf16
diff --git a/examples/extras/fsdp_qlora/llama3_lora_sft.yaml b/examples/extras/fsdp_qlora/llama3_lora_sft.yaml
new file mode 100644
index 00000000..64bf1356
--- /dev/null
+++ b/examples/extras/fsdp_qlora/llama3_lora_sft.yaml
@@ -0,0 +1,39 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+quantization_bit: 4
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/fsdp_qlora/sft.sh b/examples/extras/fsdp_qlora/sft.sh
deleted file mode 100644
index 9eb70a53..00000000
--- a/examples/extras/fsdp_qlora/sft.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-# DO NOT use GPTQ/AWQ model in FSDP+QLoRA
-
-pip install "transformers>=4.39.1"
-pip install "accelerate>=0.28.0"
-pip install "bitsandbytes>=0.43.0"
-
-CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
- --config_file ../../accelerate/fsdp_config.yaml \
- ../../../src/train.py \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-70b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../../saves/LLaMA2-70B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 4 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --quantization_bit 4 \
- --plot_loss \
- --fp16
diff --git a/examples/extras/fsdp_qlora/single_node.sh b/examples/extras/fsdp_qlora/single_node.sh
new file mode 100644
index 00000000..54ec2bd2
--- /dev/null
+++ b/examples/extras/fsdp_qlora/single_node.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+# DO NOT use GPTQ/AWQ model in FSDP+QLoRA
+
+pip install "transformers>=4.39.1"
+pip install "accelerate>=0.28.0"
+pip install "bitsandbytes>=0.43.0"
+
+CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
+ --config_file examples/accelerate/fsdp_config.yaml \
+ src/train.py examples/extras/fsdp_qlora/llama3_lora_sft.yaml
diff --git a/examples/extras/galore/llama3_full_sft.yaml b/examples/extras/galore/llama3_full_sft.yaml
new file mode 100644
index 00000000..5aec8af9
--- /dev/null
+++ b/examples/extras/galore/llama3_full_sft.yaml
@@ -0,0 +1,42 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: full
+use_galore: true
+galore_layerwise: true
+galore_target: mlp,self_attn
+galore_rank: 128
+galore_scale: 2.0
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 1
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/galore/sft.sh b/examples/extras/galore/sft.sh
deleted file mode 100644
index 283673e7..00000000
--- a/examples/extras/galore/sft.sh
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../../data \
- --template default \
- --finetuning_type full \
- --use_galore \
- --galore_layerwise \
- --galore_target mlp,self_attn \
- --galore_rank 128 \
- --galore_scale 2.0 \
- --output_dir ../../../saves/LLaMA2-7B/galore/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 1 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --pure_bf16
diff --git a/examples/extras/llama_pro/expand.sh b/examples/extras/llama_pro/expand.sh
index b260902c..e0d41c7b 100644
--- a/examples/extras/llama_pro/expand.sh
+++ b/examples/extras/llama_pro/expand.sh
@@ -1,6 +1,6 @@
#!/bin/bash
-python ../../../scripts/llama_pro.py \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --output_dir ../../../models/llama2-7b-pro \
+python scripts/llama_pro.py \
+ --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+ --output_dir models/llama3-8b-instruct-pro \
--num_expand 8
diff --git a/examples/extras/llama_pro/llama3_freeze_sft.yaml b/examples/extras/llama_pro/llama3_freeze_sft.yaml
new file mode 100644
index 00000000..a54be8b8
--- /dev/null
+++ b/examples/extras/llama_pro/llama3_freeze_sft.yaml
@@ -0,0 +1,40 @@
+# model
+model_name_or_path: models/llama3-8b-instruct-pro
+
+# method
+stage: sft
+do_train: true
+finetuning_type: freeze
+name_module_trainable: all
+num_layer_trainable: 8
+use_llama_pro: true
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b-instruct-pro/freeze/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/llama_pro/sft.sh b/examples/extras/llama_pro/sft.sh
deleted file mode 100644
index 3e26e0a6..00000000
--- a/examples/extras/llama_pro/sft.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path ../../../models/llama2-7b-pro \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../../data \
- --template default \
- --finetuning_type freeze \
- --name_module_trainable all \
- --num_layer_trainable 8 \
- --use_llama_pro \
- --output_dir ../../../saves/LLaMA2-7B-Pro/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/extras/loraplus/llama3_lora_sft.yaml b/examples/extras/loraplus/llama3_lora_sft.yaml
new file mode 100644
index 00000000..dfb7058b
--- /dev/null
+++ b/examples/extras/loraplus/llama3_lora_sft.yaml
@@ -0,0 +1,39 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+loraplus_lr_ratio: 16.0
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/loraplus/sft.sh b/examples/extras/loraplus/sft.sh
deleted file mode 100644
index 8d152d9e..00000000
--- a/examples/extras/loraplus/sft.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --loraplus_lr_ratio 16.0 \
- --output_dir ../../saves/LLaMA2-7B/loraplus/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/extras/mod/llama3_full_sft.yaml b/examples/extras/mod/llama3_full_sft.yaml
new file mode 100644
index 00000000..5f80521d
--- /dev/null
+++ b/examples/extras/mod/llama3_full_sft.yaml
@@ -0,0 +1,39 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: full
+mixture_of_depths: convert
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b-mod/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+optim: paged_adamw_8bit
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/extras/mod/sft.sh b/examples/extras/mod/sft.sh
deleted file mode 100644
index 5219751f..00000000
--- a/examples/extras/mod/sft.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../../data \
- --template default \
- --finetuning_type full \
- --mixture_of_depths convert \
- --output_dir ../../../saves/LLaMA2-7B/mod/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --optim paged_adamw_8bit \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --pure_bf16
diff --git a/examples/full_multi_gpu/llama3_full_predict.yaml b/examples/full_multi_gpu/llama3_full_predict.yaml
new file mode 100644
index 00000000..5b9b680b
--- /dev/null
+++ b/examples/full_multi_gpu/llama3_full_predict.yaml
@@ -0,0 +1,23 @@
+# model
+model_name_or_path: saves/llama3-8b/full/sft
+
+# method
+stage: sft
+do_predict: true
+finetuning_type: full
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 50
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/full/predict
+overwrite_output_dir: true
+
+# eval
+per_device_eval_batch_size: 1
+predict_with_generate: true
diff --git a/examples/full_multi_gpu/llama3_full_sft.yaml b/examples/full_multi_gpu/llama3_full_sft.yaml
new file mode 100644
index 00000000..ef35e441
--- /dev/null
+++ b/examples/full_multi_gpu/llama3_full_sft.yaml
@@ -0,0 +1,41 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: full
+
+# ddp
+ddp_timeout: 180000000
+deepspeed: examples/deepspeed/ds_z3_config.json
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 2
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/full_multi_gpu/multi_node.sh b/examples/full_multi_gpu/multi_node.sh
index a1ffc0ee..9c2508b6 100644
--- a/examples/full_multi_gpu/multi_node.sh
+++ b/examples/full_multi_gpu/multi_node.sh
@@ -6,33 +6,4 @@ python -m torch.distributed.run \
--node_rank $RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT \
- ../../src/train.py \
- --deepspeed ../deepspeed/ds_z3_config.json \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type full \
- --output_dir ../../saves/LLaMA2-7B/full/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 2 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --plot_loss \
- --fp16
+ src/train.py examples/full_multi_gpu/llama3_full_sft.yaml
diff --git a/examples/full_multi_gpu/predict.sh b/examples/full_multi_gpu/predict.sh
index 7c2e458f..2445f444 100644
--- a/examples/full_multi_gpu/predict.sh
+++ b/examples/full_multi_gpu/predict.sh
@@ -1,20 +1,5 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
- --config_file ../accelerate/single_config.yaml \
- ../../src/train.py \
- --stage sft \
- --do_predict \
- --model_name_or_path ../../saves/LLaMA2-7B/full/sft \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type full \
- --output_dir ../../saves/LLaMA2-7B/full/predict \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_eval_batch_size 1 \
- --max_samples 20 \
- --predict_with_generate
+ --config_file examples/accelerate/single_config.yaml \
+ src/train.py examples/full_multi_gpu/llama3_full_predict.yaml
diff --git a/examples/full_multi_gpu/single_node.sh b/examples/full_multi_gpu/single_node.sh
index 73c7662d..f391166a 100644
--- a/examples/full_multi_gpu/single_node.sh
+++ b/examples/full_multi_gpu/single_node.sh
@@ -1,32 +1,4 @@
#!/bin/bash
-deepspeed --num_gpus 4 ../../src/train.py \
- --deepspeed ../deepspeed/ds_z3_config.json \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type full \
- --output_dir ../../saves/LLaMA2-7B/full/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 2 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --plot_loss \
- --fp16
+deepspeed --include "localhost:0,1,2,3" \
+ src/train.py examples/full_multi_gpu/llama3_full_sft.yaml
diff --git a/examples/inference/api_demo.sh b/examples/inference/api_demo.sh
deleted file mode 100644
index 6f0f1b2e..00000000
--- a/examples/inference/api_demo.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 API_PORT=8000 llamafactory-cli api \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --template default \
- --finetuning_type lora
diff --git a/examples/inference/cli_demo.sh b/examples/inference/cli_demo.sh
deleted file mode 100644
index bc762411..00000000
--- a/examples/inference/cli_demo.sh
+++ /dev/null
@@ -1,7 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --template default \
- --finetuning_type lora
diff --git a/examples/inference/evaluate.sh b/examples/inference/evaluate.sh
deleted file mode 100644
index 5030329d..00000000
--- a/examples/inference/evaluate.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli eval \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --template fewshot \
- --finetuning_type lora \
- --task mmlu \
- --split test \
- --lang en \
- --n_shot 5 \
- --batch_size 4
diff --git a/examples/inference/llama3.yaml b/examples/inference/llama3.yaml
new file mode 100644
index 00000000..ffc5be82
--- /dev/null
+++ b/examples/inference/llama3.yaml
@@ -0,0 +1,2 @@
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+template: llama3
diff --git a/examples/inference/llama3_lora_sft.yaml b/examples/inference/llama3_lora_sft.yaml
new file mode 100644
index 00000000..262f4445
--- /dev/null
+++ b/examples/inference/llama3_lora_sft.yaml
@@ -0,0 +1,4 @@
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+adapter_name_or_path: saves/llama3-8b/lora/sft
+template: llama3
+finetuning_type: lora
diff --git a/examples/inference/llama3_vllm.yaml b/examples/inference/llama3_vllm.yaml
new file mode 100644
index 00000000..8dd3b61a
--- /dev/null
+++ b/examples/inference/llama3_vllm.yaml
@@ -0,0 +1,4 @@
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+template: llama3
+infer_backend: vllm
+vllm_enforce_eager: true
diff --git a/examples/inference/web_demo.sh b/examples/inference/web_demo.sh
deleted file mode 100644
index a58cd2a0..00000000
--- a/examples/inference/web_demo.sh
+++ /dev/null
@@ -1,8 +0,0 @@
-#!/bin/bash
-# add `--visual_inputs True` to load MLLM
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --template default \
- --finetuning_type lora
diff --git a/examples/lora_multi_gpu/ds_zero3.sh b/examples/lora_multi_gpu/ds_zero3.sh
index bc74a6de..304f3780 100644
--- a/examples/lora_multi_gpu/ds_zero3.sh
+++ b/examples/lora_multi_gpu/ds_zero3.sh
@@ -1,34 +1,5 @@
#!/bin/bash
# ZeRO-3 enables weight sharding on multiple GPUs
-deepspeed --num_gpus 4 ../../src/train.py \
- --deepspeed ../deepspeed/ds_z3_config.json \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 2 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --plot_loss \
- --fp16
+deepspeed --include "localhost:0,1,2,3" \
+ src/train.py examples/lora_multi_gpu/llama3_lora_sft_ds.yaml
diff --git a/examples/lora_multi_gpu/llama3_lora_sft.yaml b/examples/lora_multi_gpu/llama3_lora_sft.yaml
new file mode 100644
index 00000000..d9690679
--- /dev/null
+++ b/examples/lora_multi_gpu/llama3_lora_sft.yaml
@@ -0,0 +1,41 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# ddp
+ddp_timeout: 180000000
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 2
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_multi_gpu/llama3_lora_sft_ds.yaml b/examples/lora_multi_gpu/llama3_lora_sft_ds.yaml
new file mode 100644
index 00000000..26955167
--- /dev/null
+++ b/examples/lora_multi_gpu/llama3_lora_sft_ds.yaml
@@ -0,0 +1,42 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# ddp
+ddp_timeout: 180000000
+deepspeed: examples/deepspeed/ds_z3_config.json
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 2
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_multi_gpu/multi_node.sh b/examples/lora_multi_gpu/multi_node.sh
index a58cac20..401fac5f 100644
--- a/examples/lora_multi_gpu/multi_node.sh
+++ b/examples/lora_multi_gpu/multi_node.sh
@@ -2,35 +2,5 @@
# also launch it on slave machine using slave_config.yaml
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
- --config_file ../accelerate/master_config.yaml \
- ../../src/train.py \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 2 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --plot_loss \
- --fp16
+ --config_file examples/accelerate/master_config.yaml \
+ src/train.py examples/lora_multi_gpu/llama3_lora_sft.yaml
diff --git a/examples/lora_multi_gpu/single_node.sh b/examples/lora_multi_gpu/single_node.sh
index c0719c04..885a0e8c 100644
--- a/examples/lora_multi_gpu/single_node.sh
+++ b/examples/lora_multi_gpu/single_node.sh
@@ -1,35 +1,5 @@
#!/bin/bash
CUDA_VISIBLE_DEVICES=0,1,2,3 accelerate launch \
- --config_file ../accelerate/single_config.yaml \
- ../../src/train.py \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 2 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --ddp_timeout 180000000 \
- --plot_loss \
- --fp16
+ --config_file examples/accelerate/single_config.yaml \
+ src/train.py examples/lora_multi_gpu/llama3_lora_sft.yaml
diff --git a/examples/lora_single_gpu/dpo.sh b/examples/lora_single_gpu/dpo.sh
deleted file mode 100644
index 2cb6cb01..00000000
--- a/examples/lora_single_gpu/dpo.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage dpo \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --create_new_adapter \
- --dataset orca_rlhf \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/dpo \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 1e-5 \
- --num_train_epochs 1.0 \
- --max_samples 1000 \
- --val_size 0.1 \
- --dpo_ftx 1.0 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/llama3_lora_dpo.yaml b/examples/lora_single_gpu/llama3_lora_dpo.yaml
new file mode 100644
index 00000000..f71f752d
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_dpo.yaml
@@ -0,0 +1,39 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: dpo
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+dpo_ftx: 1.0
+
+# dataset
+dataset: orca_rlhf
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/dpo
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 1.0e-5
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/llama3_lora_eval.yaml b/examples/lora_single_gpu/llama3_lora_eval.yaml
new file mode 100644
index 00000000..5808a47a
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_eval.yaml
@@ -0,0 +1,19 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+adapter_name_or_path: saves/llama3-8b/lora/sft
+
+# method
+finetuning_type: lora
+
+# dataset
+task: mmlu
+split: test
+template: fewshot
+lang: en
+n_shot: 5
+
+# output
+save_dir: saves/llama3-8b/lora/eval
+
+# eval
+batch_size: 4
diff --git a/examples/lora_single_gpu/llama3_lora_orpo.yaml b/examples/lora_single_gpu/llama3_lora_orpo.yaml
new file mode 100644
index 00000000..5d78d260
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_orpo.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: orpo
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: orca_rlhf
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/orpo
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.00001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/llama3_lora_ppo.yaml b/examples/lora_single_gpu/llama3_lora_ppo.yaml
new file mode 100644
index 00000000..8d78d20d
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_ppo.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+reward_model: saves/llama3-8b/lora/reward
+
+# method
+stage: ppo
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/ppo
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.00001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# generate
+max_new_tokens: 512
+top_k: 0
+top_p: 0.9
diff --git a/examples/lora_single_gpu/llama3_lora_predict.yaml b/examples/lora_single_gpu/llama3_lora_predict.yaml
new file mode 100644
index 00000000..5a9de686
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_predict.yaml
@@ -0,0 +1,24 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+adapter_name_or_path: saves/llama3-8b/lora/sft
+
+# method
+stage: sft
+do_predict: true
+finetuning_type: lora
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 50
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/predict
+overwrite_output_dir: true
+
+# eval
+per_device_eval_batch_size: 1
+predict_with_generate: true
diff --git a/examples/lora_single_gpu/llama3_lora_pretrain.yaml b/examples/lora_single_gpu/llama3_lora_pretrain.yaml
new file mode 100644
index 00000000..64245b71
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_pretrain.yaml
@@ -0,0 +1,37 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: pt
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: c4_demo
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/pretrain
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/llama3_lora_reward.yaml b/examples/lora_single_gpu/llama3_lora_reward.yaml
new file mode 100644
index 00000000..f190f4ac
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_reward.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: rm
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: orca_rlhf
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/reward
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.00001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/llama3_lora_sft.yaml b/examples/lora_single_gpu/llama3_lora_sft.yaml
new file mode 100644
index 00000000..f99df305
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_lora_sft.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/llama3_preprocess.yaml b/examples/lora_single_gpu/llama3_preprocess.yaml
new file mode 100644
index 00000000..0b3dc599
--- /dev/null
+++ b/examples/lora_single_gpu/llama3_preprocess.yaml
@@ -0,0 +1,22 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+tokenized_path: saves/llama3-8b/dataset/sft
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+overwrite_output_dir: true
diff --git a/examples/lora_single_gpu/llava1_5_lora_sft.yaml b/examples/lora_single_gpu/llava1_5_lora_sft.yaml
new file mode 100644
index 00000000..96c2701a
--- /dev/null
+++ b/examples/lora_single_gpu/llava1_5_lora_sft.yaml
@@ -0,0 +1,39 @@
+# model
+model_name_or_path: llava-hf/llava-1.5-7b-hf
+visual_inputs: true
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: mllm_demo
+template: vicuna
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llava1_5-7b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/lora_single_gpu/orpo.sh b/examples/lora_single_gpu/orpo.sh
deleted file mode 100644
index 335707bf..00000000
--- a/examples/lora_single_gpu/orpo.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage orpo \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset orca_rlhf \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/orpo \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 1e-5 \
- --num_train_epochs 1.0 \
- --max_samples 1000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/ppo.sh b/examples/lora_single_gpu/ppo.sh
deleted file mode 100644
index 9eccb05e..00000000
--- a/examples/lora_single_gpu/ppo.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage ppo \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --create_new_adapter \
- --dataset alpaca_gpt4_en \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --reward_model ../../saves/LLaMA2-7B/lora/reward \
- --output_dir ../../saves/LLaMA2-7B/lora/ppo \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 512 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --save_steps 100 \
- --learning_rate 1e-5 \
- --num_train_epochs 1.0 \
- --max_samples 1000 \
- --top_k 0 \
- --top_p 0.9 \
- --max_new_tokens 256 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/predict.sh b/examples/lora_single_gpu/predict.sh
deleted file mode 100644
index 250efed1..00000000
--- a/examples/lora_single_gpu/predict.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_predict \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft,../../saves/LLaMA2-7B/lora/dpo \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --output_dir ../../saves/LLaMA2-7B/lora/predict \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_eval_batch_size 1 \
- --max_samples 20 \
- --predict_with_generate
diff --git a/examples/lora_single_gpu/prepare.sh b/examples/lora_single_gpu/prepare.sh
deleted file mode 100644
index 277f9b7a..00000000
--- a/examples/lora_single_gpu/prepare.sh
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/bin/bash
-# use `--tokenized_path` in training script to load data
-
-CUDA_VISIBLE_DEVICES= llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --max_samples 3000 \
- --tokenized_path ../../saves/datasets/sft
diff --git a/examples/lora_single_gpu/pretrain.sh b/examples/lora_single_gpu/pretrain.sh
deleted file mode 100644
index 0782f00c..00000000
--- a/examples/lora_single_gpu/pretrain.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage pt \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset c4_demo \
- --dataset_dir ../../data \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/pretrain \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 10000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/reward.sh b/examples/lora_single_gpu/reward.sh
deleted file mode 100644
index 678809fd..00000000
--- a/examples/lora_single_gpu/reward.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage rm \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --create_new_adapter \
- --dataset orca_rlhf \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/reward \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --learning_rate 1e-5 \
- --num_train_epochs 1.0 \
- --max_samples 5000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/sft.sh b/examples/lora_single_gpu/sft.sh
deleted file mode 100644
index 2047e21f..00000000
--- a/examples/lora_single_gpu/sft.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/lora_single_gpu/sft_mllm.sh b/examples/lora_single_gpu/sft_mllm.sh
deleted file mode 100644
index 53e37262..00000000
--- a/examples/lora_single_gpu/sft_mllm.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path llava-hf/llava-1.5-7b-hf \
- --visual_inputs \
- --dataset mllm_demo \
- --dataset_dir ../../data \
- --template vicuna \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft_mllm \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --preprocessing_num_workers 16 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --warmup_steps 20 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 100.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/merge_lora/llama3_gptq.yaml b/examples/merge_lora/llama3_gptq.yaml
new file mode 100644
index 00000000..eac12f90
--- /dev/null
+++ b/examples/merge_lora/llama3_gptq.yaml
@@ -0,0 +1,11 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+template: llama3
+
+# export
+export_dir: models/llama3_gptq
+export_quantization_bit: 4
+export_quantization_dataset: data/c4_demo.json
+export_size: 2
+export_device: cpu
+export_legacy_format: false
diff --git a/examples/merge_lora/llama3_lora_sft.yaml b/examples/merge_lora/llama3_lora_sft.yaml
new file mode 100644
index 00000000..508a0b8c
--- /dev/null
+++ b/examples/merge_lora/llama3_lora_sft.yaml
@@ -0,0 +1,13 @@
+# Note: DO NOT use quantized model or quantization_bit when merging lora weights
+
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+adapter_name_or_path: saves/llama3-8b/lora/sft
+template: llama3
+finetuning_type: lora
+
+# export
+export_dir: models/llama3_lora_sft
+export_size: 2
+export_device: cpu
+export_legacy_format: false
diff --git a/examples/merge_lora/merge.sh b/examples/merge_lora/merge.sh
deleted file mode 100644
index 186e64a4..00000000
--- a/examples/merge_lora/merge.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-# DO NOT use quantized model or quantization_bit when merging lora weights
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli export \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --adapter_name_or_path ../../saves/LLaMA2-7B/lora/sft \
- --template default \
- --finetuning_type lora \
- --export_dir ../../models/llama2-7b-sft \
- --export_size 2 \
- --export_device cpu \
- --export_legacy_format False
diff --git a/examples/merge_lora/quantize.sh b/examples/merge_lora/quantize.sh
deleted file mode 100644
index 4a104645..00000000
--- a/examples/merge_lora/quantize.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/bin/bash
-# NEED TO run `merge.sh` before using this script
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli export \
- --model_name_or_path ../../models/llama2-7b-sft \
- --template default \
- --export_dir ../../models/llama2-7b-sft-int4 \
- --export_quantization_bit 4 \
- --export_quantization_dataset ../../data/c4_demo.json \
- --export_size 2 \
- --export_legacy_format False
diff --git a/examples/qlora_single_gpu/aqlm.sh b/examples/qlora_single_gpu/aqlm.sh
deleted file mode 100644
index 1e0a71ca..00000000
--- a/examples/qlora_single_gpu/aqlm.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path BlackSamorez/Llama-2-7b-AQLM-2Bit-1x16-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/qlora_single_gpu/awq.sh b/examples/qlora_single_gpu/awq.sh
deleted file mode 100644
index c13c8134..00000000
--- a/examples/qlora_single_gpu/awq.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path TheBloke/Llama-2-7B-AWQ \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/qlora_single_gpu/bitsandbytes.sh b/examples/qlora_single_gpu/bitsandbytes.sh
deleted file mode 100644
index 27f48d41..00000000
--- a/examples/qlora_single_gpu/bitsandbytes.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path meta-llama/Llama-2-7b-hf \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --quantization_bit 4 \
- --plot_loss \
- --fp16
diff --git a/examples/qlora_single_gpu/gptq.sh b/examples/qlora_single_gpu/gptq.sh
deleted file mode 100644
index 5b1b80e1..00000000
--- a/examples/qlora_single_gpu/gptq.sh
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \
- --stage sft \
- --do_train \
- --model_name_or_path TheBloke/Llama-2-7B-GPTQ \
- --dataset alpaca_gpt4_en,glaive_toolcall \
- --dataset_dir ../../data \
- --template default \
- --finetuning_type lora \
- --lora_target q_proj,v_proj \
- --output_dir ../../saves/LLaMA2-7B/lora/sft \
- --overwrite_cache \
- --overwrite_output_dir \
- --cutoff_len 1024 \
- --per_device_train_batch_size 1 \
- --per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
- --lr_scheduler_type cosine \
- --logging_steps 10 \
- --save_steps 100 \
- --eval_steps 100 \
- --evaluation_strategy steps \
- --load_best_model_at_end \
- --learning_rate 5e-5 \
- --num_train_epochs 3.0 \
- --max_samples 3000 \
- --val_size 0.1 \
- --plot_loss \
- --fp16
diff --git a/examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml b/examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
new file mode 100644
index 00000000..11f1d277
--- /dev/null
+++ b/examples/qlora_single_gpu/llama3_lora_sft_aqlm.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/qlora_single_gpu/llama3_lora_sft_awq.yaml b/examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
new file mode 100644
index 00000000..4b070d45
--- /dev/null
+++ b/examples/qlora_single_gpu/llama3_lora_sft_awq.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-AWQ
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml b/examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
new file mode 100644
index 00000000..7bc31bde
--- /dev/null
+++ b/examples/qlora_single_gpu/llama3_lora_sft_bitsandbytes.yaml
@@ -0,0 +1,42 @@
+# model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+quantization_bit: 4
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# ddp
+ddp_timeout: 180000000
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml b/examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
new file mode 100644
index 00000000..2f8cfe45
--- /dev/null
+++ b/examples/qlora_single_gpu/llama3_lora_sft_gptq.yaml
@@ -0,0 +1,38 @@
+# model
+model_name_or_path: TechxGenus/Meta-Llama-3-8B-Instruct-GPTQ
+
+# method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: q_proj,v_proj
+
+# dataset
+dataset: identity,alpaca_gpt4_en
+template: llama3
+cutoff_len: 1024
+max_samples: 1000
+val_size: 0.1
+overwrite_cache: true
+preprocessing_num_workers: 16
+
+# output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+
+# train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 0.0001
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+fp16: true
+
+# eval
+per_device_eval_batch_size: 1
+evaluation_strategy: steps
+eval_steps: 500
diff --git a/setup.py b/setup.py
index f7589eb8..7b849942 100644
--- a/setup.py
+++ b/setup.py
@@ -20,12 +20,12 @@ def get_requires():
extra_require = {
- "deepspeed": ["deepspeed>=0.10.0"],
"metrics": ["nltk", "jieba", "rouge-chinese"],
+ "deepspeed": ["deepspeed>=0.10.0"],
+ "bitsandbytes": ["bitsandbytes>=0.39.0"],
+ "vllm": ["vllm>=0.4.0"],
"galore": ["galore-torch"],
"badam": ["badam"],
- "vllm": ["vllm>=0.4.0"],
- "bitsandbytes": ["bitsandbytes>=0.39.0"],
"gptq": ["optimum>=1.16.0", "auto-gptq>=0.5.0"],
"awq": ["autoawq"],
"aqlm": ["aqlm[gpu]>=1.1.0"],
diff --git a/src/webui.py b/src/webui.py
new file mode 100644
index 00000000..c225c710
--- /dev/null
+++ b/src/webui.py
@@ -0,0 +1,9 @@
+from llmtuner.webui.interface import create_ui
+
+
+def main():
+ create_ui().queue().launch(server_name="0.0.0.0", server_port=None, share=False)
+
+
+if __name__ == "__main__":
+ main()