update readme

Former-commit-id: 2b4e5f0d3239984f62c7eca6dc7b9e3bbc6f8c4e
hiyouga 2023-12-18 15:46:45 +08:00
parent 16cc0321f2
commit dee19b11ba
2 changed files with 7 additions and 7 deletions


@@ -104,7 +104,7 @@ Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/
 | [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
 | [Mistral](https://huggingface.co/mistralai) | 7B | q_proj,v_proj | mistral |
 | [Mixtral](https://huggingface.co/mistralai) | 8x7B | q_proj,v_proj | mistral |
-| [Phi-1.5](https://huggingface.co/microsoft/phi-1_5) | 1.3B | Wqkv | - |
+| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | Wqkv | - |
 | [Qwen](https://github.com/QwenLM/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
 | [XVERSE](https://github.com/xverse-ai) | 7B/13B/65B | q_proj,v_proj | xverse |
@@ -126,7 +126,7 @@ Please refer to [constants.py](src/llmtuner/extras/constants.py) for a full list
 | DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 > [!NOTE]
-> Use `--quantization_bit 4/8` argument to enable QLoRA.
+> Use `--quantization_bit 4` argument to enable QLoRA.
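For context, the flag in the note above slots into an ordinary LoRA fine-tuning command. A minimal sketch, assuming the repository's `src/train_bash.py` entry point (shown elsewhere in this diff); the model, dataset, and output path are illustrative placeholders, while the `llama2` template and `q_proj,v_proj` targets come from the table above:

```bash
# Sketch: QLoRA fine-tuning, i.e. LoRA on top of a 4-bit quantized base model.
# Model, dataset, and output_dir are illustrative placeholders.
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --dataset alpaca_gpt4_en \
    --template llama2 \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --quantization_bit 4 \
    --output_dir out/llama2-qlora
```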
 ## Provided Datasets
@@ -482,7 +482,7 @@ python src/export_model.py \
 > Merging LoRA weights into a quantized model is not supported.
 > [!TIP]
-> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model.
+> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model after merging the LoRA weights.
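The amended tip describes a second export pass: since LoRA weights cannot be merged into a quantized model, the merge is done first, and quantization is applied to the resulting full model. A minimal sketch; the input and output paths are illustrative placeholders, and the flags beyond the two named in the tip are assumptions about this script's interface:

```bash
# Sketch: quantize an already-merged model in a second export pass
# (per the note above, merging LoRA weights into a quantized model
# is not supported, so quantization happens after the merge).
# path_to_merged_model / path_to_quantized_model are placeholders.
python src/export_model.py \
    --model_name_or_path path_to_merged_model \
    --template default \
    --export_dir path_to_quantized_model \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json
```

The calibration dataset (`data/c4_demo.json` here) supplies sample text for the post-training quantization step.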
 ### API Demo


@@ -104,7 +104,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/6ba60acc-e2e2-4bec-b846
 | [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
 | [Mistral](https://huggingface.co/mistralai) | 7B | q_proj,v_proj | mistral |
 | [Mixtral](https://huggingface.co/mistralai) | 8x7B | q_proj,v_proj | mistral |
-| [Phi-1.5](https://huggingface.co/microsoft/phi-1_5) | 1.3B | Wqkv | - |
+| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | Wqkv | - |
 | [Qwen](https://github.com/QwenLM/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
 | [XVERSE](https://github.com/xverse-ai) | 7B/13B/65B | q_proj,v_proj | xverse |
@@ -126,7 +126,7 @@ https://github.com/hiyouga/LLaMA-Factory/assets/16256802/6ba60acc-e2e2-4bec-b846
 | DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 > [!NOTE]
-> Please use the `--quantization_bit 4/8` argument to enable QLoRA training.
+> Please use the `--quantization_bit 4` argument to enable QLoRA training.
 ## Datasets
@@ -467,7 +467,7 @@ deepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \
 </details>
-### Merge LoRA weights and export the full model
+### Merge LoRA weights and export the model
 ```bash
 python src/export_model.py \
@@ -482,7 +482,7 @@ python src/export_model.py \
 > Merging and exporting the LoRA weights of a quantized model is not yet supported.
 > [!TIP]
-> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the exported model.
+> After merging the LoRA weights, you can use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` again to quantize the model.
 ### API Service