[assets] update readme (#7644)

hoshi-hiyouga 2025-04-09 01:06:06 +08:00 committed by GitHub
parent 7d8bee96fc
commit 39876b85fc
11 changed files with 17 additions and 8 deletions

README.md

@@ -42,6 +42,7 @@ Choose your path:
- **Local machine**: Please refer to [usage](#getting-started)
- **PAI-DSW (free trial)**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) | [DeepSeek-R1-Distill Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b)
- **Amazon SageMaker**: [Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
+- **Easy Dataset**: [Fine-tune on Synthetic Data](https://buaa-act.feishu.cn/wiki/GVzlwYcRFiR8OLkHbL6cQpYin7g)
> [!NOTE]
> Except for the above links, all other websites are unauthorized third-party websites. Please use them with caution.
@@ -557,6 +558,8 @@ Please refer to [data/README.md](data/README.md) for checking the details about
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset.
+You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** to create synthetic data for fine-tuning.
### Quickstart
Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
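The three commands themselves fall outside this hunk; for reference, a sketch of the quickstart commands, assuming the `llamafactory-cli` entry point and the example configs shipped under the repo's `examples/` directory:

```bash
llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml    # LoRA fine-tuning
llamafactory-cli chat examples/inference/llama3_lora_sft.yaml      # inference (chat)
llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml   # merge LoRA weights
```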

README_zh.md

@@ -44,6 +44,7 @@ https://github.com/user-attachments/assets/43b700c6-a178-41db-b1f8-8190a5d3fcfc
- **Local machine**: please refer to [Usage](#如何使用)
- **PAI-DSW (free trial)**: [Llama3 example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) | [DeepSeek-R1-Distill example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b)
- **Amazon SageMaker**: [Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
+- **Easy Dataset**: [Fine-tuning with distilled data](https://buaa-act.feishu.cn/wiki/KY9xwTGs1iqHrRkjXBwcZP9WnL9)
> [!NOTE]
> Websites other than the links above are unauthorized third-party sites. Please use them with caution.
@@ -404,7 +405,7 @@ huggingface-cli login
## Hardware and Software Requirements
-| Mandatory | Minimum | Recommend |
+| Mandatory | Minimum | Recommend |
| ------------ | ------- | --------- |
| python | 3.9 | 3.10 |
| torch | 1.13.1 | 2.6.0 |
@@ -414,7 +415,7 @@ huggingface-cli login
| peft | 0.14.0 | 0.15.0 |
| trl | 0.8.6 | 0.9.6 |
-| Optional | Minimum | Recommend |
+| Optional | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.16.4 |
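To compare a local environment against the table above, a minimal check snippet (a convenience sketch, not part of the repo):

```python
import torch

# Print the versions the requirement table cares about most.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)
```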
@@ -560,6 +561,8 @@ pip install .
> [!NOTE]
> Please update `data/dataset_info.json` when using a custom dataset.
+You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)** to build synthetic data for fine-tuning.
### Quickstart
The following three commands run LoRA **fine-tuning**, **inference**, and **merging** of the Llama3-8B-Instruct model, respectively.

Binary image file changed (not shown): 160 KiB → 165 KiB

Binary image file changed (not shown): 167 KiB → 168 KiB

data/mllm_demo.json

@@ -53,5 +53,4 @@
"mllm_demo_data/4.mp3"
]
}
]

requirements.txt

@@ -1,4 +1,4 @@
-transformers>=4.41.2,<=4.51.0,!=4.46.*,!=4.47.*,!=4.48.0
+transformers>=4.41.2,<=4.51.1,!=4.46.*,!=4.47.*,!=4.48.0
datasets>=2.16.0,<=3.4.1
accelerate>=0.34.0,<=1.5.2
peft>=0.14.0,<=0.15.0

setup.py

@@ -54,7 +54,7 @@ extra_require = {
"awq": ["autoawq"],
"aqlm": ["aqlm[gpu]>=1.1.0"],
"vllm": ["vllm>=0.4.3,<=0.8.3"],
"sglang": ["sglang[srt]>=0.4.5", "transformers==4.51.0"],
"sglang": ["sglang[srt]>=0.4.5", "transformers==4.51.1"],
"galore": ["galore-torch"],
"apollo": ["apollo-torch"],
"badam": ["badam>=1.2.1"],

src/llamafactory/__init__.py

@@ -19,7 +19,7 @@ Level:
Dependency graph:
  main:
-    transformers>=4.41.2,<=4.51.0,!=4.46.*,!=4.47.*,!=4.48.0
+    transformers>=4.41.2,<=4.51.1,!=4.46.*,!=4.47.*,!=4.48.0
    datasets>=2.16.0,<=3.4.1
    accelerate>=0.34.0,<=1.5.2
    peft>=0.14.0,<=0.15.0

src/llamafactory/data/collator.py

@@ -298,8 +298,9 @@ class KTODataCollatorWithPadding(MultiModalDataCollatorForSeq2Seq):
batch["kl_input_ids"] = kl_batch["input_ids"]
batch["kl_attention_mask"] = kl_batch["attention_mask"]
batch["kl_labels"] = kl_batch["labels"]
if "cross_attention_mask" in kl_batch: # for mllama inputs.
if "cross_attention_mask" in kl_batch: # for mllama inputs
batch["kl_cross_attention_mask"] = kl_batch["cross_attention_mask"]
if "token_type_ids" in kl_batch:
batch["kl_token_type_ids"] = kl_batch["token_type_ids"]

src/llamafactory/extras/misc.py

@@ -89,7 +89,7 @@ def check_version(requirement: str, mandatory: bool = False) -> None:
def check_dependencies() -> None:
    r"""Check the version of the required packages."""
-    check_version("transformers>=4.41.2,<=4.51.0,!=4.46.0,!=4.46.1,!=4.46.2,!=4.46.3,!=4.47.0,!=4.47.1,!=4.48.0")
+    check_version("transformers>=4.41.2,<=4.51.1,!=4.46.0,!=4.46.1,!=4.46.2,!=4.46.3,!=4.47.0,!=4.47.1,!=4.48.0")
    check_version("datasets>=2.16.0,<=3.4.1")
    check_version("accelerate>=0.34.0,<=1.5.2")
    check_version("peft>=0.14.0,<=0.15.0")

src/llamafactory/train/kto/trainer.py

@@ -147,6 +147,9 @@ class CustomKTOTrainer(KTOTrainer):
if "pixel_values" in batch:
model_inputs["pixel_values"] = batch["pixel_values"]
if "image_sizes" in batch:
model_inputs["image_sizes"] = batch["image_sizes"]
if "image_grid_thw" in batch:
model_inputs["image_grid_thw"] = batch["image_grid_thw"]