From fe494fe97ee5af511e04cf3b915060c977660350 Mon Sep 17 00:00:00 2001
From: hiyouga
Date: Mon, 6 May 2024 23:07:55 +0800
Subject: [PATCH] update examples

Former-commit-id: 047313f48e0b2c050952592329509e8b3dfc6f81
---
 examples/README.md    | 14 ++++++++++++++
 examples/README_zh.md | 14 ++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/examples/README.md b/examples/README.md
index 922f9c7b..ba993b99 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -1,5 +1,19 @@
 We provide diverse examples about fine-tuning LLMs.
 
+Make sure to execute these commands in the `LLaMA-Factory` directory.
+
+## Table of Contents
+
+- [LoRA Fine-Tuning on A Single GPU](#lora-fine-tuning-on-a-single-gpu)
+- [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
+- [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
+- [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
+- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
+- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
+- [Extras](#extras)
+
+## Examples
+
 ### LoRA Fine-Tuning on A Single GPU
 
 #### (Continuous) Pre-Training
diff --git a/examples/README_zh.md b/examples/README_zh.md
index 14d72c10..491ec688 100644
--- a/examples/README_zh.md
+++ b/examples/README_zh.md
@@ -1,5 +1,19 @@
 我们提供了多样化的大模型微调示例脚本。
 
+请确保在 `LLaMA-Factory` 目录下执行下述命令。
+
+## 目录
+
+- [单 GPU LoRA 微调](#单-gpu-lora-微调)
+- [单 GPU QLoRA 微调](#单-gpu-qlora-微调)
+- [多 GPU LoRA 微调](#多-gpu-lora-微调)
+- [多 GPU 全参数微调](#多-gpu-全参数微调)
+- [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化)
+- [推理 LoRA 模型](#推理-lora-模型)
+- [杂项](#杂项)
+
+## 示例
+
 ### 单 GPU LoRA 微调
 
 #### (增量)预训练
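For readers following the "execute these commands in the `LLaMA-Factory` directory" note, a minimal sketch of what one of the listed examples looks like when launched is shown below. The `llamafactory-cli train` entry point is the project's CLI; the specific config path is an assumed illustrative name, not something this patch adds.

```bash
# Illustrative sketch only: launch single-GPU LoRA (continuous) pre-training
# from the LLaMA-Factory root directory. The YAML path below is an assumed
# example config name and may differ from the actual files under examples/.
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_pretrain.yaml
```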