Mirror of https://github.com/hiyouga/LLaMA-Factory.git, synced 2025-08-02 11:42:49 +08:00
commit fe494fe97e
parent 92cafef325

update examples

Former-commit-id: 047313f48e0b2c050952592329509e8b3dfc6f81
@@ -1,5 +1,19 @@
We provide diverse examples of fine-tuning LLMs.

Make sure to execute these commands in the `LLaMA-Factory` directory.
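For instance, a typical run looks like the sketch below. It assumes the `llamafactory-cli` entry point provided by the project (installed via `pip install -e .`) and a config path such as `examples/lora_single_gpu/llama3_lora_sft.yaml`; the exact path is an assumption, so substitute any YAML that exists under `examples/` in your checkout.

```bash
# Run from the repository root so the relative paths in the example
# configs resolve correctly.
cd LLaMA-Factory

# Launch a training job via the project's CLI. The YAML path is a
# placeholder; use a config that exists under examples/ in your checkout.
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
```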
## Table of Contents

- [LoRA Fine-Tuning on a Single GPU](#lora-fine-tuning-on-a-single-gpu)
- [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
- [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
- [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
- [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
- [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
- [Extras](#extras)

## Examples

### LoRA Fine-Tuning on a Single GPU

#### (Continuous) Pre-Training
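As a rough illustration of this subsection (not the repository's exact recipe), continuous pre-training differs from instruction tuning mainly in setting `stage: pt` and pointing `dataset` at a plain-text corpus. The config below is a hypothetical minimal example: the field names follow LLaMA-Factory's YAML schema, but the values, the `c4_demo` dataset choice, and the file path are placeholders.

```bash
# Hypothetical minimal config for single-GPU LoRA continuous pre-training.
# stage: pt selects the pre-training objective (next-token prediction);
# all values below are illustrative, not a tuned recipe.
cat > pretrain_demo.yaml <<'EOF'
model_name_or_path: meta-llama/Meta-Llama-3-8B
stage: pt
do_train: true
finetuning_type: lora
dataset: c4_demo
cutoff_len: 1024
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
output_dir: saves/llama3-8b/lora/pretrain
EOF

CUDA_VISIBLE_DEVICES=0 llamafactory-cli train pretrain_demo.yaml
```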
@@ -1,5 +1,19 @@
We provide diverse examples of fine-tuning LLMs.

Make sure to execute these commands in the `LLaMA-Factory` directory.

## Table of Contents

- [LoRA Fine-Tuning on a Single GPU](#单-gpu-lora-微调)
- [QLoRA Fine-Tuning on a Single GPU](#单-gpu-qlora-微调)
- [LoRA Fine-Tuning on Multiple GPUs](#多-gpu-lora-微调)
- [Full-Parameter Fine-Tuning on Multiple GPUs](#多-gpu-全参数微调)
- [Merging LoRA Adapters and Quantization](#合并-lora-适配器与模型量化)
- [Inferring LoRA Fine-Tuned Models](#推理-lora-模型)
- [Extras](#杂项)

## Examples

### LoRA Fine-Tuning on a Single GPU

#### (Continuous) Pre-Training