Mirror of https://github.com/hiyouga/LLaMA-Factory.git, synced 2025-08-02 11:42:49 +08:00
Usage:
- `pretrain.sh`: run pre-training (optional)
- `sft.sh`: run supervised fine-tuning
- `reward.sh`: run reward modeling (must be run after `sft.sh`)
- `ppo.sh`: run PPO training (must be run after `sft.sh` and `reward.sh`)
- `dpo.sh`: run DPO training (must be run after `sft.sh`)
- `orpo.sh`: run ORPO training
- `predict.sh`: run batch prediction (must be run after `sft.sh` and `dpo.sh`)
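Taken together, the dependencies above imply an execution order. A minimal sketch, assuming the scripts are invoked from this directory (this shows one possible sequence, not the only valid one):

```shell
# One possible end-to-end run, following the dependencies listed above.
bash pretrain.sh   # optional pre-training
bash sft.sh        # supervised fine-tuning (prerequisite for all steps below)
bash reward.sh     # reward modeling (needs sft.sh)
bash ppo.sh        # PPO training (needs sft.sh and reward.sh)
bash dpo.sh        # DPO training (needs sft.sh only)
bash predict.sh    # batch prediction (after sft.sh and dpo.sh)
```

Note that PPO (via `reward.sh` + `ppo.sh`) and DPO are alternative preference-tuning routes on top of the SFT model; `orpo.sh` is independent of the others.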