From 5eb8107db2adcc49e25d96c302c4ba79676c2bc6 Mon Sep 17 00:00:00 2001
From: hiyouga
Date: Thu, 16 May 2024 19:12:09 +0800
Subject: [PATCH] fix examples #3769

Former-commit-id: 3df986c6793a51ec2cb5f31fd1808cd3a9883bc4
---
 examples/README.md    | 8 +++++---
 examples/README_zh.md | 8 +++++---
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 4b4a8248..5aa03dfc 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -171,22 +171,24 @@ CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.y
 
 ### Inferring LoRA Fine-Tuned Models
 
+Use `CUDA_VISIBLE_DEVICES=0,1` to infer models on multiple devices.
+
 #### Use CLI
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
 ```
 
 #### Use Web UI
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
 ```
 
 #### Launch OpenAI-style API
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
 ```
 
 ### Extras
diff --git a/examples/README_zh.md b/examples/README_zh.md
index 3b5b2dee..5d205a21 100644
--- a/examples/README_zh.md
+++ b/examples/README_zh.md
@@ -171,22 +171,24 @@ CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_gptq.y
 
 ### 推理 LoRA 模型
 
+使用 `CUDA_VISIBLE_DEVICES=0,1` 进行多卡推理。
+
 #### 使用命令行接口
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
 ```
 
 #### 使用浏览器界面
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
 ```
 
 #### 启动 OpenAI 风格 API
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/merge_lora/llama3_lora_sft.yaml
+CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
 ```
 
 ### 杂项
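
For context, a minimal usage sketch of the multi-device note this patch adds, assuming two local GPUs (IDs 0 and 1 are illustrative) and that `examples/inference/llama3_lora_sft.yaml` ships with the repository as the corrected paths imply:

```bash
# Multi-device inference per the added note; adjust GPU IDs to your machine.
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml

# The same pattern applies to the Web UI and the OpenAI-style API entry points.
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml
CUDA_VISIBLE_DEVICES=0,1 llamafactory-cli api examples/inference/llama3_lora_sft.yaml
```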