mirror of https://github.com/hiyouga/LLaMA-Factory.git
synced 2025-08-04 12:42:51 +08:00

update readme

Former-commit-id: a83e7587a05e388916a3b9bb6fe42cffcdcf0886
parent 31f96ce22a
commit c0d8d530dd

README.md (14 lines changed)
@@ -342,6 +342,16 @@ export GRADIO_SERVER_PORT=7860 # `set GRADIO_SERVER_PORT=7860` for Windows
 python src/train_web.py # or python -m llmtuner.webui.interface
 ```
 
+<details><summary>For Aliyun users</summary>
+
+If you encounter display problems in the LLaMA Board GUI, try setting the following environment variable before starting LLaMA Board:
+
+```bash
+export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
+```
+
+</details>
+
 #### Use Docker
 
 ```bash
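For context: `GRADIO_ROOT_PATH` tells Gradio the URL prefix it is being served under when running behind a reverse proxy, which is how Aliyun DSW notebooks expose ports. A minimal launch sketch follows; it assumes the hosting environment defines `JUPYTER_NAME` (as Aliyun DSW does) and that the port in the prefix matches `GRADIO_SERVER_PORT` from the surrounding section.

```bash
# Minimal sketch of launching LLaMA Board behind a per-user proxy.
# Assumes JUPYTER_NAME is set by the hosting environment (e.g. Aliyun DSW).
export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/   # URL prefix Gradio is served under
export GRADIO_SERVER_PORT=7860                         # must match the port in the prefix above
python src/train_web.py                                # or: python -m llmtuner.webui.interface
```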
@@ -381,8 +391,8 @@ Use `python src/train_bash.py -h` to display arguments description.
 
 ```bash
 CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
-    --model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
-    --template mistral \
+    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+    --template llama3 \
     --infer_backend vllm \
     --vllm_enforce_eager
 ```
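This hunk swaps the example model from Mistral-7B-Instruct-v0.2 to Meta-Llama-3-8B-Instruct and updates `--template` to match, keeping the vLLM backend flags. Once `api_demo.py` is listening on `API_PORT=8000`, a request along the lines of the sketch below should exercise it; the `/v1/chat/completions` path assumes the demo exposes an OpenAI-style API, which this diff does not itself show.

```bash
# Hedged sketch of calling the launched server; assumes an OpenAI-compatible
# /v1/chat/completions endpoint (not confirmed by this diff).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```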

README_zh.md (14 lines changed)
@@ -342,6 +342,16 @@ export GRADIO_SERVER_PORT=7860 # use `set GRADIO_SERVER_PORT=7860` on Windows
 python src/train_web.py # or python -m llmtuner.webui.interface
 ```
 
+<details><summary>For Aliyun users</summary>
+
+If you encounter display problems in the LLaMA Board GUI, try setting the following environment variable before starting LLaMA Board:
+
+```bash
+export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
+```
+
+</details>
+
 #### Use Docker
 
 ```bash
@@ -381,8 +391,8 @@ docker compose -f ./docker-compose.yml up -d
 
 ```bash
 CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
-    --model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
-    --template mistral \
+    --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+    --template llama3 \
     --infer_backend vllm \
     --vllm_enforce_eager
 ```