diff --git a/README.md b/README.md
index ed56baba..05a75949 100644
--- a/README.md
+++ b/README.md
@@ -342,6 +342,21 @@ export GRADIO_SERVER_PORT=7860 # `set GRADIO_SERVER_PORT=7860` for Windows
python src/train_web.py # or python -m llmtuner.webui.interface
```
+#### For Aliyun users
+
+If you encounter display problems in the LLaMA Board GUI, try setting the environment variable below before starting LLaMA Board:
+
+```bash
+export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
+```
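+
+The root path tells Gradio that it is being served behind the DSW proxy. As a minimal sketch (assuming `JUPYTER_NAME` is provided by the Aliyun environment), the variable can also be scoped to a single launch:
+
+```bash
+# One-off alternative: set the root path for this launch only
+GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/ python src/train_web.py
+```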
+
#### Use Docker
```bash
@@ -381,8 +396,17 @@ Use `python src/train_bash.py -h` to display arguments description.
```bash
CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
- --model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
- --template mistral \
+ --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+ --template llama3 \
--infer_backend vllm \
--vllm_enforce_eager
```
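+
+Once the server is running, you can send a quick test request. A hypothetical smoke test, assuming the demo exposes an OpenAI-style `/v1/chat/completions` endpoint on the `API_PORT` chosen above (the model name is illustrative):
+
+```bash
+# Hypothetical smoke test; endpoint path and model name are assumptions
+curl http://localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{"model": "Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "Hello"}]}'
+```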
diff --git a/README_zh.md b/README_zh.md
index 586ee38a..0e01e2c2 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -342,6 +342,21 @@ export GRADIO_SERVER_PORT=7860 # Windows 使用 `set GRADIO_SERVER_PORT=7860`
python src/train_web.py # 或 python -m llmtuner.webui.interface
```
+#### For Aliyun users
+
+If you encounter display problems in the LLaMA Board GUI, try setting the environment variable below before starting LLaMA Board:
+
+```bash
+export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
+```
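+
+The root path tells Gradio that it is being served behind the DSW proxy. As a minimal sketch (assuming `JUPYTER_NAME` is provided by the Aliyun environment), the variable can also be scoped to a single launch:
+
+```bash
+# One-off alternative: set the root path for this launch only
+GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/ python src/train_web.py
+```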
+
#### 使用 Docker
```bash
@@ -381,8 +396,17 @@ docker compose -f ./docker-compose.yml up -d
```bash
CUDA_VISIBLE_DEVICES=0,1 API_PORT=8000 python src/api_demo.py \
- --model_name_or_path mistralai/Mistral-7B-Instruct-v0.2 \
- --template mistral \
+ --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
+ --template llama3 \
--infer_backend vllm \
--vllm_enforce_eager
```
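+
+Once the server is running, you can send a quick test request. A hypothetical smoke test, assuming the demo exposes an OpenAI-style `/v1/chat/completions` endpoint on the `API_PORT` chosen above (the model name is illustrative):
+
+```bash
+# Hypothetical smoke test; endpoint path and model name are assumptions
+curl http://localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{"model": "Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "Hello"}]}'
+```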