mirror of https://github.com/hiyouga/LLaMA-Factory.git
synced 2025-08-23 06:12:50 +08:00

update readme

Former-commit-id: 638043ced426c392014c5f42ce00f378f92f905d
parent 10e65f0042 · commit 0f941f30f7

README.md (12 lines changed)
````diff
@@ -366,17 +366,23 @@ See [examples/README.md](examples/README.md) for advanced usage (including distr
 #### Use local environment
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 GRADIO_SHARE=1 llamafactory-cli webui
+CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=1 llamafactory-cli webui
 ```
 
-<details><summary>For Alibaba Cloud users</summary>
+<details><summary>For Alibaba Cloud PAI or AutoDL users</summary>
 
-If you encountered display problems in LLaMA Board on Alibaba Cloud, try using the following command to set environment variables before starting LLaMA Board:
+If you encountered display problems in LLaMA Board on Alibaba Cloud PAI, try using the following command to set environment variables before starting LLaMA Board:
 
 ```bash
 export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
 ```
 
+If you are using AutoDL, please install a specific version of Gradio:
+
+```bash
+pip install gradio==4.10.0
+```
+
 </details>
 
 #### Use Docker
````
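The `GRADIO_ROOT_PATH` value in the diff above is built from the Jupyter instance name and the Gradio port. A minimal sketch of how the pieces fit together, assuming `JUPYTER_NAME` is provided by the hosting environment as the README implies; the helper name `gradio_root_path` and the instance name `dsw-12345` are hypothetical, used only for illustration:

```shell
# Hypothetical helper: derive the root path Gradio should use when it is
# served behind a Jupyter proxy (as on Alibaba Cloud PAI). The resulting
# value matches the pattern /<instance>/proxy/<port>/ from the README.
gradio_root_path() {
  local instance="$1"
  local port="${2:-7860}"   # 7860 matches GRADIO_SERVER_PORT in the diff
  printf '/%s/proxy/%s/\n' "$instance" "$port"
}

# Example with a made-up instance name:
gradio_root_path dsw-12345        # → /dsw-12345/proxy/7860/

# In a real session, JUPYTER_NAME would be set by the platform:
export GRADIO_ROOT_PATH="$(gradio_root_path "${JUPYTER_NAME:-dsw-12345}")"
```

After exporting `GRADIO_ROOT_PATH`, launching the web UI with `GRADIO_SERVER_PORT=7860` keeps the proxy path and the actual listening port consistent.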
README_zh.md (13 lines changed)
````diff
@@ -366,17 +366,23 @@ CUDA_VISIBLE_DEVICES=0 llamafactory-cli export examples/merge_lora/llama3_lora_s
 #### 使用本地环境
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 GRADIO_SHARE=1 llamafactory-cli webui
+CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=1 llamafactory-cli webui
 ```
 
-<details><summary>阿里云用户指南</summary>
+<details><summary>阿里云 PAI 和 AutoDL 用户指南</summary>
 
-如果您在阿里云上使用 LLaMA Board 时遇到显示问题,请尝试在启动前使用以下命令设置环境变量:
+如果您在阿里云 PAI 上使用 LLaMA Board 时遇到显示问题,请尝试在启动前使用以下命令设置环境变量:
 
 ```bash
 export GRADIO_ROOT_PATH=/${JUPYTER_NAME}/proxy/7860/
 ```
 
+如果您正在使用 AutoDL,请安装下述 Gradio 版本:
+
+```bash
+pip install gradio==4.10.0
+```
+
 </details>
 
 #### 使用 Docker
@@ -475,7 +481,6 @@ export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1`
 1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[🤗Demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。
 
-
 </details>
 
 ## 协议
````