support batch inference in vLLM

This commit is contained in:
hiyouga
2024-12-04 13:50:00 +00:00
parent dc78355002
commit 1324d158f9
29 changed files with 148 additions and 407 deletions


@@ -594,7 +594,7 @@ API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml
 > [!TIP]
 > For the API documentation, see [here](https://platform.openai.com/docs/api-reference/chat/create).
 >
-> Examples: [Image understanding](scripts/test_image.py) | [Tool calling](scripts/test_toolcall.py)
+> Examples: [Image understanding](scripts/api_example/test_image.py) | [Tool calling](scripts/api_example/test_toolcall.py)
 ### Download from ModelScope Hub
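
The hunk above points at the OpenAI-compatible server started with `API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml`. As a rough illustration of how that endpoint is queried, here is a minimal client sketch using the standard `openai` Python package; the base URL, placeholder API key, and model name are assumptions about a local deployment and are not part of this commit or the linked example scripts.

```python
# Minimal sketch: calling the OpenAI-compatible endpoint exposed by
# `llamafactory-cli api` (assumed to be listening on localhost:8000).
from openai import OpenAI

# A local server typically ignores the API key, but the client requires one.
# The model name below is an assumption; it should match the served model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="0")

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize what batch inference is."}],
)
print(response.choices[0].message.content)
```

For image understanding or tool calling, the linked `test_image.py` and `test_toolcall.py` scripts in `scripts/api_example/` are the authoritative examples; the sketch above only shows the plain chat-completion path.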