hoshi-hiyouga | b83a38eb98 | [data] qwen3 fixes (#8109) | 2025-05-20 02:00:30 +08:00
hoshi-hiyouga | ed2f89efaf | [doc] add no build isolation (#8103) | 2025-05-19 19:25:13 +08:00
Shawn Tao | e8a18c17e9 | [infer] Modify vllm_infer.py to batch preprocess to avoid too much files opened error (#8051) | 2025-05-15 10:54:35 +08:00
    Co-authored-by: Kingsley <82590017+Kuangdd01@users.noreply.github.com>
Kingsley | cef3a0b2e2 | [scripts] add video params for vllm infer (#7992) | 2025-05-09 21:16:52 +08:00
hoshi-hiyouga | 5817cda37e | [misc] fix packing and eval plot (#7623) | 2025-04-07 18:20:57 +08:00
hoshi-hiyouga | 903db09822 | [infer] vllm video/audio inference (#7566) | 2025-04-02 02:27:04 +08:00
hoshi-hiyouga | aaf2e6ba2a | [model] fix kv cache (#7564) | 2025-04-01 23:07:46 +08:00
hoshi-hiyouga | 7c1640ed5f | [misc] upgrade format to py39 (#7256) | 2025-03-12 00:08:41 +08:00
hoshi-hiyouga | 317d0855d2 | [infer] fix vllm args (#7235) | 2025-03-11 01:15:35 +08:00
    Former-commit-id: ef7af457fc44b1e8cad0c78717848617f98364f0
hoshi-hiyouga | b6c0e8608e | [script] fix vllm version (#7193) | 2025-03-06 17:14:17 +08:00
    Former-commit-id: 313355759dc906d3612364dc6c8f6344afdedb97
hoshi-hiyouga | f4aa0a146c | [misc] fix project toml (#7067) | 2025-02-25 23:22:48 +08:00
    Former-commit-id: 96fd510e6a03eae7a1f41772e1d6b784df6d5d2e
JieShen | 96636c3729 | [script] add seed args (#7058) | 2025-02-25 19:44:57 +08:00
    * add seed args
    * add seed args
    * update seed
    Former-commit-id: e8266fe5635470e84f9d39f43e53cc49f962c2e9
hoshi-hiyouga | 184c5d0882 | [misc] fix script (#6977) | 2025-02-18 17:00:46 +08:00
    Former-commit-id: cc8c7e762b9c873ef79529152465bbed9231053c
hoshi-hiyouga | 1fee69f874 | [misc] update license year & fix llama pro (#6814) | 2025-02-05 01:53:33 +08:00
    * fix llamapro script
    * change year
    Former-commit-id: e2dc5b952aa22835d5220ba624f44676138b65ac
hoshi-hiyouga | 5e699458e5 | pin vllm version to 0.6.5 (#6629) | 2025-01-14 02:44:02 +08:00
    Former-commit-id: 1c7663d3049e00a9148c3e3c58204deca7a08c8d
hoshi-hiyouga | d8cba9464f | [inference] fix stop token for object detection (#6624) | 2025-01-13 21:34:20 +08:00
    * fix stop token
    * update minicpm data pipeline
    * fix npu qlora examples
    Former-commit-id: e3e2c8c689c54ebb2af264de808502e5a8ba0f2b
hiyouga | 20a9565e36 | update scripts | 2025-01-03 10:50:32 +00:00
    Former-commit-id: dd44c65d7f60cb6f5d0e0d8ee5f4e7643defb89b
hiyouga | 88b06a0c7f | support qwen2vl vllm infer | 2024-12-05 10:17:26 +00:00
    Former-commit-id: 207f8b069ca35a28de4588b4962e7254f451c52c
hiyouga | 235cdcacee | support batch infer in vllm | 2024-12-04 13:50:00 +00:00
    Former-commit-id: 1324d158f954d777f1fbf09f46149c372704b388
JieShen | 99265c7d2f | add vllm_infer script | 2024-11-29 14:22:20 +08:00
    Former-commit-id: 961e8c2d2e5505de14702cf8609d54b4f3a23b1e