9b5baa97f0  2025-05-20 02:00:30 +08:00  hoshi-hiyouga
    [data] qwen3 fixes (#8109)

beae231af6  2025-05-19 19:25:13 +08:00  hoshi-hiyouga
    [doc] add no build isolation (#8103)

0b773234e5  2025-05-15 10:54:35 +08:00  Shawn Tao
    [infer] Modify vllm_infer.py to batch preprocess to avoid too much files opened error (#8051)
    Co-authored-by: Kingsley <82590017+Kuangdd01@users.noreply.github.com>

9620825892  2025-05-09 21:16:52 +08:00  Kingsley
    [scripts] add video params for vllm infer (#7992)

c3c0efbaa0  2025-04-07 18:20:57 +08:00  hoshi-hiyouga
    [misc] fix packing and eval plot (#7623)

5e22597ff1  2025-04-02 02:27:04 +08:00  hoshi-hiyouga
    [infer] vllm video/audio inference (#7566)

2bfcad2394  2025-04-01 23:07:46 +08:00  hoshi-hiyouga
    [model] fix kv cache (#7564)

264538cb26  2025-03-12 00:08:41 +08:00  hoshi-hiyouga
    [misc] upgrade format to py39 (#7256)

522a3e8493  2025-03-11 01:15:35 +08:00  hoshi-hiyouga
    [infer] fix vllm args (#7235)
    Former-commit-id: 999be5b4512890b8cf4f45874a77e35cf35626f5

f4ec4fa6ad  2025-03-06 17:14:17 +08:00  hoshi-hiyouga
    [script] fix vllm version (#7193)
    Former-commit-id: ababdde597b2b9bf0ab3f30f036bc8d97de07f03

5f65558088  2025-02-25 23:22:48 +08:00  hoshi-hiyouga
    [misc] fix project toml (#7067)
    Former-commit-id: 28a668ff4e0beebfe5387362f5518c1d9343666f

0f54a78144  2025-02-25 19:44:57 +08:00  JieShen
    [script] add seed args (#7058)
    * add seed args
    * add seed args
    * update seed
    Former-commit-id: eb9770b2c01a840b6a0ac119210c22bdbb81e18b

be33ef67fb  2025-02-18 17:00:46 +08:00  hoshi-hiyouga
    [misc] fix script (#6977)
    Former-commit-id: 775efa1d8cbdb1b7d122be2a986d47f85214e0a1

c2022431aa  2025-02-05 01:53:33 +08:00  hoshi-hiyouga
    [misc] update license year & fix llama pro (#6814)
    * fix llamapro script
    * change year
    Former-commit-id: d9ae594178796994d400a5f207d6499712816f89

28d145a066  2025-01-14 02:44:02 +08:00  hoshi-hiyouga
    pin vllm version to 0.6.5 (#6629)
    Former-commit-id: 26097ca0adf25ebb7d9e8eec2d2cef673c6cfe88

2a05941b14  2025-01-13 21:34:20 +08:00  hoshi-hiyouga
    [inference] fix stop token for object detection (#6624)
    * fix stop token
    * update minicpm data pipeline
    * fix npu qlora examples
    Former-commit-id: 844919fadaa8a61dfae47020971ea80730b2346f

8516054e4d  2025-01-03 10:50:32 +00:00  hiyouga
    update scripts
    Former-commit-id: 05aa52adde8905ca892f1ed5847d6f90b1992848

bbd432415d  2024-12-05 10:17:26 +00:00  hiyouga
    support qwen2vl vllm infer
    Former-commit-id: 03ddd2555fb97488cd4daab11e8b672d36150c5a

c1768cfb14  2024-12-04 13:50:00 +00:00  hiyouga
    support batch infer in vllm
    Former-commit-id: 3ef5ed3b9a44eed2f7e3ff221dfc343d0a97c0b5

6c9d05539a  2024-11-29 14:22:20 +08:00  JieShen
    add vllm_infer script
    Former-commit-id: 4daab843a3aa096b35e5d3832c01fac4271e4604
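Commit 0b773234e5 (#8051) above switched vllm_infer.py to batch preprocessing so that only one chunk's worth of multimodal files is open at a time, avoiding the OS "too many open files" limit. A minimal sketch of that chunking idea, assuming a hypothetical workload; `iter_batches` and the doubling step are illustrative, not the script's actual API:

```python
def iter_batches(items, batch_size):
    """Yield successive fixed-size chunks of `items`, so resources tied to
    each chunk (e.g. image/video file handles) can be opened, used, and
    released before the next chunk is touched."""
    for start in range(0, len(items), batch_size):
        yield items[start : start + batch_size]


# Hypothetical usage: preprocess and run inference one chunk at a time
# instead of materializing every input up front.
results = []
for batch in iter_batches(list(range(10)), batch_size=4):
    # stand-in for "preprocess batch, call the engine, collect outputs"
    results.extend(x * 2 for x in batch)
```

The design point is simply that peak open-handle count is bounded by `batch_size` rather than by the dataset size.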