Author | Commit | Message | Date
hiyouga | fa50fc470e | fix qwen2vl vllm infer | 2024-11-24 23:27:24 +08:00
hoshi-hiyouga | 8d70edf39b | fix #5988 | 2024-11-11 13:57:14 +08:00
hiyouga | 58ab4579dc | add vllm config | 2024-11-10 21:28:18 +08:00
hiyouga | 8f3a32286e | fix #5966 | 2024-11-08 23:49:16 +08:00
hiyouga | 8c88065c38 | fix chat engines | 2024-11-04 08:18:12 +00:00
hiyouga | c38aa29336 | support rank0 logger | 2024-11-02 18:31:04 +08:00
hiyouga | e824b715ad | add examples | 2024-11-01 08:41:54 +00:00
hiyouga | e80a481927 | support multiimage inference | 2024-11-01 07:25:20 +00:00
hiyouga | 21db8ed2f4 | use pre-commit | 2024-10-29 09:07:46 +00:00
hiyouga | f2aa02c070 | update scripts | 2024-09-08 14:17:41 +08:00
hiyouga | b6681d7198 | support vllm 0.6.0 | 2024-09-08 02:26:20 +08:00
hiyouga | 54c6905937 | add docstrings, refactor logger | 2024-09-08 00:56:56 +08:00
hoshi-hiyouga | 36665f3001 | fix #5384 | 2024-09-07 01:21:14 +08:00
hiyouga | dabad5570b | update get template | 2024-09-04 22:36:20 +08:00
hiyouga | 60fc6b926e | fix mm inference | 2024-09-02 01:47:40 +08:00
hiyouga | 9967ccb3ae | fix mixed mm inputs and rlhf-v | 2024-09-01 20:52:47 +08:00
hiyouga | a025c3df61 | remove visual_inputs, fix qlora | 2024-08-31 00:24:51 +08:00
hiyouga | bee1bd43b9 | tiny fix | 2024-08-30 03:21:50 +08:00
hiyouga | 2f09520c0d | fix #4742 | 2024-07-09 23:24:24 +08:00
Lian Junhong | 322663bf90 | chore: Update vllm_engine.py to support vllm version >= 0.5.1 | 2024-07-07 15:08:12 +08:00
hiyouga | 1e27e8c776 | fix #4677 | 2024-07-04 14:22:07 +08:00
mMrBun | 20e2e6fdcb | Add tool_format to overwrite tool formatter template | 2024-06-22 02:13:23 +08:00
hiyouga | c96264bc47 | fix #4335 | 2024-06-18 22:08:56 +08:00
hiyouga | d87108daa6 | add license | 2024-06-15 17:54:33 +08:00
hiyouga | b27269bd2b | add test cases | 2024-06-15 04:05:54 +08:00
hiyouga | 2ed8270112 | clean code | 2024-06-13 01:58:16 +08:00
hzhaoy | 8fb6366ebe | adapt vllm==0.5.0 | 2024-06-12 18:29:03 +08:00
hiyouga | 577de2fa07 | fix #4242 | 2024-06-12 16:50:11 +08:00
Arthur Kim | d65a3f7cb6 | Support vllm==0.5.0 | 2024-06-12 16:49:12 +09:00
hiyouga | 74f96efef9 | rename files | 2024-06-07 00:09:06 +08:00
hiyouga | 8fcc79e1e6 | add vllm_dtype arg #3387 #3717 | 2024-06-06 02:53:27 +08:00
hiyouga | 24e1c0e2ee | fix #4022 | 2024-06-03 18:38:36 +08:00
hiyouga | 5581cb2e4e | update readme | 2024-05-27 18:14:02 +08:00
hiyouga | 3a023bca2a | refactor data preprocessing, fix mllm rlhf | 2024-05-24 04:08:25 +08:00
hiyouga | 542229abb3 | fix paligemma inference | 2024-05-20 23:36:43 +08:00
hiyouga | d52fae2fa8 | fix chat engines (do not use pop(key, default) since api assigns None to dict values) | 2024-05-20 00:36:43 +08:00
hoshi-hiyouga | a0e8d3d159 | Update vllm_engine.py | 2024-05-20 00:31:04 +08:00
juejuezi | b20d62ba3c | feat: pass the max_lora_rank parameter to vLLM backend | 2024-05-17 16:07:39 +08:00
hiyouga | 308edbc426 | rename package | 2024-05-16 18:39:08 +08:00