Kingsley
7a00670f70
[model] support InternVL 2.5-3 series ( #7258 )
* add internvl and rebase
* fix for internvl2&3
* remove lines
* fix video_inputs & lint
* nit
* add constants
* remove lines
* fix
* fix error
* pass ci
* pass ci
* skip internvl & nit
2025-04-17 00:31:30 +08:00
ENg-122
049696b7dd
[misc] improve entrypoint ( #7345 )
* Purely a cleanup of the entrypoint code, since there were too many if-else branches
* Update cli.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-16 21:48:23 +08:00
leo-pony
98e6b3c0ca
[infer] support vllm-ascend ( #7739 )
2025-04-16 20:06:47 +08:00
hoshi-hiyouga
1cb63a49e5
[api] fix chat messages ( #7732 )
2025-04-15 16:39:08 +08:00
hoshi-hiyouga
1e2f315dca
[deps] upgrade vllm ( #7728 )
2025-04-15 14:57:40 +08:00
hoshi-hiyouga
d74634c68a
[assets] update model readme ( #7724 )
2025-04-15 00:41:09 +08:00
Kingsley
d1b695cd9f
[model] Support Kimi-VL thinking/instruct ( #7719 )
* add kimi_vl
* patch config
* check version
* Update mm_plugin.py
* Update mm_plugin.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-15 00:21:58 +08:00
hoshi-hiyouga
2b92e85cdd
[misc] fix env vars ( #7715 )
2025-04-14 16:04:04 +08:00
hoshi-hiyouga
8f46aced51
[misc] upgrade cli ( #7714 )
2025-04-14 15:41:22 +08:00
hoshi-hiyouga
c60971f4b8
[deps] upgrade transformers ( #7704 )
2025-04-13 18:11:34 +08:00
Yuxuan Zhang
c21dc814ff
[model] add GLM-4-0414 ( #7695 )
* Update README_zh.md
* update
2025-04-13 17:10:45 +08:00
Eric Tang
45c5a913f8
[data] support for specifying a dataset in cloud storage ( #7567 )
* add support for loading datasets from s3/gcs
* add comments to readme
* run linter and address comments
* add option to pass in kwargs to ray init (i.e. runtime env)
* address comment
* revert mixed up changes
2025-04-10 11:31:35 +08:00
Eric Tang
5cc0d6a8f0
[ray] allow for specifying ray.init kwargs (i.e. runtime_env) ( #7647 )
* ray init kwargs
* Update trainer_utils.py
* fix ray args
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-10 11:31:05 +08:00
Dain Kim
e60249d597
[bugfix] enable_gemma_liger_kernel ( #7660 )
- The `enable_liger_kernel` function for the Gemma model series was never executed because of an existing `if` statement in the code.
- Changed the line to an `elif` so that the `apply_liger_kernel` function runs properly.
Resolved: #7628
2025-04-10 11:27:30 +08:00
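The liger-kernel bugfix above can be sketched as a control-flow pitfall: a branch added with `if` instead of `elif` lets the trailing `else` fire for models that already matched an earlier branch. This is a hypothetical reconstruction; the function and kernel names are illustrative, not the project's actual code.

```python
# Hypothetical sketch of the if/elif bug described in #7660 (names assumed).
def select_kernel(model_type: str) -> str:
    kernel = "none"
    if model_type == "gemma":
        kernel = "gemma_kernel"
    elif model_type == "gemma2":
        kernel = "gemma2_kernel"
    # Bug: this branch uses `if` instead of `elif`, so the trailing `else`
    # re-runs for every non-"gemma3" model and clobbers the kernel chosen
    # above. Changing this `if` to `elif` is the one-line fix.
    if model_type == "gemma3":
        kernel = "gemma3_kernel"
    else:
        kernel = "none"
    return kernel

# With the buggy `if`, select_kernel("gemma") returns "none" instead of
# "gemma_kernel"; with `elif`, the earlier match is preserved.
```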
jilongW
f0179cb4e8
[misc] fix cuda warn on intel GPU ( #7655 )
2025-04-09 21:37:54 +08:00
hoshi-hiyouga
cca359fb6d
[data] add coig-p dataset ( #7657 )
2025-04-09 21:18:25 +08:00
hoshi-hiyouga
458b6b0aef
[assets] update readme ( #7644 )
2025-04-09 01:06:06 +08:00
Kingsley
0935eff188
[data] Fix bugs of use_audio_in_video in Qwen2.5 Omni ( #7638 )
* cache _mm_inputs
* nit
* support for use_audio_in_video
* remove cache
* fix data
* Update mllm_video_audio_demo.json
2025-04-08 18:40:10 +08:00
Shawn Tao
85f95a2883
[trainer] fix key error ( #7635 )
2025-04-08 18:39:50 +08:00
hoshi-hiyouga
fb46193364
[misc] fix packing and eval plot ( #7623 )
2025-04-07 18:20:57 +08:00
hoshi-hiyouga
40fb24916f
[model] add llama4 ( #7611 )
2025-04-06 13:42:31 +08:00
Kingsley
6eb28bcacd
[data] fix qwen2.5 omni plugin ( #7578 )
* specific entry
* Update mm_plugin.py
* fix fps cal
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-02 23:58:39 +08:00
Kingsley
ac9ba80128
[data] fix qwen2.5 omni plugin ( #7573 )
* align key with qwen2vl
* nit && change scripts
2025-04-02 21:28:52 +08:00
gechengze
a47370b85f
[trainer] fix batch processing in PPO trainer ( #7576 )
2025-04-02 21:17:48 +08:00
hoshi-hiyouga
be0289292d
[infer] vllm video/audio inference ( #7566 )
2025-04-02 02:27:04 +08:00
hoshi-hiyouga
37d783149d
[model] fix kv cache ( #7564 )
2025-04-01 23:07:46 +08:00
Yu Shi Jie
69b0c1cf4f
[model] fix use_cache patching for gemma3 multimodal ( #7500 )
2025-04-01 16:06:48 +08:00
Ritesh Goru
917264f79f
[data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing ( #7318 )
* use position_ids for neat_packing with fa2
* revert fa2 changes
2025-04-01 16:03:13 +08:00
taoharry
99a247926e
[webui] fix launch with proxy ( #7332 )
2025-04-01 15:52:56 +08:00
Billy Cao
51e741ec85
[data] shard the dataset to allow multiprocessing when streaming is enabled ( #7530 )
* Shard the dataset when streaming to allow multiprocessing
* Allow user to not set dataset_shards to ensure backward compatibility
2025-04-01 15:36:23 +08:00
Hao
538e6c70c3
[trainer] new kto mismatch pair creation strategy ( #7509 )
2025-04-01 15:21:53 +08:00
hoshi-hiyouga
56fa64035a
[data] fix qwen2.5 omni collator ( #7553 )
2025-04-01 00:15:12 +08:00
Kingsley
1189aeb6c2
[model] add Qwen2.5-Omni model ( #7537 )
* preserve image_sizes
* preserve image_sizes
* init plugin
* support audio-text2text lora
* nit
* support image/video-text2text, audio-text2text
* remove args
* remove lines
* add docs && nit
* remove some comments
* fix && add merge part script
* add license
2025-03-31 20:39:35 +08:00
Kingsley
bde7d60f4e
[data] fix pixtral plugin ( #7505 )
* preserve `image_sizes`
* add comments
2025-03-27 17:06:40 +08:00
Xu-pixel
2a952305f3
[3rdparty] support swanlab lark notification ( #7481 )
2025-03-27 01:52:01 +08:00
Kdump
2c1d0b7a83
[trainer] fix wsd scheduler ( #7304 )
* [trainer] warmup_stable_decay supports setting the number of stable and decay steps according to warmup_ratio
* Update trainer_utils.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-26 15:25:02 +08:00
hoshi-hiyouga
cb42e2c4de
[model] add qwen2vl 32b & upgrade peft ( #7469 )
* add qwen2vl 32b
* fix ci
* upgrade peft to 0.15
* fix ci
* fix ci
2025-03-25 12:15:58 +08:00
GuoCoder
50d404f344
[model] fix lora on quant models ( #7456 )
Co-authored-by: root <root@ai>
2025-03-25 11:59:46 +08:00
Xiaosu Zhu
d38c402f63
[misc] update liger-kernel's monkey patch ( #7453 )
* Update liger_kernel.py
* Update setup.py
2025-03-25 11:58:52 +08:00
AbdelKarim ELJANDOUBI
ce089ef8f6
[misc] enable liger kernel for gemma3 text and paligemma ( #7466 )
* add gemma3 text
* add paligemma (1,2 and 2 mix)
2025-03-25 09:27:43 +08:00
Kenny Lam
cad8bde6b1
[misc] enable liger kernel for gemma3 ( #7462 )
2025-03-24 19:09:59 +08:00
hoshi-hiyouga
180019b376
[trainer] fix vlm loss for transformers 4.49 ( #7448 )
2025-03-24 10:24:05 +08:00
hoshi-hiyouga
1a7c872c14
[deps] upgrade transformers to 4.50.0 ( #7437 )
* upgrade transformers
* fix hf cache
* fix dpo trainer
2025-03-23 17:44:27 +08:00
hoshi-hiyouga
2ce975f6f4
[deps] upgrade vllm to 0.8 ( #7436 )
2025-03-23 14:32:22 +08:00
Eric Tang
8f09c0bf96
[3rdparty] fix redundant process group destroy for ray ( #7395 )
* fix redundant process group destroy for ray
* Update tuner.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-21 10:56:47 +08:00
hoshi-hiyouga
99edc530e3
[assets] update wechat ( #7361 )
2025-03-18 21:31:09 +08:00
hoshi-hiyouga
a918b769ba
[misc] set dev version ( #7351 )
2025-03-18 00:10:53 +08:00
hoshi-hiyouga
2a13067a42
[data] fix template ( #7349 )
2025-03-17 23:45:20 +08:00
Hertz
db936dc329
[model] support hunyuan 7b ( #7317 )
* [Model] support tencent-hunyuan model
* [Model] support tencent-hunyuan model (fix)
* [Model] support tencent-hunyuan model (fix)
2025-03-15 20:55:24 +08:00
Qiaolin Yu
280d9bda76
[inference] support sglang backend ( #7278 )
* Mimic SGLang offline Engine
* Add more tests and args
* Pass all current tests
* Clean Code
* fix sample_params
* clean code
* Fix Stream Chat
* change sglang from engine mode to server mode
* fix
* Fix Review Issues
* Use SGLang Built-In Utilities
* Fix test SGLang
* Some Doc Issue
* fix sglang engine
* add readme
---------
Co-authored-by: Jin Pan <jpan236@wisc.edu>
Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
2025-03-15 04:37:58 +08:00