418 Commits

Author SHA1 Message Date
Eric Tang
a9aa392ba4 [data] support loading folder from remote (#8078) 2025-05-16 15:35:38 +08:00
hoshi-hiyouga
dc080399c6 [model] add seed coder and qwen3 quant models (#8039) 2025-05-13 15:59:55 +08:00
hoshi-hiyouga
68fc068cab [data] fix kimi vl template (#8015) 2025-05-11 20:45:19 +08:00
yunhao-tech
26cbb03a5f [data] Avoid repetitive tool description wrap (#8000)
Co-authored-by: chenyunhao <chenyunhao@wps.cn>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-05-09 21:16:37 +08:00
Kingsley
ef86a53063 [model] add mimo7b (#7946) 2025-05-06 17:10:30 +02:00
hoshi-hiyouga
ce7032e1b3 [model] add qwen2 omni 3b (#7945) 2025-05-03 16:36:51 +08:00
hoshi-hiyouga
c566e39b7d [data] fix base plugin (#7924) 2025-04-30 16:28:05 +08:00
hoshi-hiyouga
052ca871bd [data] optimize qwen3 loss computation (#7923) 2025-04-30 16:18:00 +08:00
hoshi-hiyouga
d4ee44bdef [data] add eval_on_each_dataset arg (#7912) 2025-04-30 06:56:43 +08:00
hoshi-hiyouga
6d2cde43e7 [data] replace eos token for base models (#7911) 2025-04-30 06:52:28 +08:00
hoshi-hiyouga
11295cdea0 [data] improve mm plugin (#7910) 2025-04-30 06:34:28 +08:00
hoshi-hiyouga
98f23c6584 [model] add qwen3 (#7885) 2025-04-29 09:34:05 +08:00
Kingsley
db9559456c [data] fix qwen2.5 omni template (#7883) 2025-04-29 00:58:23 +08:00
hoshi-hiyouga
3ae5da2a04 [model] fix dsv3 leaf node (#7879) 2025-04-28 18:11:09 +08:00
hoshi-hiyouga
d173cb50f5 [data] fix qwen2 omni plugin (#7875) 2025-04-28 14:22:41 +08:00
hoshi-hiyouga
bb5b83352b [data] fix minicpmo vllm infer (#7870) 2025-04-28 01:59:53 +08:00
Kingsley
fa0eb91f1f [data] fix internvl plugin (#7817) 2025-04-23 00:58:22 +08:00
Kingsley
7500e761d3 [misc] update internvl constants (#7801) 2025-04-22 15:53:08 +08:00
hoshi-hiyouga
0e4ce039ee [data] improve mmplugin (#7795) 2025-04-22 01:25:33 +08:00
Changrui Chen
bd7bc31c79 [data] Fix wrong position ids with packed attention masks (#7754)
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:36 +08:00
hoshi-hiyouga
39169986ef [trainer] fix pt loss (#7748)
* fix pt loss

* robust

* fix

* test
2025-04-17 03:15:35 +08:00
hoshi-hiyouga
86ebb219d6 [breaking] bump transformers to 4.45.0 & improve ci (#7746)
* update ci

* fix

* fix

* fix

* fix

* fix
2025-04-17 02:36:48 +08:00
Kingsley
2e518f255f [model] support intern-VL 2.5-3 series (#7258)
* add internvl and rebase

* fix for internvl2&3

* remove lines

* fix video_inputs & lint

* nit

* add constants

* remove lines

* fix

* fix error

* pass ci

* pass ci

* skip internvl & nit
2025-04-17 00:31:30 +08:00
Kingsley
2101399c94 [model] Support Kimi_VL thinking/instruct (#7719)
* add kimi_vl

* patch config

* check version

* Update mm_plugin.py

* Update mm_plugin.py

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-15 00:21:58 +08:00
Eric Tang
a8caf09c7f [data] support for specifying a dataset in cloud storage (#7567)
* add support for loading datasets from s3/gcs

* add comments to readme

* run linter and address comments

* add option to pass in kwargs to ray init (i.e. runtime env)

* address comment

* revert mixed up changes
2025-04-10 11:31:35 +08:00
hoshi-hiyouga
1abd71b551 [assets] update readme (#7644) 2025-04-09 01:06:06 +08:00
Kingsley
349c56c51c [data] Fix bugs of use_audio_in_video in Qwen2.5 Omni (#7638)
* cache _mm_inputs

* nit

* support for use_audio_in_video

* remove cache

* fix data

* Update mllm_video_audio_demo.json
2025-04-08 18:40:10 +08:00
hoshi-hiyouga
c3c0efbaa0 [misc] fix packing and eval plot (#7623) 2025-04-07 18:20:57 +08:00
hoshi-hiyouga
831e7f1cfd [model] add llama4 (#7611) 2025-04-06 13:42:31 +08:00
Kingsley
d4cfa9507e [data] fix qwen2.5 omni plugin (#7578)
* specific entry

* Update mm_plugin.py

* fix fps cal

---------

Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-02 23:58:39 +08:00
Kingsley
d32c6c014d [data] fix qwen2.5 omni plugin (#7573)
* align key with qwen2vl

* nit && change scripts
2025-04-02 21:28:52 +08:00
hoshi-hiyouga
5e22597ff1 [infer] vllm video/audio inference (#7566) 2025-04-02 02:27:04 +08:00
hoshi-hiyouga
2bfcad2394 [model] fix kv cache (#7564) 2025-04-01 23:07:46 +08:00
Ritesh Goru
d10467d178 [data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing (#7318)
* use position_ids for neat_packing with fa2

* revert fa2 changes
2025-04-01 16:03:13 +08:00
Billy Cao
00409ff28a [data] shard the dataset to allow multiprocessing when streaming is enabled (#7530)
* Shard the dataset when streaming to allow multiprocessing

* Allow user to not set dataset_shards to ensure backward compatibility
2025-04-01 15:36:23 +08:00
Hao
d70b3b4bc5 [trainer] new kto mismatch pair creation strategy (#7509) 2025-04-01 15:21:53 +08:00
hoshi-hiyouga
e76eba051d [data] fix qwen2.5 omni collator (#7553) 2025-04-01 00:15:12 +08:00
Kingsley
7eed496336 [model] add Qwen2.5-Omni model (#7537)
* preserve image_sizes

* preserve image_sizes

* init plugin

* support audio-text2text lora

* nit

* support image/video-text2text, audio-text2text

* remove args

* remove lines

* add docs && nit

* remove some comments

* fix && add merge part script

* add license
2025-03-31 20:39:35 +08:00
Kingsley
8da1d2fa71 [data] fix pixtral plugin (#7505)
* preserve `image_sizes`

* add comments
2025-03-27 17:06:40 +08:00
hoshi-hiyouga
7203365b80 [trainer] fix vlm loss for transformers 4.49 (#7448) 2025-03-24 10:24:05 +08:00
hoshi-hiyouga
05b19d6952 [deps] upgrade transformers to 4.50.0 (#7437)
* upgrade transformers

* fix hf cache

* fix dpo trainer
2025-03-23 17:44:27 +08:00
hoshi-hiyouga
63752fccf7 [assets] update wechat (#7361) 2025-03-18 21:31:09 +08:00
hoshi-hiyouga
128b5b12b3 [data] fix template (#7349) 2025-03-17 23:45:20 +08:00
Hertz
ec1154662b [model] support hunyuan 7b (#7317)
* [Model] supported tencent-hunyuan model

* [Model] supported tencent-hunyuan model (fix)

* [Model] supported tencent-hunyuan model (fix)
2025-03-15 20:55:24 +08:00
hoshi-hiyouga
93e6184cbe [data] gemma3 plugin pan and scan (#7294)
* gemma3 pan and scan

* add test case

* fix test
2025-03-13 23:29:23 +08:00
Ritesh Goru
480369a9f2 [data] efficient 4d_attention_mask creation in neat_packing (#7272) 2025-03-13 03:31:12 +08:00
hoshi-hiyouga
650a9a9057 [misc] update format (#7277) 2025-03-13 02:53:08 +08:00
hoshi-hiyouga
4b9d8da5a4 [model] support gemma3 (#7273) 2025-03-13 01:35:23 +08:00
hoshi-hiyouga
264538cb26 [misc] upgrade format to py39 (#7256) 2025-03-12 00:08:41 +08:00
hiyouga
478e8194d9 remove exit in preprocess
Former-commit-id: f369b6ef41ffd9586ba568b88c5ff32a1af4bace
2025-03-11 15:08:25 +08:00