flashJd
0ac641326b
[misc] fix adding new tokens ( #7253 )
...
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:19:02 +08:00
ddddng
c5ba9106ec
[model] fix gemma3 export ( #7786 )
...
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-21 23:07:11 +08:00
Sachin Beldona
3b2d3794a5
[misc] fix bug in constant ( #7765 )
...
Co-authored-by: Sachin Beldona <sbeldona@cs.cmu.edu>
2025-04-21 23:06:31 +08:00
hoshi-hiyouga
b605c20768
[assets] update wechat ( #7792 )
2025-04-21 21:29:42 +08:00
hoshi-hiyouga
39169986ef
[trainer] fix pt loss ( #7748 )
...
* fix pt loss
* robust
* fix
* test
2025-04-17 03:15:35 +08:00
hoshi-hiyouga
86ebb219d6
[breaking] bump transformers to 4.45.0 & improve ci ( #7746 )
...
* update ci
* fix
* fix
* fix
* fix
* fix
2025-04-17 02:36:48 +08:00
hoshi-hiyouga
d222f63cb7
[infer] set env for vllm ascend ( #7745 )
2025-04-17 01:08:55 +08:00
Kingsley
2e518f255f
[model] support InternVL 2.5/3 series ( #7258 )
...
* add internvl and rebase
* fix for internvl2&3
* remove lines
* fix video_inputs & lint
* nit
* add constants
* remove lines
* fix
* fix error
* pass ci
* pass ci
* skip internvl & nit
2025-04-17 00:31:30 +08:00
ENg-122
8f88a4e6a4
[misc] improve entrypoint ( #7345 )
...
* Purely a cleanup of the entrypoint code, since there were too many if/else branches
* Update cli.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-16 21:48:23 +08:00
leo-pony
b9263ff5ac
[infer] support vllm-ascend ( #7739 )
2025-04-16 20:06:47 +08:00
hoshi-hiyouga
ee2ab093a7
[api] fix chat messages ( #7732 )
2025-04-15 16:39:08 +08:00
hoshi-hiyouga
3df021d4d7
[deps] upgrade vllm ( #7728 )
2025-04-15 14:57:40 +08:00
Joe Schoonover
e252abf051
[docker] patch docker-rocm ( #7725 )
...
* Update Dockerfile
* Fix typo
* Fix syntax for /bin/sh conditional
* Add build args to docker-compose
* Change shell to /bin/bash
This is required because the "==" string-comparison syntax in conditionals is a bash extension not supported by /bin/sh
2025-04-15 13:36:39 +08:00
hoshi-hiyouga
1134baeedd
[assets] update model readme ( #7724 )
2025-04-15 00:41:09 +08:00
Kingsley
2101399c94
[model] Support Kimi_VL thinking/instruct ( #7719 )
...
* add kimi_vl
* patch config
* check version
* Update mm_plugin.py
* Update mm_plugin.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-15 00:21:58 +08:00
hoshi-hiyouga
3f91a95250
[misc] fix env vars ( #7715 )
2025-04-14 16:04:04 +08:00
hoshi-hiyouga
7c61b35106
[misc] upgrade cli ( #7714 )
2025-04-14 15:41:22 +08:00
hoshi-hiyouga
f518bfba5b
[deps] upgrade transformers ( #7704 )
2025-04-13 18:11:34 +08:00
Yuxuan Zhang
8162f94db5
[model] add GLM-4-0414 ( #7695 )
...
* Update README_zh.md
* update
2025-04-13 17:10:45 +08:00
hoshi-hiyouga
1f0c52b73c
[deps] fix uv conflicts ( #7686 )
...
* fix #7678
* Update setup.py
* Update tests.yml
* Update publish.yml
* Update Makefile
2025-04-11 18:02:24 +08:00
Eric Tang
a8caf09c7f
[data] support for specifying a dataset in cloud storage ( #7567 )
...
* add support for loading datasets from s3/gcs
* add comments to readme
* run linter and address comments
* add option to pass in kwargs to ray init (i.e. runtime env)
* address comment
* revert mixed up changes
2025-04-10 11:31:35 +08:00
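The entry above adds loading datasets from cloud storage. As a minimal, hypothetical sketch (not the PR's actual code): Hugging Face `datasets` can read fsspec-style paths such as `s3://` when `s3fs` (or `gcsfs`) is installed; the bucket path and credentials below are placeholders.

```python
# Hypothetical sketch: load a JSON dataset directly from S3 via fsspec.
# Requires `pip install s3fs`; the path and credentials are placeholders.
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files="s3://my-bucket/sft/train.jsonl",          # placeholder path
    split="train",
    storage_options={"key": "AKIA...", "secret": "..."},  # forwarded to s3fs
)
print(next(iter(dataset)))
```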
Eric Tang
bb8d79bae2
[ray] allow specifying ray.init kwargs (e.g. runtime_env) ( #7647 )
...
* ray init kwargs
* Update trainer_utils.py
* fix ray args
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-10 11:31:05 +08:00
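For the ray.init kwargs entry above, a minimal sketch of the idea, assuming a user-supplied dict; the config shape here is illustrative, not LLaMA-Factory's actual schema.

```python
# Hypothetical sketch: forward user-provided kwargs (e.g. runtime_env) to ray.init.
import ray

ray_init_kwargs = {  # e.g. parsed from the training config (illustrative keys)
    "runtime_env": {"env_vars": {"TOKENIZERS_PARALLELISM": "false"}},
    "num_cpus": 8,
}
ray.init(**ray_init_kwargs)  # runtime_env and num_cpus are standard ray.init parameters
```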
Dain Kim
1c436c9f25
[bugfix] enable_gemma_liger_kernel ( #7660 )
...
- The `enable_liger_kernel` function for the Gemma model series was not executed due to the existing `if` statement in the code.
- Changed the line to an `elif` statement so that the `apply_liger_kernel` function is executed properly.
Resolved: #7628
2025-04-10 11:27:30 +08:00
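The bug described in the entry above is easy to illustrate: a bare `if` in the middle of a model-type dispatch chain makes the trailing `else` fire even when an earlier branch matched. The sketch below is schematic; names and branch order are not the repo's exact code.

```python
# Schematic illustration of the dispatch bug (not LLaMA-Factory's actual code).
# Stand-ins for liger-kernel's apply_liger_kernel_to_* patch functions:
apply_to_gemma = lambda: print("patched gemma")
apply_to_gemma2 = lambda: print("patched gemma2")

def apply_liger_kernel(model_type: str) -> None:
    if model_type == "gemma":
        apply_fn = apply_to_gemma
    elif model_type == "gemma2":    # before the fix this line was a bare `if`,
        apply_fn = apply_to_gemma2  # so "gemma" fell through to the `else` below
    else:                           # and returned before the kernel was applied
        print("Current model does not support liger kernel.")
        return
    apply_fn()

apply_liger_kernel("gemma")  # with the elif chain intact, gemma is patched
```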
jilongW
1b0934bccb
[misc] fix cuda warn on intel GPU ( #7655 )
2025-04-09 21:37:54 +08:00
hoshi-hiyouga
4eec541857
[data] add coig-p dataset ( #7657 )
2025-04-09 21:18:25 +08:00
hoshi-hiyouga
89a4f9ec7f
[assets] update readme ( #7654 )
2025-04-09 18:27:38 +08:00
hoshi-hiyouga
1abd71b551
[assets] update readme ( #7644 )
2025-04-09 01:06:06 +08:00
Kingsley
349c56c51c
[data] Fix use_audio_in_video bugs in Qwen2.5 Omni ( #7638 )
...
* cache _mm_inputs
* nit
* support for use_audio_in_video
* remove cache
* fix data
* Update mllm_video_audio_demo.json
2025-04-08 18:40:10 +08:00
Shawn Tao
acb09fa3a3
[trainer] fix key error ( #7635 )
2025-04-08 18:39:50 +08:00
Adarsh Shirawalmath
f75b91077b
[sglang] support transformers 4.51.0 ( #7639 )
2025-04-08 18:39:23 +08:00
hoshi-hiyouga
c3c0efbaa0
[misc] fix packing and eval plot ( #7623 )
2025-04-07 18:20:57 +08:00
hoshi-hiyouga
5115dc8c7f
[assets] update readme ( #7612 )
2025-04-06 13:58:49 +08:00
hoshi-hiyouga
831e7f1cfd
[model] add llama4 ( #7611 )
2025-04-06 13:42:31 +08:00
Kingsley
d4cfa9507e
[data] fix qwen2.5 omni plugin ( #7578 )
...
* specific entry
* Update mm_plugin.py
* fix fps cal
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-04-02 23:58:39 +08:00
Kingsley
d32c6c014d
[data] fix qwen2.5 omni plugin ( #7573 )
...
* align key with qwen2vl
* nit && change scripts
2025-04-02 21:28:52 +08:00
gechengze
7b9deb9410
[trainer] fix batch processing in PPO trainer ( #7576 )
2025-04-02 21:17:48 +08:00
hoshi-hiyouga
5e22597ff1
[infer] vllm video/audio inference ( #7566 )
2025-04-02 02:27:04 +08:00
hoshi-hiyouga
2bfcad2394
[model] fix kv cache ( #7564 )
2025-04-01 23:07:46 +08:00
Yu Shi Jie
a13b1bb49a
[model] fix use_cache patching for gemma3 multimodal ( #7500 )
2025-04-01 16:06:48 +08:00
Ritesh Goru
d10467d178
[data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing ( #7318 )
...
* use position_ids for neat_packing with fa2
* revert fa2 changes
2025-04-01 16:03:13 +08:00
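The neat_packing change above relies on a simple convention: when several samples are packed into one row, position_ids restart from 0 at every sample boundary so the attention implementation can tell the packed samples apart. A minimal sketch with a hypothetical helper, not the PR's code:

```python
# Hypothetical helper: position_ids for a packed row built from samples of the given lengths.
def packed_position_ids(sample_lengths: list[int]) -> list[int]:
    """packed_position_ids([3, 2]) -> [0, 1, 2, 0, 1]"""
    return [pos for length in sample_lengths for pos in range(length)]

print(packed_position_ids([3, 2, 4]))  # [0, 1, 2, 0, 1, 0, 1, 2, 3]
```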
taoharry
aac70663fd
[webui] fix launch with proxy ( #7332 )
2025-04-01 15:52:56 +08:00
Billy Cao
00409ff28a
[data] shard the dataset to allow multiprocessing when streaming is enabled ( #7530 )
...
* Shard the dataset when streaming to allow multiprocessing
* Allow user to not set dataset_shards to ensure backward compatibility
2025-04-01 15:36:23 +08:00
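For the sharding entry above, the general idea is that only a dataset split into multiple shards can feed multiple dataloader workers. One way to get a sharded iterable dataset with Hugging Face `datasets` is shown below; this is a sketch of the concept, not necessarily the PR's approach, and the file name and shard count are arbitrary examples.

```python
# Sketch only: convert a map-style dataset into a sharded IterableDataset,
# so a DataLoader with num_workers > 1 can read shards in parallel.
from datasets import load_dataset

ds = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder file
iterable_ds = ds.to_iterable_dataset(num_shards=16)                 # example shard count
```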
Hao
d70b3b4bc5
[trainer] new kto mismatch pair creation strategy ( #7509 )
2025-04-01 15:21:53 +08:00
hoshi-hiyouga
e76eba051d
[data] fix qwen2.5 omni collator ( #7553 )
2025-04-01 00:15:12 +08:00
Kingsley
7eed496336
[model] add Qwen2.5-Omni model ( #7537 )
...
* preserve image_sizes
* preserve image_sizes
* init plugin
* support audio-text2text lora
* nit
* support image/video-text2text, audio-text2text
* remove args
* remove lines
* add docs && nit
* remove some comments
* fix && add merge part script
* add license
2025-03-31 20:39:35 +08:00
hoshi-hiyouga
0f8296626a
[deps] pin pydantic to 2.10.6 ( #7546 )
2025-03-31 14:42:28 +08:00
Kingsley
8da1d2fa71
[data] fix pixtral plugin ( #7505 )
...
* preserve `image_sizes`
* add comments
2025-03-27 17:06:40 +08:00
Xu-pixel
b578a7d5b6
[3rdparty] support swanlab lark notification ( #7481 )
2025-03-27 01:52:01 +08:00
Kdump
24afceddb7
[trainer] fix wsd scheduler ( #7304 )
...
* [trainer] Warmup_stable_decay supports setting the number of stable and decay steps according to warmup_ratio
* Update trainer_utils.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-26 15:25:02 +08:00
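The wsd scheduler fix above sizes the stable and decay phases from warmup_ratio when explicit step counts are not given. A back-of-the-envelope sketch of that split; the exact formula in the repo may differ, and sizing decay by the same ratio as warmup is an assumption.

```python
# Assumed split, for illustration only: decay gets the same share of steps as warmup.
def wsd_step_counts(total_steps: int, warmup_ratio: float) -> tuple[int, int, int]:
    num_warmup = int(total_steps * warmup_ratio)
    num_decay = int(total_steps * warmup_ratio)  # assumption, see note above
    num_stable = total_steps - num_warmup - num_decay
    return num_warmup, num_stable, num_decay

print(wsd_step_counts(1000, warmup_ratio=0.1))  # (100, 800, 100)
```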
hoshi-hiyouga
0583d06676
[model] add qwen2vl 32b & upgrade peft ( #7469 )
...
* add qwen2vl 32b
* fix ci
* upgrade peft to 0.15
* fix ci
* fix ci
2025-03-25 12:15:58 +08:00