Yu Shi Jie
69b0c1cf4f
[model] fix use_cache patching for gemma3 multimodal ( #7500 )
2025-04-01 16:06:48 +08:00
Ritesh Goru
917264f79f
[data] specify position_ids in PackedSupervisedDatasetProcessor for neat_packing ( #7318 )
...
* use position_ids for neat_packing with fa2
* revert fa2 changes
2025-04-01 16:03:13 +08:00
taoharry
99a247926e
[webui] fix launch with proxy ( #7332 )
2025-04-01 15:52:56 +08:00
Billy Cao
51e741ec85
[data] shard the dataset to allow multiprocessing when streaming is enabled ( #7530 )
...
* Shard the dataset when streaming to allow multiprocessing
* Allow the user to leave dataset_shards unset to preserve backward compatibility
2025-04-01 15:36:23 +08:00
Hao
538e6c70c3
[trainer] new kto mismatch pair creation strategy ( #7509 )
2025-04-01 15:21:53 +08:00
hoshi-hiyouga
56fa64035a
[data] fix qwen2.5 omni collator ( #7553 )
2025-04-01 00:15:12 +08:00
Kingsley
1189aeb6c2
[model] add Qwen2.5-Omni model ( #7537 )
...
* preserve image_sizes
* preserve image_sizes
* init plugin
* support audio-text2text lora
* nit
* support image/video-text2text, audio-text2text
* remove args
* remove lines
* add docs && nit
* remove some comments
* fix && add merge part script
* add license
2025-03-31 20:39:35 +08:00
hoshi-hiyouga
6ca29fe7f2
[deps] pin pydantic to 2.10.6 ( #7546 )
2025-03-31 14:42:28 +08:00
Kingsley
bde7d60f4e
[data] fix pixtral plugin ( #7505 )
...
* preserve `image_sizes`
* add comments
2025-03-27 17:06:40 +08:00
Xu-pixel
2a952305f3
[3rdparty] support swanlab lark notification ( #7481 )
2025-03-27 01:52:01 +08:00
Kdump
2c1d0b7a83
[trainer] fix wsd scheduler ( #7304 )
...
* [trainer] warmup_stable_decay supports setting the number of stable and decay steps according to warmup_ratio
* Update trainer_utils.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-26 15:25:02 +08:00
hoshi-hiyouga
cb42e2c4de
[model] add qwen2vl 32b & upgrade peft ( #7469 )
...
* add qwen2vl 32b
* fix ci
* upgrade peft to 0.15
* fix ci
* fix ci
2025-03-25 12:15:58 +08:00
GuoCoder
50d404f344
[model] fix lora on quant models ( #7456 )
...
Co-authored-by: root <root@ai>
2025-03-25 11:59:46 +08:00
Xiaosu Zhu
d38c402f63
[misc] update liger-kernel's monkey patch ( #7453 )
...
* Update liger_kernel.py
* Update setup.py
2025-03-25 11:58:52 +08:00
AbdelKarim ELJANDOUBI
ce089ef8f6
[misc] enable liger kernel for gemma3 text and paligemma ( #7466 )
...
* add gemma3 text
* add paligemma (1,2 and 2 mix)
2025-03-25 09:27:43 +08:00
Kenny Lam
cad8bde6b1
[misc] enable liger kernel for gemma3 ( #7462 )
2025-03-24 19:09:59 +08:00
hoshi-hiyouga
23906c1a5c
[assets] fix gemma3 readme ( #7449 )
2025-03-24 10:31:25 +08:00
hoshi-hiyouga
180019b376
[trainer] fix vlm loss for transformers 4.49 ( #7448 )
2025-03-24 10:24:05 +08:00
rumichi
b4f8514540
[docker] upgrade to torch 2.6 ( #7442 )
2025-03-23 21:18:08 +08:00
hoshi-hiyouga
00495b805c
[misc] fix ci ( #7441 )
...
* fix ci
* improve ci
2025-03-23 21:09:35 +08:00
hoshi-hiyouga
4d2c16fd39
[misc] fix license ( #7440 )
2025-03-23 19:31:56 +08:00
SnowFox4004
1b5666b989
[scripts] support compute score on vllm's predictions ( #7419 )
...
* enable manual bleu&rouge eval by adding `scripts/eval_bleu_rouge.py`
* added libraries check
* update: use the datasets library's multiprocessing to speed up processing
* update:
- use fire.Fire
- fix code formatting
* Update eval_bleu_rouge.py: correctly use fire
Deleted the sys.argv-based argument handling
* Update eval_bleu_rouge.py
---------
Co-authored-by: SnowFox4004 <manba@out>
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-23 19:21:01 +08:00
hoshi-hiyouga
1a7c872c14
[deps] upgrade transformers to 4.50.0 ( #7437 )
...
* upgrade transformers
* fix hf cache
* fix dpo trainer
2025-03-23 17:44:27 +08:00
hoshi-hiyouga
2ce975f6f4
[deps] upgrade vllm to 0.8 ( #7436 )
2025-03-23 14:32:22 +08:00
Guo, Quan
b302301005
[misc] fix sglang deps ( #7432 )
...
* feat: add transformers version requirement for sglang
* feat: add srt to sglang, which is required for running sglang
Other options are srt_hip, srt_xpu, srt_npu, srt_hpu, and srt_cpu, for different compute architectures.
2025-03-23 14:07:10 +08:00
Eric Tang
8f09c0bf96
[3rdparty] fix redundant process group destroy for ray ( #7395 )
...
* fix redundant process group destroy for ray
* Update tuner.py
---------
Co-authored-by: hoshi-hiyouga <hiyouga@buaa.edu.cn>
2025-03-21 10:56:47 +08:00
hoshi-hiyouga
6714adf788
[version] fix minicpmo ( #7378 )
2025-03-20 16:59:31 +08:00
hoshi-hiyouga
99edc530e3
[assets] update wechat ( #7361 )
2025-03-18 21:31:09 +08:00
hoshi-hiyouga
a918b769ba
[misc] set dev version ( #7351 )
2025-03-18 00:10:53 +08:00
hoshi-hiyouga
2a13067a42
[data] fix template ( #7349 )
2025-03-17 23:45:20 +08:00
hoshi-hiyouga
20908e4429
[assets] update videos ( #7340 )
...
* Update README.md
* Update README_zh.md
2025-03-17 15:48:02 +08:00
Hertz
db936dc329
[model] support hunyuan 7b ( #7317 )
...
* [Model] supported tencent-hunyuan model
* [Model] supported tencent-hunyuan model (fix)
* [Model] supported tencent-hunyuan model (fix)
2025-03-15 20:55:24 +08:00
Qiaolin Yu
280d9bda76
[inference] support sglang backend ( #7278 )
...
* Mimic SGLang offline Engine
* Add more tests and args
* Pass all current tests
* Clean Code
* fix sample_params
* clean code
* Fix Stream Chat
* change sglang from engine mode to server mode
* fix
* Fix Review Issues
* Use SGLang Built-In Utilities
* Fix test SGLang
* Some Doc Issue
* fix sglang engine
* add readme
---------
Co-authored-by: Jin Pan <jpan236@wisc.edu>
Co-authored-by: hiyouga <hiyouga@buaa.edu.cn>
2025-03-15 04:37:58 +08:00
hoshi-hiyouga
e7ae755ab6
[data] gemma3 plugin pan and scan ( #7294 )
...
* gemma3 pan and scan
* add test case
* fix test
2025-03-13 23:29:23 +08:00
Victor Nogueira
0ecad4b178
[dataset] fix ultrachat_200k dataset ( #7259 )
...
The `HuggingFaceH4/ultrachat_200k` dataset doesn't contain the default "train" split. The correct split is "train_sft".
2025-03-13 20:20:18 +08:00
hoshi-hiyouga
3c974c466e
[assets] update video ( #7287 )
2025-03-13 18:45:47 +08:00
Ritesh Goru
f5b53249a4
[data] efficient 4d_attention_mask creation in neat_packing ( #7272 )
2025-03-13 03:31:12 +08:00
hoshi-hiyouga
1b1964714e
[misc] update format ( #7277 )
2025-03-13 02:53:08 +08:00
hoshi-hiyouga
a54c859674
[model] support gemma3 ( #7273 )
2025-03-13 01:35:23 +08:00
hoshi-hiyouga
9e7e07b78f
[misc] upgrade deps ( #7257 )
2025-03-12 00:33:47 +08:00
hoshi-hiyouga
efa86e730c
[misc] upgrade format to py39 ( #7256 )
2025-03-12 00:08:41 +08:00
hoshi-hiyouga
bcd287848c
[ci] update workflow ( #7255 )
2025-03-11 22:57:49 +08:00
hoshi-hiyouga
1942d3b119
[core] release v0.9.2 ( #7254 )
2025-03-11 22:42:23 +08:00
hoshi-hiyouga
943e4e130c
Merge pull request #7242 from hiyouga/hiyouga/release
...
[release] release v0.9.2
2025-03-11 15:28:45 +08:00
hoshi-hiyouga
6a7d8b7b87
Merge pull request #7247 from hiyouga/hiyouga/commit
...
[misc] support print commit info
2025-03-11 15:28:04 +08:00
hoshi-hiyouga
4cf9a0df41
Merge pull request #7244 from hiyouga/hiyouga/token
...
[data] avoid exit after saving preprocessed data
2025-03-11 15:17:15 +08:00
hiyouga
02963b7261
support commit info
2025-03-11 15:13:59 +08:00
hiyouga
ec251b4614
remove exit in preprocess
2025-03-11 15:08:25 +08:00
hiyouga
3722d04db1
release v0.9.2
2025-03-11 14:49:13 +08:00
hoshi-hiyouga
6c5927ba93
[infer] fix vllm args ( #7235 )
2025-03-11 01:15:35 +08:00