Commit Graph

267 Commits

Author | SHA1 | Message | Date
Xunpeng Xiao
6a2eafbae3 [feat] Models trained and inferred with MXFP4 are dequantized by default (#9652)
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
2025-12-24 00:26:40 +08:00
Yaowei Zheng
84485406b7 [ci] disable pip cache for ci (#9654) 2025-12-23 18:37:40 +08:00
thulyubh22
7901b2f32e [model] efficient tuning for gpt-oss (#9354)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-23 16:28:38 +08:00
Hertz
4923f52a28 [model] support MiMo-V2-Flash model (#9637) 2025-12-21 14:38:18 +08:00
浮梦
5204cd2bca [misc] add version check for moe (#9633) 2025-12-19 14:57:37 +08:00
Xunpeng Xiao
8c74dca76a [feat] Models trained and inferred with FP8 are dequantized by default (#9627) 2025-12-18 22:54:35 +08:00
tangefly
4fd94141a4 [model] Add Ministral3 (#9582)
Co-authored-by: kingsley <kingsleydodonow@gmail.com>
2025-12-10 15:57:24 +08:00
DoubleWheat
cff4483392 [config] Fix RoPE scaling patch for resuming from a scaled model (#9588) 2025-12-09 20:37:37 +08:00
Yaowei Zheng
5d56817e2b [misc] lint (#9593)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-09 18:00:35 +08:00
xvxuopop
109162dc56 [fix] fix the issue when using fsdp2 with gradient checkpointing. (#9541)
Co-authored-by: jin-yongxu <jinyongxu@h-partners.com>
2025-12-06 16:04:51 +08:00
Kingsley
22be45c78c [misc] fix omni thinker load (#9552)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-30 09:36:36 +08:00
浮梦
2b6f16f261 [model] temporarily support npu fused options on v0, powered by v1 kernels (#9520)
Co-authored-by: frozenleaves <frozen@Mac.local>
2025-11-27 02:08:36 +08:00
Edge-Seven
9779b1f361 [misc] fix typos in some files (#9505)
Co-authored-by: khanhkhanhlele <namkhanh20xx@gmail.com>
2025-11-18 20:36:01 +08:00
浮梦
d4e120423d [data] fix qwen3omni moe model (#9501)
Co-authored-by: frozenleaves <frozen@Mac.local>
2025-11-18 13:43:22 +08:00
Pory
10a446e373 [model] ktransformers qwen3 support (#9485)
Co-authored-by: unknown <xiongchenhui@hisense.ad>
2025-11-13 20:09:44 +08:00
Yaowei Zheng
eaf963f67f [model] update kt code (#9406) 2025-11-05 15:27:22 +08:00
魅影
14abb75126 [model] enable using FA in npu (#9397)
Co-authored-by: frozenleaves <frozen@Mac.local>
2025-11-04 19:32:30 +08:00
한송민
5a9939050e [model] add deepstack_merger_list to Qwen3-VL vision_model_keys (#9399) 2025-11-04 19:27:34 +08:00
Peilin Li
934b3084ee [train] KTransformers SFT as backend engine for LLaMA-Factory (#9400)
Co-authored-by: jimmy128 <jimmy128@noreply.gitcode.com>
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
2025-11-04 15:54:12 +08:00
魅影
767b344fb4 [model] remove npu sdpa patch (#9368)
Co-authored-by: frozenleaves <frozen@Mac.local>
2025-10-30 16:26:35 +08:00
Yaowei Zheng
d9d67ba62d [misc] fix import error (#9299) 2025-10-17 17:46:27 +08:00
Yaowei Zheng
a442fa90ad [misc] fix import error (#9296) 2025-10-17 10:54:30 +08:00
Ximing Xing
c867e28093 [model] adds semantic initialization support for special tokens (#9267)
Co-authored-by: ximingxing <ximingxing@tencent.com>
2025-10-14 17:00:48 +08:00
Jiayi Mao
48974783da [model] add ernie4_5_moe support for DeepSpeed ZeRO-3 training (#9262) 2025-10-13 13:13:31 +08:00
Yaowei Zheng
40d3691e9e [misc] fix moe models (#9230) 2025-10-05 02:41:02 +08:00
h7878778h
09dedf144f [npu] Redirect SDPA to torch_npu.npu_fusion_attention (opt-in, ZeRO-3 safe, no impact off NPU) (#8972) 2025-09-30 18:11:31 +08:00
Yaowei Zheng
6ffebe5ff7 [data] fix qwen omni plugin (#9204)
Co-authored-by: kingsley <kingsleydodonow@gmail.com>
2025-09-28 01:02:29 +08:00
xvxuopop
0761a4448f [model] add qwen3-vl/qwen3-omni (#9196)
Co-authored-by: kingsley <kingsleydodonow@gmail.com>
2025-09-27 01:21:47 +08:00
Yaowei Zheng
80fe3a172d [model] add dots ocr (#9176) 2025-09-21 23:34:19 +08:00
Yaowei Zheng
260b5625c3 [assets] update wechat (#9129) 2025-09-14 03:05:08 +08:00
Kingsley
610a3f1094 [data] Fix qwen_2vl with valuehead (#9078) 2025-09-14 02:22:20 +08:00
Yaowei Zheng
db223e3975 [misc] update readme (#9071) 2025-09-03 17:22:54 +08:00
Kingsley
185f0556d4 [model] support Internvl3_5 (#9028) 2025-08-28 17:12:00 +08:00
Kingsley
9c433f6b41 [model] fix kimivl (#9018) 2025-08-25 16:32:23 +08:00
Haian Huang (深度眸)
1664657d80 [model] Support Intern-S1-mini (#8976) 2025-08-20 23:52:51 +08:00
Kingsley
022a326ca4 [misc] update glm4v ligerkernel (#8978) 2025-08-20 23:39:56 +08:00
Yaowei Zheng
2c31279316 [assets] update wechat (#8962) 2025-08-19 02:55:09 +08:00
Zeju Qiu
003a2acb1a [feature] adding orthogonal finetuning (OFT) to llama factory (#8623)
Co-authored-by: Zeju <zqiu@g003.internal.cluster.is.localnet>
Co-authored-by: Zeju <zqiu@login2.is.localnet>
Co-authored-by: Yaowei Zheng <hiyouga@buaa.edu.cn>
2025-08-18 18:22:47 +08:00
Kingsley
893edb26d0 [model] support GLM4.5V (#8876) 2025-08-11 21:45:14 +08:00
Yaowei Zheng
b523543994 [data] fix template (#8827) 2025-08-06 06:58:09 +08:00
Yaowei Zheng
4dfad24902 [model] add gpt oss (#8826) 2025-08-06 05:56:46 +08:00
davidlightmysterion
c709c0378d [train] fix adjusting logits size after adding special tokens (#8823) 2025-08-05 20:35:07 +08:00
Kingsley
52882d01c3 [model] support keye-vl-8b (#8776) 2025-07-29 21:24:08 +08:00
Kingsley
d6767f355a [model] add glm4moe (#8689) 2025-07-25 19:53:45 +08:00
Yaowei Zheng
4b0ec83928 [deps] bump transformers to 4.49.0 (#8564) 2025-07-07 20:31:50 +08:00
Vivek Iyer
e0dfdb7dbb Revert "[model] add lora dropout to unsloth" - requested feature already exists (#8554)
Co-authored-by: viyer <vivek_iyer2@apple.com>
2025-07-05 11:25:31 +08:00
Vivek Iyer
0686206020 [model] add lora dropout to unsloth (#8548)
Co-authored-by: viyer <vivek_iyer2@apple.com>
2025-07-04 14:56:36 +08:00
Kingsley
e9f70daabe [model] add gemma3n (#8509) 2025-07-01 22:37:24 +08:00
Kingsley
d17a672251 [model] add GLM-4.1V (#8462) 2025-06-30 01:09:41 +08:00
Yaowei Zheng
2c26ce6ac4 Merge commit from fork 2025-06-26 13:55:42 +08:00