hoshi-hiyouga | 5f72439a1d | Update tuner.py | 2024-05-11 23:55:59 +08:00
hoshi-hiyouga | 13851fb045 | Update tuner.py | 2024-05-11 23:54:53 +08:00
BUAADreamer | 8b997e32fb | add push processor to hub | 2024-05-09 14:05:19 +08:00
BUAADreamer | ef33856380 | add mllm export | 2024-05-08 22:50:42 +08:00
BUAADreamer | 0ca1d1967d | modify export model | 2024-05-08 10:36:36 +08:00
hiyouga | ed8f8be752 | update api and support abort eval in webui | 2024-05-04 15:59:15 +08:00
hiyouga | 245fe47ece | update webui and add CLIs | 2024-05-03 02:58:23 +08:00
hiyouga | e057c8de48 | support mllm hf inference | 2024-04-26 05:34:58 +08:00
hiyouga | 232642a621 | fix #3238 | 2024-04-12 14:28:11 +08:00
hiyouga | 148bda353f | fix resize vocab at inference #3022 | 2024-04-03 18:14:24 +08:00
hiyouga | 17bf8a2c3a | support ORPO | 2024-03-31 18:29:50 +08:00
marko1616 | eb178eaff3 | Fix Llama model save for full param train | 2024-03-30 23:45:04 +08:00
hiyouga | 558a538724 | tiny fix | 2024-03-25 21:18:08 +08:00
marko1616 | c8f0d99704 | pass ruff check | 2024-03-24 16:12:10 +08:00
marko1616 | 6f080fdba3 | fix Llama lora merge crash | 2024-03-24 03:06:11 +08:00
marko1616 | 51349ea1cc | fix Llama lora merge crash | 2024-03-24 02:55:23 +08:00
marko1616 | c1e2c4ea45 | fix Llama lora merge crash | 2024-03-24 02:44:35 +08:00
hiyouga | 8e04794b2d | fix packages | 2024-03-17 22:32:03 +08:00
hiyouga | 6bc2c23b6d | fix export | 2024-03-15 15:06:30 +08:00
hiyouga | 6ebde4f23e | tiny fix | 2024-03-14 21:19:06 +08:00
hiyouga | 3b4a59bfb1 | fix export | 2024-03-14 18:17:01 +08:00
hiyouga | 72367307df | improve lora+ impl. | 2024-03-13 23:32:51 +08:00
hiyouga | e5edcf440f | fix export model | 2024-03-05 11:05:41 +08:00
hiyouga | 2bc30763e9 | fix #2320 | 2024-01-24 16:19:18 +08:00
hoshi-hiyouga | 662b9a9dcf | Update tuner.py | 2024-01-21 12:39:38 +08:00
yhyu13 | 9cdbd3bfc8 | Remove manully set use_cache; torch_dtype is not str, save model as bfloat16 used to fail; | 2024-01-21 11:12:15 +08:00
hiyouga | 638234ceee | format style | 2024-01-20 20:15:56 +08:00
hiyouga | ddd48ce8ab | Update tuner.py | 2024-01-18 15:06:02 +08:00
hiyouga | d9f1cae351 | support function calling | 2024-01-18 09:54:23 +08:00
hiyouga | 42859f0734 | support export push_to_hub #2183 | 2024-01-16 23:59:42 +08:00
hiyouga | 05ed4e8028 | improve model export | 2024-01-09 22:26:24 +08:00
hiyouga | d2a676c8ba | improve model export | 2024-01-05 18:51:49 +08:00
hiyouga | 65c5b0477c | fix args | 2023-12-28 18:47:19 +08:00
hiyouga | e165354fac | fix export format | 2023-12-28 18:40:46 +08:00
hiyouga | 7aad0b889d | support unsloth | 2023-12-23 00:14:33 +08:00
hiyouga | 3551171d49 | update tips | 2023-12-15 23:52:50 +08:00
hiyouga | 439a26c276 | fix #1770 | 2023-12-15 23:50:15 +08:00
hiyouga | 3524aa1e58 | support quantization in export model | 2023-12-15 23:44:50 +08:00
hiyouga | 475a3fa0f4 | fix #1659 | 2023-11-28 20:52:28 +08:00
hiyouga | 859a6ea942 | support export size setting | 2023-11-26 18:34:09 +08:00
hiyouga | 9ea9380145 | support GPTQ tuning #729 #1481 #1545, fix chatglm template #1453 #1480 #1569 | 2023-11-20 22:52:11 +08:00
hiyouga | ce78303600 | support full-parameter PPO | 2023-11-16 02:08:04 +08:00
hiyouga | 4736344eb1 | disentangle model from tuner and rename modules | 2023-11-15 16:29:09 +08:00