hoshi-hiyouga | 29c1f31baa | Update parser.py | 2024-06-16 02:57:00 +08:00
hiyouga | 8c1046d78a | support pissa | 2024-06-16 01:08:12 +08:00
hiyouga | 38b6b0f52e | tiny fix | 2024-06-16 01:06:41 +08:00
hiyouga | 80a9e6bf94 | use fixture | 2024-06-15 20:06:17 +08:00
hiyouga | d87108daa6 | add license | 2024-06-15 17:54:33 +08:00
hiyouga | d519b4d76d | disable DP | 2024-06-15 04:57:19 +08:00
hiyouga | b27269bd2b | add test cases | 2024-06-15 04:05:54 +08:00
hiyouga | 713fde4259 | fix lint | 2024-06-13 00:48:44 +08:00
hiyouga | 89f2bd8c8c | fix #4198 | 2024-06-11 15:38:38 +08:00
hiyouga | 90e14a960d | tiny fix | 2024-06-11 12:48:53 +08:00
hiyouga | 54cd743ebf | reorganize adapter code | 2024-06-08 00:47:23 +08:00
hiyouga | cae4737907 | lora modules: all by default | 2024-06-06 03:53:28 +08:00
hoshi-hiyouga | ca459f67eb | Merge pull request #4080 from MengqingCao/npu: Add npu option for model exporting | 2024-06-06 03:15:44 +08:00
hoshi-hiyouga | af2c3cbee4 | Update model_args.py | 2024-06-06 03:14:23 +08:00
hiyouga | 8fcc79e1e6 | add vllm_dtype arg #3387 #3717 | 2024-06-06 02:53:27 +08:00
hiyouga | a12a506c3d | support train from scratch #4033 #4075 | 2024-06-06 02:43:19 +08:00
MengqingCao | 07045c876a | add npu for model export | 2024-06-05 07:06:40 +00:00
hiyouga | eed33862bc | fix #4005 #4013 | 2024-06-03 19:12:29 +08:00
hoshi-hiyouga | 1539c72b94 | Merge pull request #4007 from xu-song/patch-3: Update model_args.py | 2024-06-03 18:54:37 +08:00
hiyouga | 24e1c0e2ee | fix #4022 | 2024-06-03 18:38:36 +08:00
Xu Song | dade2f083d | Update model_args.py | 2024-05-31 14:35:48 +08:00
hiyouga | 8070871732 | better llamaboard (easily resume from checkpoint; support full and freeze checkpoints; faster ui) | 2024-05-29 23:55:38 +08:00
hiyouga | 1e80a3a638 | bump vllm version to 0.4.1 | 2024-05-28 21:27:27 +08:00
hiyouga | 7c016b22aa | support DDP in webui | 2024-05-28 19:24:22 +08:00
hiyouga | 08564838bd | fix full/freeze tuning for mllm | 2024-05-27 20:37:57 +08:00
BUAADreamer | 4bc7c10c00 | Merge branch 'hiyouga:main' into main | 2024-05-27 11:54:01 +08:00
hiyouga | cb63b32986 | support SimPO #3900 | 2024-05-26 23:46:33 +08:00
BUAADreamer | 047a06a1e5 | Merge branch 'hiyouga:main' into main | 2024-05-24 09:50:00 +08:00
hiyouga | 67ebc7b388 | fix oom issues in export | 2024-05-23 23:32:45 +08:00
BUAADreamer | 29a6d5bdb8 | support pretraining of llava | 2024-05-21 08:57:14 +08:00
hoshi-hiyouga | a1fa7aa63b | Update generating_args.py | 2024-05-20 00:29:31 +08:00
ycjcl868 | a08ba254c8 | feat: cli chat support system_message | 2024-05-19 23:17:46 +08:00
hiyouga | c450ee87a3 | improve KTO impl., replace datasets | 2024-05-18 03:44:56 +08:00
hoshi-hiyouga | 33a354548e | Merge pull request #3785 from enji-zhou/feature/add_kto: add kto | 2024-05-18 03:07:18 +08:00
hoshi-hiyouga | 9646727453 | Update model_args.py | 2024-05-17 16:16:41 +08:00
juejuezi | b20d62ba3c | feat: pass the max_lora_rank parameter to vLLM backend | 2024-05-17 16:07:39 +08:00
enji.zhou | db1d5a4f51 | add kto | 2024-05-17 13:09:17 +08:00
hiyouga | 308edbc426 | rename package | 2024-05-16 18:39:08 +08:00