hiyouga | 2542b62d77 | remove loftq | 2023-12-13 01:53:46 +08:00
Former-commit-id: e175c0a1c631296117abda2403a4b87bbdd35a66

hiyouga | e39bbdd287 | support loftq | 2023-12-12 22:47:06 +08:00
Former-commit-id: e7ac2eb7f7daae17525a278ffbe2f82c0fbd8093

hiyouga | c27675f70d | fix modelscope data hub | 2023-12-12 18:33:06 +08:00
Former-commit-id: 5b63e8c22538a4788e4b6c8df50e6e6be93ceeac

hiyouga | 7a03c8dab5 | use peft 0.7.0, fix #1561 #1764 | 2023-12-11 17:13:40 +08:00
Former-commit-id: 423947bd58aa50da8785b8ceca1e7e288447a9da

hiyouga | 5f572cbd77 | fix gptq training | 2023-12-02 00:27:15 +08:00
Former-commit-id: bec58e3dc575aa4247e563881a456328ee5ef496

hiyouga | 0105cd48f2 | support GPTQ tuning #729 #1481 #1545, fix chatglm template #1453 #1480 #1569 | 2023-11-20 22:52:11 +08:00
Former-commit-id: fdccc6cc9b68890199e9250cabdb996ff2f853b9

hiyouga | f9d4e37b3c | fix bug in freeze tuning | 2023-11-16 14:25:11 +08:00
Former-commit-id: f6b436a08421ca17d64abc51497f4aa43729a43b

hiyouga | 7a3a0144a5 | support full-parameter PPO | 2023-11-16 02:08:04 +08:00
Former-commit-id: 4af967d69475e1c9fdf1a7983cd6b83bd431abff

hiyouga | b2ac8376e1 | support multiple modules in freeze training #1514 | 2023-11-15 17:08:18 +08:00
Former-commit-id: 60abac70dfd778df2ae8b3a2e960ed8b607d7ab6

hiyouga | 09a4474e7f | disentangle model from tuner and rename modules | 2023-11-15 16:29:09 +08:00
Former-commit-id: 02cbf91e7e424f8379c1fed01b82a5f7a83b6947