Commit Graph

92 Commits

Author  SHA1  Message  Date
hiyouga  92dab8a90b  simplify readme  2024-04-02 20:07:43 +08:00
hiyouga  4a6ca621c0  fix #3083  2024-04-01 22:53:52 +08:00
hiyouga  816d714146  fix ORPO loss  2024-04-01 14:42:41 +08:00
hiyouga  5b9b40403d  fix IPO and ORPO loss  2024-04-01 14:37:53 +08:00
hiyouga  5907216a1c  fix plots  2024-03-31 19:43:48 +08:00
hiyouga  68aaa4904b  use log1p in orpo loss (https://github.com/huggingface/trl/pull/1491)  2024-03-31 19:27:08 +08:00
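The `use log1p in orpo loss` commit mirrors the linked trl PR: rewriting the odds-ratio term with `log1p` avoids `log(1 - exp(x))` underflowing to exactly zero when a sequence log-probability is very negative. A minimal pure-Python sketch of the idea (function names and the `beta` default are hypothetical, not the repository's API):

```python
import math

def orpo_log_odds(chosen_logp: float, rejected_logp: float) -> float:
    """Log odds-ratio term of the ORPO loss (illustrative sketch).

    log[(p_w / (1 - p_w)) / (p_l / (1 - p_l))]
      = (log p_w - log p_l) - (log(1 - p_w) - log(1 - p_l))

    math.log1p(-exp(x)) keeps precision where 1 - exp(x) would round to 1.0.
    """
    return (chosen_logp - rejected_logp) - (
        math.log1p(-math.exp(chosen_logp)) - math.log1p(-math.exp(rejected_logp))
    )

def orpo_odds_loss(chosen_logp: float, rejected_logp: float, beta: float = 0.1) -> float:
    # -beta * logsigmoid(log_odds), using logsigmoid(x) = -log1p(exp(-x))
    return beta * math.log1p(math.exp(-orpo_log_odds(chosen_logp, rejected_logp)))
```

For a very negative log-probability such as -50, `math.log(1 - math.exp(-50))` evaluates to 0.0 in double precision, while `math.log1p(-math.exp(-50))` retains the tiny negative value.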
hiyouga  17bf8a2c3a  support ORPO  2024-03-31 18:29:50 +08:00
marko1616  eb178eaff3  Fix Llama model save for full param train  2024-03-30 23:45:04 +08:00
hiyouga  ca793028c6  release v0.6.1  2024-03-29 11:36:08 +08:00
hiyouga  8d603f8820  fix #2982  2024-03-28 20:22:31 +08:00
hiyouga  8c77b10912  update trainers  2024-03-28 18:16:27 +08:00
hoshi-hiyouga  3bcd41b639  fix ds optimizer  2024-03-26 23:39:56 +08:00
hiyouga  511f675402  fix #2961  2024-03-26 17:26:14 +08:00
hiyouga  ba70aca8fb  release v0.6.0 (real)  2024-03-25 23:37:48 +08:00
hiyouga  6f2b563f12  release v0.6.0  2024-03-25 22:38:56 +08:00
hiyouga  558a538724  tiny fix  2024-03-25 21:18:08 +08:00
marko1616  c8f0d99704  pass ruff check  2024-03-24 16:12:10 +08:00
marko1616  6f080fdba3  fix Llama lora merge crash  2024-03-24 03:06:11 +08:00
marko1616  51349ea1cc  fix Llama lora merge crash  2024-03-24 02:55:23 +08:00
marko1616  c1e2c4ea45  fix Llama lora merge crash  2024-03-24 02:44:35 +08:00
hiyouga  9bec3c98a2  fix #2777 #2895  2024-03-20 17:59:45 +08:00
hiyouga  8e04794b2d  fix packages  2024-03-17 22:32:03 +08:00
hiyouga  6bc2c23b6d  fix export  2024-03-15 15:06:30 +08:00
hiyouga  6ebde4f23e  tiny fix  2024-03-14 21:19:06 +08:00
hiyouga  3b4a59bfb1  fix export  2024-03-14 18:17:01 +08:00
hiyouga  8172530d54  fix bug  2024-03-13 23:55:31 +08:00
hiyouga  714d936dfb  fix bug  2024-03-13 23:43:42 +08:00
hiyouga  72367307df  improve lora+ impl.  2024-03-13 23:32:51 +08:00
齐保元  a0965cd62c  [FEATURE]: ADD LORA+ ALGORITHM  2024-03-13 19:43:27 +08:00
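The LoRA+ commits above correspond to the technique of training the LoRA `B` matrices with a learning rate that is a fixed multiple of the rate used for the `A` matrices. A hedged sketch of the parameter-grouping step, assuming peft-style parameter names containing `lora_A`/`lora_B` (the function name and the ratio default are illustrative, not the repository's actual implementation):

```python
def loraplus_param_groups(named_params, lr, loraplus_ratio=16.0):
    """Split parameters into optimizer groups for LoRA+ (illustrative sketch).

    The LoRA+ idea: the zero-initialized B matrices benefit from a larger
    step size than the A matrices, so they get lr * loraplus_ratio.
    """
    default, lora_b = [], []
    for name, param in named_params:
        (lora_b if "lora_B" in name else default).append(param)
    return [
        {"params": default, "lr": lr},
        {"params": lora_b, "lr": lr * loraplus_ratio},  # boosted LR for B
    ]
```

The resulting list has the shape expected by optimizers such as `torch.optim.AdamW`, which accept per-group learning rates.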
hiyouga  e874c00906  fix #2775  2024-03-11 00:42:54 +08:00
hiyouga  8664262cde  support layerwise galore  2024-03-10 00:24:11 +08:00
hiyouga  bdb496644c  allow non-packing pretraining  2024-03-09 22:21:46 +08:00
hiyouga  412c52e325  fix #2766  2024-03-09 21:35:24 +08:00
hiyouga  e8dd38b7fd  fix #2756, patch #2746  2024-03-09 02:01:26 +08:00
hiyouga  33a4c24a8a  fix galore  2024-03-08 00:44:51 +08:00
hiyouga  28f7862188  support galore  2024-03-07 22:41:36 +08:00
hiyouga  0048a2021e  tiny fix  2024-03-06 17:25:08 +08:00
hiyouga  e5edcf440f  fix export model  2024-03-05 11:05:41 +08:00
hiyouga  4e5fae2fac  fix #2649  2024-03-01 13:02:41 +08:00
hoshi-hiyouga  4aab19c7ef  Merge pull request #2525 from stephen-nju/main (update project_kwargs for ppo config)  2024-02-25 15:54:00 +08:00
hiyouga  3cc10a01a7  fix #2532  2024-02-21 21:55:14 +08:00
stephen  42c23798f2  update project_kwargs for ppo config  2024-02-21 13:47:38 +08:00
hiyouga  7924ffc55d  support llama pro #2338, add rslora  2024-02-15 02:27:36 +08:00
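The rsLoRA part of the commit above refers to rank-stabilized LoRA, which changes the adapter scaling factor from `alpha / r` to `alpha / sqrt(r)` so the magnitude of the `BA` update does not shrink as the rank grows. A one-line sketch of that scaling choice (a hypothetical helper, not the repository's code):

```python
import math

def lora_scaling(alpha: float, r: int, use_rslora: bool = False) -> float:
    # Classic LoRA scales the BA update by alpha / r;
    # rsLoRA divides by sqrt(r) instead, keeping updates stable across ranks.
    return alpha / math.sqrt(r) if use_rslora else alpha / r
```

For example, with `alpha=16` and `r=64`, classic LoRA scales by 0.25 while rsLoRA scales by 2.0.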
hiyouga  b988ce0a0c  fix #2189  2024-02-04 00:47:37 +08:00
hiyouga  2bc30763e9  fix #2320  2024-01-24 16:19:18 +08:00
hoshi-hiyouga  662b9a9dcf  Update tuner.py  2024-01-21 12:39:38 +08:00
yhyu13  9cdbd3bfc8  Remove manually set use_cache; torch_dtype is not str, save model as bfloat16 used to fail  2024-01-21 11:12:15 +08:00
hiyouga  638234ceee  format style  2024-01-20 20:15:56 +08:00
hiyouga  f6d6e00337  fix tests  2024-01-20 19:58:04 +08:00
hiyouga  38af076a75  support longlora for main branch  2024-01-20 19:25:22 +08:00