Commit Graph

164 Commits

Author SHA1 Message Date
hiyouga
b27269bd2b add test cases 2024-06-15 04:05:54 +08:00
hiyouga
c94e6c9411 add quant check in webui export tab 2024-06-13 03:19:18 +08:00
hiyouga
6baafd4eb3 fix #4221 2024-06-13 02:48:21 +08:00
hiyouga
cf9f2d6c42 fix #4209
DeepSpeed ZeRO3 has inflight param error when calling model.eval()
2024-06-13 02:25:50 +08:00
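The note above points at a ZeRO-3 pitfall: parameters are sharded across ranks and gathered on demand, so touching weights while a gather is still "in flight" trips a DeepSpeed assertion. Below is a minimal sketch, not the fix shipped in this commit, of materializing ZeRO-3 shards explicitly with deepspeed.zero.GatheredParameters before reading them; the model and the norm computation are illustrative.

```python
# A minimal sketch, not this commit's actual fix: explicitly gather
# ZeRO-3 parameter shards before any code path that reads full weights.
import deepspeed
import torch

def full_weight_norm(model: torch.nn.Module) -> float:
    params = list(model.parameters())
    # GatheredParameters blocks until every shard is materialized on this
    # rank and re-partitions the tensors on exit, so no gather is left
    # in flight when the caller returns.
    with deepspeed.zero.GatheredParameters(params, modifier_rank=None):
        return sum(p.norm().item() for p in params)
```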
hiyouga
2ed8270112 clean code 2024-06-13 01:58:16 +08:00
hoshi-hiyouga
1f23f25226 Merge pull request #4246 from hzhaoy/adapt-vllm-v0.5.0
adapt vllm==0.5.0
2024-06-13 01:54:02 +08:00
hiyouga
713fde4259 fix lint 2024-06-13 00:48:44 +08:00
hzhaoy
8fb6366ebe adapt vllm==0.5.0 2024-06-12 18:29:03 +08:00
hiyouga
577de2fa07 fix #4242 2024-06-12 16:50:11 +08:00
Arthur Kim
d65a3f7cb6 Support vllm==0.5.0 2024-06-12 16:49:12 +09:00
hoshi-hiyouga
9049aab911 Merge pull request #4204 from dignfei/main
fixbug: during continual pretraining, llama3 should use <|end_of_text|> to mark the end of each text
2024-06-11 17:06:10 +08:00
hoshi-hiyouga
0c29233237 Update pretrain.py 2024-06-11 17:02:14 +08:00
hiyouga
cca6f35108 fix deepspeed version 2024-06-11 16:52:36 +08:00
d
6979f3f848 After extensive continual pretraining and comparison experiments, this bug was found: the tokenizer.eos_token that llama3 uses during pretraining is '<|end_of_text|>', and that token must also be appended after each data sample instead of '<|eot_id|>'; otherwise severe performance degradation easily follows 2024-06-11 16:23:40 +08:00
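A short sketch of the rule this commit states, assuming the Hugging Face transformers tokenizer; the model ID and function name are illustrative, not the repository's pretrain.py. For base Llama-3, tokenizer.eos_token is '<|end_of_text|>', while '<|eot_id|>' only delimits chat turns, so pretraining documents should end with the former.

```python
# A minimal sketch, assuming the transformers tokenizer for base Llama-3;
# the model ID (a gated checkpoint) and names are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def build_pretrain_texts(documents: list[str]) -> list[str]:
    # Terminate every document with the tokenizer's real EOS token
    # ('<|end_of_text|>'), not the chat-turn delimiter '<|eot_id|>'.
    return [doc + tokenizer.eos_token for doc in documents]
```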
hiyouga
89f2bd8c8c fix #4198 2024-06-11 15:38:38 +08:00
hiyouga
90e14a960d tiny fix 2024-06-11 12:48:53 +08:00
hiyouga
3f24337a8a tiny fix 2024-06-11 01:04:16 +08:00
hiyouga
91e62a098f set dev version 2024-06-11 00:50:53 +08:00
hiyouga
2b6ebd6b51 release v0.8.1 2024-06-11 00:44:26 +08:00
hiyouga
a793e8456b fix #4160
The split heads should be concatenated in dim=2
2024-06-11 00:37:17 +08:00
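The one-line rationale above is about tensor layout: with states shaped (batch, seq_len, num_heads, head_dim), split head groups must be re-joined along the head axis, dim=2. A self-contained sketch with illustrative sizes, not the patched code:

```python
# A minimal sketch of the layout point in this commit: with tensors shaped
# (batch, seq_len, num_heads, head_dim), split head groups are re-joined
# along dim=2; concatenating on dim=1 would double the sequence length.
import torch

bsz, seq_len, num_heads, head_dim = 2, 16, 8, 4
states = torch.randn(bsz, seq_len, num_heads, head_dim)

first_half, second_half = states.chunk(2, dim=2)  # two half-head groups
merged = torch.cat((first_half, second_half), dim=2)
assert merged.shape == (bsz, seq_len, num_heads, head_dim)
```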
hiyouga
0012762b04 update evaluator 2024-06-10 23:56:00 +08:00
hiyouga
c907d81667 fix #2666 2024-06-10 21:24:15 +08:00
hiyouga
972ec9c668 fix llamafactory-cli env 2024-06-08 07:15:45 +08:00
hiyouga
3ac11e77cc set dev version 2024-06-08 06:46:09 +08:00
hiyouga
5aa4ce4756 release v0.8.0 2024-06-08 05:20:54 +08:00
hiyouga
54cd743ebf reorganize adapter code 2024-06-08 00:47:23 +08:00
hoshi-hiyouga
cfd62283a9 fix #4139 2024-06-08 00:45:02 +08:00
hiyouga
06e5d136a4 add resume args in webui 2024-06-08 00:22:16 +08:00
hiyouga
8bf9da659c fix #4137 2024-06-07 19:16:06 +08:00
hiyouga
f8d8690bf4 tiny fix 2024-06-07 05:19:21 +08:00
hiyouga
4489d73ac7 fix ppo trainer save zero3 model
accelerator.get_state_dict(ds_model) should be called at all ranks
2024-06-07 05:14:19 +08:00
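The rationale above is a collective-communication rule: under ZeRO-3, Accelerate's get_state_dict() gathers the sharded weights from every rank, so a rank-0-only call deadlocks. A minimal sketch of the save pattern; the function name and arguments are illustrative, not the PPO trainer's actual code.

```python
# A minimal sketch of the pattern this commit describes; names are
# illustrative assumptions, not the repository's trainer code.
import torch
from accelerate import Accelerator

def save_zero3_model(accelerator: Accelerator, ds_model, save_path: str) -> None:
    # Collective under ZeRO-3: every rank must participate in the gather,
    # or the ranks that skipped it leave the others waiting forever.
    state_dict = accelerator.get_state_dict(ds_model)
    if accelerator.is_main_process:  # only one rank writes the file
        torch.save(state_dict, save_path)
```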
hiyouga
2702d7e952 fix ppo in trl 0.8.6 2024-06-07 04:48:29 +08:00
hiyouga
f9e818d79c fix #4120 2024-06-07 04:18:05 +08:00
hiyouga
ccc8b64cc2 update data processors 2024-06-07 04:15:40 +08:00
hoshi-hiyouga
181dbb0d05 Merge pull request #4009 from AlongWY/main
supervised packing with greedy knapsack algorithm
2024-06-07 03:48:46 +08:00
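A minimal sketch of a greedy (first-fit decreasing) knapsack pass like the one this PR names: sort sequence lengths in descending order, then place each into the first bin that still has room under the cutoff length. Names are illustrative, not the merged implementation.

```python
# A minimal sketch, assuming packing by token counts alone; the merged
# code operates on tokenized examples rather than bare lengths.
def greedy_knapsack(lengths: list[int], capacity: int) -> list[list[int]]:
    bins: list[list[int]] = []   # lengths packed into each bin
    remaining: list[int] = []    # free space left in the matching bin
    for length in sorted(lengths, reverse=True):
        for i, free in enumerate(remaining):
            if length <= free:   # first bin with room wins
                bins[i].append(length)
                remaining[i] -= length
                break
        else:                    # no bin fits: open a new one
            bins.append([length])
            remaining.append(capacity - length)
    return bins

# e.g. greedy_knapsack([7, 5, 4, 3, 1], capacity=8) -> [[7, 1], [5, 3], [4]]
```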
hoshi-hiyouga
c09ad8bab3 Update supervised.py 2024-06-07 03:42:08 +08:00
hoshi-hiyouga
788e8232fc Update supervised.py 2024-06-07 03:38:23 +08:00
hoshi-hiyouga
8cecade708 Update supervised.py 2024-06-07 03:38:04 +08:00
hiyouga
8e95648850 add qwen2 models 2024-06-07 00:22:57 +08:00
hiyouga
74f96efef9 rename files 2024-06-07 00:09:06 +08:00
hiyouga
45d8be8f93 add DISABLE_TORCHRUN option 2024-06-06 23:44:58 +08:00
hoshi-hiyouga
55c18c49b0 Merge pull request #4082 from MengqingCao/bugfix
Fix #4077
2024-06-06 23:38:40 +08:00
hoshi-hiyouga
751dd77bc0 Update cli.py 2024-06-06 23:38:09 +08:00
hiyouga
76c61905b2 fix ppo+zero3 #3108 2024-06-06 23:30:07 +08:00
hiyouga
451b6693c0 fix torch gc 2024-06-06 20:30:25 +08:00
hiyouga
149610c636 fix ppo dataset bug #4012 2024-06-06 19:03:20 +08:00
hiyouga
fad2591e31 update trainers 2024-06-06 18:45:49 +08:00
hiyouga
67aa78cde0 fix base64 image read #4061 2024-06-06 17:29:19 +08:00
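For context on the kind of input this commit handles, a minimal sketch of decoding an image from a base64 string with Pillow; the dependency and names are assumptions, not the repository's code.

```python
# A minimal sketch, assuming Pillow is available; names are illustrative.
import base64
import io
from PIL import Image

def load_base64_image(data: str) -> Image.Image:
    # Strip an optional data-URI prefix such as "data:image/png;base64,".
    if "," in data:
        data = data.split(",", 1)[1]
    return Image.open(io.BytesIO(base64.b64decode(data)))
```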
hiyouga
cae4737907 lora modules: all by default 2024-06-06 03:53:28 +08:00
hiyouga
c23cc63d3d add codestral 22B 2024-06-06 03:42:50 +08:00