codingma | 1ccc6153c7 | 2024-07-13 13:16:22 +08:00
1. fix output_dir in llama3_lora_pretrain.yaml
2. add llava1_5.yaml for inference
Former-commit-id: 982a1cdd24dfa51535af3e49c7ea80fddc95b0ee

hzhaoy | 955e01c038 | 2024-07-12 00:28:44 +08:00
tiny fix
Former-commit-id: 8bab99c5829a80752e461cf65a9124fdea609676

hzhaoy | 93ba3bd5b0 | 2024-07-12 00:25:48 +08:00
fix #4780
Former-commit-id: 642c6d666f3bd00fcdea45c65a6394bcae9c2080

hzhaoy | b3e4793ded | 2024-07-12 00:15:15 +08:00
fix #4779
Former-commit-id: a8bf1abf0fd39f84748c94ac3ba39eaa53137529

codemayq | 0fa59c9b4c | 2024-07-11 20:03:39 +08:00
update wechat_npu.jpg
Former-commit-id: 67040f149c0b3fbae443ba656ed0dcab0ebaf730

hoshi-hiyouga | f85187b4dd | 2024-07-10 13:51:50 +08:00
Merge pull request #4700 from marko1616/patch-1
Fix Windows command preview
Former-commit-id: 555194e15026c444b2bd1c09f521950cbff86c21

hoshi-hiyouga | 2528487847 | 2024-07-10 13:32:49 +08:00
Merge pull request #4746 from yzoaim/fix
fix src/llamafactory/train/callbacks.py
Former-commit-id: 40c3b88b68b205e4124a9704d73500e3c404364d

hoshi-hiyouga | 4edd7c3529 | 2024-07-10 13:32:20 +08:00
Update callbacks.py
Former-commit-id: 39cd89ce17220dc50c8331299ae5af230fe40cc9

-.- | 973aac3203 | 2024-07-10 12:05:51 +08:00
fix src/llamafactory/train/callbacks.py
Former-commit-id: cff89a2e8907f3fe89406006105cb6494e2ee993

hiyouga | a9ce54d143 | 2024-07-10 11:32:36 +08:00
fix #4731
Former-commit-id: 51942acee84cdb20002f8fdccf6be8c7fe9bd0d3

hiyouga | d7130ec635 | 2024-07-10 11:05:45 +08:00
fix ppo trainer
Former-commit-id: fb0c40011689b3ae84cc3b258bf3c66af3e1e430

hiyouga | aa15ca1719 | 2024-07-09 23:24:24 +08:00
fix #4742
Former-commit-id: 2f09520c0d5039a5a8be310ab668272cb4dc1bd3

hiyouga | 7e9d51fb95 | 2024-07-09 09:25:11 +08:00
Update wechat.jpg
Former-commit-id: 86b1594823f3e7d61c61981d53f353a9724ea9c4

hoshi-hiyouga | 553e517f0f | 2024-07-07 15:50:38 +08:00
Merge pull request #4706 from T-Atlas/main
chore: Update vllm_engine.py to support vllm version >= 0.5.1
Former-commit-id: 563a27dab7e66d9454c6a09404c354d9fca06908

hoshi-hiyouga | 7483e187c6 | 2024-07-07 15:48:29 +08:00
Update packages.py
Former-commit-id: f84b007ebbb9fa63f797b4bd1c487372877bbc65

Lian Junhong | 7ca84e0a09 | 2024-07-07 15:08:12 +08:00
chore: Update vllm_engine.py to support vllm version >= 0.5.1
Former-commit-id: 322663bf90ce7b99ca5b0b43ff9dbd95eb36ff6b

hiyouga | f3c105f088 | 2024-07-07 13:10:06 +08:00
fix #4705
Former-commit-id: a15782cb9f3ee64ba1f5fc2a3da20ac6c6ef0aa0
marko1616 | c8205c5163 | 2024-07-06 20:40:13 +08:00
Update utils.py
In Windows, a multiline command should look like:
command --arg1 xxx `
--arg2 xxx `
Former-commit-id: e0562521bbd7cf6b3b90f8c87e52690931f736bd
hiyouga | 7fcffb860d | 2024-07-06 16:16:47 +08:00
add codegeex4, internlm2.5
Former-commit-id: 53b1002fb74123095e7466c75b941a31a7cfba4d

hiyouga | d97bb11821 | 2024-07-06 15:47:32 +08:00
update pissa example
Former-commit-id: c9bb0757ecfa90ba456e2ef7b38e64dbb809265d

codingma | 74f0d02eb8 | 2024-07-05 15:52:10 +08:00
1. add custom eval dataset support
2. merge load dataset and split dataset function
Former-commit-id: 76f3bbcfc0e11aa41f8f5cbebc60b77b987f7901

hiyouga | 8379a39776 | 2024-07-05 08:33:22 +08:00
fix processors
Former-commit-id: 9f33f1edf544807e498f60881f30b00149fe570f

hiyouga | 9aa3403687 | 2024-07-05 00:58:05 +08:00
fix #4683
Former-commit-id: e43809bced009323b3bac9accdd3baa3a2836fdb

hiyouga | 956e555310 | 2024-07-05 00:41:03 +08:00
fix #4674
Former-commit-id: ed232311e857865da2f493d3ead9a9ffa44953e9

hiyouga | c1262dbf94 | 2024-07-04 14:23:37 +08:00
Merge branch 'main' of https://github.com/hiyouga/LLaMA-Factory
Former-commit-id: 226a9e563f15ad125856db371871e6f4a3d3eef0

hiyouga | e17f12fcad | 2024-07-04 14:22:07 +08:00
fix #4677
Former-commit-id: 1e27e8c776acadf312804a6d9a243955427e9978

hoshi-hiyouga | d08456c0ce | 2024-07-04 10:40:41 +08:00
Merge pull request #4673 from hzhaoy/main
tiny fix
Former-commit-id: 07d96d497ca807cad1a6941ec27b019fc6769e06

hzhaoy | 6d892dbc23 | 2024-07-04 10:20:28 +08:00
tiny fix
Former-commit-id: 738df477485de3633049651a9f1d498adf95a3d5

hiyouga | aa14a625e4 | 2024-07-04 04:00:12 +08:00
update tests
Former-commit-id: 636bb9c1e65e72c3a27049dacb3200234d1c2782

hiyouga | d7657d772d | 2024-07-04 03:47:05 +08:00
tiny fix
Former-commit-id: 0c699de39de06eac96af67e8dd4fc4c53335b17e

hiyouga | cbb93a2b47 | 2024-07-04 03:02:23 +08:00
tiny fix
Former-commit-id: 44747cebd28d0b800196f032e18d2f4ff51ee5b3

hiyouga | 4987aa32ba | 2024-07-04 03:01:31 +08:00
fix data map for packing
Former-commit-id: b5d101e1bf435731e6b8e5aed8727ddfb021e4f0

hiyouga | c15210a312 | 2024-07-04 01:55:05 +08:00
update wechat
Former-commit-id: b03e4a74bab17d7fdce36c48123126f502c3f98b

hiyouga | 7b3c1f29ff | 2024-07-04 01:52:43 +08:00
fix packing for eager/sdpa attn
Former-commit-id: 6fd6aa4530f81a2ed306eeb2a5167607288b62c6

hoshi-hiyouga | a38ff842d0 | 2024-07-04 01:18:54 +08:00
Merge pull request #4224 from chuan298/main
Implement efficient packing without cross-contamination attention
Former-commit-id: 87d9b2d00513c163335d3f2e2bb3cb3299cecdaa
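The merge above (#4224) concerns packing several samples into one training sequence without cross-contamination, i.e. tokens from one sample must not attend to tokens from another. As a rough illustration of the idea only (not the code merged in #4224), a block-diagonal attention mask restricts each token to its own packed segment; the helper name below is hypothetical:

```python
import torch

def block_diagonal_attention_mask(seq_lengths: list[int]) -> torch.Tensor:
    """Hypothetical sketch: allow attention only within each packed sample,
    so packed samples cannot attend to (contaminate) one another.
    A causal mask would still be applied on top of this for training."""
    total_len = sum(seq_lengths)
    mask = torch.zeros(total_len, total_len, dtype=torch.bool)
    offset = 0
    for length in seq_lengths:
        # Each sample may attend only to its own block on the diagonal.
        mask[offset:offset + length, offset:offset + length] = True
        offset += length
    return mask

# Two samples of lengths 3 and 2 packed into one row of length 5:
# positions 0-2 attend only among themselves, positions 3-4 likewise.
print(block_diagonal_attention_mask([3, 2]).int())
```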
hiyouga | bfdaadcc40 | 2024-07-04 01:10:55 +08:00
update packing
Former-commit-id: cce7083024bed4c7429ddc8288d1c9190fde29f5

hoshi-hiyouga | 51c75985b8 | 2024-07-03 23:36:01 +08:00
Update packing.py
Former-commit-id: a36e8f2dd50e0f1c589457a7e785fdbc905d561d

hiyouga | 13cec0cc2f | 2024-07-03 23:29:33 +08:00
update func name
Former-commit-id: c346f79f99db5296000e4d22a65e53c26e85b344

hiyouga | e671ed520b | 2024-07-03 23:23:24 +08:00
update arg name
Former-commit-id: 8a6a7b9c8a876da9c16e5ada7df461eb8cabee21

hiyouga | ff6fc666c1 | 2024-07-03 23:18:58 +08:00
update hparams
Former-commit-id: 575a02a23d9b41d00ca6291d8a40b5bdb3cbeeec

hiyouga | b254df2d34 | 2024-07-03 23:13:49 +08:00
update ui
Former-commit-id: 7f770f6895f1e2e0b8e4f0b49088bfae096f6d3c

hiyouga | 28c8e083f4 | 2024-07-03 23:05:39 +08:00
test
Former-commit-id: a4a1ddbcb987422cd04125ff3f36f8c739061b5c

hiyouga | e5c89890b1 | 2024-07-03 20:07:44 +08:00
update scripts
Former-commit-id: 1e0c860c8c5ae8958d7105acafdac5d253a585f9

hiyouga | 3595d98b4c | 2024-07-03 19:45:51 +08:00
fix #4609
unwrap_model_for_generation(reward_model) is necessary for zero3 training
Former-commit-id: 8845e94f917b503bbee0604d7290efea7260a30c
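The note on #4609 refers to DeepSpeed ZeRO-3, where parameters are partitioned across ranks, so the reward model has to be unwrapped (its weights gathered) before it can be called directly. A minimal sketch of that pattern, assuming trl's unwrap_model_for_generation context manager; the surrounding helper is hypothetical and not the fix that was committed:

```python
import torch
from trl.models.utils import unwrap_model_for_generation  # assumed import path

def get_rewards(reward_model, accelerator, input_ids, attention_mask):
    """Hypothetical helper: score a batch with the reward model under ZeRO-3.

    unwrap_model_for_generation temporarily gathers the partitioned
    parameters so a full forward pass can run on each rank."""
    with unwrap_model_for_generation(reward_model, accelerator) as unwrapped_model:
        with torch.no_grad():
            outputs = unwrapped_model(input_ids=input_ids, attention_mask=attention_mask)
    return outputs
```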
hiyouga | 0d438e5cf4 | 2024-07-03 19:39:05 +08:00
update readme
Former-commit-id: 87346c094631b054ca975694416df324d2031c9a

hoshi-hiyouga | 34bec52cc4 | 2024-07-03 15:51:02 +08:00
Merge pull request #4662 from wzh1994/wzh/readme
Add `LazyLLM` to `Projects using LLaMA Factory` in `README.md`
Former-commit-id: 3449c3531f09f0ad45afe765bd4bb8f5d338fe75

wangzhihong | 84f8113bb1 | 2024-07-03 14:59:09 +08:00
Update README_zh.md
Former-commit-id: 6f8f53f879faf991c494ee9655a47f905fd11867

wangzhihong | 3881f4eb58 | 2024-07-03 11:12:20 +08:00
add LazyLLM to Projects using LLaMA Factory in README.md
Former-commit-id: 22da47ba27dc9c15887d21d47c456fb26fc81f5b

hiyouga | 104151d558 | 2024-07-03 02:31:50 +08:00
tiny fix
Former-commit-id: 8b1172b91085125a83a4150943873141c8bbd8bc

hiyouga | c9e9beee4e | 2024-07-02 23:06:13 +08:00
tiny fix
Former-commit-id: 71cdf8956e1640a1f3e5f6a4b86d28db70e72041