hoshi-hiyouga | 17975fefd7 | Update tuner.py | Former-commit-id: 13851fb045 | 2024-05-11 23:54:53 +08:00
BUAADreamer | c3e15c049b | Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory | Former-commit-id: 7944cbc576 | 2024-05-11 13:11:10 +08:00
BUAADreamer | 743d0f22b7 | add full parameter finetuning of mllm | Former-commit-id: 7be7972f28 | 2024-05-11 13:11:00 +08:00
kkkl | 639c6e9cad | Update constants.py: fix the download issue of the Phi3 model | Former-commit-id: b5c5c315a5 | 2024-05-11 00:22:40 +08:00
BUAADreamer | 012d93da84 | Merge branch 'hiyouga:main' into main | Former-commit-id: 508d474754 | 2024-05-10 20:34:41 +08:00
hiyouga | 9dc209b458 | resolve python 3.8 package | Former-commit-id: 75aec4cf8e | 2024-05-09 16:52:27 +08:00
Tendo33 | dd42439b03 | 1. Rename the is_fastapi_available function; 2. Add request logging when deploying with vLLM | Former-commit-id: fd2e6dec58 | 2024-05-09 14:28:01 +08:00
BUAADreamer | a185cf7e18 | add push processor to hub | Former-commit-id: 8b997e32fb | 2024-05-09 14:05:19 +08:00
BUAADreamer | db30e9089a | Merge branch 'hiyouga:main' into main | Former-commit-id: 83f2f0de1d | 2024-05-09 13:45:43 +08:00
cocktailpeanut | 2370e7403f | yet another removal of unnecessary environment variables | Former-commit-id: 3c11157a49 | 2024-05-09 01:33:20 -04:00
cocktailpeanut | 58c5a5afaf | more removal of unnecessary environment variables | Former-commit-id: 425b9d6166 | 2024-05-09 01:32:00 -04:00
cocktailpeanut | de509fa081 | remove unnecessary environment variable usage | Former-commit-id: b783673e0a | 2024-05-09 01:26:15 -04:00
BUAADreamer | f40b602c41 | add mllm export | Former-commit-id: ef33856380 | 2024-05-08 22:50:42 +08:00
hiyouga | 5ff89a0f32 | fix #3625 | Former-commit-id: d9cdddd19c | 2024-05-08 17:12:56 +08:00
hiyouga | 0000894bbe | add llama3 chinese chat | Former-commit-id: 48ee46dac1 | 2024-05-08 17:10:03 +08:00
hiyouga | 1d3fb90590 | add deepseek moe 236B | Former-commit-id: 10ab83f4c4 | 2024-05-08 16:37:54 +08:00
BUAADreamer | 3534a75bcc | modify export model | Former-commit-id: 0ca1d1967d | 2024-05-08 10:36:36 +08:00
hiyouga | 57fee01114 | fix #3560 | Former-commit-id: 0f8f7d3b90 | 2024-05-07 19:03:35 +08:00
hiyouga | 52cc6bce38 | fix #3602 | Former-commit-id: b0888262e3 | 2024-05-07 17:50:27 +08:00
hiyouga | 175a7ea951 | fix stop param | Former-commit-id: 09f3ef1de4 | 2024-05-07 00:41:04 +08:00
hoshi-hiyouga | c198db4db2 | Merge pull request #3527 from zhaonx/dev: "add support for vllm api stop parameter" | Former-commit-id: bcf7ec5ceb | 2024-05-07 00:37:49 +08:00
hoshi-hiyouga | df66b288a2 | Update vllm_engine.py | Former-commit-id: 17d0005b8c | 2024-05-07 00:37:05 +08:00
hoshi-hiyouga | 4c91104471 | Update generating_args.py | Former-commit-id: f32eefae3d | 2024-05-07 00:28:16 +08:00
hoshi-hiyouga | d65b2332cf | Update generating_args.py | Former-commit-id: 7ae7ae64f0 | 2024-05-07 00:27:56 +08:00
hiyouga | 89e7cabaa9 | fix gradio args | Former-commit-id: a153039380 | 2024-05-06 23:33:06 +08:00
hiyouga | eb21a527a6 | update docs | Former-commit-id: 34d33e2257 | 2024-05-06 21:47:00 +08:00
zhouwei | 7b0629dac4 | Improve Ascend 910A training efficiency roughly tenfold by leveraging the NPU's full compute through torch_npu, the NPU-optimized PyTorch library | Former-commit-id: 28ae947161 | 2024-05-06 13:29:59 +08:00
zhaonx96 | 189346188b | "add stop parameter in chat.py" | Former-commit-id: 80645751bc | 2024-05-06 10:10:00 +08:00
zhaonx96 | 0c6c50f9b5 | Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev | Former-commit-id: 1abd55dd59 | 2024-05-06 10:09:00 +08:00
hiyouga | fa9c7eb48e | add version and help to cli | Former-commit-id: bd095eeb73 | 2024-05-05 02:44:35 +08:00
hiyouga | 9bbb5c846d | update webui | Former-commit-id: af596988b1 | 2024-05-05 00:17:54 +08:00
hiyouga | 87b9f70ab4 | remove empty stream response | Former-commit-id: e984ba3167 | 2024-05-04 16:13:52 +08:00
hiyouga | 6672ad7a83 | fix async stream api response | Former-commit-id: 941924fdbd | 2024-05-04 16:11:18 +08:00
hiyouga | c32fc1d89b | update api and support abort eval in webui | Former-commit-id: ed8f8be752 | 2024-05-04 15:59:15 +08:00
hiyouga | ed92038736 | update readme and webui launch | Former-commit-id: 9d2ce57345 | 2024-05-04 00:43:02 +08:00
hiyouga | 9fc7549d25 | fix eval in webui | Former-commit-id: 24cc93ab15 | 2024-05-04 00:19:19 +08:00
hiyouga | 340f70cd82 | fix webui resume | Former-commit-id: 510e64ee70 | 2024-05-03 23:15:19 +08:00
hiyouga | 226587fc4a | fix slow op in dpo/orpo trainer | Former-commit-id: 3010154adb | 2024-05-03 23:06:52 +08:00
hiyouga | a2cb40735b | fix callback log multigpu #3559 | Former-commit-id: 9585838ebe | 2024-05-03 21:24:27 +08:00
hiyouga | 65abcf1a94 | enable tqdm in webui | Former-commit-id: 5e6f808e3c | 2024-05-03 04:42:50 +08:00
hiyouga | 59965c2dca | fix gen_args | Former-commit-id: 17d2e5147e | 2024-05-03 04:24:50 +08:00
hiyouga | 572d25734a | fix colab gradio | Former-commit-id: 530f6b49bb | 2024-05-03 03:54:46 +08:00
hiyouga | 289d1f3679 | update webui and add CLIs | Former-commit-id: 245fe47ece | 2024-05-03 02:58:23 +08:00
hiyouga | ed8d9e0881 | fix badam configs | Former-commit-id: 9433c8c215 | 2024-05-02 02:47:04 +08:00
hoshi-hiyouga | 1d00dede8e | Update train.py | Former-commit-id: dcd53cb89a | 2024-05-02 02:21:27 +08:00
zhaonx | 4a0aab86f1 | "add support for vllm api stop parameter" | Former-commit-id: 42edc81585 | 2024-04-30 17:17:09 +08:00
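Several commits above (c198db4db2, 4a0aab86f1, 189346188b) add support for a vLLM-style `stop` parameter in the chat API. As a rough illustration only — the function name and behavior below are hypothetical, not LLaMA-Factory's actual implementation — the core idea of a `stop` parameter is to truncate generated text at the earliest occurrence of any stop string:

```python
def apply_stop_strings(text: str, stop: list[str]) -> str:
    """Truncate generated text at the earliest stop string, if any.

    Hypothetical sketch of what an API 'stop' parameter does; the real
    handling in LLaMA-Factory/vLLM happens during token generation.
    """
    cut = len(text)  # default: keep the full text
    for s in stop:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)  # stop at the earliest match
    return text[:cut]
```

For example, `apply_stop_strings("Hello!\nUser: hi", ["\nUser:"])` yields `"Hello!"`, cutting the model off before it starts imitating the next turn.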
codingma | ac76a9e140 | support BAdam in WebUI | Former-commit-id: 26f7170393 | 2024-04-28 11:31:34 +08:00
hiyouga | 506f868de7 | fix llava rlhf | Former-commit-id: b3e33c703e | 2024-04-28 03:01:49 +08:00
hiyouga | 3b42f1abce | add models to 0.7.0 | Former-commit-id: 4dbbce21d5 | 2024-04-28 01:50:30 +08:00
hiyouga | eb14501a52 | release v0.7.0 | Former-commit-id: 168f56683a | 2024-04-26 23:18:00 +08:00