commit | date | author | message
b0888262e3 | 2024-05-07 17:50:27 +08:00 | hiyouga | fix #3602
09f3ef1de4 | 2024-05-07 00:41:04 +08:00 | hiyouga | fix stop param
bcf7ec5ceb | 2024-05-07 00:37:49 +08:00 | hoshi-hiyouga | Merge pull request #3527 from zhaonx/dev ("add support for vllm api stop parameter")
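The stop-parameter work merged in #3527 threads an OpenAI-style `stop` field through to the vLLM backend. Below is a minimal sketch of how such a stop list is typically passed to vLLM's `SamplingParams`; the helper name and defaults are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch (assumed helper, not LLaMA-Factory's actual code) of passing
# an OpenAI-style "stop" field through to vLLM's SamplingParams.
from vllm import SamplingParams

def build_sampling_params(temperature: float = 0.7,
                          max_tokens: int = 512,
                          stop: list[str] | None = None) -> SamplingParams:
    # vLLM halts generation when any string in `stop` is produced; the
    # matched stop string itself is excluded from the returned text.
    return SamplingParams(
        temperature=temperature,
        max_tokens=max_tokens,
        stop=stop or [],
    )

params = build_sampling_params(stop=["</s>", "\nUser:"])
```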
17d0005b8c | 2024-05-07 00:37:05 +08:00 | hoshi-hiyouga | Update vllm_engine.py
f32eefae3d | 2024-05-07 00:28:16 +08:00 | hoshi-hiyouga | Update generating_args.py
7ae7ae64f0 | 2024-05-07 00:27:56 +08:00 | hoshi-hiyouga | Update generating_args.py
a153039380 | 2024-05-06 23:33:06 +08:00 | hiyouga | fix gradio args
34d33e2257 | 2024-05-06 21:47:00 +08:00 | hiyouga | update docs
80645751bc | 2024-05-06 10:10:00 +08:00 | zhaonx96 | "add stop parameter in chat.py"
1abd55dd59 | 2024-05-06 10:09:00 +08:00 | zhaonx96 | Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev
bd095eeb73 | 2024-05-05 02:44:35 +08:00 | hiyouga | add version and help to cli
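Commit bd095eeb73 adds `version` and `help` commands to the CLI. A minimal sketch of one way to wire such a dispatcher is below; the command names and version string (taken from the v0.7.0 release commit further down this log) are assumptions, not the project's actual implementation.

```python
# Illustrative sketch only (assumed names, not LLaMA-Factory's actual CLI):
# a simple dispatcher exposing `version` and `help` subcommands.
import sys

VERSION = "0.7.0"  # assumed from the "release v0.7.0" commit in this log

USAGE = """\
Usage:
  cli.py <command>

Commands:
  version    print the package version
  help       show this message
"""

def main() -> None:
    command = sys.argv[1] if len(sys.argv) > 1 else "help"
    if command == "version":
        print(f"CLI version {VERSION}")
    else:
        print(USAGE)

if __name__ == "__main__":
    main()
```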
af596988b1 | 2024-05-05 00:17:54 +08:00 | hiyouga | update webui
e984ba3167 | 2024-05-04 16:13:52 +08:00 | hiyouga | remove empty stream response
941924fdbd | 2024-05-04 16:11:18 +08:00 | hiyouga | fix async stream api response
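The two streaming fixes above (e984ba3167, 941924fdbd) concern the async API stream. Below is a minimal sketch, assuming an OpenAI-style server-sent-events stream, of an async generator that drops empty deltas rather than emitting blank chunks; all names and payload shapes here are illustrative assumptions.

```python
# Minimal sketch (assumed interfaces, not the project's actual API code):
# stream text chunks as SSE events, skipping empty deltas so clients
# never receive blank stream responses.
import asyncio
import json
from typing import AsyncIterator

async def sse_stream(chunks: AsyncIterator[str]) -> AsyncIterator[str]:
    async for delta in chunks:
        if not delta:          # drop empty deltas instead of sending them
            continue
        yield f"data: {json.dumps({'delta': delta})}\n\n"
    yield "data: [DONE]\n\n"   # OpenAI-style stream terminator

async def fake_model_output() -> AsyncIterator[str]:
    for piece in ["Hel", "", "lo", "", "!"]:  # "" simulates empty deltas
        await asyncio.sleep(0)
        yield piece

async def main() -> None:
    async for event in sse_stream(fake_model_output()):
        print(event, end="")

asyncio.run(main())
```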
ed8f8be752 | 2024-05-04 15:59:15 +08:00 | hiyouga | update api and support abort eval in webui
9d2ce57345 | 2024-05-04 00:43:02 +08:00 | hiyouga | update readme and webui launch
24cc93ab15 | 2024-05-04 00:19:19 +08:00 | hiyouga | fix eval in webui
510e64ee70 | 2024-05-03 23:15:19 +08:00 | hiyouga | fix webui resume
3010154adb | 2024-05-03 23:06:52 +08:00 | hiyouga | fix slow op in dpo/orpo trainer
9585838ebe | 2024-05-03 21:24:27 +08:00 | hiyouga | fix callback log multigpu #3559
5e6f808e3c | 2024-05-03 04:42:50 +08:00 | hiyouga | enable tqdm in webui
17d2e5147e | 2024-05-03 04:24:50 +08:00 | hiyouga | fix gen_args
530f6b49bb | 2024-05-03 03:54:46 +08:00 | hiyouga | fix colab gradio
245fe47ece | 2024-05-03 02:58:23 +08:00 | hiyouga | update webui and add CLIs
9433c8c215 | 2024-05-02 02:47:04 +08:00 | hiyouga | fix badam configs
dcd53cb89a | 2024-05-02 02:21:27 +08:00 | hoshi-hiyouga | Update train.py
42edc81585 | 2024-04-30 17:17:09 +08:00 | zhaonx | "add support for vllm api stop parameter"
26f7170393 | 2024-04-28 11:31:34 +08:00 | codingma | support BAdam in WebUI
b3e33c703e | 2024-04-28 03:01:49 +08:00 | hiyouga | fix llava rlhf
4dbbce21d5 | 2024-04-28 01:50:30 +08:00 | hiyouga | add models to 0.7.0
168f56683a | 2024-04-26 23:18:00 +08:00 | hiyouga | release v0.7.0
375b25131b | 2024-04-26 19:59:22 +08:00 | hiyouga | support Qwen1.5 110B
fc67b736ba | 2024-04-26 18:00:23 +08:00 | hiyouga | fix llava qlora
cd3a960f81 | 2024-04-26 06:41:35 +08:00 | hiyouga | add llava to llamaboard
27ba1b63ce | 2024-04-26 05:44:30 +08:00 | hiyouga | update readme
e057c8de48 | 2024-04-26 05:34:58 +08:00 | hiyouga | support mllm hf inference
7f3bd35c0e | 2024-04-26 04:10:28 +08:00 | hoshi-hiyouga | Update preprocess.py
fcd09112d5 | 2024-04-26 03:48:34 +08:00 | hoshi-hiyouga | Update aligner.py
f62cadb258 | 2024-04-26 03:35:39 +08:00 | hoshi-hiyouga | Update parser.py
3408af236f | 2024-04-26 03:33:07 +08:00 | hoshi-hiyouga | Update loader.py
e16f128dc3 | 2024-04-26 03:29:12 +08:00 | hoshi-hiyouga | Update workflow.py
7d812ed841 | 2024-04-26 03:22:40 +08:00 | hoshi-hiyouga | Update loader.py
860549b99b | 2024-04-26 02:49:39 +08:00 | hoshi-hiyouga | update hparam name
646a7885e7 | 2024-04-26 02:20:47 +08:00 | hoshi-hiyouga | delete llava template (use vicuna)
a7ead1440f | 2024-04-25 22:59:46 +08:00 | BUAADreamer | modify some bug
ece78a6d6a | 2024-04-25 22:40:53 +08:00 | BUAADreamer | modify some style
d29f3798f6 | 2024-04-25 22:40:25 +08:00 | BUAADreamer | modify some style
31420f7b31 | 2024-04-25 22:35:17 +08:00 | BUAADreamer | merge some func
c27f7fbf62 | 2024-04-25 22:04:09 +08:00 | BUAADreamer | modify some style
2d4ded535f | 2024-04-25 21:58:18 +08:00 | BUAADreamer | modify some style