Commit Graph

  • 0ccc76392e Update tuner.py hoshi-hiyouga 2024-05-11 23:54:53 +08:00
  • e2cfcb0a5f Update README_zh.md hoshi-hiyouga 2024-05-11 22:44:51 +08:00
  • b530a798c1 Update README.md hoshi-hiyouga 2024-05-11 22:43:04 +08:00
  • fdf38b70a0 Merge branch 'main' of https://github.com/BUAADreamer/LLaMA-Factory BUAADreamer 2024-05-11 13:11:10 +08:00
  • 1a78b675be add full parameter finetuning of mllm BUAADreamer 2024-05-11 13:11:00 +08:00
  • 9b1008912c Update constants.py kkkl 2024-05-11 00:22:40 +08:00
  • 18241f4ed8 Merge branch 'hiyouga:main' into main BUAADreamer 2024-05-10 20:34:41 +08:00
  • 223bbd9930 resolve python 3.8 package hiyouga 2024-05-09 16:52:27 +08:00
  • 9dadff90bb 1. Rename the is_fastapi_available function; 2. Log incoming requests when deploying with vLLM Tendo33 2024-05-09 14:28:01 +08:00
  • 827a929f1d add push processor to hub BUAADreamer 2024-05-09 14:05:19 +08:00
  • e508519e0a add mllm processor save and Chinese-LLaVA-Med show BUAADreamer 2024-05-09 13:53:39 +08:00
  • 47892418ad Merge branch 'hiyouga:main' into main BUAADreamer 2024-05-09 13:45:43 +08:00
  • 2aeae4b88b yet another removal of unnecessary environment variables cocktailpeanut 2024-05-09 01:33:20 -04:00
  • c213f2a9a9 more removal of unnecessary environment variables cocktailpeanut 2024-05-09 01:32:00 -04:00
  • 333f4a69bb remove unnecessary environment variable usage cocktailpeanut 2024-05-09 01:26:15 -04:00
  • 172600d432 add mllm export BUAADreamer 2024-05-08 22:50:42 +08:00
  • 4ce4172c87 fix #3625 hiyouga 2024-05-08 17:12:56 +08:00
  • 400ae144a4 add llama3 chinese chat hiyouga 2024-05-08 17:10:03 +08:00
  • 0a1b6ca5a7 add deepseek moe 236B hiyouga 2024-05-08 16:37:54 +08:00
  • 05ef89cfcc modify export model BUAADreamer 2024-05-08 10:36:36 +08:00
  • 6d9d8b92ca update readme hiyouga 2024-05-07 22:17:04 +08:00
  • 3f7f1daa33 remove big file hiyouga 2024-05-07 22:14:06 +08:00
  • 8061e92d07 update readme hiyouga 2024-05-07 21:17:31 +08:00
  • 0c811a7653 update readme hiyouga 2024-05-07 19:03:47 +08:00
  • f6ac3796ca fix #3560 hiyouga 2024-05-07 19:03:35 +08:00
  • c1394e7dfc Merge pull request #3601 from Katehuuh/main hoshi-hiyouga 2024-05-07 18:01:48 +08:00
  • ebab655683 fix #3602 hiyouga 2024-05-07 17:50:27 +08:00
  • 3d74f21738 Merge pull request #3604 from gaussian8/main hoshi-hiyouga 2024-05-07 16:53:23 +08:00
  • 8493753fab fix: split Dockerfile's CMD junwooo.lee 2024-05-07 15:09:48 +09:00
  • 0f626a2145 Update README_zh.md Katehuuh 2024-05-07 06:28:48 +02:00
  • 5100c290c4 Update README.md Katehuuh 2024-05-07 06:23:36 +02:00
  • 4bde37e7c8 update readme hiyouga 2024-05-07 06:19:29 +08:00
  • e3b3a722de fix stop param hiyouga 2024-05-07 00:41:04 +08:00
  • b9e167e6ca Merge pull request #3527 from zhaonx/dev hoshi-hiyouga 2024-05-07 00:37:49 +08:00
  • 1ebd1e50e7 Update vllm_engine.py hoshi-hiyouga 2024-05-07 00:37:05 +08:00
  • 14316f6583 Update generating_args.py hoshi-hiyouga 2024-05-07 00:28:16 +08:00
  • 8e4ab2f7d0 Update generating_args.py hoshi-hiyouga 2024-05-07 00:27:56 +08:00
  • 196068fa19 update readme hiyouga 2024-05-06 23:34:59 +08:00
  • da2295f8c8 fix gradio args hiyouga 2024-05-06 23:33:06 +08:00
  • ab0741b5a6 Merge pull request #3596 from hiyouga/dev_doc hoshi-hiyouga 2024-05-06 23:10:38 +08:00
  • 6aec446940 update examples hiyouga 2024-05-06 23:07:55 +08:00
  • 50c71dd29f update example docs hiyouga 2024-05-06 22:51:02 +08:00
  • 5c9da798b5 update docs hiyouga 2024-05-06 21:47:00 +08:00
  • 3d1b0e1864 Significantly improve Ascend 910A training efficiency (roughly tenfold) by fully utilizing the NPU (Neural Processing Unit) through torch_npu, the NPU-optimized PyTorch library zhouwei 2024-05-06 13:29:59 +08:00
  • 45becd2a45 add stop parameter in chat.py zhaonx96 2024-05-06 10:10:00 +08:00
  • 8f1197de7e Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev zhaonx96 2024-05-06 10:09:00 +08:00
  • 25de4ce56a Merge pull request #3578 from pha123661/main hoshi-hiyouga 2024-05-05 23:41:58 +08:00
  • d0597897bf Fix badam example outdated argument Oscar 2024-05-05 23:35:19 +08:00
  • 4674f3baa7 add version and help to cli hiyouga 2024-05-05 02:44:35 +08:00
  • 2f5f6722cf fix eval scripts hiyouga 2024-05-05 00:53:07 +08:00
  • 7ef3788ff4 update webui hiyouga 2024-05-05 00:17:54 +08:00
  • f9aa74715a update scripts hiyouga 2024-05-04 23:05:17 +08:00
  • 9b187b274c add avg ppl hiyouga 2024-05-04 22:35:31 +08:00
  • 68ed89f351 update ppl script hiyouga 2024-05-04 22:13:14 +08:00
  • 342d7da8d7 add cal_ppl script hiyouga 2024-05-04 22:02:25 +08:00
  • 6eda42eb7c update readme hiyouga 2024-05-04 17:01:21 +08:00
  • e9fe8815be remove empty stream response hiyouga 2024-05-04 16:13:52 +08:00
  • 9381fecca7 fix async stream api response hiyouga 2024-05-04 16:11:18 +08:00
  • efa9140577 update api and support abort eval in webui hiyouga 2024-05-04 15:59:15 +08:00
  • b1b18b2c5a update readme hiyouga 2024-05-04 00:43:53 +08:00
  • 37bcbf72b4 update readme and webui launch hiyouga 2024-05-04 00:43:02 +08:00
  • 99125c8825 update readme hiyouga 2024-05-04 00:31:02 +08:00
  • 182b974786 fix eval in webui hiyouga 2024-05-04 00:19:19 +08:00
  • 7a4a6a5522 fix webui resume hiyouga 2024-05-03 23:15:19 +08:00
  • 2383e5440c fix slow op in dpo/orpo trainer hiyouga 2024-05-03 23:06:52 +08:00
  • 1fea91736a fix callback log multigpu #3559 hiyouga 2024-05-03 21:24:27 +08:00
  • 09d9fb28f9 enable tqdm in webui hiyouga 2024-05-03 04:42:50 +08:00
  • 57c6eabf83 fix gen_args hiyouga 2024-05-03 04:24:50 +08:00
  • 33d440b577 fix colab gradio hiyouga 2024-05-03 03:54:46 +08:00
  • ce8200ad98 update webui and add CLIs hiyouga 2024-05-03 02:58:23 +08:00
  • 2cedb59bee Update prepare.sh hiyouga 2024-05-02 17:16:02 +08:00
  • dd0b85580e fix badam configs hiyouga 2024-05-02 02:47:04 +08:00
  • cd4dad846b Merge pull request #3487 from codemayq/main hoshi-hiyouga 2024-05-02 02:38:01 +08:00
  • a11a04a24f Update train.py hoshi-hiyouga 2024-05-02 02:21:27 +08:00
  • eb99999ca8 Update README_zh.md hoshi-hiyouga 2024-05-02 02:14:55 +08:00
  • ea58cf111e Update README.md hoshi-hiyouga 2024-05-02 02:13:46 +08:00
  • 2d95127c33 add support for vllm api stop parameter zhaonx 2024-04-30 17:17:09 +08:00
  • 57fcdca336 Update README_zh.md Lao 2024-04-28 23:31:37 +08:00
  • 3d88589c0f Upgrade the second sharegpt format khazic 2024-04-28 14:30:05 +08:00
  • dfd153cc81 added the second sharegpt format khazic 2024-04-28 14:27:45 +08:00
  • 7641a214d8 support BAdam in WebUI codingma 2024-04-28 11:31:34 +08:00
  • 3cef844079 fix setup v0.7.0 hiyouga 2024-04-28 03:49:13 +08:00
  • 4dcd47100d fix llava rlhf hiyouga 2024-04-28 03:01:49 +08:00
  • a412b4ed4a add models to 0.7.0 hiyouga 2024-04-28 01:50:30 +08:00
  • 544a6259b6 update readme hiyouga 2024-04-26 23:39:19 +08:00
  • c501f377dd release v0.7.0 hiyouga 2024-04-26 23:18:00 +08:00
  • cb8b8f40cd update readme hiyouga 2024-04-26 20:09:14 +08:00
  • 70bed8ad8f support Qwen1.5 110B hiyouga 2024-04-26 19:59:22 +08:00
  • 51f776ae2a fix llava qlora hiyouga 2024-04-26 18:00:23 +08:00
  • 697bc20941 add llava to llamaboard hiyouga 2024-04-26 06:41:35 +08:00
  • 1480e3a88f update readme hiyouga 2024-04-26 05:49:26 +08:00
  • 19029d5b0f Merge pull request #3454 from hiyouga/mllm hoshi-hiyouga 2024-04-26 05:46:29 +08:00
  • 7773ac0ead update readme hiyouga 2024-04-26 05:44:30 +08:00
  • 23b881bff1 support mllm hf inference hiyouga 2024-04-26 05:34:58 +08:00
  • 10a6c395bb Merge pull request #3450 from BUAADreamer/mllm hoshi-hiyouga 2024-04-26 05:30:30 +08:00
  • f9a7732a1f Update preprocess.py hoshi-hiyouga 2024-04-26 04:10:28 +08:00
  • c37582af02 Update aligner.py hoshi-hiyouga 2024-04-26 03:48:34 +08:00
  • ece67f8c7f Update parser.py hoshi-hiyouga 2024-04-26 03:35:39 +08:00
  • e1838e76fe Update loader.py hoshi-hiyouga 2024-04-26 03:33:07 +08:00
  • 2eede9ffd6 Update workflow.py hoshi-hiyouga 2024-04-26 03:29:12 +08:00