hoshi-hiyouga
18daf10eda
Merge pull request #6124 from hiyouga/hiyouga/release
...
[release] release v0.9.1
2024-11-25 00:20:02 +08:00
hoshi-hiyouga
07059a7ca4
Merge pull request #6126 from hiyouga/hiyouga/fix_vllm
...
[inference] fix vllm
2024-11-25 00:19:54 +08:00
hoshi-hiyouga
8e9f4617f2
Merge pull request #6010 from XYZliang/fix-#4316
...
Increase shm_size to 16GB in docker-compose.yml
2024-11-25 00:16:42 +08:00
hoshi-hiyouga
57953c8ff6
Merge pull request #6125 from hiyouga/hiyouga/fix_cli
...
[cli] remove shell=True in cli
2024-11-25 00:07:35 +08:00
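Editor's note on the `shell=True` removal above: a minimal sketch of the general pattern, not the project's actual code. Passing the command as an argv list with `shell=False` (the default) means no shell ever parses user-influenced strings; the hostile value below is illustrative.

```python
import subprocess

# Hostile input stays a single literal argument: with shell=True the ";" would
# reach /bin/sh and start a second command, but as a list element it is just
# an opaque argv entry.
user_model_path = "some; rm -rf ~"
subprocess.run(["ls", "-l", user_model_path], check=False)
```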
hiyouga
13ee1f5cec
fix vllm
2024-11-25 00:07:24 +08:00
hiyouga
8792d78c82
fix cli
2024-11-24 23:56:21 +08:00
hiyouga
d622f8fdec
release v0.9.1
2024-11-24 23:48:41 +08:00
hoshi-hiyouga
0ce173e2a4
Merge pull request #6123 from hiyouga/hiyouga/fix_qwen2vl_vllm
...
[inference] fix qwen2vl vllm infer
2024-11-24 23:42:11 +08:00
hiyouga
fa50fc470e
fix qwen2vl vllm infer
2024-11-24 23:27:24 +08:00
hoshi-hiyouga
f2bfa80d55
Merge pull request #6121 from hiyouga/hiyouga/readme
...
[readme] update readme
2024-11-24 03:28:09 +08:00
hiyouga
a89ad72d03
update readme
2024-11-23 19:27:18 +00:00
hoshi-hiyouga
5f310d9279
Merge pull request #6120 from hiyouga/hiyouga/fix_ci
...
[test] fix ci
2024-11-24 03:21:11 +08:00
hiyouga
b52c38350d
fix ci
2024-11-23 19:13:32 +00:00
hoshi-hiyouga
e68ef89600
Merge pull request #5555 from marko1616/feat/llama3.2vl
...
Support llama3.2 vision
2024-11-24 02:49:07 +08:00
hiyouga
df477370dc
add forbidden modules
2024-11-23 18:34:15 +00:00
hiyouga
446441fdb0
fix inputs
2024-11-23 18:26:02 +00:00
marko1616
b1e43e56db
Linter.
2024-11-23 16:09:04 +00:00
marko1616
8372c5e377
Tiny fix.
2024-11-23 16:09:01 +00:00
marko1616
3f2c056253
Support llama3.2vl.
2024-11-23 16:07:35 +00:00
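Editor's note on the Llama 3.2 Vision support merged above: a hedged sketch of loading the model through Hugging Face Transformers (the PR's own changes live in LLaMA-Factory's template and model-patching code). The model id and classes below are the upstream Transformers ones, assuming transformers >= 4.45.

```python
from transformers import AutoProcessor, MllamaForConditionalGeneration

# Gated repo; requires accepted license and an authenticated HF token.
model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto")
```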
hoshi-hiyouga
b3aa80d54a
Merge commit from fork
...
[patch] Patch remote OS command injection vulnerability
2024-11-21 22:39:44 +08:00
hoshi-hiyouga
d20b97e7e9
do not split save_cmd return value
2024-11-21 22:30:23 +08:00
superboy-zjc
aa6a174d68
[patch] Patch remote OS command injection vulnerability
2024-11-21 01:52:12 -05:00
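Editor's note on the injection patch above: "do not split save_cmd return value" suggests the fix keeps the command as a list end to end instead of round-tripping it through a string. A hedged illustration of the class of bug (commands and flags below are made up):

```python
import subprocess

# UNSAFE pattern: re-splitting a saved command string lets crafted whitespace
# smuggle in extra arguments.
saved_cmd = "echo --output_dir /tmp/x --injected-flag"
subprocess.run(saved_cmd.split(), check=False)  # argv now contains "--injected-flag"

# SAFER pattern: keep the command as a list from construction to execution;
# the odd string remains one argv entry.
subprocess.run(["echo", "--output_dir", "/tmp/x --injected-flag"], check=False)
```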
hoshi-hiyouga
c8f199881a
Merge pull request #6098 from hiyouga/hiyouga-patch-2
...
update wechat
2024-11-21 14:26:03 +08:00
hoshi-hiyouga
acf491fc3a
update wechat
2024-11-21 14:25:33 +08:00
hoshi-hiyouga
bd639a137e
Merge pull request #6078 from wtmlon/support-efficient-tokens-calculation
...
support effective tokens calculation on sft/dpo
2024-11-20 13:43:15 +08:00
hoshi-hiyouga
fdcc78b639
Merge pull request #6083 from hiyouga/hiyouga-patch
...
[asset] update wechat
2024-11-20 11:46:54 +08:00
hiyouga
2f959c73b5
update wechat
2024-11-20 10:57:30 +08:00
Ting
40627c601e
code refactor
2024-11-19 20:33:18 +08:00
Ting
f566ecc8d1
update
2024-11-19 19:12:10 +08:00
Ting
ef6e14550d
update
2024-11-19 19:10:07 +08:00
Ting
b9f00286d8
support effective tokens calculation on sft/dpo
2024-11-19 17:15:47 +08:00
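Editor's note on the effective-tokens feature above: a hedged sketch of what such a metric usually means, counting only non-padding tokens (via the attention mask) rather than batch_size * seq_len, then dividing by wall-clock time for effective tokens/sec. Function and variable names here are illustrative, not the PR's code.

```python
import torch

def count_effective_tokens(attention_mask: torch.Tensor) -> int:
    # attention_mask is 1 for real tokens and 0 for padding.
    return int(attention_mask.sum().item())

batch = torch.tensor([[1, 1, 1, 0, 0],
                      [1, 1, 1, 1, 1]])  # 2 sequences padded to length 5
print(count_effective_tokens(batch))      # 8 effective vs. 10 total tokens
```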
hoshi-hiyouga
9c0f6556ee
Merge pull request #6065 from hiyouga/hiyouga-patch-1
...
[misc] fix dep package version
2024-11-18 21:13:59 +08:00
hoshi-hiyouga
4ac5b97011
fix #6061
2024-11-18 20:56:44 +08:00
hoshi-hiyouga
45f32916ce
Merge pull request #6052 from hiyouga/hiyouga-patch-1
...
[trainer] fix DPO metrics
2024-11-16 16:20:12 +08:00
hoshi-hiyouga
dc82821872
fix #6050
2024-11-16 16:11:16 +08:00
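Editor's note on the DPO metrics fix above: a hedged sketch of the standard DPO reward metrics (illustrative, not the patched trainer code). Rewards are beta-scaled log-probability ratios against the reference model; accuracy is how often the chosen response's reward beats the rejected one's.

```python
import torch

beta = 0.1  # DPO temperature; values below are toy log-probs
policy_chosen_logps = torch.tensor([-10.0, -12.0])
policy_rejected_logps = torch.tensor([-11.0, -11.5])
ref_chosen_logps = torch.tensor([-10.5, -12.5])
ref_rejected_logps = torch.tensor([-10.8, -11.0])

chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

metrics = {
    "rewards/chosen": chosen_rewards.mean().item(),
    "rewards/rejected": rejected_rewards.mean().item(),
    "rewards/accuracies": (chosen_rewards > rejected_rewards).float().mean().item(),
    "rewards/margins": (chosen_rewards - rejected_rewards).mean().item(),
}
print(metrics)
```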
hoshi-hiyouga
6c0847899d
Merge pull request #6046 from hiyouga/hiyouga/add_code_model
...
[model] add qwen-coder and opencoder
2024-11-15 21:58:03 +08:00
hiyouga
431ac4892c
add qwen-coder and opencoder
2024-11-15 21:48:38 +08:00
codingma
8e5aad3ffa
Merge pull request #6022 from codemayq/main
...
update wechat
2024-11-14 10:03:46 +08:00
codemayq
fc1aa8f45c
update wechat
2024-11-14 10:02:06 +08:00
XYZliang
64414905a3
Increase shm_size to 16GB in docker-compose.yml to optimize shared memory allocation for large-scale model fine-tuning tasks.
...
This pull request increases the shm_size parameter in docker-compose.yml to 16GB. The goal is to enhance the LLaMA-Factory framework’s performance for large model fine-tuning tasks by providing sufficient shared memory for efficient data loading and parallel processing.
This PR also addresses the issues discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708) regarding the shared memory limit error.
2024-11-13 10:13:59 +08:00
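Editor's note on the shm_size change above: a hedged sketch of the relevant docker-compose.yml fragment. The service name and surrounding keys are illustrative; only the `shm_size` key is the point, since PyTorch DataLoader workers exchange tensors through /dev/shm.

```yaml
services:
  llamafactory:
    shm_size: "16gb"   # shared memory for DataLoader workers and IPC
```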
hoshi-hiyouga
3eebae892b
Merge pull request #5990 from hiyouga/hiyouga/dev_vllm
...
[generate] fix vllm config args
2024-11-11 14:10:35 +08:00
hoshi-hiyouga
8d70edf39b
fix #5988
2024-11-11 13:57:14 +08:00
hoshi-hiyouga
2176224f4b
Merge pull request #5984 from hiyouga/hiyouga/wechat
...
[readme] update wechat
2024-11-10 22:08:55 +08:00
hiyouga
f2a44e1a2a
update wechat
2024-11-10 22:08:10 +08:00
hoshi-hiyouga
1ca6b1582f
Merge pull request #5982 from hiyouga/hiyouga/vllm_args
...
[args] add vllm config
2024-11-10 21:37:18 +08:00
hiyouga
58ab4579dc
add vllm config
2024-11-10 21:28:18 +08:00
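Editor's note on the vLLM config addition above: a hedged sketch of what such a setting amounts to, a dict of extra engine keyword arguments forwarded to vLLM's `LLM` constructor. The keys below are real vLLM parameters, but the forwarding shown is illustrative, not LLaMA-Factory's actual plumbing.

```python
from vllm import LLM

# Assumed user-supplied overrides, e.g. parsed from a JSON/YAML config field.
vllm_config = {
    "gpu_memory_utilization": 0.9,
    "max_model_len": 4096,
    "enforce_eager": True,
}

engine = LLM(model="Qwen/Qwen2.5-7B-Instruct", **vllm_config)
```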
hoshi-hiyouga
40a2fcc02d
Merge pull request #5973 from JJJJerry/fix_vllm_generate
...
fix VllmEngine: replace the `inputs` parameter with `prompt`
2024-11-10 21:04:38 +08:00
hoshi-hiyouga
a543bc478d
Update vllm_engine.py
2024-11-10 20:57:00 +08:00
JJJJerry
1d04078bb5
fix VllmEngine: replace the `inputs` parameter with `prompt`
2024-11-09 11:45:59 +08:00
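Editor's note on the rename fixed above: a hedged sketch, assuming the change tracks recent vLLM versions that take the keyword `prompt` where older ones took `inputs` in the async engine's `generate()` call. Engine setup is omitted; `text` and `rid` are placeholders.

```python
from vllm import SamplingParams

params = SamplingParams(temperature=0.7, max_tokens=128)

# Before: engine.generate(inputs={"prompt": text},
#                         sampling_params=params, request_id=rid)
# After:  engine.generate(prompt={"prompt": text},
#                         sampling_params=params, request_id=rid)
```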
hoshi-hiyouga
adc5849ce7
Merge pull request #5971 from hiyouga/hiyouga/fix_webui
...
[webui] fix extra args
2024-11-09 00:25:24 +08:00