hoshi-hiyouga
da9e4ddd26
lint
2024-11-25 22:55:56 +08:00
hoshi-hiyouga
3924a3d6e9
Merge pull request #6140 from hiyouga/hiyouga/fix_mllama
[data] fix mllama plugin
2024-11-25 22:32:07 +08:00
hoshi-hiyouga
d87e16cf5c
fix #6139
2024-11-25 22:22:06 +08:00
hoshi-hiyouga
3a1402a4ed
Merge pull request #6138 from hiyouga/hiyouga/update_data
[data] update dataset info
2024-11-25 21:47:23 +08:00
hoshi-hiyouga
5214d3ea06
update dataset
2024-11-25 21:47:04 +08:00
hoshi-hiyouga
2b7157dc1d
Merge pull request #6137 from hiyouga/hiyouga/fix_mllama
[model] fix mllama hidden_size
2024-11-25 20:17:33 +08:00
hoshi-hiyouga
75b586c31a
fix visual patch
2024-11-25 20:06:06 +08:00
hoshi-hiyouga
0516e556a7
fix #6136
2024-11-25 19:43:42 +08:00
hoshi-hiyouga
44125da5a5
Merge pull request #6127 from hiyouga/hiyouga/dev_version
[misc] set dev version
2024-11-25 01:42:29 +08:00
hiyouga
b0ccc2ee86
set dev version
2024-11-25 01:36:49 +08:00
hoshi-hiyouga
18daf10eda
Merge pull request #6124 from hiyouga/hiyouga/release
[release] release v0.9.1
2024-11-25 00:20:02 +08:00
hoshi-hiyouga
07059a7ca4
Merge pull request #6126 from hiyouga/hiyouga/fix_vllm
[inference] fix vllm
2024-11-25 00:19:54 +08:00
hoshi-hiyouga
8e9f4617f2
Merge pull request #6010 from XYZliang/fix-#4316
Increase shm_size to 16GB in docker-compose.yml
2024-11-25 00:16:42 +08:00
hoshi-hiyouga
57953c8ff6
Merge pull request #6125 from hiyouga/hiyouga/fix_cli
[cli] remove shell=True in cli
2024-11-25 00:07:35 +08:00
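The `[cli] remove shell=True in cli` commit above, together with the later `[patch] Patch remote OS command injection vulnerability` entry, concerns the classic `subprocess` injection risk. A minimal illustrative sketch of why the argument-list form is safer (the tool name and input here are hypothetical, not the repository's actual code):

```python
import subprocess

# With shell=True, a command string is interpreted by the shell, so
# untrusted input like "foo; rm -rf ~" can inject a second command:
#   subprocess.run(f"some-tool {user_input}", shell=True)  # unsafe

# Passing an argument list invokes no shell, so shell metacharacters in
# the input are treated as literal data, not commands:
user_input = "foo; echo injected"
result = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(result.stdout.strip())  # the whole string is echoed literally
```

Note the follow-up commit `do not split save_cmd ret value` is in the same spirit: splitting a command string back into tokens can reintroduce the injection surface that the argument-list form avoids.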
hiyouga
13ee1f5cec
fix vllm
2024-11-25 00:07:24 +08:00
hiyouga
8792d78c82
fix cli
2024-11-24 23:56:21 +08:00
hiyouga
d622f8fdec
release v0.9.1
2024-11-24 23:48:41 +08:00
hoshi-hiyouga
0ce173e2a4
Merge pull request #6123 from hiyouga/hiyouga/fix_qwen2vl_vllm
[inference] fix qwen2vl vllm infer
2024-11-24 23:42:11 +08:00
hiyouga
fa50fc470e
fix qwen2vl vllm infer
2024-11-24 23:27:24 +08:00
hoshi-hiyouga
f2bfa80d55
Merge pull request #6121 from hiyouga/hiyouga/readme
[readme] update readme
2024-11-24 03:28:09 +08:00
hiyouga
a89ad72d03
update readme
2024-11-23 19:27:18 +00:00
hoshi-hiyouga
5f310d9279
Merge pull request #6120 from hiyouga/hiyouga/fix_ci
[test] fix ci
2024-11-24 03:21:11 +08:00
hiyouga
b52c38350d
fix ci
2024-11-23 19:13:32 +00:00
hoshi-hiyouga
e68ef89600
Merge pull request #5555 from marko1616/feat/llama3.2vl
Support llama3.2 vision
2024-11-24 02:49:07 +08:00
hiyouga
df477370dc
add forbidden modules
2024-11-23 18:34:15 +00:00
hiyouga
446441fdb0
fix inputs
2024-11-23 18:26:02 +00:00
marko1616
b1e43e56db
Linter.
2024-11-23 16:09:04 +00:00
marko1616
8372c5e377
Tiny fix.
2024-11-23 16:09:01 +00:00
marko1616
3f2c056253
Support llama3.2vl.
2024-11-23 16:07:35 +00:00
hoshi-hiyouga
b3aa80d54a
Merge commit from fork
[patch] Patch remote OS command injection vulnerability
2024-11-21 22:39:44 +08:00
hoshi-hiyouga
d20b97e7e9
do not split save_cmd ret value
2024-11-21 22:30:23 +08:00
superboy-zjc
aa6a174d68
[patch] Patch remote OS command injection vulnerability
2024-11-21 01:52:12 -05:00
hoshi-hiyouga
c8f199881a
Merge pull request #6098 from hiyouga/hiyouga-patch-2
update wechat
2024-11-21 14:26:03 +08:00
hoshi-hiyouga
acf491fc3a
update wechat
2024-11-21 14:25:33 +08:00
hoshi-hiyouga
bd639a137e
Merge pull request #6078 from wtmlon/support-efficient-tokens-calculation
support effective tokens calculation on sft/dpo
2024-11-20 13:43:15 +08:00
hoshi-hiyouga
fdcc78b639
Merge pull request #6083 from hiyouga/hiyouga-patch
[asset] update wechat
2024-11-20 11:46:54 +08:00
hiyouga
2f959c73b5
update wechat
2024-11-20 10:57:30 +08:00
Ting
40627c601e
code refactor
2024-11-19 20:33:18 +08:00
Ting
f566ecc8d1
update
2024-11-19 19:12:10 +08:00
Ting
ef6e14550d
update
2024-11-19 19:10:07 +08:00
Ting
b9f00286d8
support efficient tokens calculation on sft/dpo
2024-11-19 17:15:47 +08:00
hoshi-hiyouga
9c0f6556ee
Merge pull request #6065 from hiyouga/hiyouga-patch-1
[misc] fix dep package version
2024-11-18 21:13:59 +08:00
hoshi-hiyouga
4ac5b97011
fix #6061
2024-11-18 20:56:44 +08:00
hoshi-hiyouga
45f32916ce
Merge pull request #6052 from hiyouga/hiyouga-patch-1
[trainer] fix DPO metrics
2024-11-16 16:20:12 +08:00
hoshi-hiyouga
dc82821872
fix #6050
2024-11-16 16:11:16 +08:00
hoshi-hiyouga
6c0847899d
Merge pull request #6046 from hiyouga/hiyouga/add_code_model
[model] add qwen-coder and opencoder
2024-11-15 21:58:03 +08:00
hiyouga
431ac4892c
add qwen-coder and opencoder
2024-11-15 21:48:38 +08:00
codingma
8e5aad3ffa
Merge pull request #6022 from codemayq/main
update wechat
2024-11-14 10:03:46 +08:00
codemayq
fc1aa8f45c
update wechat
2024-11-14 10:02:06 +08:00
XYZliang
64414905a3
Increase shm_size to 16GB in docker-compose.yml to optimize shared memory allocation for large-scale model fine-tuning tasks.
This pull request increases the shm_size parameter in docker-compose.yml to 16GB. The goal is to enhance the LLaMA-Factory framework’s performance for large model fine-tuning tasks by providing sufficient shared memory for efficient data loading and parallel processing.
This PR also addresses the issues discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708) regarding the Shared Memory Limit error.
2024-11-13 10:13:59 +08:00
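The change described in that PR can be sketched as a docker-compose fragment (the service name is illustrative; `shm_size` is the standard Compose field, and Docker's `/dev/shm` defaults to 64 MB, which DataLoader workers in large fine-tuning jobs can exhaust):

```yaml
services:
  llamafactory:        # illustrative service name
    # 16 GB of shared memory for data loading and parallel processing
    shm_size: "16gb"
```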