Commit Graph

10 Commits

Author SHA1 Message Date
XYZliang
64414905a3 Increase shm_size to 16GB in docker-compose.yml to optimize shared memory allocation for large-scale model fine-tuning tasks.
This pull request raises the shm_size parameter in docker-compose.yml to 16GB, giving the LLaMA-Factory framework enough shared memory for efficient data loading and parallel processing during large-scale model fine-tuning.

This PR also addresses the issues discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708) regarding the shared-memory limit error.
2024-11-13 10:13:59 +08:00
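
The change above is a compose-level setting. A minimal sketch of the relevant fragment, with the service name assumed for illustration (it is not taken from the repository):

```yaml
# Hypothetical fragment of docker-compose.yml; the service name
# "llamafactory" is an assumption, not the repo's actual value.
services:
  llamafactory:
    # Shared memory available to the container. PyTorch DataLoader
    # workers exchange tensors through /dev/shm, so Docker's small
    # default (64MB) can trigger shared-memory limit errors during
    # large-scale fine-tuning.
    shm_size: "16gb"
```

The same effect can be achieved without Compose via `docker run --shm-size=16g`.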
hiyouga
3af57795dd tiny fix 2024-10-11 23:51:54 +08:00
StrangeBytesDev
237e302b5c Add additional install options to Dockerfiles 2024-09-24 16:54:46 -07:00
hiyouga
e44a4f07f0 tiny fix 2024-06-27 20:14:48 +08:00
hoshi-hiyouga
64b131dcfa Merge pull request #4461 from hzhaoy/feature/support-flash-attn
support flash-attn in Dockerfile
2024-06-27 20:05:26 +08:00
hzhaoy
e19491b0f0 add flash-attn installation flag in Dockerfile 2024-06-27 00:13:30 +08:00
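
An optional install flag like the one this commit adds is typically expressed as a Dockerfile build argument. The sketch below is illustrative; the ARG name is an assumption, not the repository's actual value:

```dockerfile
# Hypothetical sketch of an optional flash-attn install flag;
# the ARG name INSTALL_FLASHATTN is assumed for illustration.
ARG INSTALL_FLASHATTN=false

# Install flash-attn only when the flag is enabled at build time:
#   docker build --build-arg INSTALL_FLASHATTN=true .
RUN if [ "$INSTALL_FLASHATTN" = "true" ]; then \
        pip install flash-attn --no-build-isolation; \
    fi
```

Gating the install behind a build argument keeps the default image build fast, since flash-attn compiles CUDA kernels from source on most platforms.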
MengqingCao
106647a99d fix docker-compose path 2024-06-26 02:15:00 +00:00
hiyouga
efb81b25ec fix #4419 2024-06-25 01:51:29 +08:00
hoshi-hiyouga
15608d0558 Update docker-compose.yml 2024-06-25 00:46:47 +08:00
MengqingCao
d7207e8ad1 update docker files
1. Add docker-npu (Dockerfile and docker-compose.yml)
2. Move the CUDA docker files to docker-cuda, with minor changes to adapt to the new path
2024-06-24 10:57:36 +00:00
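
Based on the commit message, the restructured layout would look roughly like this; the parent directory and exact paths are assumptions inferred from the message, not confirmed from the repository tree:

```
docker/
├── docker-cuda/
│   ├── Dockerfile
│   └── docker-compose.yml
└── docker-npu/
    ├── Dockerfile
    └── docker-compose.yml
```

Splitting per-accelerator Docker files into sibling directories lets each backend (CUDA, Ascend NPU) carry its own base image and compose settings without conditional logic in a single Dockerfile.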