This pull request increases the `shm_size` parameter in `docker-compose.yml` to 16 GB, providing sufficient shared memory for efficient data loading and parallel processing during large-model fine-tuning with LLaMA-Factory.
This PR also addresses the shared-memory limit error discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708).
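For reference, a minimal sketch of the relevant compose fragment; the service name and image here are illustrative, not necessarily the ones used in the repository's actual `docker-compose.yml`:

```yaml
# Illustrative docker-compose fragment (service name/image are assumptions).
services:
  llamafactory:
    image: llamafactory:latest
    shm_size: "16gb"  # enlarged shared memory for DataLoader workers / IPC
```

Without a sufficiently large `shm_size`, PyTorch DataLoader workers that pass tensors through `/dev/shm` can fail once the default (often 64 MB) fills up.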
Former-commit-id: 64414905a3728abf3c51968177ffc42cfc653310
1. Add `docker-npu` (Dockerfile and docker-compose.yml).
2. Move the CUDA Docker files to `docker-cuda`, with minor changes to adapt to the new path.
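The resulting layout is presumably something like the sketch below (exact directory names beyond `docker-cuda` and `docker-npu` are assumptions):

```text
docker/
├── docker-cuda/
│   ├── Dockerfile
│   └── docker-compose.yml
└── docker-npu/
    ├── Dockerfile
    └── docker-compose.yml
```

Splitting per-accelerator Docker setups into sibling directories keeps each backend's build context independent, at the cost of updating any scripts or docs that referenced the old path.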
Former-commit-id: d7207e8ad10c7df6dcb1f5e59ff8eb06f9d77e67