* Update the base NPU container image version: Hugging Face Transformers requires Python >= 3.10.
* Fix a bug: the `INSTALL_DEEPSPEED` build arg should now be passed as a string (see the sketch after this list).
* Update Ascend CANN, CANN-Kernel, and the corresponding torch and torch-npu versions.
* Upgrading torch-npu requires the following package versions: torch==2.1.0 and torch-npu==2.4.0.post2.
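
For illustration, a minimal docker-compose sketch of passing `INSTALL_DEEPSPEED` as an explicit string. The service name, build context, and the `TORCH_VERSION`/`TORCH_NPU_VERSION` args are hypothetical placeholders, not the repository's actual layout:

```yaml
# Hypothetical excerpt of a docker-compose.yml for the NPU image.
# Service name and context path are placeholders.
services:
  llamafactory:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Quoting the value makes the build arg an explicit YAML string,
        # avoiding the boolean-vs-string type mismatch described above.
        INSTALL_DEEPSPEED: "false"
        # Assumed build args for the versions pinned in this PR:
        TORCH_VERSION: "2.1.0"
        TORCH_NPU_VERSION: "2.4.0.post2"
```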
This pull request increases the `shm_size` parameter in docker-compose.yml to 16 GB, giving the LLaMA-Factory framework enough shared memory for efficient data loading and parallel processing during large-model fine-tuning (see the snippet below).
This PR also addresses the shared memory limit error discussed in [this comment](https://github.com/hiyouga/LLaMA-Factory/issues/4316#issuecomment-2466270708).
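
A minimal sketch of the `shm_size` change, assuming a service named `llamafactory` (the service name is a placeholder):

```yaml
# Hypothetical docker-compose.yml excerpt; only shm_size is the point here.
services:
  llamafactory:
    # 16 GB of shared memory for the container, so DataLoader workers
    # have enough /dev/shm for inter-process data exchange.
    shm_size: "16gb"
```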