LLaMA-Factory
Mirror of https://github.com/hiyouga/LLaMA-Factory.git, synced 2025-12-15 19:30:36 +08:00
Files at commit e3e2c8c689c54ebb2af264de808502e5a8ba0f2b
LLaMA-Factory / scripts
Latest commit e3e2c8c689 by hoshi-hiyouga: [inference] fix stop token for object detection (#6624)
  * fix stop token
  * update minicpm data pipeline
  * fix npu qlora examples
  2025-01-13 21:34:20 +08:00
api_example      support batch infer in vllm                              2024-12-04 13:50:00 +00:00
convert_ckpt     support batch infer in vllm                              2024-12-04 13:50:00 +00:00
stat_utils       update scripts                                           2025-01-03 10:50:32 +00:00
llama_pro.py     update scripts                                           2025-01-03 10:50:32 +00:00
loftq_init.py    use pre-commit                                           2024-10-29 09:07:46 +00:00
pissa_init.py    use pre-commit                                           2024-10-29 09:07:46 +00:00
vllm_infer.py    [inference] fix stop token for object detection (#6624)  2025-01-13 21:34:20 +08:00