423A35C7 / LLaMA-Factory
Mirror of https://github.com/hiyouga/LLaMA-Factory.git, synced 2025-10-16 00:28:10 +08:00
LLaMA-Factory / src / llmtuner / extras
Latest commit: dba1af4841 by hiyouga, "add max_memory for gptq" (#1923), 2023-12-20 18:15:17 +08:00
Former-commit-id: 9afc42c8b999fbbc206d9a467ca5795b27a10096
Name           Last commit message                                Last commit date
patches        disentangle model from tuner and rename modules   2023-11-15 16:29:09 +08:00
__init__.py    modity code structure                              2023-07-15 16:54:28 +08:00
callbacks.py   fix (#1696)                                        2023-12-01 15:34:50 +08:00
constants.py   add xverse-65B-2 model                             2023-12-18 19:24:09 +08:00
logging.py     fix bug in web ui                                  2023-11-16 15:21:24 +08:00
misc.py        add max_memory for gptq (#1923)                    2023-12-20 18:15:17 +08:00
packages.py    support dpo-ftx                                    2023-12-16 19:21:41 +08:00
ploting.py     disentangle model from tuner and rename modules   2023-11-15 16:29:09 +08:00