Mirror of https://github.com/hiyouga/LLaMA-Factory.git (synced 2025-12-16 20:00:36 +08:00)
LLaMA-Factory / src / llmtuner / extras (tree at commit 65c5b0477c0e62691a1f8790670ba04d7f6d2804)
Latest commit: c4a3977ad7 by hiyouga, "add max_memory for gptq" (#1923), 2023-12-20 18:15:17 +08:00
Name           Last commit message                                Date
patches/       disentangle model from tuner and rename modules    2023-11-15 16:29:09 +08:00
__init__.py    modity code structure                              2023-07-15 16:54:28 +08:00
callbacks.py   fix (#1696)                                        2023-12-01 15:34:50 +08:00
constants.py   add xverse-65B-2 model                             2023-12-18 19:24:09 +08:00
logging.py     fix bug in web ui                                  2023-11-16 15:21:24 +08:00
misc.py        add max_memory for gptq (#1923)                    2023-12-20 18:15:17 +08:00
packages.py    support dpo-ftx                                    2023-12-16 19:21:41 +08:00
ploting.py     disentangle model from tuner and rename modules    2023-11-15 16:29:09 +08:00