0cc4m / KoboldAI
GNU Affero General Public License v3.0 · 150 stars · 31 forks
Issues
#77 Install · Bobbylargykisjo · opened 2 months ago · 0 comments
#76 ./play-rocm.sh gptq error fedora 39 · EtereosDawn · closed 9 months ago · 1 comment
#75 ImportError: cannot import name 'url_quote' from 'werkzeug.urls' · archiebhl · opened 1 year ago · 0 comments
#74 Require Werkzeug 2 · Krisseck · closed 1 year ago · 0 comments
#73 [Regression] Can't participate in horde with `exllama` branch, stopping sharing breaks processing · InconsolableCellist · opened 1 year ago · 0 comments
#72 Support for MythoMax-L2-13B-GPTQ · ghost · opened 1 year ago · 0 comments
#71 How to load multiple graphics cards · qweronly · opened 1 year ago · 0 comments
#70 Hook up use_default_badwordids in exllama · pi6am · closed 1 year ago · 0 comments
#69 Merge branch henk717/united into exllama · pi6am · closed 1 year ago · 0 comments
#68 Merge henk717/united into exllama · pi6am · closed 1 year ago · 0 comments
#67 Merge upstream branch 'united' into exllama · pi6am · closed 1 year ago · 0 comments
#66 Modify exllama to load unrenamed gptq quantized models · pi6am · closed 1 year ago · 0 comments
#65 Add the eos token to exllama bad words. · pi6am · closed 1 year ago · 0 comments
#64 Resample to work around a bug in torch.multinomial · pi6am · closed 1 year ago · 0 comments
#63 Add stopper hooks support to exllama · pi6am · closed 1 year ago · 0 comments
#62 Strip the eos token from exllama generations. · pi6am · closed 1 year ago · 0 comments
#61 Exllama in KoboldAI emits a spurious space at the beginning of generations that end with a stop token. · pi6am · closed 1 year ago · 2 comments
#60 Significant Speed Regression on P40 compared to United · wereretot · opened 1 year ago · 0 comments
#59 Attempting to pass model params to ExLlama on startup causes an AttributeError · InconsolableCellist · opened 1 year ago · 2 comments
#58 "expected scalar type BFloat16 but found Half" · j2l · opened 1 year ago · 0 comments
#57 i keep getting a merge conflict when trying to git pull from the new updated 4bit-plugin dev branch · 0xYc0d0ne · opened 1 year ago · 1 comment
#56 when will the new update kobold just got for llama-2 be pushed here? · 0xYc0d0ne · opened 1 year ago · 1 comment
#55 cant load models 4bit · scavru · closed 1 year ago · 4 comments
#54 WinError 127 on nvfuser_codegen.dll · racinmat · opened 1 year ago · 0 comments
#53 Request for T5 gptq model support. · sigmareaver · opened 1 year ago · 1 comment
#52 Can't load 4bit models on Rocm · Infection321 · opened 1 year ago · 4 comments
#51 please add code for landmark attention to 4bit-plugin · BlairSadewitz · opened 1 year ago · 0 comments
#50 i cannot load any ai models and i keep getting this error no matter what i do. this happened after i did "git pull" command from this repository · 0xYc0d0ne · opened 1 year ago · 1 comment
#49 1 token generation in story mode · Hotohori · opened 1 year ago · 2 comments
#48 anaconda3/lib/python3.9/runpy.py:127: RuntimeWarning: 'gptq.bigcode' found in sys.modules after import of package 'gptq', but prior to execution of 'gptq.bigcode'; this may result in unpredictable behaviour · sigmareaver · opened 1 year ago · 0 comments
#47 ModuleNotFoundError: No module named 'gptq.bigcode' · sigmareaver · closed 1 year ago · 1 comment
#46 Hey, I'm not sure what's wrong, but it does automatically delete a lot of output at the end of each generation. · anyezhixie · closed 1 year ago · 4 comments
#45 Interface not loading... WSL/Windows · bbecausereasonss · opened 1 year ago · 0 comments
#44 ModuleNotFoundError when starting "play.bat" · anyezhixie · closed 1 year ago · 4 comments
#43 how i can uninstall · gandolfi974 · opened 1 year ago · 1 comment
#42 install_requirements error libmamba · TFlame82 · opened 1 year ago · 0 comments
#41 Cannot find the path specified & No module named 'hf_bleeding_edge' when trying to start. · TheFairyMan · closed 1 year ago · 20 comments
#39 Can't split 4bit model between gpu/cpu, and can't run only on cpu · tdtrumble · closed 1 year ago · 1 comment
#38 Fix for Float16 error · HarmonyTechLabs · closed 1 year ago · 2 comments
#37 Fix for float16 error · HarmonyTechLabs · closed 1 year ago · 1 comment
#36 Failed to load 4bit-128g WizardLM 7B · lee-b · opened 1 year ago · 3 comments
#35 Update README.md to GPTQ-KoboldAI 0.0.5 · YellowRoseCx · closed 1 year ago · 0 comments
#33 Slow speed for some models. · BadisG · opened 1 year ago · 4 comments
#31 Error involving bfloat 16 on generation with MPT 7B 4-bit_128g · Bytemixer · opened 1 year ago · 2 comments
#30 Error using previously good model. · HeroMines · closed 1 year ago · 2 comments
#29 AMD install out of date? · jthree2001 · opened 1 year ago · 6 comments
#28 Issue with loading 30b model which was previously good · szarkab123 · opened 1 year ago · 1 comment
#27 Loading a model via command line (--model) does not work in 0cc4m Branch · RandomBanana122132 · closed 1 year ago · 5 comments
#26 NameError: name 'os' is not defined after last commit · OValete · closed 1 year ago · 2 comments
#24 What is the best way to update? · silvestron · closed 1 year ago · 2 comments