bitsandbytes-foundation / bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
Docs: https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License · 6.14k stars · 616 forks
Issues (sorted: newest first)
- #1326 MPS progress (tcdent, opened 1 month ago, 2 comments)
- #1325 fix 4bit dtype (jiqing-feng, closed 1 month ago, 1 comment)
- #1324 Error occurred when executing DownloadAndLoadFlorence2Model: (JunpeakChen, closed 1 month ago, 1 comment)
- #1323 Error while trying to install the multi-backend-refactor branch for rocm in WSL2 (Kademo15, closed 1 month ago, 1 comment)
- #1322 RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: (pradeep10kumar, opened 1 month ago, 1 comment)
- #1321 'nf4' compute datatype? (dorsa-zeinali, opened 1 month ago, 1 comment)
- #1320 where are the outliers stored in LLM.int8 quantization for inference using transformers library on AMD GPU? (vbayanag, opened 1 month ago, 1 comment)
- #1319 About fusion of **kdequantize kernel** and **simple bf16/fp16 matmul** (Ther-nullptr, opened 1 month ago, 1 comment)
- #1318 Bugfix: Load correct nocublaslt library variant when BNB_CUDA_VERSION override is set (matthewdouglas, closed 1 month ago, 1 comment)
- #1317 Communicate blocksize constraints to kernels that take blocksize as a runtime argument (mm04926412, opened 1 month ago, 6 comments)
- #1316 Initial support for ppc64le (mgiessing, closed 1 month ago, 3 comments)
- #1315 Unable to override PyTorch CUDA Version (tinglvv, opened 1 month ago, 4 comments)
- #1314 AdamE optimizer with decoupled L1 and L2 regularization (vincenzo-scotti, opened 1 month ago, 3 comments)
- #1313 libcudart.so Not Found (arunsandy1309, closed 1 month ago, 2 comments)
- #1312 libbitsandbytes_cpu.so,libbitsandbytes_cuda124_nocublaslt124.so (magicwang1111, closed 1 month ago, 4 comments)
- #1311 4bit quantized model.dequantize() fails on CPU (npbool, opened 2 months ago, 0 comments)
- #1310 Crash running FSDP on BF16-prequantized models (dmitrii-palisaderesearch, closed 1 month ago, 4 comments)
- #1309 Runtime Error, cannot import name 'get_keys_to_not_convert' from 'transformers.integrations' (zeruiz99, closed 1 month ago, 1 comment)
- #1308 dequantize_4bit() gives wrong output when working in cuda graph mode (chenqianfzh, closed 1 month ago, 4 comments)
- #1307 RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues (senzawapoi, opened 2 months ago, 0 comments)
- #1306 Regarding bnb import error (Mubashirshariq, opened 2 months ago, 2 comments)
- #1305 Update wheel requirement from ~=0.43.0 to ~=0.44.0 in the minor-patch group (dependabot[bot], closed 1 month ago, 1 comment)
- #1304 bitsandbytes 8bit quantized Llama 3.1 gets stuck sometimes when producing output (Techbhatia, opened 2 months ago, 0 comments)
- #1303 fix loading int8 model in CPU (jiqing-feng, closed 2 months ago, 0 comments)
- #1302 None 8bit (jiqing-feng, closed 2 months ago, 0 comments)
- #1301 fix transpose 4bit (jiqing-feng, closed 2 months ago, 0 comments)
- #1300 Fix dequant 4bit (jiqing-feng, closed 2 months ago, 0 comments)
- #1299 Enable bitsandbytes packaging for ROCm (pnunna93, closed 2 months ago, 3 comments)
- #1297 > I encountered the same issue on CUDA 11.6 and fixed it by building bitsandbytes from source. Below is my bash script for reference: (insafim, opened 2 months ago, 0 comments)
- #1296 Bump pytest from 8.3.1 to 8.3.2 in the minor-patch group (dependabot[bot], closed 2 months ago, 0 comments)
- #1295 [FSDP] Enable loading prequantized weights with bf16/fp16/fp32 quant_storage (matthewdouglas, closed 2 months ago, 0 comments)
- #1294 script to estimate qlora mem usage (Titus-von-Koeller, closed 1 month ago, 2 comments)
- #1293 FLUTE Integration for Fast Inference (HanGuo97, opened 2 months ago, 12 comments)
- #1292 Embedding4bit and Embedding8bit implementation (galqiwi, closed 2 months ago, 6 comments)
- #1290 Update fsdp_qlora.md (qgallouedec, closed 2 months ago, 1 comment)
- #1289 CUDA Setup failed despite GPU being available (Keertiraj, opened 2 months ago, 1 comment)
- #1288 Who owns bitsandbytes? (garrettbyrd, opened 2 months ago, 0 comments)
- #1287 Bump pytest from 8.2.2 to 8.3.1 in the minor-patch group (dependabot[bot], closed 2 months ago, 0 comments)
- #1286 Edenzzzz's fix for min_8bit_size functionality in Optimizer base classes (Titus-von-Koeller, closed 2 months ago, 1 comment)
- #1285 fix dtype mismatch (jiqing-feng, closed 2 months ago, 0 comments)
- #1284 Add CUDA 12.5 and update 12.4 builds (matthewdouglas, closed 2 months ago, 0 comments)
- #1283 Clarifying the quantization algorithm (chrisjmccormick, opened 2 months ago, 1 comment)
- #1282 add job to upload wheels to continuous pre-release (Titus-von-Koeller, closed 2 months ago, 0 comments)
- #1281 NameError: name 'str2optimizer32bit' is not defined (qingqinggu, opened 2 months ago, 3 comments)
- #1280 about F.igemm error? (HadXu, opened 2 months ago, 0 comments)
- #1279 Fixes for quant_storage and CPU offloading (matthewdouglas, closed 2 months ago, 3 comments)
- #1278 Update matplotlib requirement from ~=3.9.0 to ~=3.9.1 in the major group (dependabot[bot], closed 2 months ago, 0 comments)
- #1277 Error occurred when executing DownloadAndLoadMimicMotionModel - CUDA Setup failed despite GPU being available: python -m bitsandbytes (schneegecko, opened 2 months ago, 1 comment)
- #1276 Fix Windows CUDA build compatibility with newest MSVC (matthewdouglas, closed 2 months ago, 1 comment)
- #1275 fix broken <source> links in autodoc API reference (Titus-von-Koeller, closed 2 months ago, 0 comments)
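Several of the issues above (e.g. #1283 "Clarifying the quantization algorithm" and #1321 on the 'nf4' compute datatype) concern how k-bit blockwise quantization works. The sketch below is a minimal pure-Python illustration of blockwise absmax quantization, the general scheme bitsandbytes-style quantizers build on; all function names and the toy 2-bit codebook here are made up for illustration and are not the library's API or its actual NF4 codebook, which has 16 levels and runs in fused CUDA kernels.

```python
def quantize_blockwise(values, codebook, blocksize=64):
    """Split `values` into blocks, scale each block by its absolute
    maximum, and map each scaled value to the nearest codebook entry."""
    absmaxes, codes = [], []
    for start in range(0, len(values), blocksize):
        block = values[start:start + blocksize]
        absmax = max(abs(v) for v in block) or 1.0  # avoid divide-by-zero
        absmaxes.append(absmax)
        for v in block:
            scaled = v / absmax  # now in [-1, 1]
            # nearest-neighbor lookup into the codebook
            idx = min(range(len(codebook)),
                      key=lambda i: abs(codebook[i] - scaled))
            codes.append(idx)
    return codes, absmaxes

def dequantize_blockwise(codes, absmaxes, codebook, blocksize=64):
    """Invert the mapping: codebook lookup times the block's absmax."""
    return [codebook[idx] * absmaxes[i // blocksize]
            for i, idx in enumerate(codes)]

# Toy 2-bit codebook (4 levels); real NF4 uses 16 levels placed at the
# quantiles of a standard normal distribution.
codebook = [-1.0, -0.33, 0.33, 1.0]
data = [0.1, -0.5, 2.0, -0.05]
codes, absmaxes = quantize_blockwise(data, codebook, blocksize=4)
restored = dequantize_blockwise(codes, absmaxes, codebook, blocksize=4)
# restored approximates [0.66, -0.66, 2.0, -0.66]; the block's absmax
# element (2.0) is recovered exactly, everything else only coarsely.
```

Storing one absmax per block (rather than per tensor) is what bounds the quantization error of each block independently, which is why issues about blocksize constraints (e.g. #1317) matter for accuracy.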