bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License · 6.31k stars · 634 forks
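For orientation, the "k-bit quantization for PyTorch" in the description above is what most of the issues below revolve around (4-bit/NF4 layers, Linear8bitLt, 8-bit optimizers). A minimal sketch, not taken from this page and assuming a CUDA build of bitsandbytes is installed, of swapping a standard optimizer for the library's 8-bit Adam:

```python
# Minimal sketch: drop-in 8-bit Adam from bitsandbytes.
# Assumes a CUDA-capable GPU and a CUDA build of bitsandbytes.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()

# Adam8bit keeps optimizer state in blockwise-quantized 8-bit buffers;
# blocksize and bf16 support for these optimizers come up in the issues
# below (e.g. #1365).
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

x = torch.randn(8, 4096, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```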
Issues (sorted by newest)
#1370 Release 0.44.0 does not belong to the current repository · ccoulombe · closed 1 month ago · 2 comments
#1369 Add workflow to publish tagged releases to PyPI · matthewdouglas · closed 1 month ago · 1 comment
#1368 Update pandas requirement from ~=2.2.2 to ~=2.2.3 in the major group · dependabot[bot] · closed 1 month ago · 2 comments
#1367 Enable packaging for ROCm 6.2 · pnunna93 · closed 2 months ago · 1 comment
#1366 cpu benchmark · jiqing-feng · closed 2 months ago · 1 comment
#1365 Change 8bit optimizer blocksize 2048->256; additional bf16 support · matthewdouglas · closed 2 months ago · 2 comments
#1363 RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmEx · LukeLIN-web · opened 2 months ago · 1 comment
#1362 Bump the minor-patch group with 3 updates · dependabot[bot] · closed 2 months ago · 0 comments
#1361 Update matplotlib requirement from ~=3.9.1 to ~=3.9.2 in the major group · dependabot[bot] · closed 2 months ago · 0 comments
#1360 Add AdEMAMix optimizer · matthewdouglas · closed 2 months ago · 1 comment
#1359 Merge LoRA into 405B · junzhang-zj · opened 2 months ago · 4 comments
#1358 check grad before using ipex · jiqing-feng · closed 2 months ago · 1 comment
#1357 Nf4 grad · jiqing-feng · closed 2 months ago · 0 comments
#1356 Lion Optimizer With Triton Kernel · lapp0 · closed 2 months ago · 1 comment
#1355 cuda is available but import bnb error · ZeroneBo · opened 2 months ago · 2 comments
#1354 Model not able to quantize · alielfilali01 · opened 2 months ago · 0 comments
#1353 add CPU benchmark · jiqing-feng · closed 2 months ago · 1 comment
#1352 docs: add internal reference to multi-backend guide · Titus-von-Koeller · closed 2 months ago · 1 comment
#1351 docs: add internal reference to multi-backend guide · Titus-von-Koeller · closed 2 months ago · 1 comment
#1350 Bug when using optimizer LAMB 32bits · FrsECM · opened 2 months ago · 0 comments
#1349 fix nf4 memory issue by init op_context in forward · jiqing-feng · closed 2 months ago · 3 comments
#1348 Nf4 reload · jiqing-feng · closed 2 months ago · 1 comment
#1347 Torch autograd support for dequantize methods · yaldashbz · opened 2 months ago · 0 comments
#1346 Bump the minor-patch group with 2 updates · dependabot[bot] · closed 2 months ago · 2 comments
#1345 Update matplotlib requirement from ~=3.9.1 to ~=3.9.2 in the major group · dependabot[bot] · closed 2 months ago · 2 comments
#1344 Add `move_to_device` kwarg to the optimizer's `load_state_dict` · koute · closed 2 months ago · 1 comment
#1343 Cannot load decoder.lm_head.weight when loading 4 bit quantized model using VisionEncoderDecoder.from_pretrained · AditiJain14 · opened 2 months ago · 1 comment
#1342 quantize_4bit/dequantize_4bit gives wrong output on in-contiguous tensor · chenqianfzh · opened 2 months ago · 0 comments
#1341 Update for VS2022 17.11 compatibility with CUDA < 12.4 · matthewdouglas · closed 2 months ago · 0 comments
#1336 rm warn for multi backend · jiqing-feng · closed 2 months ago · 0 comments
#1335 Bump the minor-patch group with 2 updates · dependabot[bot] · closed 2 months ago · 1 comment
#1334 Update matplotlib requirement from ~=3.9.1 to ~=3.9.2 in the major group · dependabot[bot] · closed 2 months ago · 1 comment
#1333 Update diagnostic functions for ROCm · pnunna93 · closed 2 months ago · 5 comments
#1332 Linear8bitLt can not be moved back to cpu · Nerogar · opened 3 months ago · 2 comments
#1331 Pretrained Causal LM cannot be loaded in 4bit/8bit · adrienchaton · opened 3 months ago · 6 comments
#1330 Enable certain CUDA kernels to accept specified cuda stream · jeejeelee · closed 3 months ago · 8 comments
#1329 Any plan to support block size 32? · lllyasviel · opened 3 months ago · 4 comments
#1328 Cuda source cleanup, refactor and fixes · abhilash1910 · closed 2 months ago · 4 comments
#1327 RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback): · pradeep10kumar · opened 3 months ago · 1 comment
#1326 MPS progress · tcdent · opened 3 months ago · 3 comments
#1325 fix 4bit dtype · jiqing-feng · closed 3 months ago · 1 comment
#1324 Error occurred when executing DownloadAndLoadFlorence2Model: · JunpeakChen · closed 3 months ago · 1 comment
#1323 Error while trying to install the multi-backend-refactor branch for rocm in WSL2 · Kademo15 · closed 3 months ago · 1 comment
#1322 RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: · pradeep10kumar · opened 3 months ago · 1 comment
#1321 'nf4' compute datatype? · dorsa-zeinali · opened 3 months ago · 1 comment
#1320 where are the outliers stored in LLM.int8 quantization for inference using transformers library on AMD GPU? · vbayanag · opened 3 months ago · 2 comments
#1319 About fusion of **kdequantize kernel** and **simple bf16/fp16 matmul** · Ther-nullptr · opened 3 months ago · 1 comment
#1318 Bugfix: Load correct nocublaslt library variant when BNB_CUDA_VERSION override is set · matthewdouglas · closed 3 months ago · 1 comment
#1317 Communicate blocksize constraints to kernels that take blocksize as a runtime argument · mm04926412 · opened 3 months ago · 6 comments
#1316 Initial support for ppc64le · mgiessing · closed 3 months ago · 3 comments