microsoft / BitNet
Official inference framework for 1-bit LLMs
MIT License · 11.39k stars · 768 forks
Issues
#121 · not generate /build/bin/llama-quantize file when compile · opened 2 hours ago by whk6688 · 1 comment
#120 · Model conversion from HF to GGUF crashes due to lack of memory · opened 17 hours ago by philtomson · 1 comment
#119 · Refactor TL1 codegen · opened 1 day ago by JCGoran · 0 comments
#118 · The current revision bf11a49 fails to build: Cannot find source file: ../../../../include/bitnet-lut-kernels.h · closed 1 day ago by yurivict · 6 comments
#117 · Getting Errors When following Readme Instruction on ARM · closed 1 day ago by zwx109473 · 2 comments
#116 · GGML_ASSERT Failed During Benchmarking Dummy Model on Apple Silicon Mac. · opened 1 week ago by MattyAB · 0 comments
#115 · Wrong and irrelevant answers with very Basic Usage · opened 1 week ago by zvigrinberg · 0 comments
#114 · docs: add git submodule update to enable successful pip install · opened 1 week ago by zvigrinberg · 0 comments
#113 · Cannot run tl1 model on iOS · opened 1 week ago by luionTW · 3 comments
#111 · Can you support reordering and vector embedding models? · opened 1 week ago by tanyo520 · 0 comments
#109 · Support for Loading GGUF Quantized Model in Python Pipeline · opened 1 week ago by NiloufarAb · 0 comments
#108 · Cannot quantize to f16 with i2_s · opened 1 week ago by pis7 · 1 comment
#107 · i2_s quantized model giving random outputs after fine-tuning. · opened 1 week ago by sovit-123 · 0 comments
#106 · Only the example works, everything else is gibberish · opened 1 week ago by ErfolgreichCharismatisch · 1 comment
#105 · Update README.md · opened 1 week ago by grctest · 0 comments
#104 · why clang ? · opened 2 weeks ago by wmx-github · 1 comment
#103 · On Ubuntu System getting error while command executing - python setup_env.py -md models/Llama3-8B-1.58-100B-tokens -q i2_s · closed 2 weeks ago by mrunal77 · 6 comments
#102 · Certain characters crash bitnet model inference? · opened 2 weeks ago by grctest · 6 comments
#101 · ggml-model-i2_s.gguf gives magic file error when adding to Ollama · opened 3 weeks ago by geoej · 0 comments
#100 · Can't build on Windows 11 · closed 2 weeks ago by sstock2005 · 1 comment
#99 · AttributeError: TL1 , when building the project on MacOS , M1 device. · opened 3 weeks ago by DoggingDog · 3 comments
#98 · Question. why cmake just use one core to compile, its so slow ? · closed 1 day ago by YouUpDay · 1 comment
#97 · e2e_benchmark.py uses incorrect build path? · closed 3 weeks ago by grctest · 1 comment
#96 · Fail to run eval_ppl and eval_task · closed 3 weeks ago by Kimho666 · 2 comments
#95 · where can i have a converted i2_s.gguf model, to convert it to gguf needs quite a lot of GPU RAM,which makes it not a good choice to run Locally. · closed 3 weeks ago by ssdutliuhaibo · 0 comments
#94 · Experiment - Running on AWS EC2 Graviton c6g.xlarge · opened 3 weeks ago by giusedroid · 0 comments
#93 · Use of undeclared identifiers [posix_memalign, free, memset] in bitnet-lut-kernels.h Ubuntu ARM · closed 3 weeks ago by giusedroid · 1 comment
#92 · is there a complete tutorial that guides me through using BitNet from scratch? · closed 2 weeks ago by hjc3613 · 1 comment
#91 · File utils/kernel_tuning.py not used. · closed 2 weeks ago by Gabrielfernandes7 · 2 comments
#90 · Question - Am I missing something? · closed 2 weeks ago by DataJuggler · 7 comments
#89 · returned non-zero exit status 3221225477 · closed 2 weeks ago by m7mdhka · 5 comments
#88 · Minor Read Me Mistake? · closed 1 day ago by DataJuggler · 1 comment
#84 · Refactor TL2 codegen · opened 3 weeks ago by JCGoran · 0 comments
#83 · Fix building on GCC toolchain · closed 2 weeks ago by JCGoran · 1 comment
#82 · Fix arm64 build issue · closed 4 weeks ago by wintersalmon · 0 comments
#81 · Add Jupyter notebook example for Bitnet.cpp · opened 4 weeks ago by swaroski · 1 comment
#79 · Getting Errors When following Readme Instruction on ARM server with ubuntu24.04 #74 · closed 2 weeks ago by MrEcco · 1 comment
#78 · How to compile a server.exe? · opened 1 month ago by clilyn1234 · 1 comment
#77 · Running on Colab - convert-hf-to-gguf-bitnet.py stops with "^C" · closed 2 weeks ago by ApurvPujari · 6 comments
#76 · BitNet model outputs repetitive gibberish ('Breis') regardless of input during text generation · closed 2 weeks ago by EladWarshawsky · 4 comments
#75 · error : cannot use 'throw' with exceptions disabled · closed 2 weeks ago by alake1 · 2 comments
#74 · Getting Errors When following Readme Instruction on ARM server with ubuntu24.04 · closed 2 weeks ago by sadaqatullah · 5 comments
#73 · readme steps don't work for me. Fails at setup_env.py · closed 2 weeks ago by byjlw · 8 comments
#72 · Fix Readme · closed 4 weeks ago by RealMrCactus · 1 comment
#71 · Does not run with i2 model; SIGABRT and assertion `!isnan(wp[i])' failed. · closed 1 month ago by lorenzog · 1 comment
#70 · Memory leak in quantize_i2_s function · closed 2 weeks ago by asalaria-cisco · 2 comments
#69 · Significant code overlap between BitNet and T-MAC, what are the specific differences? · opened 1 month ago by DRAn-An · 1 comment
#68 · Inference not working properly on x86 Ubuntu 20.04 VM while functioning correctly on M1 Pro · closed 2 weeks ago by kronenz · 2 comments
#67 · fix for empty loops and memory handling, handling of generated files · opened 1 month ago by bmerkle · 0 comments
#66 · Add list of supported model · opened 1 month ago by luohao123 · 2 comments