dokterbob opened 1 month ago
Great question! Yes, this is inherited from llama.cpp, as noted in the Acknowledgements section. We had pushed our model support code into llama.cpp via https://github.com/ggerganov/llama.cpp/pull/7931; however, there are some framework refinements in bitnet.cpp that have hard conflicts with the original llama.cpp code, so a new repo was needed. Compared to llama.cpp, bitnet.cpp's inference result is exactly the same as the "ground truth" one. We will add more explanations to the README later, thanks.
@sd983527
> …there are some framework refinements in bitnet.cpp that have hard conflicts…
Sorry, but this doesn't sound like a very credible reason, especially given Microsoft's history of taking over other (often FOSS) code and making it their own. It should be stated clearly up front, not as a footnote, that this code is a fork of llama.cpp, and exactly why the fork was needed.
Still waiting for someone to address the questions from @dokterbob.
Can someone just open a pull request of what's been done here back to llama.cpp? Thanks, that would be better practice in my view.
> Can someone just open a pull request of what's been done here back to llama.cpp? Thanks, that would be better practice in my view.
Maybe try reading the contributor answer next time
I share the same concerns.
After checking the submodule in this repository (I personally dislike using submodules in Git), I found that it relies on an outdated fork of the original llama.cpp project. It is 320 commits behind:
https://github.com/Eddie-Wang1120/llama.cpp.git
https://github.com/microsoft/BitNet/blob/main/.gitmodules#L3
```ini
[submodule "3rdparty/llama.cpp"]
	path = 3rdparty/llama.cpp
	url = https://github.com/Eddie-Wang1120/llama.cpp.git
	branch = merge-dev
```
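For anyone who wants to verify the drift independently, here is a minimal sketch using plain git (the remote URL and branch name are taken from the snippet above; the exact commit count will change over time):

```sh
# Clone the fork the submodule pins and add upstream llama.cpp as a remote
git clone -b merge-dev https://github.com/Eddie-Wang1120/llama.cpp.git
cd llama.cpp
git remote add upstream https://github.com/ggerganov/llama.cpp.git
git fetch upstream

# Count commits on upstream's master that the fork's merge-dev branch lacks
git rev-list --count HEAD..upstream/master
```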
Will Microsoft seriously support this project? This repository appears more like a personal project.
@ozbillwang
Thanks for investigating! 💯
> Will Microsoft seriously support this project? This repository appears more like a personal project.
Indeed very suspicious; it seems more like some kind of clickbait project. They racked up 10,000 stars in no time, with nearly no commits or useful feedback since. I hate to sound negative, but I hate even more to get involved in these kinds of unethical corporate side hustles. In addition, I also hate submodules! :( Avoiding a huge number of external files may be the very reason why llama.cpp was so successful:
1 screen, 1 editor, 1 page, 1 tab and 1 file! 🥇
First of all: CONGRATS ON YOUR AMAZING RESEARCH WORK.
Considering that this is using GGML and seems to be based directly on llama.cpp:

- Why is this a separate project to llama.cpp, given that llama.cpp already supports BitNet ternary quants? (https://github.com/ggerganov/llama.cpp/pull/8151)
- Are these simply more optimised kernels? If so, how do they compare to llama.cpp's implementation? Can/should they be contributed back to llama.cpp?
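For anyone wanting to compare directly, a minimal sketch of exercising the ternary quant types that PR #8151 added to stock llama.cpp (type names TQ1_0/TQ2_0 per that PR; the binary names match recent llama.cpp builds, and the model paths are placeholders):

```sh
# Quantize an f16 GGUF of a BitNet-style model to the TQ2_0 ternary type
./llama-quantize models/bitnet-f16.gguf models/bitnet-tq2_0.gguf TQ2_0

# Run the result with the stock llama.cpp CLI to compare output/speed against bitnet.cpp
./llama-cli -m models/bitnet-tq2_0.gguf -p "Hello" -n 32
```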