kyegomez / BitNet

Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch

[BUG] No support for Mistral, Gemma, etc.; loading them generates an error #27

NickyDark1 closed this issue 3 months ago

NickyDark1 commented 7 months ago

model_id = "h2oai/h2o-danube-1.8b-chat"

[screenshot of the error]
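(The screenshot itself is not recoverable, but the failing call was presumably something along these lines; a minimal sketch, assuming the README's replace_linears_in_hf flow and the model_id above, not the reporter's exact code:)

# Hypothetical reconstruction of the failing repro (not the original code)
from transformers import AutoTokenizer, AutoModelForCausalLM
from bitnet import replace_linears_in_hf

model_id = "h2oai/h2o-danube-1.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Reportedly the step that raises the error for Mistral-family models
replace_linears_in_hf(model)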


NickyDark1 commented 7 months ago

transformers version: 4.36.2; upgrading to the newer transformers==4.38.0 also gives no support.

NickyDark1 commented 7 months ago

Does it only support this model?

# Load a model from Hugging Face's Transformers
model_name = "bert-base-uncased"
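For comparison, the README-style example presumably continues roughly like this (a sketch, not the exact README code; the AutoModel class choice is an assumption):

# Load a model from Hugging Face's Transformers
from transformers import AutoModel
from bitnet import replace_linears_in_hf

model_name = "bert-base-uncased"
model = AutoModel.from_pretrained(model_name)

# Swap the model's nn.Linear layers for BitNet's 1-bit linear layers
replace_linears_in_hf(model)
print(model)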

NickyDark1 commented 7 months ago

No support:

sanjeev-bhandari commented 5 months ago

@NickyDark1, I ran that model in Colab and it works.

Without quantizing

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-chat")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube-1.8b-chat")

# Danube is a causal LM, so use the "text-generation" task
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Hello, How")

Output:

[{'generated_text': 'Hello, How are you?\n\n"I\'m doing well, thank you. How about'}]
After replacing the Linear layers with BitNet
from bitnet import replace_linears_in_hf

# Replace every nn.Linear in the model with a BitLinear layer
replace_linears_in_hf(model)
# Move the model back to the GPU after the swap
model.to("cuda")
pipe_1_bit = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe_1_bit("Hello, How")

Output is:

[{'generated_text': 'Hello, How島 waters everyoneürgen Mess till revel馬 Vitt officials ambos">< czł plusieurs ap riv居'}]

But it takes ages to produce this answer (8 minutes in my case, on free Colab).
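(A quick sanity check plus a timing harness, as a sketch; it assumes bitnet exposes the BitLinear class that replace_linears_in_hf swaps in:)

import time
from bitnet import BitLinear

# Confirm how many layers were actually replaced
n_bit = sum(isinstance(m, BitLinear) for m in model.modules())
print(f"BitLinear modules: {n_bit}")

# Time one short generation to quantify the slowdown
start = time.perf_counter()
pipe_1_bit("Hello, How", max_new_tokens=16)
print(f"Generation took {time.perf_counter() - start:.1f}s")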

github-actions[bot] commented 3 months ago

Stale issue message