Open · 1-ashraful-islam opened 5 months ago
Hey, can I see the code you are running?
Sure. Here's the code that reproduces this error for me:
```python
from lightning_whisper_mlx import LightningWhisperMLX

model = "distil-large-v3"  # Options: ["tiny", "small", "distil-small.en", "base", "medium", "distil-medium.en", "large", "large-v2", "distil-large-v2", "large-v3", "distil-large-v3"]
batch_size = 12
quant = "4bit"  # Options: [None, "4bit", "8bit"]

whisper = LightningWhisperMLX(model, batch_size, quant)
text = whisper.transcribe("chunk_output/clip_0_to_30.mp3")['text']
print(text)
```
I am still getting the same error even after running `pip install -U lightning_whisper_mlx`. Is there a step missing in integrating https://github.com/mustafaaljadery/lightning-whisper-mlx/pull/14 into the PyPI distribution?
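In case it helps triage: a quick way to check which versions pip actually installed, since whether the #14 fix is present depends on the published version. The distribution names below are the PyPI names and may need adjusting:

```python
from importlib.metadata import version

# Print the installed versions of both packages; the error appears to
# come from a mismatch between lightning-whisper-mlx and MLX itself.
print("lightning-whisper-mlx:", version("lightning-whisper-mlx"))
print("mlx:", version("mlx"))
```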
I'm seeing the same error, same situation.
Hi, I'm testing with this code:

```python
whisper = LightningWhisperMLX(model="large-v3", batch_size=12, quant="4bit")
```

and get the same error:

```
AttributeError: type object 'QuantizedLinear' has no attribute 'quantize_module'
```

Any success using `quant`?

Edit: trying

```python
whisper = LightningWhisperMLX(model="distil-medium.en", batch_size=12, quant="4bit")
```

gives the same error as well.
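For what it's worth, this AttributeError usually means the installed MLX no longer exposes the old quantization classmethod that lightning-whisper-mlx calls. A minimal sketch of the API change, assuming a recent MLX release (treat the exact keyword names as an assumption):

```python
import mlx.nn as nn

def quantize_model(model, bits=4, group_size=64):
    # Older MLX releases exposed a classmethod that this library called:
    #     nn.QuantizedLinear.quantize_module(model, group_size, bits)
    # Newer MLX removed it; quantization is now applied in place:
    nn.quantize(model, group_size=group_size, bits=bits)
    return model
```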
> I am still getting the same error even after running `pip install -U lightning_whisper_mlx`. Is there a step missing in integrating #14 into the PyPI distribution?
Looks like the fix hasn't made it to PyPI yet; reinstall from the repo with `pip install git+https://github.com/mustafaaljadery/lightning-whisper-mlx`.
This introduces another problem though...
I also get the same error!
> I am still getting the same error even after running `pip install -U lightning_whisper_mlx`. Is there a step missing in integrating #14 into the PyPI distribution?
>
> Looks like the fix hasn't made it to PyPI yet; reinstall from the repo with `pip install git+https://github.com/mustafaaljadery/lightning-whisper-mlx`.
>
> This introduces another problem though...
What error does it introduce?
Running into the same issue. Do we have a final solution here, @mustafaaljadery?
Thanks!
I am getting the same error whenever the `quant` flag is anything but `None`. Any thoughts on whether I'm missing something, or do we need to fix `load_model.py`?
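If it is `load_model.py`, here's a minimal sketch of what a fix could look like, assuming the root cause is the MLX API change noted above. `apply_quantization` is a hypothetical helper name, and the old classmethod's keyword arguments are an assumption:

```python
import mlx.nn as nn

def apply_quantization(model, bits, group_size=64):
    # Hypothetical compatibility shim: prefer the newer in-place
    # nn.quantize API, falling back to the old classmethod on MLX
    # releases that still expose it.
    if hasattr(nn, "quantize"):
        nn.quantize(model, group_size=group_size, bits=bits)
    elif hasattr(nn.QuantizedLinear, "quantize_module"):
        nn.QuantizedLinear.quantize_module(model, group_size=group_size, bits=bits)
    else:
        raise RuntimeError("No supported MLX quantization API found")
    return model
```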