dvmazur / mixtral-offloading

Run Mixtral-8x7B models in Colab or consumer desktops
MIT License
2.28k stars 223 forks

Doesn't work #14

Closed SanskarX10 closed 6 months ago

SanskarX10 commented 6 months ago

The notebook code does not even run, even after entering the Hugging Face token.

dvmazur commented 6 months ago

Hey, @SanskarX10, could you provide more info?

Just tried running the notebook myself. It appears to be stuck downloading the model snapshot from the model hub. Could be an issue on HF's side.

segamboam commented 6 months ago

Hi everyone. I have the same problem. Curiously, in Colab, if I don't have the Hugging Face token, the code fails at line 5.

But when I add the token to Colab secrets, it fails at line 4.

Maybe there is a compatibility issue between Colab and Hugging Face, or a connection problem.
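
In case it helps while debugging, here is a minimal sketch of reading the token from Colab secrets and logging in explicitly before the download cell; the secret name HF_TOKEN is an assumption, so use whatever name you stored it under.

# Runs inside Google Colab only; assumes a secret named HF_TOKEN exists
# in the "Secrets" panel with notebook access enabled.
from google.colab import userdata
from huggingface_hub import login

hf_token = userdata.get("HF_TOKEN")  # raises if the secret is missing or not shared with this notebook
login(token=hf_token)                # authenticates subsequent Hub downloads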

SanskarX10 commented 6 months ago

Same issue. The execution goes on forever and nothing gets downloaded.

SanskarX10 commented 6 months ago

Hey, @SanskarX10, could you provide more info?

Just tried running the notebook myself. It appears to be stuck downloading the model snapshot from the model hub. Could be an issue on HF's side.

Oh, that could be the case. Thanks for the quick reply.

ffreemt commented 6 months ago

Try using huggingface-cli to download the model first, something like:

!huggingface-cli download --resume-download lavawolfiee/Mixtral-8x7B-Instruct-v0.1-offloading-demo --local-dir Mixtral-8x7B-Instruct-v0.1-offloading-demo
clear_output()

! date

# then load the model from the local dir
# config = AutoConfig.from_pretrained(quantized_model_name)
# state_path = snapshot_download(quantized_model_name)

state_path = "Mixtral-8x7B-Instruct-v0.1-offloading-demo"
config = AutoConfig.from_pretrained(state_path)

Maybe snapshot_download can't handle so many files. huggingface-cli download is quite fast: 17 GB in 2-3 minutes. Note that config = AutoConfig.from_pretrained(quantized_model_name) also seems to hang.
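
If you would rather stay in Python instead of shelling out, the same workaround can be sketched with snapshot_download pointed at a local directory; whether this avoids the hang reported above is untested, so treat it as an assumption.

from huggingface_hub import snapshot_download
from transformers import AutoConfig

# Pull the quantized demo weights into a local folder; partially
# downloaded files are resumed on re-runs.
state_path = snapshot_download(
    "lavawolfiee/Mixtral-8x7B-Instruct-v0.1-offloading-demo",
    local_dir="Mixtral-8x7B-Instruct-v0.1-offloading-demo",
)

# Load the config from the local copy rather than hitting the Hub again.
config = AutoConfig.from_pretrained(state_path)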

dvmazur commented 6 months ago

@ffreemt, thanks for the tip! I just published the new notebook.

We'll implement a more permanent solution on the weekend.

Manojkl commented 6 months ago

The recent notebook works. However, generation is slow: answering the query "write a poem about python" took 4 minutes.

oltipreka commented 6 months ago

Despite the generation speed being slow, it works like a charm. So, I wanted to thank you, @dvmazur, for the amazing job you have done, really!

That said, I did have some questions:

==> OK, I took a look at the paper, which seems to answer them.

Again, sincere congratulations to all the contributors!

dvmazur commented 6 months ago

Hey, @oltipreka, thanks for the kind words. This was a collaborative effort, so please shout out @lavawolfiee for making it happen.

As for the generation speed, we are still working on making it faster, but we've slowed down a bit due to the holidays :)

Regarding your questions,

  • You'll need about 27 GB of combined GPU and CPU memory. The proportion of GPU to CPU memory affects generation speed, as lower GPU memory might require offloading more experts. You can view some setups in our tech-report.
  • You can download the original embedding layer weights from Mixtral's repo on HF Hub.

I'm closing this issue due to it being resolved.
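
For a rough sense of the trade-off in the first point above, here is a toy sketch; the layer and expert counts are Mixtral-8x7B's published architecture, while offload_per_layer mirrors the knob the demo notebook exposes and should be treated as an assumption.

# Toy illustration of the GPU/CPU split (not the repo's exact API).
num_hidden_layers = 32   # Mixtral-8x7B transformer layers
num_local_experts = 8    # experts per MoE layer
offload_per_layer = 4    # experts per layer parked in CPU RAM (assumed knob name)

experts_on_gpu = num_hidden_layers * (num_local_experts - offload_per_layer)
experts_offloaded = num_hidden_layers * offload_per_layer
print(f"GPU-resident experts: {experts_on_gpu}, CPU-offloaded experts: {experts_offloaded}")
# Offloading more experts per layer lowers GPU memory use but slows generation,
# since offloaded experts must be copied to the GPU whenever the router selects them.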

oltipreka commented 6 months ago

Hey, @oltipreka, thanks for the kind words. This was a collaborative effort, so please shout out @lavawolfiee for making it happen.

As for the generation speed, we are still working on making it faster, but we've slowed down a bit due to the holidays :)

Regarding your questions,

  • You'll need about 27 GB of combined GPU and CPU memory. The proportion of GPU to CPU memory affects generation speed, as lower GPU memory might require offloading more experts. You can view some setups in our tech-report.
  • You can download the original embedding layer weights from Mixtral's repo on HF Hub.

I'm closing this issue due to it being resolved.

Thanks for the clarifications, extremely useful. Yeah, you are absolutely right: the entire team deserves credit for this, including @lavawolfiee. Thank you, folks, and keep going!