mlabonne / llm-course

Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
https://mlabonne.github.io/blog/
Apache License 2.0

Collaboration: Unsloth + llm-course #18

Closed: danielhanchen closed this issue 10 months ago

danielhanchen commented 10 months ago

Hey @mlabonne! I actually found this repo via LinkedIn! :) Happy New Year!

Had a look through your notebooks - they look sick! Interestingly, I tried running axolotl via Google Colab myself, to no avail.

Anyways, I'm the maintainer of Unsloth, which makes QLoRA 2.2x faster and uses 62% less memory! It would be awesome if we could somehow collaborate :)

I have a few examples:

  1. Mistral 7b + Alpaca: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing
  2. DPO Zephyr replication: https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing
  3. TinyLlama automatic RoPE Scaling from 2048 to 4096 tokens + full Alpaca dataset in 80 minutes. https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing (still running since TinyLlama was just released!)

Anyways great work again!

mlabonne commented 10 months ago

Hi @danielhanchen, cool! I know Unsloth from r/LocalLlama. Do you have something particular in mind? We can continue the conversation on Twitter (@maximelabonne) if you don't mind.

DngBack commented 10 months ago

Thank you! I have the same question.

danielhanchen commented 10 months ago

@mlabonne I'll bring the chat over to Twitter! Oh lol, actually I don't have Twitter Premium, so you first have to follow me :))

mlabonne commented 10 months ago

Unsloth has been added!