unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

error #1238

Open werruww opened 2 weeks ago

werruww commented 2 weeks ago

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/Meta-Llama-3.1-70B-bnb-4bit",
]  # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-70B-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    token = "hf_...",  # use one if using gated models like meta-llama/Llama-2-7b-hf
)

πŸ¦₯ Unsloth: Will patch your computer to enable 2x faster free finetuning.
Unsloth 2024.10.7: Fast Llama patching. Transformers = 4.44.2.
GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
Pytorch: 2.5.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
Bfloat16 = FALSE. FA [Xformers = 0.0.28.post2. FA2 = False]
Free Apache license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
model.safetensors.index.json: 100% 331k/331k [00:00<00:00, 18.6MB/s]
Downloading shards: 100% 6/6 [07:59<00:00, 75.27s/it]
model-00001-of-00006.safetensors: 100% 7.00G/7.00G [01:28<00:00, 468MB/s]
model-00002-of-00006.safetensors: 100% 6.90G/6.90G [01:33<00:00, 80.7MB/s]
model-00003-of-00006.safetensors: 100% 6.94G/6.94G [01:31<00:00, 29.8MB/s]
model-00004-of-00006.safetensors: 100% 6.94G/6.94G [01:01<00:00, 441MB/s]
model-00005-of-00006.safetensors: 100% 6.99G/6.99G [01:09<00:00, 214MB/s]
model-00006-of-00006.safetensors: 100% 4.75G/4.75G [01:13<00:00, 56.0MB/s]

ValueError                                Traceback (most recent call last)
in <cell line: 14>()
     12 ] # More models at https://huggingface.co/unsloth
     13
---> 14 model, tokenizer = FastLanguageModel.from_pretrained(
     15     model_name = "unsloth/Meta-Llama-3.1-70B-bnb-4bit",
     16     max_seq_length = max_seq_length,

4 frames
/usr/local/lib/python3.10/dist-packages/transformers/quantizers/quantizer_bnb_4bit.py in validate_environment(self, *args, **kwargs)
     84         }
     85         if "cpu" in device_map_without_lm_head.values() or "disk" in device_map_without_lm_head.values():
---> 86             raise ValueError(
     87                 "Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the "
     88                 "quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules "

ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
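For reference, a minimal sketch of the CPU-offload setup this error message points to, written against plain transformers/bitsandbytes rather than FastLanguageModel (Unsloth's loader may not expose these arguments). The flag is named llm_int8_enable_fp32_cpu_offload in recent transformers releases, and the memory limits are illustrative assumptions for a 16 GB T4 plus 32 GB of system RAM, not a recommended configuration:

```python
# Sketch only: offload part of a 4-bit model to CPU RAM with a capped GPU budget.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "unsloth/Meta-Llama-3.1-70B-bnb-4bit"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_enable_fp32_cpu_offload=True,  # keep offloaded modules in fp32 on the CPU
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",                        # let accelerate place layers GPU-first
    max_memory={0: "14GiB", "cpu": "30GiB"},  # assumed caps; spill the rest to RAM
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Even when this loads, the layers placed on the CPU do not run with the 4-bit GPU kernels, so throughput drops sharply, which is the caveat raised later in this thread.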

werruww commented 2 weeks ago

Error with Nemotron 70B.

werruww commented 2 weeks ago

Colab T4.

danielhanchen commented 2 weeks ago

T4 only has 16GB of VRAM, so it definitely will not fit - you need at least a 48GB card for 70B
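A quick back-of-envelope check makes this concrete (illustrative arithmetic, not a measurement):

```python
# Rough size of a 70B model quantized to 4 bits (about 0.5 bytes per weight).
params = 70e9
bytes_per_param = 0.5
weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.0f} GB for the 4-bit weights alone")  # ~33 GB

# A T4 exposes roughly 15 GB of usable VRAM, so the weights by themselves are
# about double the card's capacity, before counting the KV cache, activations,
# or LoRA/optimizer state needed for finetuning.
```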

werruww commented 2 weeks ago

Can unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit run on 16 GB VRAM and 32 GB RAM with device_map="auto"?

danielhanchen commented 1 week ago

@werruww CPU offloading will be quite slow, so technically yes, but not a good idea
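If the goal is simply to finetune on a free Colab T4, a smaller pre-quantized checkpoint that fits entirely in 16 GB of VRAM avoids the offload slowdown altogether. A sketch using the same loading pattern as above; the 8B model name is taken from Unsloth's 4-bit collection and should be verified at https://huggingface.co/unsloth:

```python
from unsloth import FastLanguageModel

# An 8B checkpoint whose 4-bit weights (~5-6 GB) fit comfortably on a single T4.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length = 2048,
    dtype = None,          # auto-detect; float16 on a T4
    load_in_4bit = True,
)
```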