rmihaylov / falcontune

Tune any FALCON in 4-bit
Apache License 2.0

Does pretrained falcon-40b work on Colab? #28

Open imthebilliejoe opened 1 year ago

imthebilliejoe commented 1 year ago

Hi there,

I'm trying to use your repo to finetune the vanilla pretrained falcon-40b model using this command:

```
!falcontune finetune \
    --model=falcon-40b \
    --weights=tiiuae/falcon-40b \
    --dataset=./alpaca_data_cleaned.json \
    --data_type=alpaca \
    --lora_out_dir=./falcon-40b-alpaca/ \
    --mbatch_size=1 \
    --batch_size=2 \
    --epochs=3 \
    --lr=3e-4 \
    --cutoff_len=2048 \
    --lora_r=8 \
    --lora_alpha=16 \
    --lora_dropout=0.05 \
    --warmup_steps=5 \
    --save_steps=50 \
    --save_total_limit=3 \
    --logging_steps=5 \
    --target_modules='["query_key_value"]'
```

I get this error:

```
2023-06-22 15:49:58.475124: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('8013'), PosixPath('//172.28.0.1')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-a100-s-3cww4v9wjy1aw --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: If you get CUDA error: invalid device function errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
  warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
You are using a model of type RefinedWeb to instantiate a model of type RefinedWebModel. This is not supported for all configurations of models and can yield errors.
Overriding torch_dtype=None with torch_dtype=torch.float16 due to requirements of bitsandbytes to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.

╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /usr/local/bin/falcontune:33 in <module>
│
│   30
│   31 if __name__ == '__main__':
│   32 │   sys.argv[0] = re.sub(r'(-script.pyw?|.exe)?$', '', sys.argv[0])
│ ❱ 33 │   sys.exit(load_entry_point('falcontune==0.1.0', 'console_scripts', '
│   34
│
│ /usr/local/lib/python3.10/dist-packages/falcontune-0.1.0-py3.10.egg/falcontu
│ ne/run.py:88 in main
│
│   85
│   86 def main():
│   87 │   args = get_args()
│ ❱ 88 │   args.func(args)
│   89
│   90
│   91 if __name__ == '__main__':
│
│ /usr/local/lib/python3.10/dist-packages/falcontune-0.1.0-py3.10.egg/falcontu
│ ne/finetune.py:49 in finetune
│
│   46
│   47
│   48 def finetune(args):
│ ❱ 49 │   llm, tokenizer = load_model(args.model, args.weights, backend=args
│   50 │   tune_config = FinetuneConfig(args)
│   51 │
│   52 │   transformers.logging.set_verbosity_info()
│
│ /usr/local/lib/python3.10/dist-packages/falcontune-0.1.0-py3.10.egg/falcontu
│ ne/model/__init__.py:33 in load_model
│
│   30 │
│   31 │   if model_name in MODEL_CONFIGS:
│   32 │   │   from falcontune.model.falcon.model import load_model
│ ❱ 33 │   │   model, tokenizer = load_model(model_config, weights, half=half,
│   34 │
│   35 │   else:
│   36 │   │   raise ValueError(f"Invalid model name: {model_name}")
│
│ /usr/local/lib/python3.10/dist-packages/falcontune-0.1.0-py3.10.egg/falcontu
│ ne/model/falcon/model.py:1171 in load_model
│
│   1168 │   │   model.loaded_in_4bit = True
│   1169 │
│   1170 │   elif llm_config.bits == 8:
│ ❱ 1171 │   │   model = RWForCausalLM.from_pretrained(
│   1172 │   │   │   checkpoint,
│   1173 │   │   │   config=config,
│   1174 │   │   │   load_in_8bit=True,
│
│ /usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:2722
│ in from_pretrained
│
│   2719 │   │   │   │   │   key: device_map[key] for key in device_map.keys()
│   2720 │   │   │   │   }
│   2721 │   │   │   │   if "cpu" in device_map_without_lm_head.values() or "d
│ ❱ 2722 │   │   │   │   │   raise ValueError(
│   2723 │   │   │   │   │   │   """
│   2724 │   │   │   │   │   │   Some modules are dispatched on the CPU or the
│   2725 │   │   │   │   │   │   the quantized model. If you want to dispatch
╰──────────────────────────────────────────────────────────────────────────────╯
ValueError: Some modules are dispatched on the CPU or the disk. Make sure you
have enough GPU RAM to fit the quantized model. If you want to dispatch the
model on the CPU or the disk while keeping these modules in 32-bit, you need
to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to
from_pretrained. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
```
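
For reference, the final ValueError is raised by transformers, not falcontune: with `--model=falcon-40b` the loader takes the 8-bit branch (`llm_config.bits == 8` in the traceback), and an int8 copy of falcon-40b is roughly 40 GB of weights alone, so accelerate spills some modules to CPU/disk on a 40 GB Colab A100 and then refuses to finish the 8-bit load. Below is a minimal sketch of the workaround the message itself suggests, written against plain transformers rather than falcontune; `llm_int8_enable_fp32_cpu_offload` is the `BitsAndBytesConfig` counterpart of the `load_in_8bit_fp32_cpu_offload` flag named in the error, and exact names may vary by version:

```python
# Hedged sketch: load falcon-40b in 8-bit while allowing the layers that
# don't fit on the GPU to stay on the CPU in fp32, as the ValueError suggests.
# Assumes transformers with bitsandbytes and accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    # Config-level switch matching the error's load_in_8bit_fp32_cpu_offload hint.
    llm_int8_enable_fp32_cpu_offload=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    quantization_config=quant_config,
    device_map="auto",       # let accelerate split layers across GPU and CPU
    torch_dtype=torch.float16,
    trust_remote_code=True,  # falcon-40b ships custom RefinedWeb model code
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-40b")
```

Note this only gets the checkpoint to load: CPU-offloaded layers are very slow, and falcontune's 8-bit path (per the traceback) calls `from_pretrained` with `load_in_8bit=True` directly, so using this workaround inside falcontune would need a code change there.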

With the instruct model, on the other hand, I have no issues.

Two questions:

  1. Can this code also run the vanilla 40b model on Colab?
  2. If not, could you please upload a 4-bit version of the vanilla 40b model?

Thank you so much
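
On question 2, for anyone who lands here: producing such weights means running GPTQ quantization on the base checkpoint. Here is a hedged sketch using the auto-gptq library (not falcontune's own recipe; the calibration text and quantization settings are illustrative assumptions):

```python
# Hedged sketch: one way to produce 4-bit GPTQ weights for the base model
# with the auto-gptq library. Settings and calibration data are illustrative.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

quantize_config = BaseQuantizeConfig(
    bits=4,         # 4-bit weights, like the released instruct-4bit checkpoint
    group_size=-1,  # per-column quantization; a real run might use 64 or 128
    desc_act=False,
)

# A real calibration set should contain a few hundred representative samples.
examples = [
    tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
]

model = AutoGPTQForCausalLM.from_pretrained(
    model_id, quantize_config, trust_remote_code=True
)
model.quantize(examples)
model.save_quantized("./falcon-40b-gptq-4bit")
```

The catch is that the quantization pass itself has to load the fp16 weights (on the order of 80 GB for ~40B parameters), so it needs far more memory than Colab offers, which is exactly why a prequantized upload would help.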