h2oai / h2o-llmstudio

H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/

[BUG] App gets unresponsive after bitsandbytes training error #565

Status: Closed (maxjeblick closed this issue 5 months ago)

maxjeblick commented 10 months ago

🐛 Bug

The app hangs after hitting a quantization error in bitsandbytes. The issue appears specific to bitsandbytes: manually raising AssertionErrors in other, random parts of the training pipeline does not cause any UI issues.

To Reproduce

I ran into this issue while testing #564. To reproduce on current main:

  1. Disable the configuration checks in the code
  2. Run the default experiment with 0 GPUs and 4-bit quantization
  3. The app becomes unresponsive after the AssertionError is raised (a minimal sketch of the failing bitsandbytes call is shown below)
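For reference, the assertion can be triggered in isolation. This is a minimal sketch (hypothetical, not LLM Studio code), assuming a bitsandbytes install of roughly this vintage: a `Linear4bit` layer that is never moved to a CUDA device keeps `weight.quant_state` as `None`, so its forward pass hits the same `assert quant_state is not None` seen in the traceback below.

```python
# Minimal sketch (hypothetical, not LLM Studio code): a 4-bit layer that was
# never moved to a CUDA device has no quantization state, so its forward
# pass fails with the same AssertionError as in the log below.
import torch
import bitsandbytes as bnb

layer = bnb.nn.Linear4bit(16, 16)  # quant_state stays None on CPU
x = torch.randn(1, 16)
layer(x)  # warns "FP4 quantization state not initialized..." then raises AssertionError
```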

LLM Studio version

8d1c136

INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:42,145 - INFO: Initializing client True
2024-01-15 14:47:42,255 - INFO: {'home/compute_stats', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'home/gpu_stats', 'init_app'}
2024-01-15 14:47:42,368 - INFO: PREV None text_causal_language_modeling_config None 1 None 14 
2024-01-15 14:47:42,368 - INFO: Starting from CFG
2024-01-15 14:47:42,380 - INFO: From dataset True
2024-01-15 14:47:42,380 - INFO: From cfg True
2024-01-15 14:47:42,380 - INFO: From default True
2024-01-15 14:47:42,380 - INFO: Config file: text_causal_language_modeling_config
INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:46,272 - INFO: Initializing client True
2024-01-15 14:47:46,387 - INFO: Starting experiment
2024-01-15 14:47:46,387 - INFO: experiment/start/cfg_file
2024-01-15 14:47:46,387 - INFO: CFG: text_causal_language_modeling_config
2024-01-15 14:47:46,511 - INFO: Percentage of RAM memory used: 24.0
2024-01-15 14:47:46,511 - INFO: Process: 258437, Queue: [], GPUs: ()
2024-01-15 14:47:46,853 - INFO: {'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'home/gpu_stats', 'init_app', 'experiment/start'}
[2024-01-15 14:47:47,948] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:48,584 - INFO: Initializing client True
2024-01-15 14:47:48,709 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations
  warnings.warn(
2024-01-15 14:47:49,287 - WARNING: Training on CPU. This will be slow.
2024-01-15 14:47:49,287 - INFO: Problem Type: text_causal_language_modeling
2024-01-15 14:47:49,287 - INFO: Global random seed: 85069
2024-01-15 14:47:49,287 - INFO: Preparing the data...
2024-01-15 14:47:49,287 - INFO: Setting up automatic validation split...
2024-01-15 14:47:49,339 - INFO: Preparing train and validation data
2024-01-15 14:47:49,339 - INFO: Loading train dataset...
INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:49,512 - INFO: Initializing client True
2024-01-15 14:47:49,628 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
2024-01-15 14:47:49,743 - INFO: Stop token ids: [tensor([  529, 29989, 12011, 29989, 29958]), tensor([  529, 29989,  5205, 29989, 29958]), tensor([  529, 29989, 14032,   415, 29989, 29958])]
2024-01-15 14:47:49,768 - INFO: Loading validation dataset...
2024-01-15 14:47:49,968 - INFO: Stop token ids: [tensor([  529, 29989, 12011, 29989, 29958]), tensor([  529, 29989,  5205, 29989, 29958]), tensor([  529, 29989, 14032,   415, 29989, 29958])]
2024-01-15 14:47:49,972 - INFO: Number of observations in train dataset: 8191
2024-01-15 14:47:49,972 - INFO: Number of observations in validation dataset: 83
2024-01-15 14:47:50,310 - INFO: Stop token ids: [tensor([  529, 29989, 12011, 29989, 29958]), tensor([  529, 29989,  5205, 29989, 29958]), tensor([  529, 29989, 14032,   415, 29989, 29958])]
2024-01-15 14:47:50,312 - WARNING: PAD token id not matching between config and tokenizer. Overwriting with tokenizer id.
2024-01-15 14:47:50,312 - INFO: Setting pretraining_tp of model config to 1.
2024-01-15 14:47:50,314 - INFO: Using int4 for backbone
2024-01-15 14:47:50,314 - INFO: Loading h2oai/h2ogpt-4096-llama2-7b. This may take a while.
INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:50,392 - INFO: Initializing client True
2024-01-15 14:47:50,519 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  8.74it/s]
2024-01-15 14:47:51,058 - INFO: Loaded h2oai/h2ogpt-4096-llama2-7b.
INFO:     127.0.0.1:34268 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:51,224 - INFO: Initializing client True
2024-01-15 14:47:51,368 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
INFO:     127.0.0.1:35864 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:56,412 - INFO: Initializing client True
2024-01-15 14:47:56,538 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
INFO:     127.0.0.1:35864 - "POST / HTTP/1.1" 200 OK
2024-01-15 14:47:57,353 - INFO: Initializing client True
2024-01-15 14:47:57,477 - INFO: {'experiment/list', 'home/compute_stats', 'experiment/start/footer', 'dataset/list', 'home/disk_usage', 'home/experiments_stats', 'dataset/display/footer', 'home/gpu_stats', 'init_app', 'experiment/start'}
2024-01-15 14:48:16,556 - INFO: Lora module names: ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']
trainable params: 9,994,240 || all params: 13,224,415,232 || trainable%: 0.0755741545064032
2024-01-15 14:48:44,508 - INFO: Enough space available for saving model weights.Required space: 26515.43MB, Available space: 511215.53MB.
2024-01-15 14:48:44,754 - INFO: Training Epoch: 1 / 1
2024-01-15 14:48:44,755 - INFO: train loss:   0%|          | 0/4095 [00:00<?, ?it/s]
2024-01-15 14:48:46,332 - INFO: Evaluation step: 4095
2024-01-15 14:48:49,137 - INFO: Stop token ids: [tensor([  529, 29989, 12011, 29989, 29958]), tensor([  529, 29989,  5205, 29989, 29958]), tensor([  529, 29989, 14032,   415, 29989, 29958])]
/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
  warnings.warn(
FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.
2024-01-15 14:48:49,556 - ERROR: Exception occurred during H2O LLM Studio run:
Traceback (most recent call last):
  File "/media/max/3tbdrive/PycharmProjects/h2o-llmstudio/train_wave.py", line 106, in <module>
    run(cfg=cfg)
  File "/media/max/3tbdrive/PycharmProjects/h2o-llmstudio/train.py", line 963, in run
    val_loss, val_metric = train_function(
  File "/media/max/3tbdrive/PycharmProjects/h2o-llmstudio/train.py", line 268, in run_train
    output_dict = model.forward(batch)
  File "/media/max/3tbdrive/PycharmProjects/h2o-llmstudio/llm_studio/src/models/text_causal_language_modeling_model.py", line 95, in forward
    output = self.backbone(
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/peft/peft_model.py", line 918, in forward
    return self.base_model(
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 94, in forward
    return self.model.forward(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward
    outputs = self.model(
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1058, in forward
    layer_outputs = self._gradient_checkpointing_func(
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
    return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
    return fn(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
    return fn(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 451, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 230, in forward
    outputs = run_function(*args)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 796, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 691, in forward
    query_states = self.q_proj(hidden_states)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/peft/tuners/lora.py", line 1208, in forward
    result = super().forward(x)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 248, in forward
    out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state)
  File "/home/max/.local/share/virtualenvs/h2o-llmstudio-0x1AIe7C/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py", line 567, in matmul_4bit
    assert quant_state is not None
AssertionError
maxjeblick commented 10 months ago

Will check whether this issue also occurs with CPU only and eventually push a fix to #564. What's puzzling is that the error in the subprocess freezes the parent process.
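One generic way such a freeze can arise (purely a hypothetical illustration, not H2O LLM Studio's actual process handling): if the parent blocks on a result that the child only produces on success, an unhandled exception in the child leaves the parent waiting forever.

```python
# Hypothetical illustration (not LLM Studio's actual code): the parent blocks
# on a queue that the child only fills on success, so an unhandled exception
# in the child leaves the parent stuck in q.get() indefinitely.
import multiprocessing as mp

def train(q):
    raise AssertionError("quant_state is None")  # crashes before reporting back
    q.put("done")  # never reached

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=train, args=(q,))
    p.start()
    result = q.get()  # hangs: the child already died without putting anything
```

In this pattern, polling `p.exitcode` or calling `q.get(timeout=...)` would surface the child's failure instead of hanging.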