oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

KeyError: 'module name can\'t contain ".", got: liuhaotian_llava-v1.5-13b-lora' #4689

Closed: theoden8 closed this issue 10 months ago

theoden8 commented 11 months ago

Describe the bug

KeyError: 'module name can\'t contain ".", got: liuhaotian_llava-v1.5-13b-lora'

I just downloaded the model using the download button. I'm completely new to this repo.

Is there an existing issue for this?

Reproduction

  1. Enter the LoRA's name into the download box
  2. Click the download button
  3. Try to apply the LoRA

Screenshot

No response

Logs

2023-11-21 04:51:39 INFO:Loading the extension "gallery"...
Running on local URL:  http://127.0.0.1:13680

To create a public link, set `share=True` in `launch()`.
$CONDA_PREFIX/lib/python3.10/site-packages/gradio/components/dropdown.py:231: UserWarning: The value passed into gr.Dropdown() is not in the list of choices. Please update the list of choices to include: llama or set allow_custom_value=True.
  warnings.warn(
2023-11-21 04:56:21 INFO:Loading lmsys_vicuna-13b-v1.5...
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:14<00:00,  4.78s/it]
$CONDA_PREFIX/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
$CONDA_PREFIX/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
2023-11-21 04:56:41 INFO:TRUNCATION LENGTH: 4096
2023-11-21 04:56:41 INFO:INSTRUCTION TEMPLATE: Vicuna-v1.1
2023-11-21 04:56:41 INFO:Loaded the model in 19.94 seconds.
2023-11-21 04:56:47 INFO:Applying the following LoRAs to lmsys_vicuna-13b-v1.5: liuhaotian_llava-v1.5-13b-lora
Traceback (most recent call last):
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/blocks.py", line 1199, in call_function
    prediction = await utils.async_iteration(iterator)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/utils.py", line 519, in async_iteration
    return await iterator.__anext__()
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/utils.py", line 512, in __anext__
    return await anyio.to_thread.run_sync(
  File "$CONDA_PREFIX/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "$CONDA_PREFIX/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "$CONDA_PREFIX/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/utils.py", line 495, in run_sync_iterator_async
    return next(iterator)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/gradio/utils.py", line 649, in gen_wrapper
    yield from f(*args, **kwargs)
  File "$PWD/modules/ui_model_menu.py", line 230, in load_lora_wrapper
    add_lora_to_model(selected_loras)
  File "$PWD/modules/LoRA.py", line 20, in add_lora_to_model
    add_lora_transformers(lora_names)
  File "$PWD/modules/LoRA.py", line 166, in add_lora_transformers
    shared.model = PeftModel.from_pretrained(shared.model, get_lora_path(lora_names[0]), adapter_name=lora_names[0], **params)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/peft_model.py", line 331, in from_pretrained
    model = MODEL_TYPE_TO_PEFT_MODEL_MAPPING[config.task_type](model, config, adapter_name)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/peft_model.py", line 973, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/peft_model.py", line 121, in __init__
    self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 111, in __init__
    super().__init__(model, config, adapter_name)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 94, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 251, in inject_adapter
    self._create_and_replace(peft_config, adapter_name, target, target_name, parent, **optional_kwargs)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 193, in _create_and_replace
    new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 321, in _create_new_module
    new_module = Linear(adapter_name, in_features, out_features, bias=bias, **kwargs)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 217, in __init__
    self.update_layer(adapter_name, r, lora_alpha, lora_dropout, init_lora_weights)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 73, in update_layer
    self.lora_dropout.update(nn.ModuleDict({adapter_name: lora_dropout_layer}))
  File "$CONDA_PREFIX/lib/python3.10/site-packages/torch/nn/modules/container.py", line 455, in __init__
    self.update(modules)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/torch/nn/modules/container.py", line 531, in update
    self[key] = module
  File "$CONDA_PREFIX/lib/python3.10/site-packages/torch/nn/modules/container.py", line 462, in __setitem__
    self.add_module(key, module)
  File "$CONDA_PREFIX/lib/python3.10/site-packages/torch/nn/modules/module.py", line 616, in add_module
    raise KeyError(f"module name can't contain \".\", got: {name}")
KeyError: 'module name can\'t contain ".", got: liuhaotian_llava-v1.5-13b-lora'
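For context, the failure comes from PyTorch itself: as the traceback shows, PEFT registers each adapter under its name inside an nn.ModuleDict, and nn.Module.add_module rejects any submodule name containing a ".". The snippet below is a minimal sketch of that constraint, assuming only a working torch install; the adapter name is copied from the log above.

import torch.nn as nn

# nn.ModuleDict keys become submodule names, and PyTorch's add_module
# refuses any name that contains a ".".
layers = nn.ModuleDict()
layers["liuhaotian_llava-v1.5-13b-lora"] = nn.Dropout(0.1)
# KeyError: module name can't contain ".", got: liuhaotian_llava-v1.5-13b-lora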

System Info

OS: Ubuntu 20.04 LTS
GPU: NVIDIA A6000
theoden8 commented 11 months ago

This can be fixed by renaming the LoRA folder in loras/ so that its name no longer contains a "." (see the sketch below).
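A hedged sketch of that workaround, assuming the LoRA was downloaded into the default loras/ directory and that replacing "." with "_" is an acceptable rename (neither detail is stated in the comment above):

from pathlib import Path

# Rename the downloaded LoRA folder so its name contains no ".".
# The loras/ location and the "." -> "_" scheme are assumptions.
lora_dir = Path("loras/liuhaotian_llava-v1.5-13b-lora")
lora_dir.rename(lora_dir.with_name(lora_dir.name.replace(".", "_")))
# Result: loras/liuhaotian_llava-v1_5-13b-lora

The LoRA should then show up under the new, dot-free name in the LoRA dropdown.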

github-actions[bot] commented 10 months ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.