oobabooga / text-generation-webui

A Gradio web UI for Large Language Models.
GNU Affero General Public License v3.0

Type error on raw text tokenize, concatenating list and Tensor #3356

Closed Xabab closed 9 months ago

Xabab commented 1 year ago

Describe the bug

Seems like Tensor.tolist() is missing.

Is there an existing issue for this?

Reproduction

I tried to train a LoRA on Ouroboros 13B GPTQ on a raw text dataset.

Screenshot

No response

Logs

I can't copy the full stack trace because I'm using Colab from my phone right now and it's extremely laggy. I gave up after 10 minutes of trying.

train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
  File "/content/text-generation-webui/modules/training.py", line 346, in tokenize
    input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
TypeError: can only concatenate list (not "Tensor") to list
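The failure is mechanical: `tokenize()` pads with a Python list, but the tokenizer hands back a `torch.Tensor`, and `list + Tensor` is undefined. A minimal sketch of the bug and the `.tolist()` fix the report suggests, using a hypothetical `FakeTensor` stand-in so it runs without torch:

```python
# Stand-in for a tokenizer output that is sequence-like but not a list
# (hypothetical class; in the real bug it is a torch.Tensor).
class FakeTensor:
    def __init__(self, data):
        self.data = list(data)
    def __len__(self):
        return len(self.data)
    def tolist(self):
        return list(self.data)

pad_token_id = 0
cutoff_len = 8
input_ids = FakeTensor([5, 6, 7])

# The failing line in modules/training.py does, in effect:
#   [pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
# which raises TypeError because list + Tensor is not defined.

# Converting to a plain list first avoids the error:
input_ids = input_ids.tolist()
padded = [pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
print(padded)  # [0, 0, 0, 0, 0, 5, 6, 7]
```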

System Info

colab
goodglitch commented 1 year ago

I tried to train a LoRA using the Stanford alpaca_data.json (with empty output prompts removed) on TheBloke/Llama-2-7B-GPTQ with ExLlama and got the same error.

Fusseldieb commented 1 year ago

Same error here. Any clue?

2023-08-22 21:16:48 WARNING:LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: ExllamaModel)
2023-08-22 21:16:53 INFO:Loading JSON datasets...
Map:   0%|                                                                               | 0/21 [00:00<?, ? examples/s]
Traceback (most recent call last):
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\routes.py", line 427, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 336, in async_iteration
    return await iterator.__anext__()
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 329, in __anext__
    return await anyio.to_thread.run_sync(
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
    return next(iterator)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 452, in do_train
    train_data = data['train'].map(generate_and_tokenize_prompt, new_fingerprint='%030x' % random.randrange(16**30))
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3097, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3450, in _map_single
    example = apply_function_on_filtered_inputs(example, i, offset=offset)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\datasets\arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 448, in generate_and_tokenize_prompt
    return tokenize(prompt, add_eos_token)
  File "C:\Users\vstil\Downloads\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 338, in tokenize
    input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
TypeError: can only concatenate list (not "Tensor") to list

EDIT: Progress! Apparently setting the loader to ExLlama doesn't work. Still figuring out which one does, but Auto-GPTQ gets further than this error. I'm now getting:

ValueError: Target modules ['q_proj', 'v_proj'] not found in the base model. Please check the target modules and try again.
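That ValueError means none of the module names in the loaded model match the LoRA targets `q_proj`/`v_proj`, which is consistent with the ExLlama loader wrapping attention differently from the HF loaders. A rough sketch of the check PEFT performs (not its exact code, and the module names below are hypothetical):

```python
def missing_targets(module_names, target_modules):
    """Roughly mimic PEFT's matching: a target counts as found if some
    module name equals it or ends with '.<target>' (sketch only)."""
    return [t for t in target_modules
            if not any(n == t or n.endswith("." + t) for n in module_names)]

# HF-style names expose q_proj/v_proj submodules, so nothing is missing:
hf_names = ["model.layers.0.self_attn.q_proj",
            "model.layers.0.self_attn.v_proj"]
# Hypothetical ExLlama-style names fuse the projections, so both are missing
# and PEFT raises the ValueError:
exllama_names = ["model.layers.0.self_attn"]

print(missing_targets(hf_names, ["q_proj", "v_proj"]))       # []
print(missing_targets(exllama_names, ["q_proj", "v_proj"]))  # ['q_proj', 'v_proj']
```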
twiffy commented 1 year ago

Any updates? I'm getting the same errors.

sardanian commented 1 year ago

I am also getting these same issues, just like @Fusseldieb.

ExLlama worked last night; I updated all the files this morning and now it produces errors. When I try Auto-GPTQ I also get further, but then run into the errors above.

Andie-Squirrel commented 1 year ago

Same issue when trying to train with Wizard Vicuna 13B SuperHOT 8k GPTQ on raw text input. It seems ExLlamav2 doesn't work either.

Traceback (most recent call last):
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\routes.py", line 427, in run_predict
    output = await app.get_blocks().process_api(
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\blocks.py", line 1067, in call_function
    prediction = await utils.async_iteration(iterator)
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 336, in async_iteration
    return await iterator.__anext__()
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 329, in __anext__
    return await anyio.to_thread.run_sync(
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2106, in run_sync_in_worker_thread
    return await future
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 833, in run
    result = context.run(func, *args)
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\installer_files\env\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
    return next(iterator)
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 415, in do_train
    train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 415, in <listcomp>
    train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
  File "N:\oobabooga\oobooga9_22_2023\oobabooga_windows\oobabooga_windows\text-generation-webui\modules\training.py", line 338, in tokenize
    input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
TypeError: can only concatenate list (not "Tensor") to list

DrachenSeele commented 1 year ago

Are there any updates to this?

Pandananana commented 1 year ago

Same issue with Exllamav2 and TheBloke/Llama-2-13B-chat-GPTQ

coach1988 commented 12 months ago

I was able to overcome this error by using the HF versions of the loaders, but only to run into this afterwards (not printed in the CLI, only visible in the UI):

Traceback (most recent call last):
  File "/home/user/text-generation-webui/modules/training.py", line 508, in do_train
    lora_model = get_peft_model(shared.model, config)
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/mapping.py", line 106, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/peft_model.py", line 889, in __init__
    super().__init__(model, peft_config, adapter_name)
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/peft_model.py", line 111, in __init__
    self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/tuners/lora.py", line 274, in __init__
    super().__init__(model, config, adapter_name)
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 88, in __init__
    self.inject_adapter(self.model, adapter_name)
  File "/home/user/text-generation-webui/installer_files/env/lib/python3.10/site-packages/peft/tuners/tuners_utils.py", line 222, in inject_adapter
    raise ValueError(
ValueError: Target modules ['q_proj', 'v_proj'] not found in the base model. Please check the target modules and try again.
Xabab commented 11 months ago

Okay, I am onto something I think.

From oobabooga's newest colab:

textgen_requirements = open('requirements.txt').read().splitlines()
if is_cuda117:
    textgen_requirements = [req.replace('+cu121', '+cu117').replace('+cu122', '+cu117').replace('torch2.1', 'torch2.0') for req in textgen_requirements]
elif is_cuda118:
    textgen_requirements = [req.replace('+cu121', '+cu118').replace('+cu122', '+cu118') for req in textgen_requirements]
with open('temp_requirements.txt', 'w') as file:
    file.write('\n'.join(textgen_requirements))
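In isolation, that snippet just retags the CUDA wheel suffixes in requirements.txt for the detected toolkit. A testable sketch of the same rewrite (the package names below are illustrative, not the real requirements.txt contents):

```python
def retarget_cuda(requirements, cuda):
    """Rewrite wheel tags the same way the colab snippet does (sketch)."""
    if cuda == "117":
        return [r.replace('+cu121', '+cu117').replace('+cu122', '+cu117')
                 .replace('torch2.1', 'torch2.0') for r in requirements]
    if cuda == "118":
        return [r.replace('+cu121', '+cu118').replace('+cu122', '+cu118')
                for r in requirements]
    return requirements

# Illustrative entries; the real file pins many more packages.
reqs = ["torch==2.1.0+cu121", "exllamav2==0.0.5+cu121"]
print(retarget_cuda(reqs, "118"))  # ['torch==2.1.0+cu118', 'exllamav2==0.0.5+cu118']
```

If this rewrite is skipped (or the detected CUDA version is wrong), pip happily installs wheels built for a different toolkit, which matches the silent breakage described below.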

So yeah, requirements.txt installs packages for the wrong CUDA toolkit, but instead of throwing an error it just dies at tokenize.

That, though, still leaves Intel® Extension for PyTorch broken, so you have to run

!pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu --force-reinstall

which itself breaks several dependencies, but the webui seems to run fine.

That leaves me with:

File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 637, in create_or_update_model_card
    quantization_config = self.config.quantization_config.to_dict()
AttributeError: 'dict' object has no attribute 'to_dict'

due to

def __init__(self, model: PreTrainedModel, peft_config: PeftConfig, adapter_name: str = "default"):
    super().__init__()
    self.base_model = model
    self.config = getattr(self.base_model, "config", {"model_type": "custom"})
    if hasattr(self.config, "quantization_config"):
        quantization_config = self.config.quantization_config.to_dict()

The problem is somewhere in the quantized model definitions, which could be anywhere inside GPTQ_for_llama, alpaca_lora_4bit, its monkeypatches, or the way ooba handles models. I think.
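To make the failure mode concrete: that PEFT code assumes `config.quantization_config` is an object with a `to_dict()` method, but on this code path it is already a plain dict. A defensive sketch of the check (hypothetical helper, not the actual fix in PEFT):

```python
def quantization_config_as_dict(config):
    """Return the quantization config as a dict, accepting either a plain
    dict or a config object exposing to_dict() (hypothetical guard)."""
    qc = getattr(config, "quantization_config", None)
    if qc is None:
        return None
    return qc.to_dict() if hasattr(qc, "to_dict") else dict(qc)

class FakeConfig:  # stand-in for a HF model config carrying a plain dict,
    quantization_config = {"bits": 4}  # as in the AttributeError above

print(quantization_config_as_dict(FakeConfig()))  # {'bits': 4}
```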

That leaves me with a new error, and that's progress. If anyone wants to investigate further, be my guest. I'm going to get some sleep in the meantime. Have fun.

Xabab commented 11 months ago

I think this issue may be closed. Be sure to install the webui requirements for your version of the CUDA toolkit.

Further discussion regarding peft should be done in #4074

AWAS666 commented 11 months ago

This is still broken, btw, even with matching versions... I've got cu121 on all the modules and in conda, running locally.

Same error as in the initial issue:

File "/home/user/Documents/test/text-generation-webui/modules/training.py", line 377, in tokenize
  input_ids = [shared.tokenizer.pad_token_id] * (cutoff_len - len(input_ids)) + input_ids
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~
TypeError: can only concatenate list (not "Tensor") to list
mjanek20 commented 11 months ago

Same here. I've been fighting with my input data for 3 hours (it was going to be my first LoRA training), only to find out that it's not my input data that's broken.

github-actions[bot] commented 9 months ago

This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.