DFKI-NLP / InterroLang

InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations [EMNLP 2023 Findings]
https://arxiv.org/abs/2310.05592

Error in rationalize operation #101

Open schopra6 opened 1 year ago

schopra6 commented 1 year ago

Input: "rationalize the prediction for id 9"

Logs:

Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/src/app/timeout.py", line 19, in run
    self._result = self._func(*self._args, **self._kwargs)
  File "/usr/src/app/actions/explanation/rationalize.py", line 89, in rationalize_operation
    generation = conversation.decoder.gpt_model.generate(
  File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py", line 1296, in generate
    return self.greedy_search(
  File "/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py", line 1690, in greedy_search
    outputs = self(
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py", line 745, in forward
    transformer_outputs = self.transformer(
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py", line 583, in forward
    position_embeds = self.wpe(position_ids)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 158, in forward
    return F.embedding(
  File "/usr/local/lib/python3.9/site-packages/torch/nn/functional.py", line 2199, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
[2023-06-02 08:31:46 +0000] [8] [INFO] Traceback getting bot response: Traceback (most recent call last):
  File "/usr/src/app/flask_app.py", line 164, in get_bot_response
    response = BOT.update_state(user_text, conversation)
  File "/usr/src/app/logic/core.py", line 890, in update_state
    returned_item = run_action(
  File "/usr/src/app/logic/action.py", line 48, in run_action
    action_return, action_status = actions[p_text](
TypeError: cannot unpack non-iterable NoneType object

[2023-06-02 08:31:46 +0000] [8] [INFO] Exception getting bot response: cannot unpack non-iterable NoneType object
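The two tracebacks are connected: `rationalize_operation` crashed (or fell through to a bare `None`), so the two-value unpack in `logic/action.py` then failed with the `TypeError`. A minimal, self-contained sketch of that contract (all names here are simplified stand-ins, not the actual InterroLang code):

```python
# Sketch of the (text, status) contract between actions and run_action.
# An action that returns a bare None breaks the caller's tuple unpack.

def rationalize_action(ok: bool):
    """Stand-in for rationalize_operation: expected to return (text, status)."""
    if not ok:
        return None  # bug: bare None instead of a (message, status) tuple
    return "rationale text", 1

def run_action(action, ok):
    try:
        action_return, action_status = action(ok)  # expects exactly two values
    except TypeError as err:
        return f"error: {err}", 0
    return action_return, action_status

print(run_action(rationalize_action, False))
# → ('error: cannot unpack non-iterable NoneType object', 0)
```

Any action that can bail out early therefore has to return a tuple, never a bare string or `None`.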
nfelnlp commented 1 year ago

Which config do you use? I get the same error with the FLAN-T5 config and I haven't figured out why this occurs.
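Regarding the original `IndexError: index out of range in self`: this is what `torch.nn.Embedding` raises when asked to look up an index beyond its table, e.g. position ids past the model's `max_position_embeddings` (2048 for GPT-Neo, where the trace points at `self.wpe`). A small sketch of the failure mode and the usual guard (truncating the prompt before `generate`); the sizes here are illustrative:

```python
import torch
import torch.nn as nn

# The position-embedding table only covers [0, max_position_embeddings).
max_positions = 2048  # GPT-Neo's default max_position_embeddings
wpe = nn.Embedding(num_embeddings=max_positions, embedding_dim=8)

wpe(torch.arange(max_positions))  # fine: indices 0..2047

try:
    wpe(torch.arange(max_positions + 1))  # index 2048 is out of range
except IndexError as err:
    print(err)  # → index out of range in self

# Guard: cap the tokenized prompt length before calling generate(), e.g.
# tokenizer(prompt, truncation=True, max_length=max_positions)
```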

tanikina commented 1 year ago

Just wanted to check if anyone has come across the following error for the rationalize operation.

Input: "rational for id 16"

Stack trace:

[2023-06-08 17:01:47,370] INFO in core: USER INPUT: rational for id 16

Batches: 100%|███████████████████████████████████| 1/1 [00:00<00:00, 41.70it/s]
[2023-06-08 17:01:47,441] INFO in core: adapters decoded text filter id 16 and rationalize
[2023-06-08 17:01:47,443] INFO in flask_app: Traceback getting bot response: Traceback (most recent call last):
  File "/home/ubuntu/projects/InterroLang/flask_app.py", line 190, in get_bot_response
    response = BOT.update_state(user_text, conversation)
  File "/home/ubuntu/projects/InterroLang/logic/core.py", line 892, in update_state
    returned_item = run_action(
  File "/home/ubuntu/projects/InterroLang/logic/action.py", line 48, in run_action
    action_return, action_status = actions[p_text](
ValueError: too many values to unpack (expected 2)

[2023-06-08 17:01:47,444] INFO in flask_app: Exception getting bot response: too many values to unpack (expected 2)
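This is the mirror image of the `TypeError` in the first report: here the action handed back more than two values, so the same unpack in `logic/action.py` fails the other way. A tiny stand-alone illustration (names are hypothetical, not the InterroLang code):

```python
# run_action expects exactly (text, status); returning a third value
# produces "too many values to unpack (expected 2)".

def bad_action():
    return "rationale text", 1, {"debug": True}  # three values, one too many

def run_action(action):
    action_return, action_status = action()
    return action_return, action_status

try:
    run_action(bad_action)
except ValueError as err:
    print(err)  # → too many values to unpack (expected 2)
```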

tanikina commented 1 year ago

I'm using boolq_adapter.gin

qiaw99 commented 1 year ago

I think when you use the adapter version, GPT is not initialized, see https://github.com/nfelnlp/InterroLang/blob/70cb65b20922e44fe84315668f9cf0cc991c0fa9/actions/explanation/rationalize.py#L35. It should actually return this string and also a status code, as far as I can see. I don't think we need this line anymore, since we just read the results from cache (CSV files).

tanikina commented 1 year ago

Thanks for the quick reply! Yes, I use the latest version of rationalize.py but it does not return this string, unfortunately :(

qiaw99 commented 1 year ago

You can try this: `return f"Rationalize operation not enabled for {conversation.decoder.parser_name}", 1`

tanikina commented 1 year ago

Yes, it really needs the status code. It works now, thanks! I can see the string.

qiaw99 commented 1 year ago

But still, I think these lines might not be necessary anymore.

nfelnlp commented 1 year ago

Wait, but this should now return the pre-computed Dolly rationales. Does it not? If this is implemented correctly, we should be able to use this operation regardless of the parsing model.

qiaw99 commented 1 year ago

Yes, that's what I mean. We don't actually need to check if GPT is loaded or not, since we only read results from cache.

See: https://github.com/nfelnlp/InterroLang/blob/70cb65b20922e44fe84315668f9cf0cc991c0fa9/actions/explanation/rationalize.py#L34-L35
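Since the rationales are pre-computed and only read back, the cache-only path needs no GPT model at all. A rough sketch of what such a lookup could look like (the CSV layout and function name here are assumptions, not the actual InterroLang implementation):

```python
import csv
import io

# Hypothetical cache of pre-computed (e.g. Dolly-generated) rationales.
CACHE_CSV = """id,rationale
9,"The passage supports the question, so the model predicts True."
16,"The claim is not supported by the passage."
"""

def rationalize_from_cache(sample_id: int, cache_text: str = CACHE_CSV):
    """Return (text, status) — the contract run_action expects."""
    for row in csv.DictReader(io.StringIO(cache_text)):
        if int(row["id"]) == sample_id:
            return row["rationale"], 1
    return f"No cached rationale for id {sample_id}", 0

print(rationalize_from_cache(16))
```

Because it always returns a `(text, status)` pair, this path would work regardless of which parsing model is configured.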

tanikina commented 1 year ago

I see! Then we'll need to update this code a bit (i.e. remove the line as Qianli suggested), otherwise the adapter version won't get any rationales.

tanikina commented 1 year ago

Yes, it works perfectly without this line. (Screenshot attached: InterroLang UI, 2023-06-08.)

nfelnlp commented 1 year ago

Awesome! 🥳 Could one of you push the fix to main, please?

qiaw99 commented 1 year ago

Done

nfelnlp commented 1 year ago

Is this resolved? I'm not sure if the last fix also concerned the reason why this issue was created in the first place.