schopra6 opened 1 year ago
Just wanted to know if anyone came across the following error for the rationalize operation.
Input: "rational for id 16"
Stack trace:
[2023-06-08 17:01:47,370] INFO in core: USER INPUT: rational for id 16
Batches: 100%|███████████████████████████████████| 1/1 [00:00<00:00, 41.70it/s]
[2023-06-08 17:01:47,441] INFO in core: adapters decoded text filter id 16 and rationalize
[2023-06-08 17:01:47,443] INFO in flask_app: Traceback getting bot response: Traceback (most recent call last):
  File "/home/ubuntu/projects/InterroLang/flask_app.py", line 190, in get_bot_response
    response = BOT.update_state(user_text, conversation)
  File "/home/ubuntu/projects/InterroLang/logic/core.py", line 892, in update_state
    returned_item = run_action(
  File "/home/ubuntu/projects/InterroLang/logic/action.py", line 48, in run_action
    action_return, action_status = actions[p_text](
ValueError: too many values to unpack (expected 2)
[2023-06-08 17:01:47,444] INFO in flask_app: Exception getting bot response: too many values to unpack (expected 2)
Which config do you use? I get the same error with the FLAN-T5 config and I haven't figured out why this occurs.
I'm using boolq_adapter.gin
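For what it's worth, the `ValueError` in the trace above is the classic symptom of an action handler returning a bare string where `run_action` unpacks a `(text, status)` pair: Python then tries to spread the string's characters across the two variables. A minimal sketch (the function names here are illustrative, not from the repo):

```python
# run_action unpacks each action's return value into (text, status).
# If the action returns only a string, the unpack fails.

def action_without_status():
    # Bare string: unpacking splits it into individual characters
    return "Rationalize operation not enabled"

def action_with_status():
    # The expected (text, status) two-tuple
    return "Rationalize operation not enabled", 1

try:
    action_return, action_status = action_without_status()
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)

action_return, action_status = action_with_status()
print(action_status)  # 1
```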
I think when you use the adapter version, GPT is not initialized, see https://github.com/nfelnlp/InterroLang/blob/70cb65b20922e44fe84315668f9cf0cc991c0fa9/actions/explanation/rationalize.py#L35. It should actually return this string and also a status code, as far as I can see. I don't think we still need this line, since we just read results from the cache (CSV files).
Thanks for the quick reply! Yes, I use the latest version of rationalize.py, but it does not return this string, unfortunately :(
You can try this: return f"Rationalize operation not enabled for {conversation.decoder.parser_name}", 1
Yes, it really needs the return code. Now it works, thanks! I can see the string.
But still, I think these lines might not be necessary for now.
Wait, but this should now return the pre-computed Dolly rationales. Does it not? If this is implemented correctly, we should be able to use this operation regardless of the parsing model.
Yes, that's what I mean. We don't actually need to check if GPT is loaded or not, since we only read results from cache.
I see! Then we'll need to update this code a bit (i.e. remove the line as Qianli suggested), otherwise the adapter version won't get any rationales.
Yes, it works perfectly w/o this line:
Awesome! 🥳 Could one of you push the fix to main, please?
Done
Is this resolved? I'm not sure if the last fix also concerned the reason why this issue was created in the first place.
Input: rationalize the prediction for id 9
Logs: