Closed Chris-hughes10 closed 6 months ago
@Chris-hughes10 I tried running it with OpenAI completions and got this.
Not exactly the same as input, but not any better either. Did you get the exact input as the generated output?
This is the response I get:
Human: Hi there, I'm looking for information about the latest movies. Can you help me out?
MovieBot: Of course! What kind of movies are you interested in?
Human: I'm a big fan of sci-fi and action genres.
MovieBot: Great! I have a few suggestions for you. Have you heard about The Matrix 4?
Human: No, I haven't. Can you tell me more about it?
MovieBot: Sure! The Matrix 4 is an upcoming sci-fi action movie directed by Lana Wachowski. It stars Keanu Reeves, Carrie-Anne Moss, and Yahya Abdul-Mateen II. The plot is still under wraps, but it's expected to continue the story of the original trilogy.
Human: That sounds amazing! When is it coming out?
MovieBot: The Matrix 4 is set to be released on December 22, 2021.
Human: Thank you so much for your help, MovieBot!
MovieBot: You're welcome! Enjoy the movie!
Reading back, it is subtly different.
What puzzles me is that if I set a breakpoint just after the request is made to the OpenAI service (line 241 in `open_ai_chat_completion.py`) and inspect `response.choices[0].message.content`, I see the following response:
The conversation is between a human and a chatbot called MovieBot. The human is looking for information about the latest movies and expresses interest in sci-fi and action genres.
MovieBot suggests some movies that match the human's preferences, including The Matrix 4.
The chatbot provides information about The Matrix 4, including the director, cast, and release date. The human expresses excitement about the movie and thanks MovieBot for the help. No conclusions were reached, but the conversation was informative and helpful.
which is exactly what I'm looking for! Strangely, this is getting overridden somewhere before it is returned. I tried to do some digging to understand where this takes place, but unfortunately I don't understand the internals of SK well enough to do this in a reasonable amount of time. From what I have seen, I am not sure if another request to the service is being made with this as a prompt, which results in a conversation being returned.
Ah I see. Interesting! I will go through the code and see if I can find something.
@Chris-hughes10 I debugged the code, and it turns out you were right. There are multiple calls being made to the completion function. The problem seems to be in `sk_function.py` and/or `code_block.py`. The `_invoke_semantic_async`, `invoke_async`, and `_local_func` methods in `sk_function.py` are called twice as well.
Initially, the summarization prompt is passed correctly. The output received from this call is the actual summary. However, this completion is then assigned to the prompt variable and passed to the completion function again, and input-like text is received back, overwriting the summary.
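The steps above can be sketched with a deliberately simplified model of the control flow (illustrative only; these function names are assumptions, not Semantic Kernel's actual code):

```python
def complete(prompt: str) -> str:
    """Stand-in for the LLM completion call (hypothetical)."""
    if prompt.startswith("Summarize:"):
        return "SUMMARY of the conversation."
    # A non-summarization prompt just echoes conversation-like text back.
    return f"Echo of: {prompt}"

def render_template(template: str, user_input: str) -> str:
    # The code block {{summarize.SummarizeConversation $input}} is executed
    # during rendering, so the correct summary is produced *here*...
    return complete(f"Summarize: {user_input}")

def invoke_semantic(template: str, user_input: str) -> str:
    rendered = render_template(template, user_input)
    # ...but the rendered text (already the summary) is then sent to the
    # completion service a second time, overwriting the correct result.
    return complete(rendered)

result = invoke_semantic("{{summarize.SummarizeConversation $input}}",
                         "Human: Hi\nMovieBot: Hello")
print(result)  # "Echo of: SUMMARY of the conversation." -- not the summary
```

This reproduces the symptom reported: the first call yields the summary, and the second call turns it back into conversation-like text.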
I'm having a hard time understanding the control flow here because so many coroutines are being created, but I'll keep at it. Meanwhile, I'd appreciate any feedback or suggestions on how to go about fixing this issue. Also, I think we should investigate whether this happens with other skills as well!
Upon further testing, I found that the same issue reproduces for other core skills as well (the text trim skill from `text_skill.py`, for example). It seems that a prompt written in code, like `{{summarize.SummarizeConversation $input}}`, is first converted to a natural-language prompt in `code_block.py`. However, this file also calls another instance of the `invoke_async` method, which performs the actual summarization after fetching the template prompt and returns the completion. But this completion is received by the prompt variable in the previous call instance and is sent to the completion method again, leading to the result we see here.
Maybe there needs to be some change in `code_block.py`.
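As a hedged illustration only, here is one possible shape such a change could take, shown in a simplified, self-contained model (none of these names are Semantic Kernel's real API): if the template consists solely of a single code block, the rendered text is already a model completion and could be returned directly instead of being re-submitted.

```python
def complete(prompt: str) -> str:
    """Stand-in for the LLM completion call (hypothetical)."""
    if prompt.startswith("Summarize:"):
        return "SUMMARY of the conversation."
    return f"Echo of: {prompt}"

def render_template(template: str, user_input: str) -> str:
    """Executes {{...}} code blocks while rendering; here, the summary call."""
    if "SummarizeConversation" in template:
        return complete(f"Summarize: {user_input}")
    return template.replace("$input", user_input)

def invoke_semantic(template: str, user_input: str) -> str:
    rendered = render_template(template, user_input)
    t = template.strip()
    # If the template was nothing but a single code block, rendering has
    # already produced a model completion; return it rather than sending
    # it to the completion service a second time.
    if t.startswith("{{") and t.endswith("}}"):
        return rendered
    return complete(rendered)

result = invoke_semantic("{{summarize.SummarizeConversation $input}}",
                         "Human: Hi\nMovieBot: Hello")
print(result)  # "SUMMARY of the conversation."
```

This is only a sketch of the idea (avoid the second completion call when rendering already invoked the model); the real fix would need to live in the actual template-rendering path in `code_block.py`/`sk_function.py`.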
We're closing this issue as it is related to code that no longer exists. The latest beta release involved a major overhaul of the code. If you continue to experience an issue related to this plugin, please file a new GitHub issue, thanks!
Describe the bug
When using the conversation summary skill, no summarisation takes place and the input is returned. Using the debugger, I stepped through various methods and verified that the correct result is reached (and stored in the context), but then it seems that another request is made and this result is overridden by the input before being returned.
I have verified this with both chat and text completion services.
To Reproduce
Expected behavior
A summary is returned.
Platform
Additional context