Closed jrudoler closed 1 year ago
@jrudoler, thank you for the feedback 🙂! Did you try re-running the %%explain
magic function?
I followed the steps to reproduce the error, but couldn't reproduce it. Here are a few screenshots of the responses:
the first is without any error in the code, and the second is after removing a colon from the for loop under the train
function.
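For context, the introduced bug is a syntax error, which Python reports at compile time. The sketch below is a hypothetical snippet resembling the tutorial's training cell (the function body and names are assumptions, not the actual notebook code); it shows the kind of error %%explain is being asked to catch:

```python
# Hypothetical stand-in for the notebook's training cell; names and body
# are illustrative assumptions, not the actual tutorial code.
train_ok = """
def train(model, data, n_epochs):
    losses = []
    for epoch in range(n_epochs):
        losses.append(0.0)  # placeholder for the real update step
    return losses
"""

# The same code with the colon removed from the for loop, mirroring the
# bug introduced in the report.
train_broken = train_ok.replace("for epoch in range(n_epochs):",
                                "for epoch in range(n_epochs)")

compile(train_ok, "<cell>", "exec")  # compiles without complaint

try:
    compile(train_broken, "<cell>", "exec")
except SyntaxError as e:
    print("SyntaxError:", e.msg)  # this is the bug the tutor should flag
```

Since the interpreter itself pinpoints this error, a response like "What's the issue?" from the tutor is clearly a miss rather than an ambiguous case.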
Please note that responses from LLMs are not always accurate.
Yep, it's still producing the same error for me (after starting from scratch multiple times):
I often get some variation of "What's the issue?" or "What seems to be the problem?".
By repeatedly clicking "submit request" I can sometimes get it to actually produce a response that catches the bug.
My sample size is small, but the unhelpful response came up in roughly 4 out of 5 attempts.
So the response discrepancy seems to stem from the GPT side, which we cannot directly control. Nonetheless, we will keep this in mind when updating the prompts in future releases. I will close this for now.
Describe the bug
Chatify politely asks the user to clarify what it can help debug, but as far as I can tell it is not possible to interact with the LLM in a back-and-forth chat (the widget has no text entry / chat box, at least).
To Reproduce
Steps to reproduce the behavior: use the %%explain magic on the cell that begins with
# @markdown Execute this cell to train the network
and then defines the DeepNetReLU class.

Expected behavior
I expected the Robo-tutor to give me a concrete response, either pointing out a problematic line of code or stating that the code is error-free.
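For readers without the notebook at hand, here is a minimal plain-Python sketch of what a DeepNetReLU-style class might look like (the actual tutorial uses PyTorch; the layer sizes, names, and initialization here are assumptions for illustration): a single hidden layer followed by a ReLU nonlinearity and a scalar output.

```python
import random

class DeepNetReLU:
    """Illustrative one-hidden-layer ReLU network (not the tutorial's code)."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = random.Random(seed)
        # Small random weights for the input->hidden and hidden->output maps.
        self.w_in = [[rng.gauss(0.0, 0.1) for _ in range(n_inputs)]
                     for _ in range(n_hidden)]
        self.w_out = [rng.gauss(0.0, 0.1) for _ in range(n_hidden)]

    def forward(self, x):
        # hidden = ReLU(W_in @ x); output = w_out . hidden
        hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w_in]
        return sum(w * h for w, h in zip(self.w_out, hidden))

net = DeepNetReLU(n_inputs=3, n_hidden=4)
print(net.forward([1.0, -2.0, 0.5]))
```

The point of the reproduction is only that a cell defining a class like this, with a colon deliberately removed from a loop, should elicit a concrete diagnosis from the tutor.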
Screenshots