pmalacho-mit opened 9 months ago
So currently Storybook doesn't have the AI responding back to us yet. How should we test this?
Great thinking @shengh318 ! I think probably the best way to do that is to use some kind of library / add-on for Storybook to mock `fetch`, so that in the Storybook "stories" we can effectively intercept the requests to the `addToDB` and `ask` endpoints.
This add-on looks promising! https://storybook.js.org/addons/storybook-addon-mock Could you investigate it?
Looking through the documentation, it looks pretty straightforward -- the examples are written in/for React, but it looks like you'll just be adding a property to the `parameters` field of the `meta` object, like here:
https://github.com/mitmedialab/chat-tutor-embeddable/blob/main/src/stories/AddMessage.stories.ts#L8
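For reference, here's a minimal sketch of what that `parameters` entry might look like for one of our stories — the endpoint paths and response shapes below are assumptions for illustration, not the project's actual API:

```typescript
// Hypothetical sketch of a storybook-addon-mock setup for our stories.
// The "/ask" and "/addToDB" paths and the response payloads are assumptions.
const meta = {
  title: "ChatTutor/AddMessage",
  parameters: {
    // storybook-addon-mock reads mocked requests from this array
    mockData: [
      {
        url: "/ask",        // assumed endpoint path
        method: "POST",
        status: 200,
        response: { content: "Mocked tutor reply" },
      },
      {
        url: "/addToDB",    // assumed endpoint path
        method: "POST",
        status: 200,
        response: { ok: true },
      },
    ],
  },
};

export default meta;
```

Each entry intercepts one request, so a story can exercise both endpoints without a live backend.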
Yep. I will see what this addon is about!
Amazing, thanks @shengh318 !!
**Questions**

Should we mock the `askChatTutor` endpoint, or the `askChatTutor` function?

Oh, I also experimented with storybook-addon-mock and came across a bunch of problems and concerns:
- The mock package does not support CORS.
- The mock package does not fully support Storybook v7.
- Couldn't find a way to delay responses with the mock package so that we can see the 3 spinning dots when the Tutor is "thinking".

**Current Solution**
I just created my own local Flask (Python) server and routed all the traffic there instead. This way we can decide what content to send, and how long to wait (using the Python `time` package) before we send the actual JSON response.
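Here's roughly the idea — a minimal sketch of such a Flask mock server. The route paths, payloads, and delay value are made up for illustration, not the actual code:

```python
# Minimal sketch of a local Flask mock server for the chat tutor endpoints.
# The "/ask" and "/addToDB" routes and response shapes are assumptions.
import time

from flask import Flask, jsonify

app = Flask(__name__)

FAKE_DELAY_SECONDS = 2  # long enough to watch the "thinking" dots animate


@app.route("/ask", methods=["POST"])
def ask():
    # Simulate the model "thinking" before replying
    time.sleep(FAKE_DELAY_SECONDS)
    return jsonify({"content": "Mocked tutor reply"})


@app.route("/addToDB", methods=["POST"])
def add_to_db():
    return jsonify({"ok": True})


if __name__ == "__main__":
    app.run(port=5000)
```

During development, the frontend's fetch base URL would just point at this server instead of the real backend.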
Let me know what you think!
Gotcha! Let me play around with the addon today!
It'd be nice to include some visual indicators for the user while they are waiting on a response.
Perhaps as soon as a message is sent, a message from "Assistant" should pop up, but with the animating dots in its message (perhaps these are displayed when content is null or the empty string): https://nzbin.github.io/three-dots/
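To make that concrete, here's a hypothetical sketch of the check — the `Message` shape and function name are assumptions, and the actual dots would come from the three-dots CSS library linked above:

```typescript
// Hypothetical message shape; the real type may differ.
interface Message {
  role: "user" | "assistant";
  content: string | null;
}

// Show the animated dots for an assistant message that has no content yet
// (i.e. content is null or the empty string, as suggested above).
function showThinkingDots(message: Message): boolean {
  return (
    message.role === "assistant" &&
    (message.content === null || message.content === "")
  );
}
```

The chat component could then render something like `<div class="dot-flashing" />` in place of the message text whenever this returns true, and swap in the real content once the response arrives.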