liquidnya opened this issue 2 years ago
Would you be open to changing up how they're formatted? I think I see how they're working, but having them in interpreted log files makes debugging a bit harder. Happy to take the work on and document as I go
The reason the files are formatted this way is that it makes creating these test cases super easy: all you need to do is connect the bot to a Twitch channel, open Chatty, enter your commands, and when you are done manually testing, copy the log of messages from Chatty into a file. These log files are then replayed, time and timers are simulated, and the bot's output messages are checked. There are also some further checks, e.g. that certain files have certain content. I also tried to add a lot of debugging functionality to the test cases, since I was debugging with them myself. I think it is mostly a lack of documentation on how these tests work, plus a lack of guidance on how to debug using them. But I also agree that all the checks apart from the chat message replay could use some work.
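For reference, a replay log of this kind might look roughly like the lines below. This is only an illustration: the exact line format depends on Chatty's log settings, and the usernames, command, and bot reply shown here are made up.

```
[14:22:03] <some_viewer> !add ABC-DEF-GHI
[14:22:04] <quesoqueue_bot> some_viewer, ABC-DEF-GHI has been added to the queue.
[14:22:10] <the_streamer> !list
```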
> Would you be open to changing up how they're formatted?
@skybloo What would you change to make them better? What makes debugging harder? Is it not being able to set breakpoints? Is there something I can do to make debugging with the log replay easier?
I made it so that if a chat replay message is wrong, both the test log location and the code location where the message was emitted are shown.
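A minimal sketch of how that can work, assuming the simulation swaps the bot's real say function for a recorder. None of these names come from the repo; they are placeholders. The idea is to capture a stack trace at the moment a message is emitted and print it next to the log location when an assertion fails:

```js
// Hypothetical sketch (names are placeholders, not the repo's actual API):
// record the call site of every emitted chat message so a failed replay
// assertion can point at both the test log and the emitting source line.
function createChatRecorder() {
  const messages = [];
  return {
    // Stand-in for the real client's say(); wired in by the simulation.
    say(message) {
      // new Error().stack captures where in the bot code this was emitted.
      messages.push({ message, emittedAt: new Error().stack });
    },
    // Compare the next recorded message against the replay log's expectation.
    expectNext(expected, logLocation) {
      const actual = messages.shift();
      if (actual === undefined || actual.message !== expected) {
        throw new Error(
          `chat message mismatch at ${logLocation}\n` +
            `expected: ${expected}\n` +
            `actual:   ${actual ? actual.message : '<no message emitted>'}\n` +
            `emitted at: ${actual ? actual.emittedAt : '<n/a>'}`
        );
      }
    },
  };
}
```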
> The reason the files are formatted this way is that it makes creating these test cases super easy: all you need to do is connect the bot to a Twitch channel, open Chatty, enter your commands, and when you are done manually testing, copy the log of messages from Chatty into a file.

This was the piece I was missing: how the logs were being generated. I'm personally not a huge fan of "one file reads a bunch of other files", but that's much more a personal preference than any kind of value difference. I'll look at adding some more specific documentation and might write a couple of helpers to generate logs, just because I'd like to be able to do TDD.
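A helper like that could be quite small. The sketch below is hypothetical (nothing in it exists in the repo, and the output path is a placeholder); it assumes the `[HH:MM:SS] <user> message` line format from the example above:

```js
// Hypothetical helper (not part of the repo) for generating replay logs in
// code instead of copying them out of Chatty, e.g. for TDD-style tests.
const fs = require('fs');

function logLine(time, user, message) {
  return `[${time}] <${user}> ${message}`;
}

function buildLog(entries) {
  return entries
    .map(([time, user, message]) => logLine(time, user, message))
    .join('\n');
}

// Example: build a log and write it where the replay tests would pick it up
// (the path here is a placeholder, not the tests' real log directory).
const log = buildLog([
  ['10:00:00', 'viewer1', '!add ABC-DEF-GHI'],
  ['10:00:05', 'viewer2', '!list'],
]);
fs.writeFileSync('tests/logs/generated.test.log', log + '\n');
```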
@skybloo with the merge of #22 into master, I moved the code for setting up a test case into https://github.com/ToransuShoujo/quesoqueue_plus/blob/master/tests/simulation.js.
And I wrote a new test case (https://github.com/ToransuShoujo/quesoqueue_plus/blob/master/tests/data-conversion/data-conversion.test.js) which makes use of the queue programmatically without using the logs.
In that specific test case `HandleMessage` (in `index.js`) is never called, but you could also use it as a template and then call `test.handle_func(message, sender, mockedFunction)` yourself and test whether `mockedFunction` was called with the correct response. In theory we could also set up test cases which only test `queue.js` instead of going through `index.js`.
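A rough sketch of what that mocked-function approach might look like with Jest. Everything beyond the `test.handle_func(message, sender, mockedFunction)` call named above is an assumption: the setup, the shape of `sender`, and the asserted reply text are placeholders, so check `tests/simulation.js` for the real setup.

```js
// Rough sketch, not a drop-in test. `setup` stands in for whatever the
// setup code in tests/simulation.js returns (called `test` above, renamed
// here so it does not shadow Jest's global `test`).
// const setup = ...; // obtain from the setup code in tests/simulation.js

it('responds to a command with the expected reply', () => {
  const respond = jest.fn(); // mock that captures the bot's reply

  // handle_func(message, sender, respond) is the entry point named above;
  // the sender object's shape is an assumption made for this sketch.
  setup.handle_func('!list', { displayName: 'viewer1' }, respond);

  // The asserted text is a placeholder; assert on the bot's real reply.
  expect(respond).toHaveBeenCalledTimes(1);
  expect(respond).toHaveBeenCalledWith(expect.stringContaining('queue'));
});
```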
The log tests are really meant as integration/regression tests, and so far we do not have any unit tests except for `twitch.js`.
This is something I plan on doing eventually: