pillargg / pillar_algos

Finds best timestamps to cut at
https://docs.pillar.gg/pillar_algos/
GNU General Public License v3.0

Unit Tests #17

Closed chand1012 closed 3 years ago

chand1012 commented 3 years ago

I think it would be beneficial to write unit tests for the algorithms with test data. These tests will run on push to any branch and will run on any pull requests before merge. This will ensure that all functionality remains intact and any new functionality works as intended.
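The "run on push to any branch and on pull requests" part could be wired up with a CI workflow; a hypothetical GitHub Actions sketch (the file path `.github/workflows/tests.yml`, action versions, and Python version are assumptions, not from this repo):

```yaml
# .github/workflows/tests.yml -- hypothetical workflow sketch
name: Unit Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - run: pip install pytest
      - run: pytest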

I don't think the unit tests have to be super complex (@kjv13 would disagree with me) but I think just writing one test for each algorithm and having us write unit tests for any new algorithms would make future testing and changes easier to manage.

pomkos commented 3 years ago

Yeah that's in progress. Never done testing before, so had to learn how yesterday. I'll work on creating a test for each algo today.

chand1012 commented 3 years ago

I highly recommend using pytest, it's what we use on our other projects and is quite simple to set up.
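For context, a pytest test is just a function whose name starts with `test_` containing plain `assert` statements; running `pytest` discovers and runs every such function. A minimal sketch with a made-up stand-in function (`score_chunk` is illustrative, not a real pillar_algos name):

```python
# test_example.py -- minimal pytest sketch; score_chunk is a toy
# stand-in for an algorithm, not from the actual repo.

def score_chunk(messages):
    """Toy 'algorithm': score a chat chunk by how many messages it has."""
    return len(messages)

def test_score_chunk():
    # pytest collects any test_* function and reports failed asserts
    assert score_chunk(["hi", "pog", "lol"]) == 3
    assert score_chunk([]) == 0
```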

pomkos commented 3 years ago

Yeah that's the conclusion I came to too haha. I just went over the basics of unittest, and then how pytest simplifies things. 100% committed to pytest at this point.

pomkos commented 3 years ago

unit_testing branch is being used for this

Commit 52a3084 added algo testing (rough), 4f21209 added test for data_handler.py. Forgot to reference issue during commits.

Algo testing is harder to do. Right now it's just asserting that all future results match the result I got with today's version of the algos, which isn't exactly foolproof.

Not sure how else to do it though, short of rewriting the algos by hand and using the results from that as the "official answer", then asserting all future results against it. Ideas?
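What's described above is essentially snapshot (or "golden file") testing: capture the output of one trusted run, store it, and assert that future runs reproduce it. A rough sketch with hypothetical names (`run_algo` and the stored values are invented, not from the repo):

```python
def run_algo(data):
    # Placeholder for a real algorithm; here it just sorts timestamps.
    return sorted(data)

# Captured once from a run we trust; in practice this would live in a
# checked-in JSON file next to the tests rather than inline.
SNAPSHOT = {"input": [31.5, 2.0, 17.25], "output": [2.0, 17.25, 31.5]}

def test_matches_snapshot():
    # Any behavior change, intended or not, fails this test; if the
    # change is intended, the snapshot is regenerated deliberately.
    assert run_algo(SNAPSHOT["input"]) == SNAPSHOT["output"]
```

The weakness pomkos points out is real: the snapshot only pins down current behavior, not correctness. Its value is catching *unintended* changes.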

pomkos commented 3 years ago

@chand1012 @RusseII @gatesyp thoughts?

chand1012 commented 3 years ago

I was thinking that we have a data-set we test with and record what the outputs should be. When the unit tests run, they would run the algorithms on that data-set (or sets, no harm in doing it multiple times) and assert that the results match the known outputs, or fall within a margin of error. If any adjustments are made to an algorithm, the known outputs should be updated to match. The main reason we want these tests is to prevent unwanted side-effects.
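The "within margin of error" comparison could look like this sketch; the expected values and `run_algo` are invented, `math.isclose` is stdlib (pytest's own `pytest.approx` is the more idiomatic choice inside a pytest suite):

```python
import math

# Invented expected outputs for one known dataset; real values would be
# recorded from a trusted run of the algorithm.
EXPECTED = [12.5, 47.0, 93.25]

def run_algo():
    # Placeholder returning results that drift by floating-point noise.
    return [12.500000001, 47.0, 93.25]

def test_within_tolerance():
    result = run_algo()
    assert len(result) == len(EXPECTED)
    for got, want in zip(result, EXPECTED):
        # Compare with a relative tolerance instead of exact equality.
        assert math.isclose(got, want, rel_tol=1e-6)
```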

pomkos commented 3 years ago

Right, but my question is how to determine an objective measure of the "correct" answer without running those datasets through the og algo to get the answer.

pomkos commented 3 years ago

I think we can close this issue now. So far I have unit tests for all algos and helpers, and I'll create new unit tests as new features are added.