Closed — jranalli closed this pull request 2 years ago
FYI it's actually possible, if you implement testing this way, to have Github run the tests all the time to make sure that your code (and especially code that others wish to contribute) is continually passing the tests. See here: https://www.freecodecamp.org/news/what-are-github-actions-and-how-can-you-automate-tests-and-slack-notifications/#what-are-github-actions
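A minimal sketch of such a workflow (the file path, Python version, and install command are illustrative assumptions, not taken from this repository):

```yaml
# .github/workflows/test.yml -- minimal CI sketch; details are illustrative.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install the package plus pytest, then run the test suite.
      - run: pip install . pytest
      - run: pytest
```

With this in place, GitHub runs the suite on every push and pull request, so contributed changes get checked automatically.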
This is a good idea. It's been on my to-do list for ages to write a more robust test suite. I'd bet we can adapt this to a more streamlined set of functions. I'm going to leave this pull request open; I'll come back to it for the next major release.
Just a point to consider as to streamlining. The philosophical purpose of testing is to put a box around what the code does, so that you know if something you changed broke one of the outputs. For example, the reimplementation of d_s led to a failure in the supercritical cases, but not subcritical. As such, my understanding is that the idea is to very explicitly test one piece of functionality per test function, rather than trying to figure out efficient ways to write the tests. The testing output (in this case pytest, but others are similar) is actually harder to interpret when you have multiple asserts within a single function. So arguably, you could even say that what I wrote is already too condensed.
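The one-piece-of-functionality-per-test idea might be sketched like this; `d_s` here is a stand-in for the real property routine, and the values are illustrative, not PYroMat results:

```python
# Hypothetical stand-in for the real d_s() property method.
def d_s(T, supercritical=False):
    return 2.0 if supercritical else 1.0

def test_d_s_subcritical():
    # One behavior per test: a failure here points directly at the
    # subcritical branch.
    assert d_s(300.0) == 1.0

def test_d_s_supercritical():
    # A separate function isolates the supercritical branch, so a
    # regression like the d_s reimplementation shows up as exactly one
    # named failing test in the pytest report.
    assert d_s(700.0, supercritical=True) == 2.0
```

Had both asserts lived in one function, pytest would report a single failure and stop at the first assert, hiding which branch actually broke.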
Your code really has two separate testing needs, and I don't know whether they ought to be split up (I just put them together in one file here). The more fundamental software concern is testing that the methods behave, even with variable input types; that's how I identified a couple of these bugs. The other is scientific validity relative to the NIST reference.
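The two kinds of test might look like this; `T_from_p` and its values are hypothetical placeholders, not the mp1 API or NIST data:

```python
import numpy as np

# Hypothetical property function standing in for an mp1 method.
def T_from_p(p):
    return np.asarray(p) * 2.0

def test_scalar_and_array_inputs_agree():
    # Software-behavior test: scalar and array inputs give consistent
    # results, regardless of how the caller passes the data in.
    scalar = T_from_p(1.0)
    array = T_from_p(np.array([1.0, 2.0]))
    assert float(scalar) == float(array[0])

def test_matches_reference_value():
    # Scientific-validity test: compare against a tabulated reference
    # value within a tolerance (the 4.0 here is illustrative, not a
    # NIST number).
    assert abs(float(T_from_p(2.0)) - 4.0) < 1e-9
```

Keeping the two concerns in separate tests means a type-handling bug and a thermodynamic-model error fail independently, which makes the report easier to act on.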
I'm pretty familiar with the philosophy of testing. I've had three incarnations of the _test() method that take the data through a series of "truth" values. The new versions are not yet finished (except on the ig class, which has embedded "truth" values in the data set), so version 2.1.0 was pushed with point-by-point validation against source data and 1000-element random value array generation at the command line. Still, as you're pointing out, the new version has had bugs.
Chris
Adjusting this by adding a separate branch with tests.
Creates a test package requiring pytest. You can use it to determine whether you've fixed the bugs in mp1. It currently has about 70% coverage, which could be improved by writing more tests.