missyschoenbaum opened 6 years ago
We can leave this open as long as we have new items coming through the pipeline.
Any Django tests written will be committed to this ticket as well as selenium tests.
I'm jumping into the Django test suite first - I've never done this before, so there's a bit of a learning curve here. There are 5 existing tests that fail and 12 existing tests that have errors. This may not be cause for alarm - some of these tests have not been updated in a long time, and our intentions for how different components of ADSM work have changed over time. @BryanHurst, you have a better grasp than I do on the history of ADSM, so you should make the call on these. This is going to be a wall of text unfortunately, but I'm really trying to give all the needed information here so you don't have to go digging through the code yourself.
tests_parser_load_utf16, test_parser_load_utf8, and test_parser_multiple_herds. All three of these tests are failing for the same reason: 'unit_id' was not included in the expected results. It also looks like 'user_notes' is not being parsed. 'user_notes' doesn't serve any logistical purpose, so maybe the expected behavior is that that field is not parsed? If so, then it should not be included in expected_results.
This failure is coming from an instance ID test: expected is 1 and actual is 25. IDs are intended to be unique, and usually aren't static. I think this test is failing because when it was first written "Free Range Cows" was the first production type object and actually did have an ID of 1 - but now that more tests have been written, the ID has changed. An interesting note here is that with a fresh test database this test actually passes, which means ProductionTypes with smaller IDs are being added after "Free Range Cows". Not quite sure why this is happening, but I don't think we need to worry about it. If it were up to me, the test case would be deleted (or suppressed).
ResultsVersion is being tested to see if it is a singleton model, which it isn't. I did some sleuthing and found that Josiah intended it to be a singleton model (#350); it was added to the singletons list for a short time but was later removed with the commit message "Now correctly showing Simulation version in Results again" (this was on 9/14/2015). Unfortunately a ticket number wasn't given, and I couldn't track down a ticket that I felt confident was tied to whatever problem was being fixed. I think this is just a bad test case and it should be deleted. I'm not sure why ResultsVersion was made a singleton only to later be removed, but it is working just fine as is.
I have a number of test cases that are failing because I don't have Roundtrip.db, which these tests depend on. They might actually work if you had said database, but I didn't even find it in the Google Drive - so I'm not sure.
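Until the database turns up, one hedged option would be to skip these tests when the file is missing rather than letting them error. A minimal sketch - the path and class name here are assumptions, not the real test layout:

```python
import unittest
from pathlib import Path

# Assumed location; the real tests may look for Roundtrip.db elsewhere.
ROUNDTRIP_DB = Path('workspace') / 'Roundtrip.db'

@unittest.skipUnless(ROUNDTRIP_DB.exists(), 'Roundtrip.db not available')
class RoundtripTests(unittest.TestCase):
    def test_placeholder(self):
        # The real tests would open the database here.
        self.assertTrue(ROUNDTRIP_DB.exists())

suite = unittest.TestLoader().loadTestsFromTestCase(RoundtripTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

That way the suite reports the tests as skipped (with a reason) instead of erroring, and they start running again automatically once the database is in place.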
db_status_tags.py was simply not being imported to the test file. Adding the import fixes the problem (see next commit).
I think this is just a setup error. The test is looking for a RelationalFunction with the name of "Prevalence", but it can't find it. I don't see where it is being created, which does explain why it can't find it. I'll look into fixing it.
OK that should be everything. Bryan any input you can give would be appreciated.
In regards to "1 error from tests_delete_links_exist
I think this is just a setup error. The test is looking for a RelationalFunction with the name of "Prevalence", but it can't find it. I don't see where it is being created, which does explain why it can't find it. I'll look into fixing it."
You probably just need some function, and prevalence was common. Do I need to give you an example prevalence function? @ConradSelig
@ConradSelig In regards to 5 errors relating to Roundtrip.db
Roundtrip.db is not something I have ever tested with.
@ConradSelig In regards to 3 failures from the Population Parser: Originally, unit_id was a system-generated ID. However, users could provide a unit_id or a logical herd_id. That user-provided ID didn't mean anything to the system UNTIL we found out that the supplemental outputs use that user-provided ID to describe each herd in the output. Therefore, we had to turn it into a stable ID that we could follow through the system. User Notes was designed to catch any extra stuff the user may have had in their file. The idea is that if the state provided you with a list, they probably had some other notes that would be meaningful to you at some point.
@missyschoenbaum I have a prevalence function I can use, I just need to figure out how to get it to plug into the test properly.
I actually found Roundtrip.db in my workspace folder! Apparently it comes with ADSM. I think the tests are just using an outdated path - I'll see if I can get it corrected.
Does user_notes have any effect on unit_id? My impression was that it doesn't, but in the tests part of the expected results is this line: "'user_notes': 'UnitID=1'". Does that actually force a unit_id of 1? Or are we just ensuring that unit_ids are being assigned in order of unit creation?
Just for reference
On a closer look, there is a function called "convert_user_notes_to_unit_id()" that does appear to be running. It looks like it's only finding IDs in the user notes if the user wrote "id" (both letters lowercase), not "ID", which is what the test cases are using.
Do we want this function to match an uppercase "ID"? I can fix the function, or the test cases to match how it's actually working. @missyschoenbaum your call.
@ConradSelig I think we should make it match id, ID and Id if possible in the app
@ConradSelig The test should match what case the "convert_user_notes_to_unit_id()" function is using.
@missyschoenbaum @BryanHurst I got both answers from the two of you. Do we want to change how user notes are being matched or not?
Regardless, I have a few concerns about what user_notes is doing. Given this herd: Would we expect the ID to change to '2'? When I run this through the parser, the herd that comes out doesn't even have a "unit_id" field, which doesn't seem right to me at all.
I will always defer to Bryan on technical questions.
My main concern is: if you feed in something with an identifier that has meaning to you (TX001 is the latent herd), then in the Supp Outputs you are able to figure out who came shooting out the other end of the simulation (which would include TX001 as an exposure). This ventures into how Neil output things from the C Engine.
The issues related to this are https://github.com/NAVADMC/ADSM/issues/879 and https://github.com/NAVADMC/ADSM/issues/916 if knowing some of the work around that function helps.
convert_user_notes_to_unit_id is a more recent addition to the program.
I think you could probably safely make the regex be case insensitive.
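A sketch of what that change would look like: re.IGNORECASE makes the literal match accept "id", "ID", and "Id" alike. The pattern below is an assumption for illustration - the real regex inside convert_user_notes_to_unit_id() may differ:

```python
import re

# Assumed pattern; the real one in convert_user_notes_to_unit_id()
# may differ. re.IGNORECASE matches "id", "ID", and "Id" alike.
UNIT_ID_PATTERN = re.compile(r'\bunit\s*id\s*=\s*(\w+)', re.IGNORECASE)

def extract_unit_id(user_notes):
    """Return the ID embedded in user notes, or None if absent."""
    match = UNIT_ID_PATTERN.search(user_notes)
    return match.group(1) if match else None

print(extract_unit_id('UnitID=1'))        # the casing the test fixtures use
print(extract_unit_id('unit id = TX001'))
print(extract_unit_id('no identifier here'))
```

With the flag in place the existing test fixtures ("UnitID=1") would match without changing their casing.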
OK I see the problem here, convert_user_notes_to_unit_id() is ONLY called when importing scenarios, NOT populations.
I think we would like both, yes? I can add this function into the population parser somewhere, and with the tests we already have in place we can be quite sure we haven't broken the population importing process.
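A rough sketch of the hook, with hypothetical names standing in for the real parser internals (the actual convert_user_notes_to_unit_id() in ADSM may have a different signature):

```python
import re

def convert_user_notes_to_unit_id(unit):
    # Hypothetical stand-in for the real ADSM function: pull an
    # "id=..." value out of user_notes and promote it to unit_id.
    match = re.search(r'id\s*=\s*(\w+)', unit.get('user_notes', ''),
                      re.IGNORECASE)
    if match:
        unit['unit_id'] = match.group(1)
    return unit

def parse_population(raw_units):
    # Hypothetical population-parser loop: run the same conversion on
    # every unit at import time, mirroring the scenario-import path.
    return [convert_user_notes_to_unit_id(dict(u)) for u in raw_units]

units = parse_population([{'user_notes': 'UnitID=7'},
                          {'user_notes': 'grass-fed herd'}])
```

Units whose notes carry no identifier pass through untouched, so plain population imports should behave exactly as before.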
Yes, to checking on both. It does run somewhere else, because I see the message frequently. Maybe it runs when you open a scenario?
It's only called when importing legacy scenarios or opening scenarios, so you would see it every time you open a scenario. Sorry, I didn't make that totally clear.
OK that should fix everything mentioned above.
I did find a bug in the Population Parser that I went ahead and fixed. I did a good amount of testing of my own, importing populations of both types and both small and large, loading scenarios, etc. Regardless, someone else should look over it and make sure nothing is broken.
I'm going to move on to writing some new test cases for parts of ADSM that don't currently have any. @BryanHurst would you happen to know of any components that are not already being tested? Anything you can give me would save some time.
With this being an ongoing item that needs improvement, moving to the Backlog.
Update the Selenium test suite