Hi Samik. Given that this function is marked explicitly as experimental, I am o.k. with changing its default. Just add an entry about this change to NEWS.md. Do the tests in `tests/testthat/test-plotBiomassObservedVsModel.R` not need updating?
Hi Gustav, I have now done this, as well as fixed two failures that were happening in the tests. I have started experimenting with removing the warnings in the tests; let me know if you want me to continue doing this.
Hi Samik. Thanks for doing this. I think that upgrading to edition 3 of the testthat package is a good idea. Please keep going. I'll hold off on merging your changes relating to the `plotBiomassObservedVsModel()` function until you are finished. In future, I think you may want to make pull requests not from your master branch but from dedicated feature branches, with one pull request and one branch for each feature. And while working on a feature on its dedicated branch, make sure you commit only changes related to that feature. That makes the pull requests faster to review and merge.
Hi Gustav,
Thanks for the suggestion; I'll do my best to keep future pull requests on separate, focused branches so that they're easier for you to review.
In the meantime, I'll update as many of the tests as I can to remove the warnings. The one exception is the following warning, which occurs multiple times:

> `expect_known_value()` was deprecated in the 3rd edition. Please use `expect_snapshot_value()` instead.

This is quite a sizeable task, as the saved reference files such as `values/getBiomass` will need to be shifted into the new snapshot format. I will do this in a separate pull request.
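For concreteness, here is a minimal sketch of what the migration looks like for one test; the object name `b` and the exact calls are illustrative rather than the repo's actual code:

```r
library(testthat)

# Before (edition 2): compare against a reference value saved as an .rds file
expect_known_value(b, "values/getBiomass")

# After (edition 3): compare against a text snapshot stored under tests/testthat/_snaps/
expect_snapshot_value(b, style = "serialize")
```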
I'm done for now, Gustav - I'll handle the rest on dedicated branches as requested. You can merge these changes if you're happy.
Hi Samik. I'll merge this as soon as the other warnings are removed as well. I want to keep the ability to run tests and quickly see the relevant warnings without being swamped by irrelevant warnings. When you are done, you can also remove the calls to `local_edition(3)` from the test files that were already using edition 3.
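Those calls should be redundant anyway once the edition is declared package-wide; assuming mizer follows the standard testthat setup, a single line in DESCRIPTION covers every test file:

```
Config/testthat/edition: 3
```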
Hi Gustav - I think this is done now: snapshots have replaced the files that were being used as expected values. No more failures or warnings.
I went through each instance of `expect_known_value` and checked that equal values are currently being generated. For example, from `test-project_methods.R`, line 91, I checked the command `expect_known_value(fl, "values/getFeedingLevel")` by running `expect_equal(fl, readRDS("values/getFeedingLevel"))`, and it was indeed equal.
This is important as these new snapshots will now be the baseline comparison whenever these tests are run. They all passed fine.
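In sketch form, the check for each value was along these lines (the `NS_params` object is just for illustration; each test constructs its own inputs):

```r
library(testthat)
library(mizer)

# Recompute the value the old test checked, then confirm the saved .rds
# reference still matches before the snapshot becomes the new baseline.
fl <- getFeedingLevel(NS_params)
expect_equal(fl, readRDS("values/getFeedingLevel"))
```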
One thing it would be good for you to check is that the `.svg` files created by `vdiffr::expect_doppelganger` work fine - I can't upload these to the GitHub repo; I guess the file type has been excluded from uploads somewhere.
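These are the figure snapshot tests, which look roughly like this (the plot function and title are placeholders, not the repo's actual tests):

```r
library(testthat)
library(mizer)

test_that("plotBiomass has not changed", {
  p <- plotBiomass(NS_params)
  # Renders the plot and compares it against a stored .svg snapshot
  # under tests/testthat/_snaps/
  vdiffr::expect_doppelganger("plotBiomass", p)
})
```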
> Hi Gustav - I think this is done now: snapshots have replaced the files that were being used as expected values. No more failures or warnings.

Thanks. I have now also removed the uncaught messages.
> I went through each instance of `expect_known_value` and checked that equal values are currently being generated. [...] They all passed fine.
Thanks for doing this so carefully. But did you perhaps somehow mess up the creation of the snapshot for `project_methods`? I had to replace it; see https://github.com/sizespectrum/mizer/pull/288/commits/b36b213da8dade9706e6b3d9451dcc335d8db00f. Please run the tests on your machine again after pulling my corrected `project_method.md`.
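A quick way to re-check just that file after pulling (the `filter` argument matches against test file names):

```r
# Run only the tests in test-project_methods.R
devtools::test(filter = "project_methods")
```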
> One thing it would be good for you to check is that the `.svg` files created by `vdiffr::expect_doppelganger` work fine - I can't upload these to the GitHub repo [...]
I am not sure what you mean. The `.svg` snapshots seemed to all be there on GitHub. What is it you think I should check?
Hi Gustav,
I've pulled your corrected `project_method.md`. The `.svg` files uploaded fine, please ignore that comment. I'm happy for you to push these changes now. At some stage, once we're happy with the snapshot method, we can delete the `values` folder, but let's keep it there for now. Many thanks for all your guidance.
Unfortunately the tests don't seem to pass on all platforms. See https://github.com/sizespectrum/mizer/actions/runs/8559843355/job/23461814263#step:6:1075 for example. I guess that edition 3 applies the tests with a lower tolerance, and for the results of longer calculations the rounding errors appear to accrue differently on different platforms.
Hi Gustav, that's a shame. As it only seems to be the "getFeedingLevel for MizerParams" test that fails in `test-project_methods.R`, I've tried just rounding the values to 5 decimal places before snapshotting. If this works, all good; if rounding turns out to be an issue in multiple tests, we'll have to think of a more comprehensive fix.
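A minimal sketch of that workaround (the exact test code is assumed, with `NS_params` for illustration):

```r
# Rounding to 5 decimal places before snapshotting means sub-1e-5
# platform-dependent rounding noise no longer changes the snapshot.
fl <- getFeedingLevel(NS_params)
expect_snapshot(round(fl, digits = 5))
```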
For example, we could use `expect_snapshot_value` (see the testthat documentation) with a higher tolerance, but the problem is that the printout in the .md snapshot file is a lot less nice to read than the one produced by `expect_snapshot`. Let's see if the tests pass on all platforms and take it from there.
This still does not pass all the tests. When you view this pull request, are you able to see the test results, or are they only shown to me as the owner of the repository? Here is an example: https://github.com/sizespectrum/mizer/actions/runs/8562120352/job/23474964900?pr=288
I think using `expect_snapshot_value()` is a good way forward. You must by now be regretting having taken on the project of switching to testthat edition 3.
Hi Gustav. Unfortunately I can't see the test results (across the different platforms like macOS, Windows and Ubuntu) when I view the pull request.
I have updated the tests in `test-project_methods.R` to use `expect_snapshot_value` with a tolerance of 1e-5; let's see if all the tests now pass and take it from there.
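For reference, the calls now look roughly like this (`style = "serialize"` is my assumption for numeric arrays; the `tolerance` argument is per the testthat documentation):

```r
# Snapshot with an explicit tolerance so small cross-platform rounding
# differences no longer fail the comparison.
expect_snapshot_value(fl, style = "serialize", tolerance = 1e-5)
```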
Thanks Samik. The tests have now passed on all platforms.
For the function `plotBiomassObservedVsModel`, using the ratio of biomasses is much more visually useful than the log-log plot, where differences are compressed by the log scale and hard to see. Hence, the ratio should be the default.
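To illustrate the change (the argument name `ratio` is taken from the function's documentation, and `params` stands for a MizerParams object with observed biomasses set):

```r
library(mizer)

# After this change, a plain call shows the ratio of model to observed biomass:
plotBiomassObservedVsModel(params)

# The old default view would now need to be requested explicitly:
plotBiomassObservedVsModel(params, ratio = FALSE)
```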