Hey @Rosejoycrocker
I've updated the tests and the single scenario example, which apparently was not working since I put in the mimic_IPMF option (whoops).
I'm down to ~~four~~ three failing tests, and they look to be related to a matrix size mismatch in the DMCDA. Could you take a look?
As a reminder, you can run the tests yourself with runtests("tests")
================================================================================
Error occurred in testMCDABehavior/TestingRandomisedSites and it did not run to completion.
---------
Error ID:
---------
'MATLAB:subsassigndimmismatch'
--------------
Error Details:
--------------
Unable to perform assignment because the indices on the left side are not compatible with the size of the right side.
Error in ADRIA_DMCDA (line 456)
rankings(:,3) = rankingsin(:,3);
Error in testMCDABehavior (line 50)
[prefseedsites, prefshadesites, nprefseedsites, nprefshadesites] = ADRIA_DMCDA(dMCDA_vars, 1, sslog, prefseedsites, prefshadesites, rankings);
================================================================================
.
================================================================================
Error occurred in testMCDABehavior/TestingZeroSites and it did not run to completion.
---------
Error ID:
---------
'MATLAB:subsassigndimmismatch'
--------------
Error Details:
--------------
Unable to perform assignment because the size of the left side is 26-by-1 and the size of the right side is 0-by-1.
Error in ADRIA_DMCDA (line 456)
rankings(:,3) = rankingsin(:,3);
Error in testMCDABehavior (line 78)
[prefseedsites, prefshadesites, nprefseedsites, nprefshadesites] = ADRIA_DMCDA(dMCDA_vars, 1, sslog, prefseedsites, prefshadesites, rankings);
================================================================================
..
Done testMCDABehavior
__________
Running testMetrics
Loading all coral types
.
Done testMetrics
__________
Running testResultMatch
.
Done testResultMatch
__________
Running testSamplingConversion
.....
Done testSamplingConversion
__________
Running testScatterGather
Loading all coral types
================================================================================
Error occurred in testScatterGather/testScatterGather and it did not run to completion.
---------
Error ID:
---------
'MATLAB:unassignedOutputs'
--------------
Error Details:
--------------
Error using coralScenario (line 268)
Output argument "nprefshadesites" (and maybe others) not assigned during call to "ADRIA_DMCDA".
Error in runCoralADRIA (line 91)
parfor i = 1:N
Error in ADRIA/run (line 320)
Y = runCoralADRIA(interv, crit, coral, obj.constants, ...
Error in testScatterGather (line 50)
Y_true = ai.run(p_sel, sampled_values=true, nreps=num_reps);
================================================================================
.
Done testScatterGather
__________
Failure Summary:

    Name                                      Failed  Incomplete  Reason(s)
    =======================================================================
    testMCDABehavior/TestingRandomisedSites    X       X          Errored.
    -----------------------------------------------------------------------
    testMCDABehavior/TestingZeroSites          X       X          Errored.
    -----------------------------------------------------------------------
    testScatterGather/testScatterGather        X       X          Errored.
ans =

  1×14 TestResult array with properties:

    Name
    Passed
    Failed
    Incomplete
    Duration
    Details

Totals:
   11 Passed, 3 Failed (rerun), 3 Incomplete.
   6.381 seconds testing time.
Hey @ConnectedSystems, sure I'll have a look
Hey @ConnectedSystems, so I'm looking at the testMCDABehavior issue and it seems to be because the input rankings include 27 sites (1:length(strongpred)) while the output is based on 26 sites (as 'site_ids' is defined as [1:26]). The code assumes the size of rankings will not change (I thought sites that were filtered out were given rank = 0?)... is this an OK assumption?
Filtered out sites from siteRankings()?
It now returns nsites + 1, as discussed in our zoom chat the other day (yesterday?), to indicate sites that are not considered.
e.g., if Site 1 is never considered, it will have a constant score of 27, as opposed to 0 previously.
This explains strongpred having 27 sites in it!
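(For illustration, a minimal sketch of that convention with made-up values, not the actual siteRankings() code:)
nsites = 26;
rankings = (nsites + 1) * ones(nsites, 1);  % default: "never considered" sentinel (27)
considered = [3 7 12];                      % hypothetical sites that were actually ranked
rankings(considered) = [1, 2, 3];           % real ranks overwrite the sentinel
% Unconsidered sites previously carried rank 0; they now sit at nsites + 1.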
So the size of rankings (which was originally just an output of ADRIA_MCDA) can change? This is without calling siteRankings, just the ranking output that you added at the end of ADRIA_MCDA. If so, I'll try and work out how to integrate this tomorrow; I have to head off to something now.
So the size of rankings (which was originally just an output of ADRIA_MCDA) can change?
It shouldn't - strange that it's picking up "27" sites then if it's not using siteRankings. Just realised the number of sites shouldn't change in there either, it's just the maximum (or lowest score really) possible...
I'll have a look when I can tonight too.
Some notes for you tomorrow:
The tests were using the older "MooreTPMean.xlsx" connectivity file, but recent changes to siteConnectivity() mean this is no longer usable due to differences in formatting.
Updating to the newer moore_d2_2015_transfer_probability_matrix_wide.csv file leads to a different issue, though.
Somehow NaNs get into the predprior variable (L 67 in ADRIA_DMCDA.m), which ultimately leads to:
K>> predec(predprior, 3)
Index in position 1 is invalid. Array indices must be positive integers or logical values.
In other words, "you can't use NaNs as an index".
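(A quick illustration of that failure mode, with made-up values standing in for the real predprior contents:)
predprior = [2; NaN; 5];
predec = zeros(26, 3);
% predec(predprior, 3)                       % errors: indices must be positive integers or logicals
predec(predprior(~isnan(predprior)), 3)      % works once the NaNs are dropped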
Is it that we need to update the dMCDA_vars variable inside testMCDABehavior.m, or is it a general issue with ADRIA_DMCDA()?
I suspect the latter, as I run into this error when running the sampled_runs and batch_runs example scripts.
The single_scenario example runs fine with default values (with the exception of Guided being set to 1).
UPDATE:
I can trigger the error manually with single_scenario by setting ShadeTimes and/or SeedTimes to 25 (the uppermost accepted value).
PS: Suggest changing these to ShadeFrequency or similar. "times" makes me think a value of 5 = "let's intervene 5 times".
Hey @ConnectedSystems, would you be able to push the version with the correct connectivity data?
I think I might have found the issue: I assumed MATLAB would allow the input nprefshadesites and nprefseedsites to become outputs if unassigned in the function (i.e. if site selection was not occurring that year), but it looks like I may have to have a line that assigns nprefseedsites (new) = nprefseedsites (old) even when the vector doesn't change. If you direct me to the connectivity data, I'll change this.
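(For reference, a minimal sketch of the MATLAB behaviour being described here - hypothetical function and argument names, not the actual ADRIA_DMCDA signature:)
function [prefseedsites, nprefseedsites] = selectSites(prefseedsites_in, nprefseedsites_in, do_selection)
    if do_selection
        % ... real site-selection logic would update these ...
        prefseedsites = prefseedsites_in;
        nprefseedsites = numel(prefseedsites);
    else
        % Without these explicit assignments, MATLAB raises
        % 'MATLAB:unassignedOutputs' whenever this branch is taken.
        prefseedsites = prefseedsites_in;
        nprefseedsites = nprefseedsites_in;
    end
end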
Sorry Rose, thought I did but I must have gotten distracted when finishing up last night. It's up now.
Thanks, I'll pull and see if my changes do the trick :)
it looks like I may have to have a line that assigns nprefseedsites (new) = nprefseedsites(old) even when the vector doesn't change
Oh yeah, I forgot about that as well. It looks clunky but yeah, you need that nprefseedsites = nprefseedsites;
☹️
Suggest putting a comment explaining why it's there too, because future Rose and Takuya during a cleanup might go "ha, what's this for? It just reassigns itself - we don't need that" and remove it.
Good point, I can see myself doing that :'D
So all tests are passing now except scatter-gather, so I'll look into that.
Awesome, I'll add a test to check the edge values for Seed/Shade Times
So the issue seems to be the final test in the scatter-gather testing, which results in the error message 'results were all zeros'. I'm a little confused because I checked scattered with all(all(scattered(:, :, 1, 1))) and it is logical(1), so it's all non-zero. Do you know what might cause this?
All done on my side.
I've also adjusted the variable name from SeedTimes to Seedfreq etc. to match the original title case naming convention in interventionDetails (we should standardize at some point to reduce risk of confusion).
I also realised that your example_newsite_sel_struct.m was being used as a test (you were checking the sum of ranks output at the end of the script?). I moved this into testMCDABehavior so it's captured as an automated test now. Hope you don't mind.
Incidentally, you can run a specific test file:
runtests("tests/testMCDABehavior.m")
So the issue seems to be the final test in the scatter-gather testing, which results in the error message 'results were all zeros'. I'm a little confused because I checked scattered with all(all(scattered(:, :, 1, 1))) and it is logical(1), so it's all non-zero. Do you know what might cause this?
Hmm, lemme look into it
Thanks :)
The issue was that the assertion itself was poorly formed:
assert(~all(all(scattered(:, :, 1, 1))), "Results were zeros!")
% above should have been
assert(all(all(scattered(:, :, 1, 1))), "Results were zeros!")
I have no idea how it was passing previously...
Could you push your changes? I'll deal with any conflicts
Ok cool, I'll push now :)
Hi @Rosejoycrocker all seems to be working great now!
I will merge this PR soon.
Awesome, thanks!