tawoodard / test101

Creative Commons Zero v1.0 Universal

UDef-ARP #1

Open tawoodard opened 1 month ago

tawoodard commented 1 month ago

UDef-ARP - Get it to match the functionality of the UDef-A tool that Yao-Ting developed for Verra. Functions/panels ("done" before a number means it is done and the results match Yao-Ting's):

done 0. Calculate NRT
done 1. Testing-Fitting-Vuln
done 2. Testing-Fitting-Allocated Risk Mapping - Matches. 2b. my output CSV has 2 fields, YT's has 4 fields
3. Testing-Fitting-Model Fit Assessment - Different grid locations; YT - diff border
done 4. Testing-Prediction-Vuln
5. Testing-Prediction-Allocated Risk Mapping. 5b. my output CSV has 2 fields, YT's has 4 fields
6. Testing-Prediction-Model Fit Assessment
done 7. Application-Fitting-Vuln
done 8. Application-Fitting-Allocated Risk Mapping. 8b. my output CSV has 2 fields, YT's has 4 fields
done 9. Application-Prediction-Vuln. 9b. Default # of iterations: mine=3, YT's=5
10. Application-Prediction-Allocated Risk Mapping
11. Plotting - not working currently
12. Testing-Fitting-Alternative Vuln
13. Application-Fitting-Alternative Vuln

Still to do (known issues):

Alternative Model (Testing-Fitting-Alternative Vuln): Different!! YT's cell values are usually 2-7 higher than mine. Is it the stretch? (only differs by a tiny bit) Is it the geometric reclass? (I think this is very different)

Call YT's Plot. My CSV has 2 fields, hers has 4 (might not be a problem). Timing issues?

Test with a different dataset (e.g. small_para).
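Several of the panels above differ only in the number of CSV fields (mine has 2, YT's has 4). A quick way to pin down which columns are missing is to diff the header rows of the two output files. This is a generic sketch, not part of the UDef-ARP code; the field names used below (`rate`, `density`) are made up for illustration, since the real extra columns are unknown.

```python
import csv
import io

def header_diff(file_a, file_b):
    """Read the header row of two CSV streams and return the fields
    that appear in the second but not the first."""
    a = next(csv.reader(file_a))
    b = next(csv.reader(file_b))
    return [c for c in b if c not in a]

# Toy stand-ins for the two output CSVs (real files would be opened
# with open(path, newline="")); field names here are hypothetical.
mine = io.StringIO("class,area\n1,10\n")
yts = io.StringIO("class,area,rate,density\n1,10,0.2,0.5\n")

extra = header_diff(mine, yts)
print("fields in YT's CSV but not mine:", extra)
```

Running this on the real file pairs for panels 2b, 5b, and 8b would show whether the same two columns are missing in every case.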

tawoodard commented 1 month ago

I sent an email today to Ron with the following info:

I’ve looked at my code vs Yao-Ting’s code in the various steps of the alternative-vulnerability calculation. Most of the differences between our outputs appear at the geometric-reclass stage. I’ve run a few variations of geometric reclass in TerrSet on the image that is created right before that stage, and compared them to Yao-Ting’s output.

The options my code is using for the geometric reclass are 30 output classes, Decreasing class width progression, and Increasing ID order.
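To make the reclass options concrete, here is a rough sketch of what a "decreasing class width" geometric reclass with increasing ID order does: class boundaries whose widths shrink geometrically across the value range, with IDs assigned 1..n from low to high. This is only an illustration of the general technique; TerrSet's exact progression ratio and boundary handling are assumptions here, not its documented behavior.

```python
import numpy as np

def geometric_class_bounds(vmin, vmax, n_classes, ratio=0.9):
    """Boundaries for n_classes whose widths shrink by `ratio` each step
    (a 'decreasing class width progression'); `ratio` is an assumption."""
    widths = ratio ** np.arange(n_classes)          # geometric widths
    widths = widths / widths.sum() * (vmax - vmin)  # scale to the range
    return vmin + np.concatenate(([0.0], np.cumsum(widths)))

def reclass(values, bounds):
    """Assign class IDs 1..n in increasing order (increasing ID order)."""
    n = len(bounds) - 1
    return np.clip(np.digitize(values, bounds[1:-1]) + 1, 1, n)

bounds = geometric_class_bounds(0.0, 1.0, 30)
ids = reclass(np.array([0.0, 0.05, 0.5, 1.0]), bounds)
print(ids)
```

A one-off difference in `n_classes` (30 vs 29) or flipping the width progression shifts every boundary, which is consistent with the small systematic offsets in the difference images below.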

The difference image between Yao-Ting’s minus mine is:

[screenshot of diff.rst; values range from 0-5, and the largest areas have values of 1 and 2.]

If I change the number of classes I’m creating to be 29, the difference (YT-mine) is:

[screenshot of diff_29.rst; values range from 0-5, and the largest areas have values of 2 and 1.]

If I change it to be 29 classes, and change the class width to Increasing progression, then my image is closer to hers. The difference image is:

[screenshot of diff_29_inc.rst; values range from 0-2, and the largest area is 0, followed by 1 and 2.]

Note: I am running all of my Reclasses on the same image that she is running her reclassification on. I went into her code and had it write an image to disk of the array values right before her reclassification stage, so that I could isolate which differences are caused by that step alone. There are slight differences that happen at the stretch stage, but those are very tiny.
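The comparison described above (YT's raster minus mine, then looking at which difference values dominate) can be reproduced outside TerrSet once both images are loaded as arrays. A minimal sketch with toy arrays, assuming the .rst files have already been read into NumPy (e.g. with a raster I/O library):

```python
import numpy as np

# Toy stand-ins for the two reclassed rasters; the real data would be
# read from the .rst files before her and my reclassification outputs.
yt = np.array([[3, 5, 7], [10, 12, 30]])
mine = np.array([[3, 4, 6], [10, 11, 28]])

diff = yt - mine  # YT minus mine, matching the difference images above
vals, counts = np.unique(diff, return_counts=True)
for v, c in zip(vals, counts):
    print(f"diff {v}: {c} cells")
```

Tabulating the difference values this way gives the same information as eyeballing the difference image, and makes it easy to confirm that the 29-class / increasing-progression run collapses most cells to a difference of 0.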

Also, I copied a zip file to \3630-1\pass (udef_alt_comparison.zip) so that you can copy the files used in my last email to you, if you want to be able to look at them and query them in TerrSet.