@samGroy , can you share with me the non-normalized data for fish biomass and river rec criteria? I left those out of my test dataset for ease of troubleshooting. Or is this back to the original issue of needing to run Matlab in the background?
I still can't open the normalized dataset you shared; R throws an error saying it's corrupted. But I can still use the WSM/ranking code you shared to identify the right map based on the ranked result. Since I wanted the normalization steps to be included in the code, I went ahead and implemented them directly in the WSM.R script, pulling in the ranking and map-identification code from your version of WSM.R.
See comment history below:
Hi Emma, the version of WSM I put on github uses pre-normalized data, so there’s no need for that step, only multiplication by preferences and summing scores. Also the data/maps should be the final version for the dams and criteria you and Sharon selected.
@elbfox , regarding the normalization procedure below: how are we handling the alternative (3rd dimension) part of DamsDataMatrix?
Normalization procedure: for each dam and each criterion, get the maximum and minimum criterion score, producing two 2D matrices [dams, criteria]
for positive scores (benefits): norm = (f - f_min) / (f_max - f_min)
for negative scores (like cost): norm = (f_max - f) / (f_max - f_min)
The result is a 3D matrix of dam-specific criterion scores, normalized by the min and max criterion values sampled over all alternatives.
The normalization is supposed to apply to each criterion across the set of alternatives for a single dam (2 dimensions: criteria, alternatives), repeated for the whole set of dams. I'm not sure this is clear in the code yet. I'll take a look at it tomorrow and add some examples of what I'm trying to do.
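To illustrate what I mean, here's a minimal sketch of that per-dam, per-criterion normalization. This is NumPy rather than R, and the function name, array shapes, and the `is_cost` flag are my own assumptions for illustration, not the actual WSM.R code:

```python
import numpy as np

def normalize(scores, is_cost):
    """Normalize a 3D score array of shape [dams, criteria, alternatives].

    For each dam and criterion, min/max are taken across the alternatives
    axis, so every dam's criteria are scaled independently.
    is_cost: boolean array [criteria]; True marks negative criteria (e.g. cost).
    """
    f_min = scores.min(axis=2, keepdims=True)  # per dam, per criterion
    f_max = scores.max(axis=2, keepdims=True)
    span = np.where(f_max == f_min, 1.0, f_max - f_min)  # guard divide-by-zero
    benefit = (scores - f_min) / span            # higher is better -> 1
    cost = (f_max - scores) / span               # lower is better -> 1
    return np.where(is_cost[None, :, None], cost, benefit)

# One dam, two criteria (a benefit and a cost), three alternatives:
dams = np.array([[[1.0, 2.0, 3.0],
                  [10.0, 20.0, 30.0]]])
norm = normalize(dams, np.array([False, True]))
```

For the benefit criterion this yields [0, 0.5, 1] across the alternatives, and for the cost criterion [1, 0.5, 0], i.e. the cheapest alternative scores best.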
Originally posted by @samGroy in https://github.com/dams-mcda/Dams-MCDA/issues/90#issuecomment-521054260