Closed jzwart closed 6 years ago
Sorry, that wasn't a great description in the last pull request. When running
ncvar_put(new_nc, new_nc$var$streamflow, streamflow)
we think it's writing the data from the streamflow matrix the way you'd read a book in English (left to right, top to bottom), but the file is then read back in column-major order (top to bottom, then left to right). So the first site ends up with data from streamflow[1:(nrow(streamflow) / ncol(streamflow)), ]
, the second site with data from streamflow[(nrow(streamflow) / ncol(streamflow) + 1):((nrow(streamflow) / ncol(streamflow)) * 2), ]
, and so on. I added some code to try to demonstrate the issue.
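A quick way to see the mismatch is to flatten a matrix in one layout and reinterpret it in the other. This is a NumPy sketch, not the actual ncdf4 code; the 6-timestep-by-2-site shape and the values are made up for illustration:

```python
import numpy as np

# Hypothetical shape: 6 time steps x 2 sites (illustrative only).
n_time, n_site = 6, 2
streamflow = np.arange(n_time * n_site).reshape(n_time, n_site)  # row-major matrix

# Writing the buffer in reading-a-book order but reading it back
# column-major is the same as reinterpreting the flat buffer:
written = streamflow.flatten(order="C")                  # left to right, top to bottom
misread = written.reshape((n_time, n_site), order="F")   # top to bottom, column by column

# The first "site" column now holds the first n_time values of the
# flat buffer (values from several sites mixed together), not the
# first site's actual time series.
print(misread[:, 0].tolist())     # first chunk of the flat buffer
print(streamflow[:, 0].tolist())  # what site 1 should have been
```

This reproduces the symptom described above: each site reads back a contiguous chunk of the flattened matrix rather than its own column.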
OK, this is making a bit more sense now. Looking at it. I'll try to get things at least in the right shape now and we'll fix other stuff next week.
I think we can close this. I'll put in a new one in a minute. Not really clear to me what all I should be checking in. I'm getting a lot of rebuilds and different hashes... Will start with a few obvious things and you can tell me if I should add more.
Sounds good! You should git commit any new/changed files in the build folder, .ind files, and code and remake files. The only thing to be careful not to commit is data files, but hopefully our .gitignore file is already doing a lot of good ignoring for us there.
Closing at @dblodgett-usgs's request.
Copied from the other PR: We were getting weird values when reading the streamflow data from the subsetted nc files. We looked back at the subset_nwm.R script and think we fixed part of the problem. The streamflow dimensions were being scrambled when put into the new_nc files. We now loop through the sites (and reference time, if forecast) and use ncvar_put to insert the streamflow data into new_nc, multiplying by the scale factor as we go. If the data are not multiplied by the scale factor when put into the nc file, the data read back out will be 100x too small. Please double-check our looping -- we think we mirrored what you had, but we couldn't test it.
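For reference, the arithmetic behind the 100x error follows from the packed-value convention (CF-style: unpacked = packed * scale_factor). Whether the fix means multiplying or dividing depends on which side of the I/O the library applies the conversion; the scale_factor = 0.01 below is an assumption for illustration, not read from the NWM files:

```python
import math

scale_factor = 0.01   # hypothetical value; 1/scale_factor = 100
true_value = 42.0     # illustrative streamflow value

# Correct round trip: the write side applies the inverse of whatever
# conversion the read side applies.
packed = true_value / scale_factor    # value as stored on disk
unpacked = packed * scale_factor      # value a reader reconstructs
assert math.isclose(unpacked, true_value)

# If the raw value is stored without the inverse conversion, the
# reader still multiplies by scale_factor, so every value comes back
# 1/scale_factor (here 100x) too small:
wrong = true_value * scale_factor
print(wrong)  # roughly 0.42, i.e. 100x too small
```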