jsadler2 closed this issue 4 years ago
I feel like my code is a little messy in this area. There are multiple ways now to discount the observations:

1. Setting the `pretrain_vars` or `finetune_vars` parameter to either `flow` or `temp` (or both). This sets a per-observation weight to zero for the variable that is not included in either of those parameters; those weights are multiplied by the per-observation RMSEs in the loss function.
3. Setting observations to `np.nan`.

So I think the easiest is actually just to change how I do 3: I can randomly set the weights that correspond to observations to zero, and that should have the same effect as setting a random number of observations to `nan`.
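A minimal sketch of the equivalence being claimed (the function and variable names here are hypothetical, not the project's actual code): in a weighted RMSE, zeroing an observation's weight yields the same loss as removing that observation from the data.

```python
import numpy as np

def masked_rmse(y_true, y_pred, weights):
    """RMSE where a weight of 0 excludes an observation.

    Squared errors are multiplied by per-observation weights, and the
    sum of the weights serves as the effective observation count.
    """
    sq_err = weights * (y_true - y_pred) ** 2
    return np.sqrt(np.sum(sq_err) / np.sum(weights))

rng = np.random.default_rng(0)
y_true = rng.normal(size=10)
y_pred = y_true + 0.5  # constant error so the expected RMSE is 0.5

# Approach A: randomly zero out the weights of some observations
weights = np.ones(10)
drop = rng.choice(10, size=3, replace=False)
weights[drop] = 0.0
rmse_weighted = masked_rmse(y_true, y_pred, weights)

# Approach B: set the same observations to nan and drop them
y_nan = y_true.copy()
y_nan[drop] = np.nan
keep = ~np.isnan(y_nan)
rmse_nan = np.sqrt(np.mean((y_nan[keep] - y_pred[keep]) ** 2))

# Both approaches produce the same loss
assert np.isclose(rmse_weighted, rmse_nan)
```

The practical difference is that approach A keeps the arrays free of `nan`, so downstream statistics (means, standard deviations) stay well defined.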
I'm getting an error when I set the `reduce_flow_trn` or `reduce_temp_trn` parameter to 1 in the `prep_data` function. This is because I have an assertion that checks whether the standard deviation and mean of the training data are real numbers. When I set the `reduce_*_trn` parameter to 1, it replaces all of that variable's values with `nan`, so the standard deviation and mean are not real numbers.
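A minimal reproduction of the failure mode (an assumption about how the check behaves, not the project's exact code): once every value of a variable has been replaced with `nan`, its mean and standard deviation are also `nan`, so any "must be a real number" assertion fails.

```python
import numpy as np

# Simulate reduce_*_trn = 1: all training values for one variable
# have been replaced with nan.
flow = np.full(5, np.nan)

mean, std = np.mean(flow), np.std(flow)

# Both statistics are nan, i.e. not finite real numbers.
assert not np.isfinite(mean)
assert not np.isfinite(std)

# An assertion like the one described would therefore raise:
# assert np.isfinite(mean) and np.isfinite(std)  # AssertionError
```

This is consistent with the weight-based approach above: zeroing weights instead of inserting `nan` leaves the data intact, so the mean and standard deviation remain finite.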