c3aidti / smoke

MIT License

CLWP emulators only predict values equal to zero #44

Closed vasanchez16 closed 4 months ago

vasanchez16 commented 6 months ago

For the emulators trained to predict CLWP (cloud liquid water path), no matter what feature values they are given, they always predict meanResponses of zero. They also seem to predict a constant value for the standard deviation of the response. This can be seen here: (image: prediction plot)

Some testing has been done to check whether this problem is related to the kernel selected for training. So far Matern and RBF have been tested, and both display the same problem.
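For reference, a GP that has collapsed back to its prior shows exactly these symptoms: zero mean and a constant standard deviation at every query point. A minimal standalone sketch (plain scikit-learn with synthetic data, not the project's pipeline) that reproduces the pattern by pinning the length scale to a tiny value, so every test point is effectively "far" from the training data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(20, 1))
y_train = np.sin(X_train).ravel()

# A tiny, fixed length scale makes the kernel ~0 between any test point
# and the training data, so the GP falls back to its prior:
# mean 0 everywhere, and a constant prior standard deviation.
kernel = Matern(length_scale=1e-6, length_scale_bounds="fixed", nu=0.5)
gp = GaussianProcessRegressor(kernel=kernel).fit(X_train, y_train)

X_test = np.linspace(0.05, 9.95, 50).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(np.allclose(mean, 0.0, atol=1e-6))   # True: zero mean everywhere
print(np.allclose(std, std[0], atol=1e-6)) # True: constant std
```

This does not prove that is what happened here, but it matches both symptoms at once, which is why the kernel hyperparameters are a natural suspect.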

dadamsncsa commented 6 months ago

@vasanchez16 , I updated some routines to fully implement the capability of centering and normalizing the target data. I applied these changes, created a new technique and trained on clwp. However, the results indicate something is still off, as all variants give the same result: image

dadamsncsa commented 6 months ago

@vasanchez16 , After investigating some more, I decided to try the Matern kernel again, but change the nu parameter to 1.5 (or maybe 2.5). All previous training had been done using Matern with nu=0.5. Per the Matern kernel docs, nu=0.5 gives the absolute exponential (Ornstein-Uhlenbeck) kernel; RBF is the nu -> infinity limit. My initial testing indicated that nu=1.5 was helping. The full training job is running now; the technique id is 'TECH-J'. Here is an example prediction: (image)
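For reference, the nu settings really are distinct kernels in scikit-learn. A quick standalone check of the kernel value between two points at unit distance shows nu=0.5 (absolute exponential, exp(-d/l)) differs from RBF (exp(-d^2 / (2 l^2))):

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern, RBF

x = np.array([[0.0]])
y = np.array([[1.0]])  # two points at unit distance

# nu=0.5: absolute exponential (Ornstein-Uhlenbeck) kernel, exp(-d/l)
k_nu05 = Matern(length_scale=1.0, nu=0.5)(x, y)[0, 0]
# nu=1.5: once-differentiable Matern, a common smoother default
k_nu15 = Matern(length_scale=1.0, nu=1.5)(x, y)[0, 0]
# RBF is the nu -> infinity limit of Matern, exp(-d^2 / (2 l^2))
k_rbf = RBF(length_scale=1.0)(x, y)[0, 0]

print(round(k_nu05, 4))  # 0.3679 = exp(-1)
print(round(k_rbf, 4))   # 0.6065 = exp(-0.5)
```

The larger nu is, the smoother the sampled functions; nu=1.5 and nu=2.5 are the usual choices when nu=0.5 is too rough.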

vasanchez16 commented 6 months ago

@dadamsncsa ,

I tested this process manually by isolating the features and target for just one gstp, and found that playing with the arguments to GaussianProcessRegressor and the Matern kernel helps sometimes, but not consistently: specifically, 'n_restarts_optimizer' on GaussianProcessRegressor and 'length_scale_bounds' on the Matern kernel. I didn't tweak the nu value, so hopefully that fixes the problem. Thanks for the updates.
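A sketch of that kind of manual experiment, assuming plain scikit-learn with synthetic stand-ins for the single-gstp features and target (the real feature matrix, target values, and dimensionality come from the project's data, not shown here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
X = rng.uniform(size=(50, 12))   # stand-in for one gstp's 12 features
y = X @ rng.uniform(size=12)     # stand-in target

kernel = Matern(
    length_scale=[0.1] * 12,             # one initial length scale per feature
    length_scale_bounds=(1e-3, 1e3),     # widen if the optimizer hits a bound
    nu=1.5,
)
gp = GaussianProcessRegressor(
    kernel=kernel,
    n_restarts_optimizer=10,             # restarts help escape bad local optima
).fit(X, y)

mean, std = gp.predict(X[:5], return_std=True)
print(mean.shape, std.shape)  # (5,) (5,)
```

Widening 'length_scale_bounds' and adding restarts both attack the same failure mode: the marginal-likelihood optimizer settling into a degenerate length-scale solution.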

dadamsncsa commented 6 months ago

@vasanchez16 , Okay, I think the nu setting is definitely an issue. The default is 1.5, and we had been using 0.5, which reduces Matern to the absolute exponential kernel (RBF is the nu -> infinity limit). The TECH-J technique also uses the centering and normalization options; I'm not sure they're really needed, but they probably can't hurt. The training job is done, so feel free to try out predictions. Let me know how it looks...

tech_clwp = model_clwp.createTrainingTechnique(
    kernelName='Matern',  # the kernel choice was one suspect for this issue
    serializedKernel=c3.PythonSerialization.serialize(sklKernel_clwp),
    targetName='clwp',
    centerTarget=True,
    standardizeTarget=True
)

(image: TECH-J prediction)
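If the technique's centerTarget/standardizeTarget options wrap scikit-learn, the closest plain-sklearn equivalent is normalize_y=True on the regressor (this mapping is an assumption; the C3 wrapper may implement it differently). A minimal sketch of why it matters for a target whose values sit far from zero:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)
X = rng.uniform(size=(40, 3))
y = 1000.0 + 5.0 * X[:, 0]   # target mean far from the GP's zero prior mean

# normalize_y centers and scales y before fitting, then undoes the
# transform at prediction time, so predictions stay in original units.
gp = GaussianProcessRegressor(
    kernel=Matern(nu=1.5),
    normalize_y=True,
).fit(X, y)

pred = gp.predict(X[:1])
print(pred[0] > 900)  # True: prediction is in the original, uncentered units
```

Without centering, a zero-mean GP prior has to explain the entire offset through the kernel, which makes degenerate zero-valued predictions more likely for targets like this.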

vasanchez16 commented 6 months ago

@dadamsncsa,

It appears this change has fixed the problem, thanks, Darren! I will think of some ways to test the performance and accuracy of these emulators.

vasanchez16 commented 4 months ago

Issue resolved. As a note for the future: the initial length-scale values given to the optimizer through the training technique are very important for building proper models.

CLWP initial length scales: [0.1]*12

AOD initial length scales: [1]*12
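Concretely, those initial values go into the kernel that gets serialized into the training technique. The sklKernel_clwp name mirrors the snippet earlier in the thread; the AOD kernel name and nu=1.5 for AOD are illustrative assumptions:

```python
from sklearn.gaussian_process.kernels import Matern

# Initial (not fixed) length scales: one per feature, refined by the
# marginal-likelihood optimizer during training. Poor starting points
# can trap the optimizer in the degenerate all-zeros solution above.
sklKernel_clwp = Matern(length_scale=[0.1] * 12, nu=1.5)
sklKernel_aod = Matern(length_scale=[1.0] * 12, nu=1.5)

print(len(sklKernel_clwp.length_scale))  # 12
```

Passing a list gives an anisotropic kernel with an independent length scale per feature, which is what makes the choice of starting values per target variable matter.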