There is already a simple surrogate-model capability in the SC analysis class (the `surrogate` subroutine). Once all the samples are computed, the SC expansion is basically a polynomial approximation of the code output: it interpolates the code samples via Lagrange polynomials. We could implement the same thing for PC, I think, @jlakhlili?
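For anyone new to the idea, here is a minimal 1-D sketch of that kind of surrogate, using SciPy's `lagrange`. The sample points and test function are made up; the real `surrogate` routine of course works on the sampled code outputs, not an analytical function.

```python
import numpy as np
from scipy.interpolate import lagrange

# 1-D sample points in the input space (stand-ins for quadrature nodes)
xi = np.linspace(-1.0, 1.0, 5)

# Pretend these are the code outputs at the sample points
samples = np.sin(np.pi * xi)

# The SC-style surrogate: a Lagrange polynomial through the code samples
surrogate = lagrange(xi, samples)

# Evaluate the surrogate at a new input instead of rerunning the code
print(surrogate(0.3), np.sin(np.pi * 0.3))
```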
I'm also working on different surrogate models for time-dependent problems, but I'm not sure EasyVVUQ is the place to incorporate these, since they do not follow the same workflow of sampling the code a number of times and computing statistics from the output.
What workflow does it follow?
> There is already a simple surrogate-model capability in the SC analysis class (the `surrogate` subroutine). [...]
That class is very monolithic. Wonder if there is a way to modularize that functionality a bit.
As I said in the VECMAtk telco, it's a good idea. I can do a first draft with PC; it could be like @wedeling's example. After that, I can also implement/interface something using Gaussian processes.
Also, you would probably want to store the model for later use, I guess? So maybe it's a good idea to think about the best place for it. The database, maybe?
Isn't that the case with CampaignDB?
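If plain Python serialization turns out to be enough in the interim, a minimal sketch (the file name and toy surrogate are purely illustrative):

```python
import pickle
from scipy.interpolate import lagrange

# Fit a toy surrogate (see the SC sketch above), then persist it
surrogate = lagrange([0.0, 0.5, 1.0], [0.0, 0.25, 1.0])

with open("surrogate.pickle", "wb") as fh:  # file name is illustrative
    pickle.dump(surrogate, fh)

# A later session can reload it without recomputing any code samples
with open("surrogate.pickle", "rb") as fh:
    reloaded = pickle.load(fh)

print(reloaded(0.3))
```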
> What workflow does it follow?
I have a subgrid-scale source term in the large-scale equation, for which I have reference snapshots in time. The surrogate is trained on those snapshots, and it then replaces the subgrid-scale source term in the large-scale equation. At every time step the surrogate takes large-scale variables as input (features) and produces a value for the source term. I then integrate the system in time for a long time and compute statistics for the output.
The main difference with, for instance, SC or PC methods is that the surrogate is coupled to a (large-scale) PDE, and its inputs are not uncertain parameters but large-scale features.
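As a rough illustration of that coupled workflow (the features, regressor choice, and placeholder dynamics below are all stand-ins, not the actual model):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: snapshots of large-scale features X and the
# corresponding reference subgrid-scale source term r
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))
r_train = np.tanh(X_train[:, 0] * X_train[:, 1])

# Train the surrogate on the reference snapshots
model = KNeighborsRegressor(n_neighbors=5).fit(X_train, r_train)

# Toy time integration: the surrogate supplies the source term at every step
u = np.array([0.1, -0.2])  # large-scale state, doubling as the features here
dt = 0.01
history = []
for _ in range(1000):
    r = model.predict(u.reshape(1, -1))[0]  # surrogate replaces the subgrid term
    u = u + dt * (-0.5 * u + r)             # placeholder large-scale dynamics
    history.append(u.copy())

# Long-time statistics of the output
print(np.mean(history, axis=0))
```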
> After that, I can also implement/interface something using Gaussian processes.
That would be interesting; would you make the GPs part of EasyVVUQ?
> That class is very monolithic. Wonder if there is a way to modularize that functionality a bit.
Perhaps we could take out the Sobol indices part (which is quite large), and put it in a separate class?
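To make the suggestion concrete, a hedged sketch of what a stand-alone Sobol class could look like. It uses a generic Monte Carlo (Saltelli-type) estimator on a toy model purely for illustration, not the SC class's own computation, and every name here is hypothetical:

```python
import numpy as np

class SobolAnalysis:
    """Hypothetical stand-alone Sobol-index helper (Saltelli-type estimator)."""

    def __init__(self, model, n_params, n_samples=2**12, seed=0):
        self.model = model
        self.d = n_params
        self.n = n_samples
        self.rng = np.random.default_rng(seed)

    def first_order(self):
        # Two independent sample matrices on [0, 1]^d
        A = self.rng.random((self.n, self.d))
        B = self.rng.random((self.n, self.d))
        fA, fB = self.model(A), self.model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(self.d)
        for i in range(self.d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # replace column i of A with the B values
            S[i] = np.mean(fB * (self.model(ABi) - fA)) / var
        return S

# Usage with a toy model f(x) = x0 + 2*x1, vectorized over rows
sobol = SobolAnalysis(lambda x: x[:, 0] + 2 * x[:, 1], n_params=2)
print(sobol.first_order())  # roughly [0.2, 0.8]
```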
> Also, you would probably want to store the model for later use, I guess? [...]
The SC surrogate only needs the code samples to be stored.
> That would be interesting; would you make the GPs part of EasyVVUQ?
I'm thinking of starting with a small example, and we can discuss it at the Poznan hackathon.
> The SC surrogate only needs the code samples to be stored.
I think `COLLATION_APP1` already contains the sample data.
How about I add a simple GP analysis class now, and then we can discuss it further during the hackathon?
Please do it.
Should we try to implement those workflows simply via appropriate analysis classes? I'm thinking kriging as a proof of concept. I don't know much about it, though, so it would be nice if someone with an application chipped in.
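Since kriging is essentially GP regression, here is a minimal sketch of what such an analysis class could look like, built on scikit-learn. The class name and interface are hypothetical, not EasyVVUQ's API:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

class GPSurrogateAnalysis:
    """Hypothetical GP (kriging) analysis: fit on collated samples, predict elsewhere."""

    def __init__(self):
        kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    def analyse(self, inputs, outputs):
        # inputs: (n_samples, n_params) sampled parameter values
        # outputs: (n_samples,) corresponding code outputs
        self.gp.fit(inputs, outputs)
        return self

    def surrogate(self, x):
        # Predictive mean and standard deviation at new parameter values
        mean, std = self.gp.predict(np.atleast_2d(x), return_std=True)
        return mean, std

# Usage on toy samples
X = np.random.default_rng(1).uniform(0, 1, size=(30, 2))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]
analysis = GPSurrogateAnalysis().analyse(X, y)
print(analysis.surrogate([0.5, 0.5]))
```

One nice side effect of the GP route is that the predictive standard deviation comes for free, which could feed directly into the VVUQ reporting.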