Closed Jingfei-Liu closed 4 years ago
Good question.
The feature you are requesting is not supported; however, it is really simple to implement. Update your installation to version v3.0.18, and you should be good to go.
Note though that neither the copulas nor Gram-Schmidt orthogonalization are optimized for speed. The combination is likely going to be notoriously slow. If the choice doesn't matter to you, I recommend using the Cholesky decomposition orthogonalization instead: cp.orth_chol. It should be an order of magnitude faster to compute.
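To illustrate what the Cholesky-based construction does, here is a minimal numpy sketch (not chaospy's actual implementation): build the Gram matrix of monomial moments for an assumed weight (the standard normal here), factor it with a Cholesky decomposition, and read off orthonormal polynomial coefficients from the inverse factor.

```python
import numpy as np

# Moments E[x^k] of the standard normal for k = 0..4, enough for the
# monomials 1, x, x^2 (assumed weight; any distribution's moments work).
moments = np.array([1.0, 0.0, 1.0, 0.0, 3.0])

# Gram matrix of the monomial basis: G[i, j] = E[x^(i+j)].
n = 3
G = np.array([[moments[i + j] for j in range(n)] for i in range(n)])

# Cholesky factor G = L @ L.T; the rows of inv(L) are the coefficients
# of an orthonormal polynomial basis expressed in the monomial basis.
L = np.linalg.cholesky(G)
C = np.linalg.inv(L)

# Orthonormality under the moment inner product: C @ G @ C.T == I.
print(np.allclose(C @ G @ C.T, np.eye(n)))  # True
```

The single Cholesky factorization replaces the pairwise projections of Gram-Schmidt, which is where the speed-up comes from.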
Excellent update! I guess there may be a new way to add multimodal distributions through GMM or other approaches in the new release. I will study it carefully. Yes, I have read the paper about Cholesky decomposition orthogonalization [1]; I think it is an ingenious idea, and I will follow it. Thanks for your directions. [1] Multivariate polynomial chaos expansions with dependent variables.
Might be, yes. But note that GMM is not ready, and might not be for a while. My time is limited, and it still needs a bit of work.
Maybe it is naive, but I think it is easier to define a distribution in the following form: pdf = W1*N(μ1, σ1) + W2*N(μ2, σ2) + ... + Wn*N(μn, σn), with W1 + W2 + ... + Wn = 1, and leave those parameters to the user or to the GMM procedure in scikit-learn. For example, sklearn.mixture.GaussianMixture(...) can be used to determine the parameters of the pdf from the target data.
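The mixture density described above can be sketched with plain numpy (a minimal illustration with hypothetical weights and parameters, independent of scikit-learn or chaospy):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_pdf(x, weights, mus, sigmas):
    """pdf = W1*N(mu1, s1) + ... + Wn*N(mun, sn), with sum(W) == 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "mixture weights must sum to one"
    return sum(w * normal_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))

# A bimodal example: two well-separated modes (made-up parameters).
grid = np.linspace(-10.0, 10.0, 20001)
density = mixture_pdf(grid, [0.3, 0.7], [-2.0, 3.0], [0.5, 1.0])
print(np.isclose(np.trapz(density, grid), 1.0, atol=1e-6))  # True
```

Because the weights sum to one, the mixture integrates to one automatically, which is all a 1-dimensional wrapper needs.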
Ah, okay. I assumed that you were talking about #187 specifically.
If you only need the 1-dimensional case, you can do that, yes, and create a wrapper. In the multivariate case, however, chaospy needs a probability decomposition, which is not as trivial to do with GMM. That is what I was referring to as being a little bit into the future.
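The decomposition in question is a Rosenblatt-type factorization p(x1, x2) = p(x1) * p(x2 | x1). As a sketch of what that identity looks like, here is a numpy check for a two-component bivariate Gaussian mixture (all parameters hypothetical; this is not chaospy's machinery, just the factorization itself):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two mixture components in 2D: weights, means, std devs, correlations.
w = np.array([0.4, 0.6])
mu = np.array([[0.0, 1.0], [2.0, -1.0]])
sig = np.array([[1.0, 0.5], [0.8, 1.2]])
rho = np.array([0.3, -0.5])

def joint_pdf(x1, x2):
    """Mixture of bivariate normal densities."""
    total = 0.0
    for k in range(2):
        z1 = (x1 - mu[k, 0]) / sig[k, 0]
        z2 = (x2 - mu[k, 1]) / sig[k, 1]
        norm = 2.0 * np.pi * sig[k, 0] * sig[k, 1] * np.sqrt(1.0 - rho[k] ** 2)
        expo = -(z1 ** 2 - 2.0 * rho[k] * z1 * z2 + z2 ** 2) / (2.0 * (1.0 - rho[k] ** 2))
        total += w[k] * np.exp(expo) / norm
    return total

def marginal_pdf(x1):
    """Marginal of x1: a 1D Gaussian mixture."""
    return sum(w[k] * normal_pdf(x1, mu[k, 0], sig[k, 0]) for k in range(2))

def conditional_pdf(x2, x1):
    """p(x2 | x1): reweighted mixture of conditional Gaussians."""
    post = np.array([w[k] * normal_pdf(x1, mu[k, 0], sig[k, 0]) for k in range(2)])
    post /= post.sum()
    dens = 0.0
    for k in range(2):
        cmu = mu[k, 1] + rho[k] * sig[k, 1] / sig[k, 0] * (x1 - mu[k, 0])
        csig = sig[k, 1] * np.sqrt(1.0 - rho[k] ** 2)
        dens += post[k] * normal_pdf(x2, cmu, csig)
    return dens

# Check p(x1, x2) == p(x1) * p(x2 | x1) at an arbitrary point.
x1, x2 = 0.7, -0.2
print(np.isclose(joint_pdf(x1, x2), marginal_pdf(x1) * conditional_pdf(x2, x1)))  # True
```

Even though each piece is closed-form here, wiring such conditionals into a general multivariate framework is the non-trivial part the maintainer refers to.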
OK, thanks for your explanation, I believe I need to learn more about chaospy.
Hi Jonathan: I am sorry to bother you again, but some basic problems still bother me a lot, and I hope you can give me some directions.
It's still a problem about dependent variables. When I implement the following code, it works quickly:
x1 = cp.Normal(1, 0.2)
x2 = cp.MvNormal([0, 0], [[2, .5], [.5, 1]])
dist = cp.J(x1, x2)
P1 = cp.orth_chol(3, dist, cross_truncation=1.)
But if I try another way, it takes a long time, like this:
x1 = cp.Gumbel(cp.Iid(cp.Uniform(), 2), theta=1.5)
x2 = cp.Normal(1, 0.2)
dist = cp.J(x1, x2)
P2 = cp.orth_chol(3, dist, cross_truncation=1.)
The first question is whether the above code is right.
Besides, when I want to define two dependent variables with cp.Uniform() and cp.Normal() and obtain the orthogonal basis of their joint distribution, is there a better way than the following form?
x1 = cp.Uniform()
x2 = cp.Normal()
dist = cp.J(x1, x2)
distc = cp.Gumbel(dist, theta=1.5)
P3 = cp.orth_chol(3, distc, cross_truncation=1.)
I hope I have explained my trouble clearly. Thanks. Best regards!
It looks okay, but yes, I know it is really slow. The copulas need some tweaking to make them faster. I really want to do so before I go to a conference in March. So just stay tuned, I guess.
The copulas have been upgraded and should be much faster. Support is limited to Clayton, Gumbel, Nataf, and TCopula though, losing Joe along the way. It may be added again later, if I can get the results better.
Hi Jonathan:
I have run into a problem when practicing PCE with chaospy. When there are bivariate dependent variables among the input variables, how do I use Gram-Schmidt orthogonalization to compute the polynomial basis? For example, take y = f(x1, x2, x3), where x1, x2, and x3 are all normal variables, x1 and x2 are dependent, and both are independent of x3. I wonder if the following code is right:
x1 = cp.Normal(1, 0.1)
x2 = cp.Normal(1, 0.1)
x3 = cp.Normal(1, 0.1)
dist12 = cp.Clayton(cp.J(x1, x2), theta=23)
dist123 = cp.J(dist12, x3)
P = cp.orth_gs(2, dist123)
Best regards!