JohannesBuchner opened 7 years ago
I am attaching a python script with test cases for n_planets=0, to verify that our GP kernels give the same results (and reading the input files).
test0.txt (GitHub doesn't let me attach .py files, but you can still run it with python.)
The test cases are: (filename, V, sigma_j):
('rvs_0001.txt', -0.728509750583953064E+00, 0.208717785347551521E+01),
('rvs_0002.txt', 0.653665406348636679E+01, 0.171886843613598783E+01),
('rvs_0003.txt', -0.869753669161968901E+01, 0.994907921829712771E+00),
('rvs_0004.txt', -0.658082070168154587E+01, 0.682875011391089748E+00),
('rvs_0005.txt', -0.486556982931438142E+01, 0.930406316677828249E+00),
('rvs_0006.txt', -0.108329221460122653E+02, 0.115487852990739137E+01),
These should be close to the peak. I also added V=0 to the tests (always the second line).
The lnlikelihood comes out as:
rvs_0001.txt -477.199687239
rvs_0001.txt -477.932407311
rvs_0002.txt -442.938443624
rvs_0002.txt -506.288309991
rvs_0003.txt -379.804965902
rvs_0003.txt -503.529750381
rvs_0004.txt -361.577289304
rvs_0004.txt -430.821384141
rvs_0005.txt -373.800428916
rvs_0005.txt -410.592674841
rvs_0006.txt -403.316207475
rvs_0006.txt -589.055564505
Please check what your GP implementation gives at these points.
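For anyone reproducing these numbers, the zero-planet lnlikelihood boils down to a constant-mean GP with a jitter term added to the diagonal. Below is a minimal, self-contained sketch using a Cholesky factorization; the kernel here is a plain squared-exponential with made-up hyperparameters (`amp`, `tau`) standing in for the actual challenge kernel, and the synthetic data stands in for reading an rvs_*.txt file — so the printed value will not match the table above, but the structure of the computation is the same.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def lnlike_gp(t, rv, rv_err, V, sigma_j, amp=1.0, tau=5.0):
    """Zero-planet GP lnlikelihood: constant mean V, jitter sigma_j added
    in quadrature to the reported errors. The squared-exponential kernel
    with hyperparameters (amp, tau) is an assumption for this sketch."""
    r = rv - V
    dt = t[:, None] - t[None, :]
    K = amp**2 * np.exp(-0.5 * (dt / tau)**2)
    K[np.diag_indices_from(K)] += rv_err**2 + sigma_j**2
    # Cholesky factorization gives both the solve and the log-determinant
    L, lower = cho_factor(K, lower=True)
    alpha = cho_solve((L, lower), r)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (r @ alpha + logdet + len(t) * np.log(2.0 * np.pi))

# Synthetic usage (stands in for an rvs_*.txt file):
t = np.linspace(0.0, 30.0, 50)
rv = np.random.default_rng(0).normal(0.0, 2.0, t.size)
rv_err = np.full_like(t, 0.5)
print(lnlike_gp(t, rv, rv_err, V=0.0, sigma_j=1.0))
```

Swapping in the challenge kernel and the real data columns should then reproduce the tabulated values.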
My results for the zero planet model likelihood match yours very nicely.
("rvs_0001.txt", [-0.72851, 2.08718]): -477.1996872389699 ("rvs_0001.txt", [0.0, 2.08718]): -477.93240731114145 ("rvs_0002.txt", [6.53665, 1.71887]): -442.93844362361403 ("rvs_0002.txt", [0.0, 1.71887]): -506.28830999134107 ("rvs_0003.txt", [-8.69754, 0.994908]): -379.804965901685 ("rvs_0003.txt", [0.0, 0.994908]): -503.529750380888 ("rvs_0004.txt", [-6.58082, 0.682875]): -361.57728930373844 ("rvs_0004.txt", [0.0, 0.682875]): -430.8213841409087 ("rvs_0005.txt", [-4.86557, 0.930406]): -373.80042891649305 ("rvs_0005.txt", [0.0, 0.930406]): -410.5926748410176 ("rvs_0006.txt", [-10.8329, 1.15488]): -403.3162074752204 ("rvs_0006.txt", [0.0, 1.15488]): -589.0555645046938
Cheers, Eric
-- Eric Ford, Professor of Astronomy & Astrophysics, Center for Exoplanets & Habitable Worlds, Center for Astrostatistics, Institute for CyberScience, Penn State Astrobiology Research Center, Pennsylvania State University
Same here, using my implementation of the lnlikelihood, which is basically scipy.linalg's Cholesky decomposition, just like George with the basic solver. No big bugs, then.
rvs_0001.txt -477.199687239
rvs_0001.txt -477.932407311
rvs_0002.txt -442.938443624
rvs_0002.txt -506.288309991
rvs_0003.txt -379.804965902
rvs_0003.txt -503.529750381
rvs_0004.txt -361.577289304
rvs_0004.txt -430.821384141
rvs_0005.txt -373.800428916
rvs_0005.txt -410.592674841
rvs_0006.txt -403.316207475
rvs_0006.txt -589.055564505
Cheers,
Rodrigo
I found a bug in my kernel calculation: I was missing a factor of 2 for one of the hyperparameters. Currently checking whether that's the only issue.
This tests the RV curves produced. There are four test cases: two different input parameter vectors and two different input times.
Set the parameters, where the order is (V, sigma_j, P, K, chi, e, omega).
I took the times in days once from rvs_0005.txt and once generated equally spaced from 0 to 30 days. These are the 1st and 4th columns of the attached file. The output with parameter vector one (e=0) is in the 2nd and 5th columns; the output with parameter vector two (e>0) is in the 3rd and 6th columns.
Please carefully check against this output file: test1.txt
Should look like this (at least that is our output):
The above results were produced by code taken from ExoFit.
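For anyone checking their own RV code against test1.txt, a generic Keplerian RV model can be sketched as below. This is not the ExoFit code: the χ convention used here (mean anomaly M = 2π(t/P + χ)) is an assumption for this sketch and may differ in sign or reference epoch from the convention that produced the file, as the parametrization discussion later in this thread shows.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = np.where(e < 0.8, M, np.pi * np.ones_like(M))  # standard starting guess
    for _ in range(100):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def rv_model(t, V, P, K, chi, e, omega):
    """Radial velocity at times t. Parameter order follows the thread:
    (V, sigma_j, P, K, chi, e, omega); sigma_j is a noise parameter and
    does not enter the curve. The chi convention, mean anomaly
    M = 2*pi*(t/P + chi), is an assumption for this sketch."""
    M = 2.0 * np.pi * (t / P + chi)
    E = solve_kepler(np.mod(M, 2.0 * np.pi), e)
    # true anomaly from eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    return V + K * (np.cos(nu + omega) + e * np.cos(omega))
```

For e=0 this reduces to a pure sinusoid V + K cos(2π(t/P + χ) + ω), which is a quick sanity check before comparing the eccentric case.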
@JohannesBuchner, are the first parameter values (and likelihoods) the maximum-likelihood ones?
@j-faria for the GP test, this is the highest-likelihood sample that one of the MultiNest runs has seen. I did not apply an optimizer here, so it is probably not the exact maximum likelihood, but it should be in the vicinity.
All consistent with my ln likelihood values for the test cases
1 -477.199687 -477.932407
2 -442.938444 -506.288310
3 -379.804966 -503.529750
4 -361.577289 -430.821384
5 -373.800429 -410.592675
6 -403.316207 -589.055565
Thanks for checking this. Given the agreement from three separate implementations, I think it's safe for us to offer these as unit tests that anyone else can use.
On Aug 15, 2017 7:04 PM, "vineshrajpaul" notifications@github.com wrote:
All consistent with my ln likelihood values for the test cases
-477.199687 -477.932407
-442.938444 -506.288310
-379.804966 -503.529750
-361.577289 -430.821384
-373.800429 -410.592675
-403.316207 -589.055565
— You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/EPRV3EvidenceChallenge/Inputs/issues/9#issuecomment-322613035, or mute the thread https://github.com/notifications/unsubscribe-auth/ABQywr2FB71mQyHRsuSKvRoZNRNv22OZks5sYiQUgaJpZM4O21Uy .
Please also check your RV implementations against test1.txt.
How is the parameter chi defined? I had a look at the ExoFit paper (https://doi.org/10.1111/j.1365-2966.2008.14385.x); although it says χ is defined in Section 2.2 of the paper, I can't find any definition there.
@vineshrajpaul We found the definition in the ExoFit User's Guide (http://www.star.ucl.ac.uk/~lahav/ExoFitv2.pdf), Appendix A.2, where χ is defined as the periastron passage factor.
@SurangkhanaRukdee Ah, ok, thanks! Not relevant to the likelihood verification, I realise; but for the evidence computation, what prior are you placing on χ?
@vineshrajpaul We keep it uniform from 0 to 1.
Ok, so I've run the 4 RV output test - see rajpaul_test1.txt
For the second time series (linearly spaced between 0 and 30 d), my RVs agree with those provided by @JohannesBuchner to within about a part in a million (or a few μm/s, which is surely good enough for EPRV). I guess the discrepancy comes down to different approaches to solving Kepler's equation or other such numerical details.
For the first time series, i.e. the one taken from rvs_0005.txt, I do not obtain consistent RVs. I assume it is purely because of a different RV parametrization (χ vs. periastron passage time, t0, in my case). I played around with translating between χ and t0 but still didn't achieve consistent outputs, which is unsurprising as I'm usually terrible at making sense of different schemes for parametrizing Keplerian orbits. Anyway, given that I do get consistent outputs for the time series which starts at t=0, the (presumably) linear relation between χ and t0, and the uniform priors on both parameters, I would guess the RV discrepancy is inconsequential.
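For what it's worth, if χ is taken to be the orbital phase offset from periastron at a reference epoch t_ref (an assumption here — the ExoFit convention may differ in sign or reference epoch, which could be exactly the translation issue described above), the χ ↔ t0 mapping is linear up to period wrapping:

```python
def chi_to_t0(chi, P, t_ref=0.0):
    """Periastron passage time t0 from phase offset chi, assuming
    mean anomaly M(t) = 2*pi*((t - t_ref)/P + chi), so that M(t0) = 0.
    This convention is an assumption for this sketch."""
    return t_ref - chi * P

def t0_to_chi(t0, P, t_ref=0.0):
    """Inverse mapping, reduced to the conventional [0, 1) interval."""
    return ((t_ref - t0) / P) % 1.0
```

Under this convention a uniform prior on χ over [0, 1) maps to a uniform prior on t0 over one period, so as noted above the choice should not affect the evidence.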
As discussed in the breakout session, it would be useful to verify that the likelihood implementations are correct so that we can exclude that as a source of problems.
Please provide several (unit) test cases for the likelihood function with input parameters and output values.