Closed: gostevehoward closed this issue 7 years ago.
This issue was never really tightly specified (my own fault), but the idea was repeated draws from the same source parameters in isolated images, to check the frequentist properties of posterior uncertainty in a very focused manner. Whether the images come from GalSim or Synthetic.jl is somewhat orthogonal; that's really a question of which generative model you're validating against. The more important distinction, I think, is between that kind of focused test (approach 1) and the validation we already run (approach 2).
Approach 2 has two key differences: it draws source parameters from a realistic population rather than repeating draws from the same parameters, and its sources are not isolated.
The former distinction is arguably a good thing: by using a realistic population of source parameters (e.g., a Stripe 82 ground-truth catalog), we get a single measure of uncertainty quantification, rather than having to quantify uncertainty separately for stars, galaxies, bright galaxies, dim galaxies, bright large galaxies, etc. The latter distinction probably doesn't have a big effect (but I don't know for sure).
And, of course, the key advantage of approach 2 is that we already have all the driver code for it. Approach 1 would require new scaffolding to generate the images, run Celeste on them, and summarize the results.
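For concreteness, the approach-1 style check (repeated draws from fixed source parameters, then counting how often credible intervals cover the truth) can be sketched as follows. This is an illustrative Python sketch, not Celeste's Julia code; the function name `coverage_check` and the flat-prior Gaussian toy model are made up for illustration.

```python
import numpy as np

def coverage_check(n_images=2000, true_flux=5.0, noise_sd=1.0, seed=0):
    """Toy frequentist coverage check: repeatedly draw noisy observations
    of one fixed source parameter and count how often the central 95%
    credible interval covers the truth."""
    rng = np.random.default_rng(seed)
    # Each "image" is one noisy observation of the same true flux.
    obs = true_flux + noise_sd * rng.standard_normal(n_images)
    # Under a flat prior with known Gaussian noise, the posterior for the
    # flux is N(obs, noise_sd^2), so the 95% interval is obs +/- 1.96 sd.
    lo, hi = obs - 1.96 * noise_sd, obs + 1.96 * noise_sd
    # Fraction of intervals containing the truth; ~0.95 if calibrated.
    return float(np.mean((lo <= true_flux) & (true_flux <= hi)))
```

In a real version, the posterior mean and sd would come from running Celeste on each simulated image rather than from this conjugate toy model.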
So I agree: I think we have sufficient tools already and don't need to implement another test right now. Just wanted to clarify :)
Makes sense, thanks for the clarification.
I'm pretty happy with using Synthetic.jl to validate our variational approximation, and Q-Q plots on real data to validate our model. I'm not sure what drawing GalSim data with noise adds.