AstroJacobLi / popsed

Population-Level Inference for Galaxy Properties from Broadband Photometry with Neural Density Estimation
MIT License

noise model #4

Open changhoonhahn opened 2 years ago

changhoonhahn commented 2 years ago

In case it's useful, I've recently implemented noise models for the NASA-Sloan Atlas (SDSS DR8 re-reductions) in this notebook: https://github.com/changhoonhahn/SEDflow/blob/main/nb/training_data.ipynb

AstroJacobLi commented 2 years ago

Hmm... adding SNR=20 noise to the photometry (~0.05 mag) kinda screws things up: the recovered redshift distribution is now very off. [image attached]
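For context, the SNR-to-magnitude-error conversion used above (~0.05 mag at SNR=20, ~0.1 mag at SNR=10) follows from sigma_mag = (2.5 / ln 10) / SNR. A minimal sketch of adding such noise to mock photometry; `add_photometric_noise` is a hypothetical helper, not part of popsed:

```python
import numpy as np

def add_photometric_noise(mags, snr=20.0, rng=None):
    """Perturb magnitudes with Gaussian noise at a fixed flux SNR.

    sigma_mag = 2.5 / ln(10) / SNR, i.e. ~0.054 mag at SNR=20
    and ~0.109 mag at SNR=10.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = 2.5 / np.log(10.0) / snr
    noisy = mags + rng.normal(0.0, sigma, size=np.shape(mags))
    return noisy, sigma
```

The same `sigma` should be fed to the inference as the reported photometric uncertainty, so the noise model of the mocks and of the forward model match.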

changhoonhahn commented 2 years ago

Interesting. Is the redshift distribution always tighter when you train different models?


AstroJacobLi commented 2 years ago

After repeating this experiment several times: the recovered redshift distribution is generally tighter than the truth, but it varies substantially from one NDE to another. I agree that we need to combine ~10 neural density estimators and report a distribution of posteriors.
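A minimal sketch of that ensembling idea, assuming each trained flow exposes a `sample` method (the interface here is hypothetical, not popsed's actual API):

```python
import numpy as np

def ensemble_posterior(ndes, n_per_model=1000):
    """Pool samples from independently trained NDEs.

    Each NDE is a trained flow with a .sample(n) method returning n
    draws of the population parameter (e.g. redshift). Pooling the
    draws gives an equal-weight mixture; the scatter of the per-model
    means gives a rough handle on the model-to-model variation.
    """
    draws = np.stack([nde.sample(n_per_model) for nde in ndes])
    pooled = draws.reshape(-1)            # equal-weight mixture of all NDEs
    per_model_means = draws.mean(axis=1)  # one mean per NDE
    return pooled, per_model_means.std()
```

The spread returned here quantifies exactly the "large variation from one NDE to another" noted above, on top of each model's own posterior width.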

AstroJacobLi commented 2 years ago

Results from the SNR=10 (~0.1 mag) mock observations! [image attached]

changhoonhahn commented 2 years ago

That looks very promising.

The redshift distribution is still a bit narrower than the truth, and the log tau distribution isn't perfect. But still, this is very convincing.

In #3 you mentioned:

> Train 10 NDEs (random realizations of Gaussians) with the same training hyperparams.

By "same training hyperparams", do you mean that the NDEs all have the same architecture? Assuming you're using MAF, have you played around with a different number of blocks, or with wider blocks?

changhoonhahn commented 2 years ago

BTW, the noise models of your forward model and of the mock observations are the same, right?

AstroJacobLi commented 2 years ago

Good point! I'm using neural spline flows (five NSFs stacked as one NDE); each NSF has 50 hidden_features. I've played with n_NSFs a little, but found no huge improvement over the default number. Of course, I can explore this more.

> BTW the noise model of your forward model and mock observations are the same right?

Yes, the noise is the same in the mock observations and in the inference (SNR = 10).
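On the architecture question above, one way to "play this game" more systematically is a small grid sweep over flow depth and width. This is only a sketch: `train_and_eval` is a hypothetical callback standing in for the actual NSF training loop, returning e.g. a validation log-likelihood.

```python
import itertools

def sweep_nsf_hyperparams(train_and_eval,
                          n_transforms=(3, 5, 8),
                          hidden_features=(50, 100)):
    """Grid-search NSF depth (number of stacked flows) and width.

    train_and_eval(n_transforms, hidden_features) trains one NDE with
    that architecture and returns a score to maximize (e.g. validation
    log-likelihood). Returns the best (depth, width) pair and all scores.
    """
    results = {}
    for nt, hf in itertools.product(n_transforms, hidden_features):
        results[(nt, hf)] = train_and_eval(nt, hf)
    best = max(results, key=results.get)
    return best, results
```

Since each training run already varies between random seeds, each grid point should ideally be averaged over a few seeds before comparing scores.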