mbaumer opened this issue 9 years ago
Will do. I owe @ericmorganson an email, having failed to organize a meeting with the rest of the group. I'm not sure where he is with the WISE data, but I agree that it looks very useful based on Agnello et al. 2014.
We have WISE data. It's simple to cross-match. I am just unclear on exactly what you want.
Great! We're meeting this afternoon, and will design the catalog(s) we need. Thanks!
Hi @ericmorganson, here's what we came up with today for the columns we'd like in our catalog:
- RA/DEC (2 columns)
- g, r, i, z, and Y magnitudes (5 columns)
- grizY magnitude uncertainties (5 columns)
- WISE W1-W4 magnitudes (4 columns)
- WISE W1-W4 uncertainties (4 columns)
- grizY object size measures (psf magnitude minus fixed aperture magnitude, which @drphilmarshall said you two had used before as a size proxy) (5 columns)
- grizY size uncertainties (5 columns)
- time variability of the source in each band, as measured by the standard deviation of the grizY light curves, correctly weighted to account for photometric noise (5 columns; see the sketch after this list)
- grizY time variability uncertainties (5 columns)
Total: 40 columns
Thanks for your help, and let us know what we need to clarify within this list. The size and variability measures in particular are up for debate/improvement!
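As a concrete (hypothetical) example of what we mean by "correctly weighted to account for photometric noise", here is one possible excess-variance style estimator. The function name and exact weighting are our own sketch, not something the PS1 pipeline produces:

```python
import numpy as np

def noise_corrected_variability(mag, mag_err):
    """Sketch of a noise-corrected variability measure for one band.

    mag     : array of light-curve magnitudes for one object in one band
    mag_err : per-epoch photometric uncertainties on those magnitudes

    Returns the square root of the inverse-variance-weighted excess
    variance (0 if the scatter is consistent with measurement noise alone).
    """
    w = 1.0 / mag_err**2                                  # inverse-variance weights
    mean = np.sum(w * mag) / np.sum(w)                    # weighted mean magnitude
    scatter2 = np.sum(w * (mag - mean)**2) / np.sum(w)    # weighted variance of the light curve
    noise2 = np.mean(mag_err**2)                          # variance expected from the errors alone
    return np.sqrt(max(scatter2 - noise2, 0.0))
```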
So it would be pretty easy to do that for a point source or isolated extended source. If you are just looking for extended sources with quasar-like PS1-WISE colors, then that should work.
Here is the issue: the images are all processed independently. If we are doing semi-resolved lenses, then sometimes an object will be resolved as a pair (or triplet or quad). Sometimes it will be a single object. Also, variability just won't work on extended sources.
But if you don't care about variability, then that should (might) be fine.
The other half of this is which objects you want. I have a list of lenses, so that's easy. Do you want a random square degree as your random patch? Do we want to do a WISE color cut to get a quasar sample?
-E
Hey @ericmorganson, @drphilmarshall and I discussed these issues and, for ease of comparison with our results on simulated lenses, we'd like to start with a catalog of single isolated sources. As for which objects to include, let's not do a WISE cut. Your suggestion of all objects in a random square degree, plus the known lenses, sounds good.
What exactly is preventing us from measuring extended object variability? Why can't we just take the standard deviation of aperture magnitude (or a similar naive measure)?
This took forever. Sorry. We had a host of family health issues (all fine now). And I had some large databasing to do.
I hope it is still useful. We don't naturally produce everything you want, so the columns are a bit different. But they should contain all the information you want. The columns are:
ra dec
mean err stdev
mean_ap err_ap stdev_ap
mean_kron err_kron stdev_kron
nmag_ok
lc_mag lc_err lc_mjd
lc_mag_ap lc_err_ap lc_mjd_ap
lc_mag_kron lc_err_kron lc_mjd_kron
w1_nanomaggies w1_nanomaggies_ivar w2_nanomaggies w2_nanomaggies_ivar w3_nanomaggies w3_nanomaggies_ivar w4_nanomaggies w4_nanomaggies_ivar (WISE magnitude = 22.5 - 2.5 log10(w[1-4]_nanomaggies))
sdss_flux sdss_flux_ivar
The light curve columns may come back flattened and not make sense. This is a python versioning problem. But it can always be solved with a reshape (e.g. lc_mag = pyfits.open('lenses.fits')[1].data['lc_mag'].reshape([77,5,20])).
Files: lenses.fits https://docs.google.com/file/d/0B7jp1vigcKd-enJOWE1CMk83UHc/edit?usp=drive_web and onedeg.fits https://docs.google.com/file/d/0B7jp1vigcKd-UE14azJ2WXNuVXc/edit?usp=drive_web
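A minimal loading sketch, assuming astropy.io.fits as a drop-in for pyfits and the [77, 5, 20] = (object, grizY band, epoch) shape implied by the reshape above; for onedeg.fits the first dimension would be that file's object count instead:

```python
import numpy as np
from astropy.io import fits  # modern stand-in for pyfits

with fits.open('lenses.fits') as hdul:
    data = hdul[1].data
    # Reshape the flattened light-curve columns to (object, band grizY, epoch)
    lc_mag = data['lc_mag'].reshape([77, 5, 20])
    lc_err = data['lc_err'].reshape([77, 5, 20])
    lc_mjd = data['lc_mjd'].reshape([77, 5, 20])
    # WISE fluxes are in nanomaggies; convert to magnitudes as described above
    w1_mag = 22.5 - 2.5 * np.log10(data['w1_nanomaggies'])
```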
Thanks a lot for putting this together, @ericmorganson! @drphilmarshall and I are still working on some issues in our SDSS testbed, but we can start testing on this in parallel.
One initial question: Each light curve has 20 samples taken in each band, but what is the time spacing between samplings?
It's unevenly spaced. The times are in the lc_mjd columns.
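For example, the per-band epoch spacing can be read straight off the lc_mjd array (continuing the hypothetical loading sketch above):

```python
# lc_mjd has shape (object, band grizY, epoch); gaps in days between
# successive epochs of the first object's g-band light curve:
gaps = np.diff(np.sort(lc_mjd[0, 0]))
print(gaps.min(), gaps.max())
```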
-E
Hey, @drphilmarshall -- could you talk to @ericmorganson about getting our hands on some PS1 data? Based on our SDSS work so far, it looks like IR magnitudes have a lot of discriminating power, so a WISE-matched catalog would (eventually) be ideal. For starters, though, griz photometry (with uncertainties, as required by XD) would be fine for testing our machinery on the new data.
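For reference, a minimal sketch of what "with uncertainties, as required by XD" means in practice, assuming astroML's XDGMM as one possible extreme-deconvolution implementation; the input arrays here are placeholders, not PS1 data:

```python
import numpy as np
from astroML.density_estimation import XDGMM  # assumed available XD implementation

# Placeholder griz magnitudes and uncertainties (500 fake objects, 4 bands);
# with real data these would come from the PS1 catalog columns.
rng = np.random.RandomState(0)
X = rng.normal(20.0, 1.0, size=(500, 4))
mag_err = np.full_like(X, 0.05)

# XD wants a per-object covariance matrix; here purely diagonal.
Xerr = np.zeros((X.shape[0], 4, 4))
Xerr[:, np.arange(4), np.arange(4)] = mag_err**2

xd = XDGMM(n_components=5, max_iter=100)
xd.fit(X, Xerr)
print(xd.mu.shape, xd.V.shape)  # fitted component means and covariances
```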