thjarrett opened this issue 4 years ago
Here is a slight alteration, another flavour of the idea.
Allow JPEGs (or PNGs, or PDFs) to be associated with SoFiA sources,
which could then be displayed in VR via some command.
That is: suppose you have some image created for a SoFiA source. It could be a nice MeerLICHT image, a greyscale DSS image, a moment map, whatever. It is simply a JPEG (etc.) file. In iDaVIE-v, alongside a cube and SoFiA mask / source list, you would also have links to these ancillary images (JPEGs). The user simply asks for them to appear in VR when they engage a particular SoFiA source.
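The association could be as simple as a sidecar lookup from SoFiA source IDs to image files. A minimal sketch, assuming a plain dictionary (the filenames and the `images_for_source` helper are invented here for illustration, not part of any existing iDaVIE-v interface):

```python
# Hypothetical sidecar mapping: SoFiA source ID -> ancillary image files.
# Filenames are invented for illustration.
sidecar = {
    "1": ["src1_meerlicht.jpg", "src1_dss.jpg"],
    "2": ["src2_moment0.png"],
}

def images_for_source(src_id, mapping):
    """Return the ancillary image paths linked to a SoFiA source ID."""
    return mapping.get(str(src_id), [])

# When the user engages source 1 in VR, the viewer would look these up:
print(images_for_source(1, sidecar))  # ['src1_meerlicht.jpg', 'src1_dss.jpg']
```

Such a mapping could just as easily be loaded from a JSON or CSV file shipped next to the cube.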
Back to the FITS image idea -- something I do all the time in my galaxy analysis is to tap the DSS image server and pull a JPEG greyscale image of some defined area (RA/Dec and width, say 2 arcmin). Since the DSS is all-sky, I will always get something. I can then compare my data (whatever it is) with this optical image of the same sky region. Nifty and powerful. It is a simple wget (or aria2c) call to the server.
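For concreteness, here is a sketch of building such a cutout request. The endpoint is the STScI DSS search CGI; the parameter names (`r`, `d`, `e`, `h`, `w`, `f`) are my recollection of that interface and should be checked against the archive documentation, and the coordinates are just illustrative:

```python
from urllib.parse import urlencode

def dss_cutout_url(ra_deg, dec_deg, width_arcmin=2.0, fmt="gif"):
    """Build a DSS cutout request URL (parameter names assumed, see note)."""
    params = {
        "r": ra_deg,        # RA in degrees
        "d": dec_deg,       # Dec in degrees
        "e": "J2000",       # equinox
        "h": width_arcmin,  # cutout height, arcmin
        "w": width_arcmin,  # cutout width, arcmin
        "f": fmt,           # output format, e.g. gif or fits
    }
    return "https://archive.stsci.edu/cgi-bin/dss_search?" + urlencode(params)

# The resulting URL is what one would hand to wget / aria2c to fetch:
print(dss_cutout_url(10.684, 41.269))
```

The same pattern works for any cutout service that takes RA/Dec and a field size as query parameters.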
Ideally you would like to have the 2D image displayed within the cube, either at the end of the z-axis and/or sliceable along the z-axis throughout the cube in a semi-transparent way, with the slice it cuts out projected onto it. Like the two planes in the simple figure below.
Following a conversation between @thjarrett, @paoloserra and myself, a note to summarise the desired features:
See also Issue #197
All of these suggestions would add powerful functionality.
This would be a super slick function -- the ability to draw upon a pre-defined image when exploring a cube, notably selected regions. This is how it works: you select an object (or a piece of the cube), which defines a region of 3D sky (RA/Dec/velocity), from which you then extract the 2D pixels (of the same RA/Dec area) from a FITS image. That cutout is converted to a JPEG and displayed in the graphics window of VR (i.e., you can see it in VR; it pops up next to the selected area). I'm specifically thinking of MeerLICHT imaging and WISE imaging, the former because it is attached to MeerKAT observations, and the latter because it is all-sky so you can always chase down the pixels (notably from cloud servers). This would be super useful for cube exploration because: (1) radio data is often mysterious and highly unhelpful, while optical/infrared is straightforward; (2) it lets you check for artifacts in the radio data; and (3) it helps you discover new objects (i.e. radio sources not seen in opt/IR).
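Mapping the selected RA/Dec region onto pixels of the FITS image is the core step. A minimal sketch, assuming a simple linear WCS (CRVAL/CRPIX/CDELT, no rotation or projection terms -- a real implementation would use astropy's WCS machinery; the header values below are invented for illustration):

```python
def sky_to_pixel(ra, dec, crval, crpix, cdelt):
    """Linear WCS: pixel = crpix + (world - crval) / cdelt.
    Ignores projection distortion; fine for small fields."""
    x = crpix[0] + (ra - crval[0]) / cdelt[0]
    y = crpix[1] + (dec - crval[1]) / cdelt[1]
    return x, y

def cutout_bounds(ra_min, ra_max, dec_min, dec_max, crval, crpix, cdelt):
    """Pixel bounding box of the selected RA/Dec region
    (RA axis may run backwards, hence the min/max)."""
    x0, y0 = sky_to_pixel(ra_min, dec_min, crval, crpix, cdelt)
    x1, y1 = sky_to_pixel(ra_max, dec_max, crval, crpix, cdelt)
    return (min(x0, x1), max(x0, x1)), (min(y0, y1), max(y0, y1))

# Illustrative header: 1-arcsec pixels, RA increasing to the left.
crval = (150.0, 2.0)            # reference RA, Dec (deg)
crpix = (512.0, 512.0)          # reference pixel
cdelt = (-1 / 3600.0, 1 / 3600.0)  # deg per pixel

print(cutout_bounds(149.99, 150.01, 1.99, 2.01, crval, crpix, cdelt))
```

The resulting pixel box is what you would slice out of the image array before converting to a JPEG for display next to the selection.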