Rachmanin0xFF / jwst-twitter-bot

Auto-process and upload JWST images to Twitter

colors #1

Open yuval-harpaz opened 1 year ago

yuval-harpaz commented 1 year ago

Hi, thanks for this most useful tool. I am playing with your method for color balance, with the aim of making color images automatically. I am currently using an rrr ggg bbb approach: dividing the images into three groups, from MIRI to NIRCam, then averaging each group into a layer. I still have to overcome some speed issues, but if you wish, I'd like to help make color twitter-bot images. Here is some example code where I use your level_adjust function:

image
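For reference, the rrr ggg bbb averaging could be sketched like this. Note that `level_adjust` below is a hypothetical percentile-stretch stand-in, not the repo's actual implementation, and the grouping of exposures into channels is assumed to be done by the caller:

```python
import numpy as np

def level_adjust(img, lo=0.5, hi=99.5):
    """Stand-in for the repo's level_adjust: a percentile stretch to [0, 1].
    (Hypothetical reimplementation; the real function may differ.)"""
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img - a) / (b - a + 1e-12), 0.0, 1.0)

def rgb_from_filter_groups(red_imgs, green_imgs, blue_imgs):
    """Average each group of co-registered exposures into one channel,
    then stretch each channel independently."""
    channels = [level_adjust(np.mean(group, axis=0))
                for group in (red_imgs, green_imgs, blue_imgs)]
    return np.dstack(channels)  # H x W x 3 floats in [0, 1]
```

Stretching each channel independently matches the per-layer averaging described above, though (as discussed later in the thread) equalizing per channel rather than in luminosity space is a design choice, not a given.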

Rachmanin0xFF commented 1 year ago

Hi Yuval,

Wow! Thank you for creating this; the results look very beautiful. I feel the need to apologize for the embarrassing state of the code; I have been meaning to rework it for a while now.

I would love to use your help making the bot post color photos. I was looking into it earlier, but I ran into three issues (ordered in descending priority):

  1. Alignment. JWST photos taken with different filters and instruments are often not perfectly aligned, and they sometimes contain nonlinear distortions that are tricky to correct. This is the biggest challenge; I was thinking of just throwing a general mesh-based correction method at it and seeing how that goes.
  2. Color. Currently, the bot posts grayscale photos. Obviously, it is impossible to create "true color" from the filtered photos (they would just be red). However, I would still like to use palettes that are somewhat representative of the physical nature of the light received by the JWST. There are a lot of ways to do this: linearly mapping filters to spectral colors, using gas emission spectra, etc. I'm also not sure how I feel about individually equalizing each filter/channel -- it might make more sense to do it in luminosity space, but this could decrease the vibrance of the output...
  3. When to post. If images using different filters are uploaded to the MAST database at different times, there is a chance that the bot could miss some of them, and post an incomplete image. This shouldn't be too difficult to work around, though.
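On point 1, the global (translation-only) component of misalignment can be estimated with phase correlation. This is only a sketch of that first pass; real JWST frames would still need WCS-based reprojection or a mesh warp for the nonlinear distortions mentioned above:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer translation (dy, dx) such that
    np.roll(img, (dy, dx), axis=(0, 1)) best matches ref,
    via whole-pixel phase correlation."""
    F_ref, F_img = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

A large residual after applying the recovered shift could also serve as the "detect the problem and don't tweet" flag discussed below, since it suggests the distortion is not a pure translation.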

As a student, I don't have much free time. I would be happy to receive your help with any of these issues.

Rachmanin0xFF commented 1 year ago

whoops, clicked the wrong button there

yuval-harpaz commented 1 year ago

Okay, great. Before going over the items you listed: I have been playing a lot with this in the last week or two, so here are some impressions. Personally, I like MIRI to be very red, as you can see in this collage, so I average all the MIRI images and assign them to the red layer. I also apply sqrt to the level-adjusted image (MIRI only) to bring out the low-luminosity dust.
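That sqrt step is just a gamma-style curve applied to the level-adjusted data; a minimal sketch, assuming the input is already scaled to [0, 1]:

```python
import numpy as np

def lift_faint(img_adjusted):
    """sqrt stretch on a [0, 1] level-adjusted image: boosts faint
    dust (0.25 -> 0.5) while leaving saturated pixels at 1.0."""
    return np.sqrt(np.clip(img_adjusted, 0.0, 1.0))
```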
So, regarding the points: okay, let's go by your priority order, tackling one thing at a time, but here are my thoughts on all three.

  1.a Different or bad coordinate solutions are terrible, and I think we can ignore them. The most we could do is detect the problem and not tweet, or await manual approval. I wrote code to correct this, but then you have 30% of an image well aligned while the rest is shifted (Tarantula: big shift).
  1.b About the non-linear noise: one option is to make it less visible (raise the data to a power x, or subtract a constant). If we prefer to clean it, we can try a few things. One is to remove a local median, computed with a ring filter, from low-luminosity areas. I can start improvising or follow your lead, but first please give me some examples, because there are all sorts of issues.
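The ring-filter idea in 1.b could be prototyped with scipy's `median_filter` and an annulus footprint. This is a sketch; the threshold and radii are made-up illustrative values, not tuned for JWST data:

```python
import numpy as np
from scipy.ndimage import median_filter

def ring_footprint(r_in=3, r_out=6):
    """Boolean annulus footprint for a local-median ring filter."""
    y, x = np.ogrid[-r_out:r_out + 1, -r_out:r_out + 1]
    r2 = x * x + y * y
    return (r2 >= r_in * r_in) & (r2 <= r_out * r_out)

def subtract_local_median(img, faint_thresh=0.1, r_in=3, r_out=6):
    """Subtract the ring-median local background, but only in
    low-luminosity areas, leaving bright structure untouched."""
    bg = median_filter(img, footprint=ring_footprint(r_in, r_out))
    out = img.copy()
    faint = img < faint_thresh
    out[faint] = np.clip(img[faint] - bg[faint], 0.0, None)
    return out
```

Taking the median over a ring (rather than a full disc) keeps the estimate from being pulled up by the pixel's own compact source, which is why it suits background removal around faint structure.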

  2. I like the idea of assigning colors to different filters. We can choose a range, say from red to bright yellow, setting F2100W as red and F200W as yellow, and spread the rest in between. We can also try spreading the colors all the way to blue. I saw that Pasquale-Leon (for NASA PR) simply averages a few filters for each RGB layer, so if nothing else looks good we can also revert to that. A third approach I saw was to use the total light as green, and then one filter for red and one for blue.
  3. Yes, I was wondering about that. Also, some galaxies have only partial coverage by one of the instruments. I sometimes tweet MIRI only to discover that, had I waited a couple of hours, I'd have had NIRCam as well. But I think if we get a new object, we wait a day or two to see if enough data accumulates: say, at least three MIRI and three NIRCam images, with no new images for 12 h or more. Okay, so thanks for getting back to me. I'll start looking into detecting bad alignment, and if you give me artifact examples we can start thinking about that too. I suggest opening new issues.
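The linear filter-to-color spread in point 2 could be sketched like this. The wavelengths are approximate pivot values (check the JWST filter documentation for exact numbers), and the hue endpoints are just the red-to-yellow range suggested above; widening `hue_short` toward 2/3 would spread the palette all the way to blue:

```python
import colorsys

# Approximate pivot wavelengths in microns (illustrative values only).
FILTER_UM = {"F200W": 2.0, "F770W": 7.7, "F1130W": 11.3, "F2100W": 21.0}

def filter_to_rgb(name, um_min=2.0, um_max=21.0, hue_short=1/6, hue_long=0.0):
    """Map a filter's wavelength linearly onto a hue range:
    longest wavelength (F2100W) -> red, shortest (F200W) -> yellow."""
    t = (FILTER_UM[name] - um_min) / (um_max - um_min)  # 0 = short, 1 = long
    hue = hue_short + t * (hue_long - hue_short)
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```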

image