IENT / YUView

The Free and Open Source Cross Platform YUV Viewer with an advanced analytics toolset
http://ient.github.io/YUView

Do you have any recent dev plans for Bayer format images like RGGB/GBRG? #381

Open zhengtianyu1996 opened 3 years ago

zhengtianyu1996 commented 3 years ago

: )

ChristianFeldmann commented 3 years ago

Hi! No, we don't have any concrete plans there yet. But if you could provide more information on the formats, maybe it is easy to add. Do you have any links to a specification document?

zhengtianyu1996 commented 3 years ago

Hi Christian, thanks for your great work, which helps me a lot :) A Bayer format image is obtained directly from the image sensor, without any post-processing. A simple architecture: https://images.app.goo.gl/F1P4CprJJAZqgYe59

You could find more information here: https://www.programmersought.com/article/20262225496/ https://web.stanford.edu/class/cs231m/lectures/lecture-11-camera-isp.pdf https://en.wikipedia.org/wiki/Bayer_filter

In short, a Bayer format image is a special MONO image (a file normally contains many frames, like a YUV file). Almost all projects related to image sensors deal with Bayer images, for example when applying a denoise algorithm to them. But a Bayer image does not match human visual intuition, so we have to apply an Image Signal Processing (ISP) pipeline to convert it to an RGB image; only then can we tell whether the denoise algorithm is good or not. The pipeline can be simple for basic viewing. Here is an example:

  1. Black Level Correction
  2. Digital Gain
  3. White Balance
  4. Demosaic/Interpolation (Necessary)
  5. Color Correction
  6. Tone Mapping / Gamma

If you are interested, I suggest implementing a simple algorithm for each step, for example bilinear or Hamilton interpolation for the demosaic step. Simple algorithms keep playback fast.
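For illustration, a minimal version of such a pipeline in Python/NumPy could look like the sketch below. All parameter values are placeholders, and the demosaic here is only a 2x2 binning for a quick look, not a real interpolation:

```python
import numpy as np

def simple_isp(bayer, pattern="GBRG", black_level=64, white_level=1023,
               wb_gains=(1.8, 1.0, 1.5), gamma=2.2):
    """Toy pipeline for quick viewing: black level -> white balance ->
    2x2-binning demosaic (half resolution) -> gamma. All values are placeholders."""
    x = np.asarray(bayer, dtype=np.float32)        # even width/height assumed
    # 1. Black level correction and normalization to [0, 1]
    x = np.clip((x - black_level) / float(white_level - black_level), 0.0, 1.0)
    # 2./3. Digital gain and white balance per color site of the 2x2 pattern
    gain = dict(zip("RGB", wb_gains))              # (R, G, B) gains
    for idx, color in enumerate(pattern):          # "GBRG" -> row 0: G B, row 1: R G
        row, col = divmod(idx, 2)
        x[row::2, col::2] *= gain[color]
    x = np.clip(x, 0.0, 1.0)
    # 4. Quick-look demosaic: average the two G sites of each 2x2 block and take
    #    R and B directly (a real demosaic would keep the full resolution)
    quads = [x[r::2, c::2] for r in (0, 1) for c in (0, 1)]
    planes = {"R": [], "G": [], "B": []}
    for color, plane in zip(pattern, quads):
        planes[color].append(plane)
    rgb = np.dstack([np.mean(planes[c], axis=0) for c in "RGB"])
    # 5. Color correction skipped here; 6. plain gamma as tone mapping
    rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)
    return (rgb * 255.0 + 0.5).astype(np.uint8)
```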

Thank you so much! Feel free to contact me if you need more information :)

ChristianFeldmann commented 3 years ago

Hi, I was already aware of the basics of Bayer patterns. The more specific points that I wanted to clear up first were:

Christian

zhengtianyu1996 commented 3 years ago

Hi, the reason why I opened this issue is that I noticed YUView supports various raw files like raw RGB, raw YUV, etc. So I was wondering whether there should be a raw Bayer option in that list, because the Bayer format is often used in streaming media development.

I am just looking for a simple conversion from a Bayer image to an RGB image, so it can be viewed in YUView. I could offer you a demo and full implementations as Matlab (or Python) code. The demo would convert an RGB image to a Bayer format raw file, then reconstruct the raw file back to RGB/YUV using a simple ISP pipeline.
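As a rough idea of the first half (not the actual demo code, just a sketch), mosaicking an 8-bit RGB image into a GBRG raw file could look like this:

```python
import numpy as np

def rgb_to_bayer(rgb, pattern="GBRG", nbits=10):
    """Sample one color per pixel from an 8-bit RGB image according to the
    2x2 Bayer pattern and scale it to the target bit depth."""
    height, width, _ = rgb.shape
    channel = {"R": 0, "G": 1, "B": 2}
    bayer = np.zeros((height, width), dtype=np.uint16)
    for idx, color in enumerate(pattern):      # pattern is the 2x2 CFA, row-major
        row, col = divmod(idx, 2)
        bayer[row::2, col::2] = rgb[row::2, col::2, channel[color]]
    return bayer << (nbits - 8)                # e.g. 8-bit values -> 10-bit range

# bayer = rgb_to_bayer(ground_truth_rgb)                # H x W x 3 uint8 input
# bayer.astype("<u2").tofile("example_gbrg_10bit.raw")  # unpacked 16-bit samples
```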

I will prepare the code and data as soon as possible. Thank you so much!

ChristianFeldmann commented 3 years ago

Yep, you are right. This sort of reader for the raw Bayer format actually fits YUView very well. It is very low level and engineering related, and a very concrete type of image format that is in practical use. I think we have to go step by step here. The first step would be, as you suggested, to add support for the basic BGGR Bayer pattern. If you could provide code in MATLAB or Python along with an example Bayer pattern raw file, this should be quite easy to add. Then in a later stage we can think about what else to add. Maybe someone has some ideas here.
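For reference, reading one frame of such a file would not be much different from the existing raw YUV readers. A sketch, assuming unpacked little-endian 16-bit samples and user-supplied width/height (the file name is made up):

```python
import numpy as np

def read_bayer_frame(path, width, height, frame_index=0, dtype="<u2"):
    """Read one frame from a headerless raw Bayer file that stores one unpacked
    little-endian 16-bit value per pixel (packed RAW10/12 would need extra code)."""
    frame_bytes = width * height * np.dtype(dtype).itemsize
    with open(path, "rb") as f:
        f.seek(frame_index * frame_bytes)
        data = np.frombuffer(f.read(frame_bytes), dtype=dtype)
    return data.reshape(height, width)

# frame = read_bayer_frame("capture_gbrg_1920x1080.raw", 1920, 1080)
```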

Looking forward to some code/links/files.

zhengtianyu1996 commented 3 years ago

Hi Christian, here is an implementation supporting 4 kinds of Bayer format (default: GBRG): https://github.com/zhengtianyu1996/bayer_image_process

Just try it. Feel free to contact me if you have any questions. Thank you so much!

zhengtianyu1996 commented 3 years ago

btw, regarding the demosaic method, which affects the final image quality a lot, I would say the bilinear method is enough for the moment. It's easy to implement and not time consuming. Other methods like GCBI/GBTF/Hamilton achieve better quality with more complex calculations; maybe they could be added in a later stage, not now.
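To show how small the bilinear method actually is, here is a sketch using SciPy convolutions purely for brevity (a C++ version would just be a few nested loops):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(bayer, pattern="GBRG"):
    """Textbook bilinear demosaic: every missing color sample is the average
    of its nearest measured neighbours of the same color."""
    bayer = np.asarray(bayer, dtype=np.float32)
    masks = {c: np.zeros(bayer.shape, dtype=np.float32) for c in "RGB"}
    for idx, color in enumerate(pattern):      # pattern is the 2x2 CFA, row-major
        row, col = divmod(idx, 2)
        masks[color][row::2, col::2] = 1.0
    # With these kernels, measured samples are kept as-is and missing ones
    # become the mean of the 2 or 4 nearest samples of that color.
    k_g = np.array([[0, 1, 0],
                    [1, 4, 1],
                    [0, 1, 0]], dtype=np.float32) / 4.0
    k_rb = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=np.float32) / 4.0
    r = convolve(bayer * masks["R"], k_rb, mode="mirror")
    g = convolve(bayer * masks["G"], k_g, mode="mirror")
    b = convolve(bayer * masks["B"], k_rb, mode="mirror")
    return np.dstack([r, g, b])
```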

ChristianFeldmann commented 3 years ago

Thanks for the code. I will take a look at it and come back to you if I have any questions. My Matlab is a bit rusty. Also I don't have a Matlab license. Could you provide some sample files? Best would be the original bayer pattern file and the corresponding RGB file so I have some reference.

zhengtianyu1996 commented 3 years ago

I added example files to my repo, including a raw file, a ground truth RGB image and a reconstructed RGB image that you can refer to. I could also rewrite the code in Python (it may take another 2-3 days) if you feel it's necessary.

ChristianFeldmann commented 3 years ago

Nice, thank you very much. I think I will try the Matlab code first. If you find anything in C++ then let me know 😆 But Python is not necessary for now. I will just have to look up some Matlab things again.

ulfzie commented 3 years ago

Some feedback from my side:

  1. The Matlab code references GBRG (and misses RGBG). I am not aware of this filter pattern. (See https://en.wikipedia.org/wiki/Bayer_filter)
  2. I am not a Matlab expert, but it looks to me like the RAW10/RAW12/RAW14 patterns are not generated in the right way. Are these compliant with the SMIA 1.0 Part 2 CCP2 specification? (RAW10, for example, uses 4 bytes for the MSBs of 4 pixels; the 5th byte contains the 2 LSBs of each of these 4 pixels, as sketched below.)
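To illustrate point 2, a rough sketch of unpacking that 4-pixels-in-5-bytes RAW10 layout (the bit positions in the 5th byte are assumed as in MIPI CSI-2, so please double-check against the SMIA document; possible line padding is ignored):

```python
import numpy as np

def unpack_raw10(packed_bytes, width, height):
    """Unpack SMIA/MIPI-style RAW10: every group of 5 bytes holds 4 pixels.
    Bytes 0-3 carry bits 9..2 of pixels 0-3; byte 4 carries the 2 LSBs of each
    of them (pixel i at bits 2*i..2*i+1). Width is assumed to be a multiple of 4."""
    data = np.frombuffer(packed_bytes, dtype=np.uint8).reshape(-1, 5).astype(np.uint16)
    msb = data[:, :4]                          # 8 MSBs of the four pixels
    lsb = data[:, 4:5]                         # shared LSB byte
    shifts = np.arange(4, dtype=np.uint16) * 2
    pixels = (msb << 2) | ((lsb >> shifts) & 0x3)
    return pixels.reshape(height, width)

# with open("isp_memory_dump.raw10", "rb") as f:
#     frame = unpack_raw10(f.read(1920 * 1080 * 5 // 4), 1920, 1080)
```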

regards, Ulf

PS: I would appreciate it if you could integrate the RAW view into YUView!

zhengtianyu1996 commented 3 years ago

Hi ulfzie,

  1. The GBRG format is commonly used by sensor manufacturers like Samsung or Sony, and so is RGBG (I haven't used that kind of sensor, so I didn't support it in my Matlab code). However, it's not important at the moment, because both are easy to demosaic with the bilinear method; it could be added very soon.
  2. Do you mean the Matlab function 'fwrite' doesn't write the data in the way SMIA 1.0 Part 2 CCP2 specifies? Well, actually I am not an expert programmer and I don't know the details. I would use fread(void *ptr, size_t size, size_t count, FILE *fp); in C to read the generated raw file directly.

btw, thanks to your comment I found a writing bug in my Matlab code main.m, which led to wrong reconstructed images when setting 'nbits' to other values. The bug has been fixed; please refer to the latest main.m.

ulfzie commented 3 years ago

Hi, I was checking the code a little bit more. The fwrite just uses 16 bits for each pixel, which is different from the spec (see SMIA). Another thing that needs to be configurable is the endianness for RAW10/12/14/16.

Ulf

zhengtianyu1996 commented 3 years ago

I took a look at the document. I think the CCP2 protocol is used for compressed transmission. I am not sure whether CCP2 is used in all cameras, like CCTV or SLR cameras.

In my field (CCTV solutions), my raw data is generated like this: sensor -> transmission -> DSP compress -> PC decompress -> uint16/uint8 raw data. Then I design image processing algorithms that work on the final raw data in Windows (or Linux), at an upper level. The reason we use this pipeline is that different manufacturers have different compressions or protocols.

So converting raw data to a uint16/uint8 style before YUView has two advantages:

  1. Different sensors' data is formatted in the same way, which is easier for YUView.
  2. The reading function is easy to implement with fread in most programming languages.

I hope my understanding is right. If anything is wrong, please feel free to correct me : )

ulfzie commented 3 years ago

Hi, interesting. I use image sensors connected to an Image Sensor Processor (ISP). The sensors provide the data in the format described above. This data is put into the ISP's memory without any changes, and I dump this memory for debugging. In this case the data is packed in this strange mixed-up way for RAW10/12/14.

Ulf

zhengtianyu1996 commented 3 years ago

Good to know. If the ISP is written in C(++)/Verilog for embedded devices, it's possible that the ISP could support SMIA directly, because I believe SMIA was designed by Nokia for compressed transmission in mobile phones. One more thing to figure out is whether SMIA is used in all image sensors. If Sony or OmniVision use their own protocols in different sensors, it would be better to simply convert the different raw data to one fixed format when debugging. A fixed format would be suitable for a general raw player.

ulfzie commented 3 years ago

Hi, I work with Sony, Omnivision and OnSemi sensors. All follow the same rules. And yes, the input interface on our ISP supports this in hardware.

Ulf

zhengtianyu1996 commented 3 years ago

Hi Ulf, I consulted my colleague in the hardware department a little bit. He says we use another protocol called MIPI CSI-2, which is also commonly used with sensors. Could you please tell me a specific sensor model that uses SMIA? Then I could learn more from its manual.

ulfzie commented 3 years ago

Hi, yes, all of them run on the MIPI CSI-2 hardware level. The internal data representation is done w.r.t. SMIA. Let's take further communication offline; I will send you an email.

Ulf