Closed: ASabovic closed this issue 1 year ago
The byte array from the JPEG decoder is in RGB565 format, for rendering directly to a TFT.
Is it possible to convert it to an image in a Python script using one of the classic methods, e.g. the Pillow library together with a base64 encoder?
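For anyone landing here with the same question: RGB565 packs each pixel into 16 bits (5 bits red, 6 bits green, 5 bits blue), so the bytes have to be expanded to 8 bits per channel before a normal image library can use them. A minimal stdlib-only sketch (the function name and the big-endian default are my assumptions; check which byte order your board sends):

```python
import struct

def rgb565_to_rgb888(data, big_endian=True):
    """Expand raw RGB565 pixel bytes to 8-bit-per-channel RGB bytes."""
    fmt = ">" if big_endian else "<"
    count = len(data) // 2
    out = bytearray()
    for p in struct.unpack(f"{fmt}{count}H", data):
        r = (p >> 11) & 0x1F   # top 5 bits: red
        g = (p >> 5) & 0x3F    # middle 6 bits: green
        b = p & 0x1F           # bottom 5 bits: blue
        # Scale 5- and 6-bit channel values up to the 0-255 range
        out += bytes(((r * 255) // 31, (g * 255) // 63, (b * 255) // 31))
    return bytes(out)

# With Pillow installed, the result can then be viewed or saved:
# from PIL import Image
# img = Image.frombytes("RGB", (width, height), rgb565_to_rgb888(data))
# img.save("frame.png")
```

If the colors come out wrong (e.g. red and blue swapped, or speckled noise), the most likely culprit is the byte order, so try `big_endian=False`.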
I am not familiar with those tools, so I am unable to help.
I am also not familiar with these formats. Is the output of the JPEG decoder still an array, or something else?
Here https://github.com/Bodmer/JPEGDecoder/blob/master/examples/MCUFRIEND_kbv/jpeg_kbv/jpeg1.h you posted this solution, so I am wondering whether it is possible to get something similar from the Arducam: a simple byte array that can be sent via BLE, say, and decoded on the receiver side?
The image is decoded into a set of sequential MCU blocks; typically each MCU is 8x8 or 16x16 pixels. The MCU pixels are stored in memory, and a pointer to the start of that block of 16-bit pixels is provided in the callback.
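In other words, the decoder does not hand back one contiguous framebuffer: each callback delivers one tile of the image. On the receiver side the tiles have to be copied into their position in a full frame. A rough Python sketch of that reassembly step (function and parameter names are mine, not from the library):

```python
def place_mcu(frame, frame_w, mcu_pixels, mcu_x, mcu_y, mcu_w, mcu_h):
    """Copy one decoded MCU block (a flat list of pixels, mcu_w x mcu_h)
    into its (mcu_x, mcu_y) position inside a flat full-frame pixel list."""
    for row in range(mcu_h):
        dst = (mcu_y + row) * frame_w + mcu_x   # start of this row in the frame
        src = row * mcu_w                       # start of this row in the MCU
        frame[dst:dst + mcu_w] = mcu_pixels[src:src + mcu_w]
```

Once all MCU blocks are placed, `frame` holds the complete RGB565 image row by row, ready for the channel expansion discussed above.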
Yes, I found that part in the end, where you converted the image to grayscale. I also tried to send that data and convert it into an image, but I was not able to. Do you maybe know whether it is possible to convert such an array into a picture that can be displayed on a monitor?
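If the array really is plain 8-bit grayscale pixels of known width and height (an assumption; verify against your sketch), one simple way to make it viewable without any decoding is to wrap it in a PGM header, a format most image viewers and Pillow open directly. A minimal sketch:

```python
def grayscale_to_pgm(data, width, height, path):
    """Write an 8-bit grayscale byte array as a binary PGM (P5) file."""
    if len(data) != width * height:
        raise ValueError("data length does not match width * height")
    with open(path, "wb") as f:
        f.write(f"P5 {width} {height} 255\n".encode("ascii"))  # PGM header
        f.write(bytes(data))                                    # raw pixels
```

If the resulting picture looks like diagonal stripes or shifted rows, the width is wrong; if it is pure noise, the data is probably not raw grayscale pixels at all.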
Hi,
I am using the Arduino Nano board with the Arducam module connected. I am using the person detection example, where the JPEG decode function is used. I am able to send the final array to my PC (3080 bytes), but the construction of that array is a bit odd and not readable with the classic Pillow library in Python. What could be the issue here, i.e. why am I not able to decode the image obtained from the camera and decoder? Thanks in advance!
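A quick sanity check that may help here (my own suggestion, not from the thread): Pillow can only open the array directly if it is still a JPEG *file*, which always starts with the SOI marker `FF D8` and ends with the EOI marker `FF D9`. If those markers are missing, the bytes are the decoder's raw pixel output and need the RGB565 handling described above instead.

```python
def looks_like_jpeg(data):
    """Return True if the bytes carry JPEG start/end markers,
    i.e. they are a JPEG stream rather than raw decoded pixels."""
    return (
        len(data) > 4
        and data[:2] == b"\xff\xd8"   # SOI marker
        and data[-2:] == b"\xff\xd9"  # EOI marker
    )
```

Running this on the 3080 bytes you receive should immediately tell you which of the two cases you are in.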