@ladyada putting this on your radar
I think the code also doesn't switch to looking for JPEG SOI/EOI markers when the mode is changed at the sensor level.
The reinitialization time of the sensor can be shortened to under 500 ms; this depends on the sensor-specific code. If you don't want to re-initialize the sensor, I suggest letting the sensor run in JPEG mode and then displaying on the LCD via the scaling option of the decoding function; see jpg2rgb565().
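Roughly, something like this (an untested sketch: the SVGA frame size, the fixed-size output buffer, and the lcd_draw() call are placeholders):

// Sketch: grab a JPEG frame and decode it, downscaled, into an RGB565 buffer
// for an LCD preview. Assumes the camera was initialized with PIXFORMAT_JPEG
// at FRAMESIZE_SVGA (800x600), so a 4x downscale yields 200x150 pixels.
#include "esp_camera.h"
#include "img_converters.h"

static uint8_t preview[200 * 150 * 2];   // RGB565 = 2 bytes per pixel

void show_preview(void)
{
    camera_fb_t *fb = esp_camera_fb_get();
    if (!fb) {
        return;
    }
    // jpg2rgb565() decodes fb->buf (JPEG) into the RGB565 buffer,
    // scaling down by 4x in each dimension.
    if (jpg2rgb565(fb->buf, fb->len, preview, JPG_SCALE_4X)) {
        // lcd_draw(preview, 200, 150);   // placeholder for the actual LCD call
    }
    esp_camera_fb_return(fb);
}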
This issue appears to be stale. Please close it if it's no longer valid.
@jepler any updates to this?
The goal, from an application standpoint, is to support a camera application with an LCD viewfinder mode and a JPEG shooting mode, with minimal shutter lag.
In CircuitPython, we have to violate the layering of esp32-camera to efficiently reconfigure for different graphics modes. This also reallocates the buffers to the correct size, but risks memory fragmentation when repeatedly changing buffer modes.
Ideally, the application would be able to communicate that it will operate in a set of at least two operating modes (e.g., viewfinder and JPEG shooting).
CircuitPython code snippet for gang-changing all the values switched during the viewfinder<->jpeg shooting switch:
i2c_lock(self);                                  // take the shared I2C/SCCB bus lock
cam_deinit();                                    // stop capture and free the existing frame buffers
// Update the cached config with the new mode's parameters
self->camera_config.pixel_format = pixel_format;
self->camera_config.frame_size = frame_size;
self->camera_config.grab_mode = grab_mode;
self->camera_config.fb_count = framebuffer_count;
// Push the new format and frame size to the sensor over SCCB
sensor->set_pixformat(sensor, self->camera_config.pixel_format);
sensor->set_framesize(sensor, self->camera_config.frame_size);
// Rebuild the capture driver state and buffers for the new mode
cam_init(&self->camera_config);
cam_config(&self->camera_config, frame_size, sensor_info->pid);
i2c_unlock(self);
cam_start();                                     // resume capturing with the new configuration
For various reasons, we don't want our application to use the mode that continuously captures JPEG and decodes it to the screen.
It seems that the frame_buffer allocation needs to be moved outside of the esp_camera_init() function. If only the frame_buffer allocation were separated from esp_camera_init(), would that meet your needs?
Probably better to be part of changing the resolution? If the frame buffer is not large enough for the new resolution and format, shut down the bus, free the old buffer, and alloc a new one. What if there is not enough memory for that new buffer, though?
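Something like the sketch below, perhaps (not what the driver currently does; the heap_caps usage and fallback policy are just one option):

// Sketch: replace the frame buffer when the new mode needs a larger one, with a
// fallback if the larger allocation fails. Freeing first maximizes the chance
// that the bigger buffer fits, but it means the failure path has to re-allocate
// the old size (which can itself fail) and report the error to the caller.
#include <stddef.h>
#include <stdint.h>
#include "esp_heap_caps.h"

uint8_t *resize_frame_buffer(uint8_t *old_buf, size_t old_len, size_t new_len)
{
    if (new_len <= old_len) {
        return old_buf;                 // existing buffer is already big enough
    }
    heap_caps_free(old_buf);
    uint8_t *new_buf = heap_caps_malloc(new_len, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT);
    if (new_buf == NULL) {
        new_buf = heap_caps_malloc(old_len, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT);
    }
    return new_buf;                     // may be NULL: caller must handle OOM
}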
The requirement of dynamically switching resolution is very common; however, it seems that our driver can currently only work in continuous mode. After the sensor has started working, if we call a function like set_framesize(), we are not sure when the new configuration will really take effect, perhaps two or three frames later. Our driver has not implemented a synchronization mechanism to ensure that the data in the buffer is the latest data after the configuration changes.
This is true only for JPEG. For other formats, the check is done here. Given that this code needs to execute very fast (before the next VSYNC), it was decided to drop JPEG header parsing to verify the resolution. Such a check could be added, if it can happen fast enough. There is also the case of frames with the old resolution still in the queue.
From the current requirements described by @jepler, they need to achieve the following two switches:
1) 240x240 RGB565 mode switches to 2048x1536 JPEG mode
2) 2048x1536 JPEG mode switches to 240x240 RGB565 mode
So you mean it's not easy to change from jpeg mode to rgb565?
I'm looking into it, but I hit another issue that I am trying to trace first. It is not hard to switch, but the current API is not geared towards that at all. Some changes will be needed, and I am hoping not to change the public API much other than adding a new fb_get_advanced method that accepts the needed mode as arguments and does everything internally.
Same here, I need to capture two different image types: one is 12-bit RGB-like and the other is 8-bit grayscale, both with a resolution of 120x60. So two frame buffer sizes are needed.
@me-no-dev, there are also requirements for the camera to output data in RGB565 format (VGA) and then output data in JPEG format (720p). We are very much looking forward to such switching; it is hoped that this need can also be taken into account.
@WangYuxin-esp yes, you would call camera_fb_get(settings), and settings will include resolution, mode, number of buffers, xclk, and JPEG quality. If the settings are the same as last time, it will function as it does now; otherwise it will switch to the new settings, wait for a proper frame, and return it to you. You would be able to, say, stream RGB565 to a screen, then on a button click grab a high-resolution JPEG, and continue streaming to the screen after that.
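Roughly along these lines (a sketch of the idea, not the final API; the fb_get_advanced name from above and the settings struct fields are placeholders):

// Sketch of the proposed "get frame with settings" API; camera_fb_get_advanced()
// and camera_fb_settings_t do NOT exist yet -- they only illustrate the idea.
#include "esp_camera.h"

typedef struct {
    framesize_t frame_size;     // e.g. FRAMESIZE_240X240 or FRAMESIZE_QXGA
    pixformat_t pixel_format;   // e.g. PIXFORMAT_RGB565 or PIXFORMAT_JPEG
    size_t      fb_count;       // 2 for smooth preview, 1 for stills
    int         jpeg_quality;   // only meaningful for PIXFORMAT_JPEG
} camera_fb_settings_t;

camera_fb_t *camera_fb_get_advanced(const camera_fb_settings_t *s); // hypothetical

void shoot_one_photo(void)
{
    // Viewfinder: same settings every call, so it behaves like camera_fb_get() today.
    const camera_fb_settings_t preview = {FRAMESIZE_240X240, PIXFORMAT_RGB565, 2, 0};
    camera_fb_t *fb = camera_fb_get_advanced(&preview);
    // ... draw fb->buf on the LCD ...
    esp_camera_fb_return(fb);

    // Shutter press: different settings, so the driver reconfigures, waits for a
    // valid frame in the new mode, and returns that frame.
    const camera_fb_settings_t still = {FRAMESIZE_QXGA, PIXFORMAT_JPEG, 1, 10};
    camera_fb_t *jpeg = camera_fb_get_advanced(&still);
    // ... save jpeg->buf / jpeg->len to the SD card ...
    esp_camera_fb_return(jpeg);
}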
Yes, that's exactly what we want. May I know when I can get the API?
I have already started on it, so I hope soon
I have tried initializing the camera with PIXFORMAT_JPEG and FRAMESIZE_SVGA and then calling jpg2rgb565 with JPG_SCALE_4X, but found that performance could not keep up, often prompting “Task watchdog got triggered”.
I've also encountered an incorrect buffer size, but with the OV2640's set_res_raw. It works with PIXFORMAT_JPEG, but with any other raw format it fails to get a buffer:
E (467749) cam_hal: FB-SIZE: 88000 != 1920000
E (469881) cam_hal: FB-SIZE: 88000 != 1920000
E (472014) cam_hal: FB-SIZE: 88000 != 1920000
E (474146) cam_hal: FB-SIZE: 88000 != 1920000
E (476279) cam_hal: FB-SIZE: 88000 != 1920000
This issue appears to be stale. Please close it if it's no longer valid.
I would also like this function for my line scanner. I had been trying for a week before I came across this link.
Please program the switchover function into the ESP CAM!!
I am implementing camera support for CircuitPython on Espressif microcontrollers using esp32-camera.
One use case we identified is dynamically switching resolutions at runtime. For instance, consider a point & shoot camera with an LCD for preview. The program first displays a 320x240 RGB565 image on the viewfinder. Then, when the shutter button is pressed, the sensor is switched to a larger frame size and the format is set to JPEG. The image is captured and saved to an SD card. After the image is saved, the software returns to viewfinder mode.
Additionally, viewfinder mode benefits from double buffering, while JPEG capture should use single buffering to reduce overall memory usage.
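For concreteness, the mode-dependent camera_config_t fields for the two states would look roughly like this (a sketch; pin, clock, and LEDC fields are omitted):

// Illustration only: the camera_config_t fields that differ between viewfinder
// and capture mode. Pin assignments, xclk, and LEDC settings are filled in elsewhere.
#include "esp_camera.h"

static camera_config_t config;

static void set_viewfinder_mode(void)
{
    config.pixel_format = PIXFORMAT_RGB565;    // raw frames for the LCD
    config.frame_size   = FRAMESIZE_QVGA;      // 320x240 preview
    config.fb_count     = 2;                   // double buffering for smooth preview
    config.grab_mode    = CAMERA_GRAB_LATEST;  // always show the newest frame
}

static void set_capture_mode(void)
{
    config.pixel_format = PIXFORMAT_JPEG;      // compressed still image
    config.frame_size   = FRAMESIZE_QXGA;      // 2048x1536
    config.jpeg_quality = 10;                  // lower number = higher quality
    config.fb_count     = 1;                   // single buffer to save RAM
    config.grab_mode    = CAMERA_GRAB_WHEN_EMPTY;
}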
However, as far as I can see, the number and size of the capture buffer(s) are set during esp_camera_init and never change, even when the sensor's frame size and image format are set via the appropriate APIs on the sensor object. Am I overlooking something, or is this use case not well supported by esp32-camera?
We "could" call esp_camera_deinit followed by esp_camera_init, but this takes a long time, creating a bad experience.
Previously, CircuitPython had its own home-brewed API over the low-level parallel capture function of the ESP32-S2. In this API, I made it possible to start/stop image capture at runtime. In Python terms this was written as
that is, the buffer details were set by the caller at the time capture was started. When the block ends, capture stops.
This makes me think an additional API, such as esp32_camera_use_buffers(size_t n_buffers, void **buffers), might be helpful and fulfill our use case.
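Used from application code, that might look roughly like this (a sketch; the function, its return type, and the allocation strategy are all hypothetical):

// Sketch of the suggested API -- esp32_camera_use_buffers() does not exist in
// esp32-camera today; this only illustrates handing the driver caller-owned
// buffers when entering a mode.
#include <stddef.h>
#include <stdint.h>
#include "esp_err.h"
#include "esp_heap_caps.h"

esp_err_t esp32_camera_use_buffers(size_t n_buffers, void **buffers); // hypothetical

void enter_viewfinder_mode(void)
{
    // Two caller-owned buffers sized for 320x240 RGB565 (2 bytes per pixel).
    static void *bufs[2];
    for (size_t i = 0; i < 2; i++) {
        if (bufs[i] == NULL) {
            bufs[i] = heap_caps_malloc(320 * 240 * 2, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT);
        }
    }
    // Ask the driver to capture into these buffers from now on.
    esp32_camera_use_buffers(2, bufs);
}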