In the official documentation for "Shared camera access with ARCore", it states that high-end phones can support the following simultaneous streams:
2x YUV CPU streams, e.g. 640x480 and 1920x1080
1x GPU stream, e.g. 1920x1080
1x occasional high res still image (JPEG), e.g. 12MP
However, the shared camera sample app does not show how to capture a "high res still image (JPEG), e.g. 12MP". I also saw that it configures the camera preview with a supported resolution queried from ARCore's supported camera configs. In my case, though, I have to use the front camera, which has only one supported config: 640 x 480.
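For reference, here is a sketch of how I query the supported camera configs, based on my reading of the ARCore Java API (not verified on every device; the session feature flags and filter are my assumptions about the face augmentation setup):

```java
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;
import java.util.EnumSet;
import java.util.List;

// Sketch: create a session that uses the shared camera and the front camera.
// (Session.Feature.FRONT_CAMERA is how the face augmentation samples select
// the front-facing camera; context is the host Activity.)
Session session = new Session(context, EnumSet.of(
    Session.Feature.SHARED_CAMERA, Session.Feature.FRONT_CAMERA));

// Query the configs ARCore supports for this session.
CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);
for (CameraConfig config : configs) {
  // On my device this list contains a single 640x480 entry.
  Log.d("CameraConfig", "CPU image size: " + config.getImageSize()
      + ", GPU texture size: " + config.getTextureSize());
}
session.setCameraConfig(configs.get(0));
```

So nothing in the returned configs comes close to a 12MP still capture resolution.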
My question is: in the face augmentation use case with the front camera, how can I use the shared camera API to capture a high res still image (JPEG, e.g. 12MP) during an ARCore session?
Thanks a lot.