Surround View with 2 cameras using DewarpSphere Mode #824

Closed · shreshtashetty closed this issue 1 year ago

shreshtashetty commented 1 year ago

Hi, I have started using the DewarpSphere mode to get a surround view with 2 cameras.

This is the command I run:

./test-surround-view --module soft --input ../front.yuv --input ../back.yuv --output output.yuv --in-w 640 --in-h 360 --out-w 1120 --out-h 480 --topview-w 640 --topview-h 360 --in-format yuv --fisheye-num 2 --cam-model cama2c1080p --blend-pyr-levels 2 --dewarp-mode sphere --scopic-mode mono --scale-mode dualcurve --frame-mode single --fm-mode cluster --fm-frames 2 --fm-status wholeway --save true --loop 1

Following is my input, consisting of 2 still images, 1 from the front view and 1 from the back view (images: orig_fisheye_0_0 and orig_fisheye_0_1).

Following is my output (image: output yuv_0).

And here is my topview when I enable --save-topview true (image: topview_output yuv_0).

My questions:

  1. Are my outputs right?
  2. Are translation params used in the DewarpSphere mode at all?
  3. Is my command right? Should --fm-frames be 2 given that there are 2 images in total, or should it be 1 since we have 1 frame per view?
  4. How is _fm_frame_count different from _fm_frames? How is the former set?
  5. Can I have more detailed steps on how to set FMConfig? The definition in xcore/interface/feature_match.h doesn't really help.
  6. On what basis do I set fm_region_ratio?
  7. I was trying to do this with images from the front and right camera and got a strange output (image: output yuv_0). Is surround view with 2 cameras not possible between adjacent views? (Do the cameras HAVE TO be opposite each other?)
zongwave commented 1 year ago

Hi, you set the command options "--fisheye-num 2 --cam-model cama2c1080p". With this combination, the surround-view test app is implemented to receive one 1920x1080 YUV video (the front and back images are captured in one frame) and to output a 1920x960 stitched image: the horizontal resolution is twice the vertical resolution, and the stitched image covers a 360-degree FoV horizontally and a 180-degree FoV vertically.

https://github.com/intel/libxcam/wiki#libxcam-stitch-processing-flow
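
For illustration only, here is a minimal Python/OpenCV sketch of placing two 640x360 fisheye captures side by side on one black 1920x1080 frame, matching the one-frame input layout described above. The centered placement, the placeholder file names, and the I420 output format are assumptions, not anything guaranteed by the test app:

```python
# Illustrative sketch: pack two 640x360 fisheye captures side by side
# onto a black 1920x1080 canvas, i.e. one "front & back in a single
# frame" input. The centering layout, file names and I420 output
# format are assumptions.
import cv2
import numpy as np

front = cv2.imread("front_fisheye.jpg")   # placeholder, 640x360
back = cv2.imread("back_fisheye.jpg")     # placeholder, 640x360

canvas = np.zeros((1080, 1920, 3), dtype=np.uint8)

def paste_centered(dst, src, x0, x1):
    """Paste src centered inside the column range [x0, x1) of dst."""
    h, w = src.shape[:2]
    y = (dst.shape[0] - h) // 2
    x = x0 + ((x1 - x0) - w) // 2
    dst[y:y + h, x:x + w] = src

paste_centered(canvas, front, 0, 960)     # left half: front camera
paste_centered(canvas, back, 960, 1920)   # right half: back camera

# Write a planar YUV frame (I420 assumed; adjust if the app expects NV12).
yuv = cv2.cvtColor(canvas, cv2.COLOR_BGR2YUV_I420)
yuv.tofile("packed_1920x1080.yuv")
```

Presumably the packed file would then be supplied as a single 1920x1080 input instead of two separate 640x360 inputs.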

shreshtashetty commented 1 year ago

@zongwave, thanks for the reply.

My front and back images are 640x360 each. Do I rotate them by 90 degrees (clockwise and anticlockwise respectively, to get 360x640 images), resize them to 960x1080, and then hstack them to get a 1920x1080 image, as sketched below? Wouldn't doing so make the intrinsic parameters obtained from calibration, and the fisheye radius, redundant?
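
A sketch of that rotate / resize / hstack packing, purely to make the layout concrete. Whether this is the layout cama2c1080p expects is exactly the open question, and resizing would indeed change the effective intrinsics; file names are placeholders:

```python
# Sketch of the rotate / resize / hstack packing described above
# (illustrative only; file names are placeholders).
import cv2
import numpy as np

front = cv2.imread("front_fisheye.jpg")   # 640x360
back = cv2.imread("back_fisheye.jpg")     # 640x360

front_rot = cv2.rotate(front, cv2.ROTATE_90_CLOCKWISE)        # now 360x640
back_rot = cv2.rotate(back, cv2.ROTATE_90_COUNTERCLOCKWISE)   # now 360x640

# Scale each rotated view up to half of a 1920x1080 frame.
front_big = cv2.resize(front_rot, (960, 1080))
back_big = cv2.resize(back_rot, (960, 1080))

packed = np.hstack([front_big, back_big])                     # 1920x1080
cv2.imwrite("packed_1920x1080.png", packed)
```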

Any clues on how I can make this work with my 640x360 images?

EDIT: I tried this by putting my 640x360 fisheye images on a black background, as shown in the attached image orig_fisheye_0 (I didn't use cv2.resize because then my intrinsic calibration parameters might not be useful for dewarping the images).

Clearly, this doesn't work and I get the wrong output.

Here is my output (image: output yuv_0).

And here is my topview (image: topview_output yuv_0).

Again, how do I make this work with 640x360 images?

dreamerfar commented 1 year ago

I noticed you set the "--dewarp-mode" option to "sphere". In this mode the input fisheye images are rectified by cylindrical warping, and you need to set the optical axis coordinate and the radius of the fisheye images; the fisheye images are approximated as circular (equal vertical & horizontal FoV).
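
As a rough illustration of estimating the optical axis coordinate and fisheye radius mentioned above, the sketch below fits the smallest enclosing circle around the non-black pixels of a capture. The file name is a placeholder, and the threshold assumes the valid fisheye area is clearly brighter than the surrounding border:

```python
# Rough sketch: estimate the fisheye circle center (optical axis
# coordinate) and radius from a single capture. Assumes the valid
# fisheye area is noticeably brighter than the black border; the
# threshold of 10 is arbitrary and may need tuning.
import cv2
import numpy as np

img = cv2.imread("front_fisheye.jpg")                 # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

# Collect all pixels inside the valid area and fit the smallest
# enclosing circle around them.
points = cv2.findNonZero(mask)
(cx, cy), radius = cv2.minEnclosingCircle(points)
print(f"optical axis ~ ({cx:.1f}, {cy:.1f}), radius ~ {radius:.1f} px")
```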

If your fisheye camera has very different vertical & horizontal FoVs, the images are not circular, and you should set the "--dewarp-mode" option to "bowl". This mode uses perspective projection to rectify the input fisheye images, so you have to calibrate your camera and provide the extrinsic & intrinsic parameters.

shreshtashetty commented 1 year ago

@dreamerfar Yes, figured. Spherical mode doesn't apply to my case. Thanks.