Coastal-Imaging-Research-Network / CIRN-Quantitative-Coastal-Imaging-Toolbox

The CIRN Introduction to Quantitative Coastal Imaging Toolbox is a collection of MATLAB scripts to produce geo-rectified images specifically tailored for quantitative analysis of coastal environments.
GNU General Public License v3.0

Matlab's Camera Calibrator App #20

Open SRHarrison opened 4 years ago

SRHarrison commented 4 years ago

Hi Brittany,

After some testing, I think that the Camera Calibrator App (included with Matlab's Computer Vision Toolbox) is not only convenient (no clicking), but also seems to resolve the lens model better in some cases than the Caltech toolbox.

For people wanting to use that, it might be worthwhile to include a translator (similar to your caltech2CIRN.m) from that output to the CIRN intrinsics variable (maybe call it camcalibrator2CIRN.m).

Assuming the user exports the camera parameters variable from the Camera Calibrator to the workspace as params, the translation to intrinsics is:

%% Conversion
intrinsics(1)  = params.ImageSize(2);             % Number of pixel columns
intrinsics(2)  = params.ImageSize(1);             % Number of pixel rows
intrinsics(3)  = params.PrincipalPoint(1);        % U component of principal point
intrinsics(4)  = params.PrincipalPoint(2);        % V component of principal point
intrinsics(5)  = params.FocalLength(1);           % U component of focal length (in pixels)
intrinsics(6)  = params.FocalLength(2);           % V component of focal length (in pixels)
intrinsics(7)  = params.RadialDistortion(1);      % First radial distortion coefficient
intrinsics(8)  = params.RadialDistortion(2);      % Second radial distortion coefficient
intrinsics(9)  = params.RadialDistortion(3);      % Third radial distortion coefficient
intrinsics(10) = params.TangentialDistortion(1);  % First tangential distortion coefficient
intrinsics(11) = params.TangentialDistortion(2);  % Second tangential distortion coefficient
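For anyone who wants this as a drop-in function, here is a minimal sketch of such a translator (the name camcalibrator2CIRN.m is just the suggestion above; note that, if I remember the App options correctly, it only estimates a third radial coefficient when you select the 3-coefficient option, so the sketch guards for that):

function intrinsics = camcalibrator2CIRN(params)
% Convert a cameraParameters object exported from Matlab's Camera
% Calibrator App into the 1x11 CIRN intrinsics vector.
intrinsics = zeros(1,11);
intrinsics(1)  = params.ImageSize(2);             % Number of pixel columns
intrinsics(2)  = params.ImageSize(1);             % Number of pixel rows
intrinsics(3)  = params.PrincipalPoint(1);        % U component of principal point
intrinsics(4)  = params.PrincipalPoint(2);        % V component of principal point
intrinsics(5)  = params.FocalLength(1);           % U component of focal length (pixels)
intrinsics(6)  = params.FocalLength(2);           % V component of focal length (pixels)
intrinsics(7)  = params.RadialDistortion(1);      % First radial distortion coefficient
intrinsics(8)  = params.RadialDistortion(2);      % Second radial distortion coefficient
if numel(params.RadialDistortion) == 3            % Third coefficient only if estimated
    intrinsics(9) = params.RadialDistortion(3);
end
intrinsics(10) = params.TangentialDistortion(1);  % First tangential distortion coefficient
intrinsics(11) = params.TangentialDistortion(2);  % Second tangential distortion coefficient
end
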
burritobrittany commented 4 years ago

That is very helpful, Shawn! I think that is a good idea, since we have been having issues with the Caltech toolbox running on certain versions of MATLAB. I will add it to the list; I am actually hoping to address some issues tomorrow and will start!

sivaiahborra commented 4 years ago

Dear SRHarrison,

As I am in the early stages of using this toolbox, I have been going through the documentation and finished the first step, movies2frame. I collected the user inputs such as GCPs in the FOV of the video and the extrinsics (X, Y, Z, azimuth, tilt, and roll of the fixed camera), but for the intrinsics (the 11 camera parameters listed above), how would I get those 11 intrinsic parameters of the camera? Please let me know. Do I need any prerequisites to get those 11 parameters? Sorry for the inconvenience of my simple and silly questions.

SRHarrison commented 4 years ago

Hi @sivaiahborra , Let me see if I understand correctly.... You have collected a video of the surfzone/beach using a UAS hovering 'still'. You were able to extract the video frames, and determine the camera position (extrinsic parameters) for each frame in time, using Brittany's Toolbox. Now you wonder how to get the intrinsic parameters that A_formatIntrinsics.m assumes that you've already gathered?

There are many ways to skin a dog, but typically we introduce people to intrinsic calibration / lens calibration with this presentation on Intrinsic Calibration and Distortion (https://drive.google.com/file/d/19urm-rg--ufdylFKeBv-2-9MDqarARRF/view?usp=sharing) and with this hands-on Lens Calibration Practicum (https://drive.google.com/file/d/1QIyd0wQGBVYKA9xLgK6W-sFqxOg3_C-H/view?usp=sharing).

Basically, you have to assume that your UAS camera has a fixed aperture and fixed focus (probably not true), and use it to take photos of a graduated checkerboard pattern (it must be on a flat surface, not curvy). You effectively 'paint' the entire FOV of the sensor with images of the checkerboard. Then you can try to fit a camera model to it. The Caltech toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/) is free and fairly accessible. I definitely prefer feeding the images to Matlab's Camera Calibrator App (https://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html), but it is part of a toolbox and probably not worth the extra cost if that's all you need the toolbox for.
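If you prefer scripting it rather than clicking through the App, the equivalent Computer Vision Toolbox calls look roughly like this (the folder name and the 30 mm square size are placeholders for your own setup):

% Scripted calibration sketch: detect the checkerboard in every image,
% build the world coordinates of the corners, and fit the camera model.
files = dir(fullfile('calibImages','*.jpg'));            % placeholder folder of checkerboard photos
files = fullfile({files.folder}, {files.name});
[imagePoints, boardSize] = detectCheckerboardPoints(files);
squareSize  = 30;                                        % checkerboard square size in mm (measure yours)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
I = imread(files{1});
params = estimateCameraParameters(imagePoints, worldPoints, ...
    'NumRadialDistortionCoefficients', 3, ...
    'EstimateTangentialDistortion', true, ...
    'ImageSize', [size(I,1) size(I,2)]);

The resulting params can then be run through the same conversion as above.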

Structure-from-motion software, e.g. Agisoft Metashape or Pix4D Mapper, determines the intrinsic parameters in a similar way, but does not require the checkerboard step. It uses images of the same object from differing views to determine the lens distortion. However, translating those parameters to the format the CIRN toolbox expects is not always straightforward. Brittany or others might have translation suggestions if you plan to go that route.

I suggest taking your UAS, placing it on a table, and just walking around in front of it with the checkerboard displayed. Make sure that you use the exact same settings on the camera that you did during your flight/video capture. Typically these UAS cameras use a subset of the sensor to record video, so you want to make sure to calibrate the lens for that subset. If you later change the resolution you record video at, you'll need to calibrate again for those settings.
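One quick way to sanity-check that (the file name below is just a placeholder) is to compare an extracted video frame against the calibration resolution:

% Check that the calibration resolution matches the extracted video frames.
frame = imread('frame_00001.jpg');                     % placeholder frame from movies2frame
if ~isequal([size(frame,1) size(frame,2)], params.ImageSize)
    warning('Calibration and video resolutions differ; recalibrate at the video settings.');
end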

sivaiahborra commented 4 years ago

Dear Sir, thank you for a timely and concise response. Actually, I did capture a video with my camera fixed in a constant position, collected a few GCPs in the FOV, and also recorded its extrinsics (X, Y, Z, azimuth, tilt, and roll, which is zero since there is no side-to-side movement, if I am not wrong). Later, I extracted frames from the video using movies2frame.m.

Now, what I understood from your mail is that initially I have to take a few images (around 20) of a checkerboard placed on a flat surface, at different angles and mostly covering the FOV of the camera. In this context, a small query: should I only change the orientation of the checkerboard each time while keeping the FOV constant, or can I also change the camera's viewing angle each time? I am dealing with a single fixed camera for now; maybe later I can go to multiple cameras once I succeed in running this toolbox for my study region.

Then I will go through the camera calibration toolbox.

I have been through the documentation of toolbox_calib.zip and its examples.

Thank you.


sivaiahborra commented 4 years ago

Dear SRHarrison,

I have been through the documentation of toolbox_calib.zip and tried the first example, and I was able to create the Calib_Results.mat file. So, shall I try the same for my camera calibration? If yes, do I need to measure the length and width of the squares on my checkerboard and enter those values for dx and dy instead of the default values in the example? Then I should use the Calib_Results.mat file to run A_formatIntrinsics.m. After that, in which .m file do I need to enter my gathered GCPs and camera extrinsics (X, Y, Z, azimuth, tilt, roll)? Sorry for the inconvenience of my simple and silly questions; I am learning more day by day by going through the documentation. Thank you for your time, and I hope I can successfully run this toolbox and carry out the work for my study area with the help of people like you.
