fairf4x opened this issue 7 years ago
Hey @fairf4x , Did you end up finding a solution for this? I am also trying to do vignettecalib and pcalib for my camera. For vignette calib, is there a certain way that you need to move the camera, or is it arbitrary? For pcalib, I have no idea where to even get started. Do you know what exactly to do for this? Thanks in advance.
I haven't managed to produce a vignette calibration with vignetteCalib yet, but I was able to produce a pcalib.txt file from my own dataset using responseCalib. I will try to explain the process.
Suppose you have a directory with your dataset called sweepData and you want to use responseCalib to create pcalib.txt for you. The directory sweepData should have the following content:
sweepData:
camera.txt
images.zip
times.txt
camera.txt should contain the camera calibration as described here. My camera.txt looks like this:
0.39500631 0.40439271 0.31469786 0.17470434 1.0
856 480
none
856 480
It can be written by hand, provided you know the intrinsic parameters of your camera.
images.zip should be an archive of jpg files with names in the format XXXX.jpg, where XXXX is the imageID (zero-padded, starting at 0000).
times.txt is a text file with three whitespace-separated columns:
imageID timestamp exposureTime
where timestamp is a POSIX time and exposureTime is a real number denoting the exposure time in ms.
Calling responseCalib like this:
./responseCalib /path/to/data/sweepData/
should produce a directory with pcalib.txt.
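The times.txt format above can be sketched as a small helper (a hypothetical script, not part of the repo; how you obtain the (imageID, timestamp, exposure) records depends entirely on your camera setup):

```python
def times_line(image_id, timestamp, exposure_ms):
    """Format one row of times.txt: imageID, POSIX timestamp, exposure in ms."""
    return "%04d %.6f %.6f" % (image_id, timestamp, exposure_ms)

def write_times(path, records):
    """records: iterable of (image_id, posix_timestamp, exposure_ms) tuples."""
    with open(path, "w") as f:
        for image_id, ts, exp in records:
            f.write(times_line(image_id, ts, exp) + "\n")

# Hypothetical usage: three frames captured one second apart with rising exposure.
records = [(i, 1492802212.0 + i, 0.1 * (i + 1)) for i in range(3)]
write_times("times.txt", records)
```

The zero-padded imageID must match the XXXX.jpg file names inside images.zip.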
Hey @fairf4x , First off, thanks for the response. I understand camera.txt. As for the images, I looked at a sample dataset here (https://vision.in.tum.de/data/datasets/mono-dataset?redirect=1), which had a constant image with increasing exposures. What did you use? Also, how did you get the times.txt file, or the corresponding data in it? Thanks in advance.
Edit1: I understand what everything is, but I am not sure how to get the data. I can only change the exposure of my camera using v4l2 through the terminal, and can't using VideoCapture (OpenCV). How did you get the data for times.txt? Thanks
Edit2: I got the images with increasing exposures by changing the absolute exposure. But I don't know what the values are in ms. Do you know how to get that?
Hey @fairf4x , So, I used VideoCapture and v4l2 to get the images with different absolute exposure values. But I am not sure how to get the exposure in ms. I looked at the difference between the POSIX times of consecutive frames, and it is always constant (0.067 s, i.e. 1/15th of a second, as my camera only does 15 fps). Did you have different ms exposure values, or about the same?
Thanks in advance.
Hi, I did not capture video. I am experimenting with a drone camera, so my control over the camera settings is limited. For the "sweep" dataset I was using a simple ROS node that automatically shoots a picture after some interval (e.g. 1.5 s) and increases the exposure of the camera between -3.0 and +3.0 in steps of 0.5. The values -/+3.0 are the bounds for the camera exposure given by the drone API. I wasn't able to figure out the units used here.
My only idea of how the dataset should look comes from investigating the sample datasets. The framerate of the "sweep" datasets seems to be 10 fps, and the exposure values vary from as low as 0.0597894751 to 19.2336845398. This might be in ms. However, when I tried to change the scale of the exposure times in my own dataset (e.g. by multiplying by 100), it did not seem to have any effect on the result of responseCalib. I suspect that the units are not important here.
Thanks for that, @fairf4x . So, I tried out some sample videos/images. I used different sample sizes and scenarios (for the camera). All of them had 12-14 "-nan"s at the front and then regular numbers in the pcalib.txt. In general the numbers are increasing, but the distribution was not 0 to 255 like in the example; my numbers started at around 60 and went to 255. What were your pcalib.txt numbers like? Did you get any "-nan"s at all? What did your image/video have? Do you have any suggestions that would give me a better pcalib?
Also, did you figure out how to do the vignette calib?
Thanks for your help. I appreciate it.
@vannem95 The inverse of gamma is captured as G(x), which is dumped as '.png' images as the optimization progresses. If there are no images with very low exposure times, those values are set to 0 (and hence the inverse of the value is Not a Number). You could solve this by adding images with very low exposure times (preferably, lots of them).
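To see whether a dataset suffers from this, you can summarize the exposure column of times.txt; a quick sketch (the 0.5 ms threshold is an arbitrary choice, not something from the calibration code):

```python
def exposure_stats(lines, low_ms=0.5):
    """Parse times.txt rows (imageID timestamp exposure_ms) and summarize exposures.

    Returns (min_exposure, max_exposure, count_below_low_ms)."""
    exposures = [float(line.split()[2]) for line in lines if line.strip()]
    n_low = sum(1 for e in exposures if e < low_ms)
    return min(exposures), max(exposures), n_low

sample = [
    "0000 1492802212.15 0.100010",
    "0001 1492802212.21 0.100020",
    "9998 1492802889.62 1000.000000",
]
print(exposure_stats(sample))  # (0.10001, 1000.0, 2)
```

If the count of low-exposure frames is near zero, the low end of the inverse response stays unobserved and the -nan entries appear.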
@fairf4x Were the aruco markers detected and displayed when you ran the calibration? Between these two steps?
SEQUENCE NAME: samples_vignette3/!
plane image values 10000000000.000000 - -10000000000.000000!
For instance, my values start with an initialization of:
plane image values 1.264622 - 131.889465!
100646408.000000 residual terms => 7213.918457
and converge from there.
Is it possible that it does not work because it is not composed from video capture? (I can only get exposure times for the third column of times.txt when shooting images one by one with the camera)
Regarding your original question: this shouldn't matter; the authors used a similar setup too.
Have you set the exposure to a constant, or are you reading the values calculated by auto exposure for this step? I used the latter.
@anilprasadmn Thanks. I will try that out. By the way, do you know how to perform vignettecalib?
@vannem95 Yes, I got the vignette calibration working.
Do you see a "vignettecalib" executable in your bin folder? Because when I did cmake and make, it didn't get built. @anilprasadmn
You need the pcalib.txt from responsecalib as an input for vignettecalib. The steps are as described by @fairf4x : create a folder with an images.zip (or just a folder named 'images') of the aruco marker images, along with pcalib.txt, camera.txt and times.txt (similar to the folder structure for responsecalib, with the addition of pcalib.txt). Once you have this, you can run vignettecalib on this dataset (mind the trailing slash).
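Before running, the folder layout described above can be sanity-checked with a short script (a sketch; the expected file names follow this thread's description):

```python
import os

def check_vignette_dataset(folder):
    """Return the list of required entries missing from a vignettecalib dataset folder."""
    required = ["camera.txt", "pcalib.txt", "times.txt"]
    missing = [f for f in required if not os.path.exists(os.path.join(folder, f))]
    # Images may be either a zip archive or a plain 'images' directory.
    if not (os.path.exists(os.path.join(folder, "images.zip"))
            or os.path.isdir(os.path.join(folder, "images"))):
        missing.append("images.zip or images/")
    return missing
```

An empty list means the folder looks complete (and remember the trailing slash when passing the path to vignettecalib).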
@anilprasadmn Also, are we supposed to move the camera around like they did in the narrow_vignette video?
Yes, you need to tile all of the visible portion of your image view with the marker grid, which is 5 times the marker's size, since this region is used for the calculation. Make sure it's a non-glossy white plane.
@anilprasadmn Cool. Thank you so much for helping out. One last thing: when I did cmake and make, the vignettecalib executable didn't get built in the bin folder like playdataset and responsecalib. Did it get built for you?
You need to have aruco installed for vignettecalib to be compiled. (It is included in the Thirdparty folder, and instructions can be found on the mono dataset GitHub description page.)
@anilprasadmn Great. Thank you so much for your help, anil. You are a lifesaver.
@anilprasadmn Hey anil, sorry to bother you, but I still keep getting at least 10 nans (I got 12-14 before). Attached is the terminal output I am getting:
Load Dataset /home/vivek/sweepData/: found no in folder /images; assuming that images are zipped.
got 9999 entries and 9999 files from zipfile!
Input resolution: 640 480
Input Calibration (fx fy cx cy): 363357.156250 271843.906250 193337.625000 110752.367188 0.089228
Out: Crop
Output resolution: 640 480
new K: 362791.531250 271664.718750 193053.156250 110654.179688
old K: 363357.156250 271843.906250 193337.125000 110751.867188
FOV Undistorter: Warning! Image has black pixels.
Reading Photometric Calibration from file /home/vivek/sweepData/pcalib.txt
PhotometricUndistorter: Could not open file!
Dataset /home/vivek/sweepData/: Got 9999 files!
loaded 9999 images
init RMSE = 127948.423747! Irradiance 17.220422 - 28.188419
init done
optG RMSE = 30114.734982! Inv. Response 28233.615821 - 275309.873451
OptE RMSE = 27962.417690! Irradiance 8.876896 - 28.295072
resc RMSE = 30.205212! rescale with 0.001080!
optG RMSE = 29.293719! Inv. Response 21.915380 - 297.203783
OptE RMSE = 28.536765! Irradiance 0.008563 - 0.029328
resc RMSE = 25.127685! rescale with 0.880537!
optG RMSE = 24.489668! Inv. Response 17.835236 - 250.958528
OptE RMSE = 23.868996! Irradiance 0.007119 - 0.024561
resc RMSE = 24.928692! rescale with 1.044396!
optG RMSE = 24.297015! Inv. Response 17.647475 - 249.256340
OptE RMSE = 23.681357! Irradiance 0.007059 - 0.024371
resc RMSE = 24.908544! rescale with 1.051821!
optG RMSE = 24.277392! Inv. Response 17.628362 - 249.083994
OptE RMSE = 23.662233! Irradiance 0.007053 - 0.024351
resc RMSE = 24.906394! rescale with 1.052580!
optG RMSE = 24.275297! Inv. Response 17.626335 - 249.065543
OptE RMSE = 23.660191! Irradiance 0.007052 - 0.024349
resc RMSE = 24.906168! rescale with 1.052661!
optG RMSE = 24.275076! Inv. Response 17.626123 - 249.063598
OptE RMSE = 23.659976! Irradiance 0.007052 - 0.024349
resc RMSE = 24.906144! rescale with 1.052670!
optG RMSE = 24.275053! Inv. Response 17.626100 - 249.063394
OptE RMSE = 23.659953! Irradiance 0.007052 - 0.024349
resc RMSE = 24.906142! rescale with 1.052671!
optG RMSE = 24.275051! Inv. Response 17.626098 - 249.063373
OptE RMSE = 23.659951! Irradiance 0.007052 - 0.024349
resc RMSE = 24.906142! rescale with 1.052671!
optG RMSE = 24.275051! Inv. Response 17.626098 - 249.063371
OptE RMSE = 23.659951! Irradiance 0.007052 - 0.024349
resc RMSE = 24.906142! rescale with 1.052671!
I am using a 15 fps Logitech QuickCam Pro 9000, on which I am changing the absolute exposure from 1 to 10000 using the uvc drivers. My scene is a bent cardboard covering the glossy floor. The exposure in my times.txt is 1000/(absolute exposure) {just a guess; I don't know how to get the exposure in ms}. Do you see any obvious mistakes I might be making, or any suggestions to solve this issue? Thanks in advance
Can you also provide the times.txt with a sample image? The value in the 3rd column should be the exposure in ms [shutter speed / 1000 if you are using v4l2; I read the exposure through the EXIF data since I had issues with v4l2]. #4
Also, I hope the white balance etc is set appropriately.
Here is the times.txt:
1492802212.15 0000 0.100010
1492802212.21 0001 0.100020
1492802212.28 0002 0.100030
1492802212.35 0003 0.100040
1492802212.41 0004 0.100050
1492802212.48 0005 0.100060
1492802212.55 0006 0.100070
1492802212.61 0007 0.100080
1492802212.68 0008 0.100090
1492802212.75 0009 0.100100
1492802212.81 0010 0.100110
1492802212.88 0011 0.100120
1492802212.95 0012 0.100130
1492802213.01 0013 0.100140
1492802213.08 0014 0.100150
..
..
..
1492802886.62 9983 62.500000
1492802886.82 9984 66.666667
1492802887.02 9985 71.428571
1492802887.22 9986 76.923077
1492802887.42 9987 83.333333
1492802887.62 9988 90.909091
1492802887.82 9989 100.000000
1492802888.02 9990 111.111111
1492802888.22 9991 125.000000
1492802888.42 9992 142.857143
1492802888.62 9993 166.666667
1492802888.82 9994 200.000000
1492802889.02 9995 250.000000
1492802889.22 9996 333.333333
1492802889.42 9997 500.000000
1492802889.62 9998 1000.000000
I do not have control over the exposure in ms. Here are my options in v4l2:
vivek-Precision-T1700: v4l2-ctl --all
Driver Info (not using libv4l2):
  Driver name    : uvcvideo
  Card type      : UVC Camera (046d:0990)
  Bus info       : usb-0000:00:1d.0-1.3
  Driver version : 4.4.49
  Capabilities   : 0x84200001
    Video Capture
    Streaming
    Extended Pix Format
    Device Capabilities
  Device Caps    : 0x04200001
    Video Capture
    Streaming
    Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
  Width/Height      : 640/480
  Pixel Format      : 'YUYV'
  Field             : None
  Bytes per Line    : 1280
  Size Image        : 614400
  Colorspace        : sRGB
  Transfer Function : Default
  YCbCr Encoding    : Default
  Quantization      : Default
  Flags             :
Crop Capability Video Capture:
  Bounds      : Left 0, Top 0, Width 640, Height 480
  Default     : Left 0, Top 0, Width 640, Height 480
  Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
  Capabilities     : timeperframe
  Frames per second: 15.000 (15/1)
  Read buffers     : 0
brightness (int)                      : min=0 max=255 step=1 default=128 value=128
contrast (int)                        : min=0 max=255 step=1 default=32 value=32
saturation (int)                      : min=0 max=255 step=1 default=32 value=32
white_balance_temperature_auto (bool) : default=1 value=1
gain (int)                            : min=0 max=255 step=1 default=0 value=206
power_line_frequency (menu)           : min=0 max=2 default=2 value=2
white_balance_temperature (int)       : min=0 max=10000 step=10 default=4000 value=0 flags=inactive
sharpness (int)                       : min=0 max=255 step=1 default=224 value=224
backlight_compensation (int)          : min=0 max=2 step=1 default=1 value=1
exposure_auto (menu)                  : min=0 max=3 default=3 value=1
exposure_absolute (int)               : min=1 max=10000 step=1 default=166 value=10000
exposure_auto_priority (bool)         : default=0 value=1
focus (int)                           : min=0 max=255 step=1 default=0 value=0
led1_mode (menu)                      : min=0 max=3 default=3 value=3
led1_frequency (int)                  : min=0 max=255 step=1 default=0 value=0
disable_video_processing (bool)       : default=0 value=0
raw_bits_per_pixel (int)              : min=0 max=1 step=1 default=0 value=0
Here is why I went with that formula (exposure in ms = 1000/absolute exposure):
https://buildyourown.wordpress.com/2009/07/16/skype-logitech-quickcam-pro-9000/
The author of this post says the exposure range is 1/1000 to 1/5 of a second, and that earlier versions had 1/10000 to 1/10 of a second. Those ranges suggest absolute values of 5 to 1000 and 10 to 10000 respectively, and the range I see on absolute exposure is 1 to 10000. So I went with 1/10000 to 1/1 of a second, and I have no idea if that's wrong or right. I couldn't find a way to set the shutter speed.
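For what it's worth, the UVC class specification defines the absolute exposure control in units of 0.0001 s (100 µs), which would make the conversion a plain division rather than a reciprocal. I can't confirm that this particular Logitech firmware follows the spec, so treat this as an assumption to verify against real images:

```python
def uvc_exposure_ms(exposure_absolute):
    """Convert a UVC exposure_absolute value to milliseconds.

    The UVC class specification defines the absolute exposure control in
    units of 0.0001 s (100 us); whether a given webcam firmware actually
    follows the spec has to be verified experimentally.
    """
    return exposure_absolute / 10.0

print(uvc_exposure_ms(1))      # 0.1 ms
print(uvc_exposure_ms(166))    # 16.6 ms (this camera's default)
print(uvc_exposure_ms(10000))  # 1000.0 ms
```

Under this reading, exposure_absolute=1 would be the shortest exposure and 10000 the longest (one full second), the opposite mapping to the 1000/absolute guess above.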
Here is the responsecalib output when I run this dataset:
Load Dataset /home/vivek/sweepData/: found no in folder /images; assuming that images are zipped.
got 9999 entries and 9999 files from zipfile!
Input resolution: 640 480
Input Calibration (fx fy cx cy): 363357.156250 271843.906250 193337.625000 110752.367188 0.089228
Out: Crop
Output resolution: 640 480
new K: 362791.531250 271664.718750 193053.156250 110654.179688
old K: 363357.156250 271843.906250 193337.125000 110751.867188
FOV Undistorter: Warning! Image has black pixels.
Reading Photometric Calibration from file /home/vivek/sweepData/pcalib.txt
PhotometricUndistorter: Could not open file!
Dataset /home/vivek/sweepData/: Got 9999 files!
loaded 9999 images
init RMSE = 126138.141890! Irradiance 17.052605 - 28.258626
init done
optG RMSE = 29638.453328! Inv. Response 29438.359313 - 296847.513951
OptE RMSE = 27373.314080! Irradiance 8.413489 - 28.229015
resc RMSE = 23.514413! rescale with 0.000859!
optG RMSE = 22.805304! Inv. Response 17.747669 - 256.516364
OptE RMSE = 22.224789! Irradiance 0.006380 - 0.023291
resc RMSE = 22.093410! rescale with 0.994089!
optG RMSE = 21.540971! Inv. Response 16.254162 - 244.782224
OptE RMSE = 21.003272! Irradiance 0.005987 - 0.022036
resc RMSE = 21.879997! rescale with 1.041742!
optG RMSE = 21.333924! Inv. Response 16.050825 - 242.683461
OptE RMSE = 20.801489! Irradiance 0.005926 - 0.021826
resc RMSE = 21.857195! rescale with 1.050751!
optG RMSE = 21.311701! Inv. Response 16.029667 - 242.455629
OptE RMSE = 20.779821! Irradiance 0.005919 - 0.021804
resc RMSE = 21.854945! rescale with 1.051739!
optG RMSE = 21.309507! Inv. Response 16.027596 - 242.433079
OptE RMSE = 20.777682! Irradiance 0.005919 - 0.021801
resc RMSE = 21.854728! rescale with 1.051837!
optG RMSE = 21.309296! Inv. Response 16.027396 - 242.430903
OptE RMSE = 20.777476! Irradiance 0.005919 - 0.021801
resc RMSE = 21.854707! rescale with 1.051846!
optG RMSE = 21.309276! Inv. Response 16.027377 - 242.430694
OptE RMSE = 20.777456! Irradiance 0.005919 - 0.021801
resc RMSE = 21.854705! rescale with 1.051847!
optG RMSE = 21.309274! Inv. Response 16.027375 - 242.430674
OptE RMSE = 20.777454! Irradiance 0.005919 - 0.021801
resc RMSE = 21.854705! rescale with 1.051847!
optG RMSE = 21.309273! Inv. Response 16.027375 - 242.430672
OptE RMSE = 20.777454! Irradiance 0.005919 - 0.021801
resc RMSE = 21.854705! rescale with 1.051847!
Here is the image with the highest exposure.
@anilprasadmn Thoughts?
@vannem95 Your times.txt is not appropriate:
1492802212.15 0000 0.100010
It should have been in the format of the datasets provided (have a look at them; it will help you figure out the best practice for capturing data), i.e.:
framenumber timestamp exposuretime
Make sure that there are no changes in the scene (such as moving people casting shadows on the setup) while you are capturing the data. The setup looks fine (but could be brighter; again, best practice is to mimic the authors' setup).
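If you already have a file in the wrong column order, a one-off script can swap the columns into the expected order (a sketch; it assumes rows of the form `timestamp imageID exposure`, as in the sample above):

```python
def fix_times_row(row):
    """Reorder a 'timestamp imageID exposure' row into 'imageID timestamp exposure'."""
    timestamp, image_id, exposure = row.split()
    return " ".join([image_id, timestamp, exposure])

def fix_times_file(src, dst):
    """Rewrite a whole times.txt with the columns swapped."""
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            if line.strip():
                fout.write(fix_times_row(line) + "\n")

print(fix_times_row("1492802212.15 0000 0.100010"))  # 0000 1492802212.15 0.100010
```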
Good catch. Can't believe I missed that. Thanks a lot, I appreciate it.
Hey @anilprasadmn , so that change didn't affect the result. I tried the calibs on a Logitech C920; the calibs were very similar to the 9000 cam, and I kept getting nans. But using a vignette image I found in one of the dso/dso-ros forums, I got dso-ros to work, barely; it fails if I don't move it very slowly. Does it work fine for you? Also, do you know by chance how to output the pose of the camera? Thanks in advance
@vannem95 Not all of the datasets I captured succeeded in converging and returning a valid result [nor was I able to reproduce it with enough confidence]. So give it a couple of tries, read the paper multiple times, and you need some luck as well. Prune your input data and scene to get better convergence. [Look at the partial results that are dumped to get an idea of what might be going wrong.]
Vignettes are specific to the lens. Since they are not very critical (errors in them don't necessarily change the position of the energy functional's minimum), you could make do with a generic one. (For instance, my camera has a drastically different response from the authors': my images are brighter near the edge, so replacing one with the other would be worse than not providing one at all.)
I too face a similar problem and was hoping that @fairf4x would come back with his results (whose thread we seem to have hijacked). I am using an Omnivision OV5674 sensor with a wide-angle Sunnex lens. I would like to see datasets with global shutter (so that I know what to blame for the errors).
The pose of the camera is dumped in results.txt.
I see that results.txt is a by-product of dso. Also, when I run dso_ros, it either crashes or I quit it, and I don't get any outputs other than the Pangolin window. Do I have to remove Pangolin to see the results.txt file? @anilprasadmn
I am still trying to solve an issue with vignetteCalib. I am using exposure times from the EXIF information of my dataset images, multiplied by 1000 (in order to obtain the time in ms, as suggested by @anilprasadmn).
Still it produces the same output:
plane image values 10000000000.000000 - -10000000000.000000!
0.000000 residual terms => -nan
0.000000 residual terms => -nan
Is there some simple way to check whether the aruco markers are recognized?
By the way, is using a fisheye lens a requirement for dso_ros? @anilprasadmn
Is there some simple way to check whether the aruco markers are recognized?
Before you get to the "plane image values 10000000000.000000 - -10000000000.000000!" stage, the detected AR markers should have been displayed for your visual inspection. Run playdataset to check whether the format is appropriate.
I suspect that the images might not be read in your case; you could modify the code a bit to check whether each image is being read correctly. If there is no delay between you starting the calibration and the optimization starting, then it's definitely due to errors in reading the images themselves. Around line 228 of src/main_vignetteCalib.cpp, make these modifications:
for(int i=0; i<reader->getNumImages(); i+=imageSkip)
{
    printf("Currently operating on image number %d with skip of %d.\n", i, imageSkip);
    std::vector<aruco::Marker> Markers;
    ExposureImage* img = reader->getImage(i, true, false, false, false);
    //cv::imshow("DEBUG", img);
    cv::Mat InImage;
    cv::Mat(h_out, w_out, CV_32F, img->image).convertTo(InImage, CV_8U, 1, 0);
    delete img;
    printf("h_out,w_out = %d,%d ;\n", h_out, w_out);
    printf("img size = %d,%d ;\n", InImage.rows, InImage.cols);
    MDetector.detect(InImage, Markers);
    if(Markers.size() != 1) continue;
This should produce some debug info such as:
Currently operating on image number 757 with skip of 1.
h_out,w_out = 540,768 ;
img size = 540,768 ;
Currently operating on image number 758 with skip of 1.
h_out,w_out = 540,768 ;
img size = 540,768 ;
plane image values 30.626350 - 280.600342!
458994165.000000 residual terms => 223.750198
458994165.000000 residual terms => 25.215351
do i have to remove pangolin to see the results.txt file?
No. Actually, there are switches within the code that let you publish the pose. They aren't exposed, so you will have to find and enable them.
is using a fisheye lens a requirement for dso_ros?
No. Fisheyes track more points and lose tracking less often, so they are preferred.
@fairf4x what camera are you using? global shutter by any chance?
Thanks @anilprasadmn. I can confirm that aruco markers are not recognized.
The test Markers.size() != 1 is always true on my dataset.
I suspect that OpenCV somehow does not read the images because of some format mismatch (maybe an sRGB vs RGB issue?). My images are in this format:
$ identify 0000.jpg
0000.jpg JPEG 856x480 856x480+0+0 8-bit sRGB 64.4KB 0.000u 0:00.000
Compared to author's dataset:
00000.png PNG 1280x1024 1280x1024+0+0 8-bit RGB 256c 581KB 0.000u 0:00.000
My camera does not have a global shutter; it is the built-in camera onboard a Parrot Bebop drone. This is also the reason I cannot set an absolute exposure time, only one relative to the automatic exposure [-3.0, +3.0].
In my case identify produces:
$ identify 0001.jpg
0001.jpg JPEG 1024x720 1024x720+0+0 8-bit sRGB 49.2KB 0.000u 0:00.000
So sRGB shouldn't be an issue.
Hi @anilprasadmn! What would you say is a good value for RMSE after photometric calibration? I'm also getting around 23 with 1200 images as the guys above.
@vannem95 @anilprasadmn Hi, how did you get the exposure times (times.txt) with varying values? If possible, could you share more details, please? Thank you in advance.
Can anyone tell me how to make the times.txt file? I tried to make it but didn't understand how.
@DxsGeorge that sounds reasonable (sorry, I had missed your update)
@TengFeiHan0 I used a Raspberry Pi cam with varying exposure times to get the task done. (For each setting I took a couple of images and didn't use them before getting the correct image, since the camera needs time to adapt to the commands.)
@Maheshwariraghav112 You need to set the exposure times programmatically on a camera that allows for such functionality.
@anilprasadmn Hey, I am new to this field. Can you elaborate on how to begin making this file?
Ideally, you need a global shutter industrial camera to implement the work described here [DSO].
refer to https://picamera.readthedocs.io/en/release-1.10/recipes1.html
That will explain how you can build a setup with the Raspberry Pi cam (a cost-effective alternative). The camera exposure can be varied using the API and the same value recorded in the times.txt file (you will have to verify with manual inspection that the exposure has actually changed).
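A rough sketch of such a capture script with picamera (untested here; note that shutter_speed in that API is in microseconds, the sweep bounds and the factor are arbitrary choices, and the throwaway frames follow the warm-up trick mentioned in this thread):

```python
import time

def exposure_sweep_us(start_us=100, stop_us=100000, factor=1.5):
    """Yield a geometric sweep of shutter speeds in microseconds."""
    e = start_us
    while e <= stop_us:
        yield int(e)
        e *= factor

def capture_sweep(out_dir="sweepData"):
    # Imported here so the pure helper above works without a Pi attached.
    import picamera
    with picamera.PiCamera() as cam:
        cam.iso = 100               # fix the gain
        cam.exposure_mode = "off"   # disable auto exposure
        with open(out_dir + "/times.txt", "w") as times:
            for i, shutter_us in enumerate(exposure_sweep_us()):
                cam.shutter_speed = shutter_us
                # Throwaway frames while the exposure settles (see the
                # warm-up trick discussed in this thread).
                for _ in range(4):
                    cam.capture("/dev/null", format="jpeg")
                cam.capture("%s/%04d.jpg" % (out_dir, i))
                times.write("%04d %.6f %.6f\n"
                            % (i, time.time(), shutter_us / 1000.0))
```

The third column is written as shutter_us / 1000 so that times.txt carries the exposure in ms, as responseCalib expects.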
@anilprasadmn Thanks for your response. By the way, I am using a fisheye camera; is it not possible to make the same file with this camera? Right now I am using Linux and have successfully run the DSO code https://github.com/JakobEngel/dso. I have made a new dataset of images captured with the fisheye camera, and I need to make times.txt, pcalib.txt and vignette.png to run my dataset. Also, is it necessary to create a times.txt file to run the dataset?
@Maheshwariraghav112 , I used a flat camera as well as a fisheye. A considerable part of the fisheye view might get cropped for the active region; DSO supports fisheye cameras (also, try out the new stereo-DSO algorithms that are available; I haven't used them personally). You need to generate times.txt while you capture the images. Generating vignette.png and pcalib.txt is an iterative process, to be honest. Use the initial guess [I even cheated by manually 'fixing' the pcalib to force it to be monotonic the first time; I didn't get the result right away, due to some variation].
@anilprasadmn , Hey, can you tell me how you generated the pcalib.txt file? I am unable to generate this file.
@vannem95 @fairf4x @anilprasadmn Hey, can you guys help me make pcalib.txt files for my own dataset? I don't understand where to begin making the pcalib.txt file, and I also don't know about the response calibration file. Your guidance may help me proceed further. Thanks
@anilprasadmn and @fairf4x Thanks for your nice discussion. I am facing this problem during responseCalib: it produces nan for RMSE, like this:
Dataset ../data/: Got 5069 files!
loaded 5069 images
init RMSE = 0.000000! Irradiance 99.690471 - 194.882620
optG RMSE = 0.000000! Inv. Response 0.000000 - 0.000000
OptE RMSE = -nan! Irradiance 10000000000.000000 - -10000000000.000000
resc RMSE = -nan! rescale with inf!
optG RMSE = -nan! Inv. Response 10000000000.000000 - -10000000000.000000
OptE RMSE = -nan! Irradiance 10000000000.000000 - -10000000000.000000
resc RMSE = -nan! rescale with -nan!
optG RMSE = -nan! Inv. Response 10000000000.000000 - -10000000000.000000
OptE RMSE = -nan! Irradiance 10000000000.000000 - -10000000000.000000
resc RMSE = -nan! rescale with -nan!
optG RMSE = -nan! Inv. Response 10000000000.000000 - -10000000000.000000
OptE RMSE = -nan! Irradiance 10000000000.000000 - -10000000000.000000
I generated the exposure time from the frame rate; it is a constant exposure time = 1/fps, because I have no control over the camera. My times.txt file looks like:
00000 1569855600.0000000 0.03336666666666667
00001 1569855600.0333667 0.03336666666666667
00002 1569855600.0667334 0.03336666666666667
00003 1569855600.1001000 0.03336666666666667
00004 1569855600.1334667 0.03336666666666667
00005 1569855600.1668334 0.03336666666666667
00006 1569855600.2002000 0.03336666666666667
00007 1569855600.2335668 0.03336666666666667
00008 1569855600.2669334 0.03336666666666667
00009 1569855600.3003001 0.03336666666666667
00010 1569855600.3336668 0.03336666666666667
00011 1569855600.3670335 0.03336666666666667
00012 1569855600.4004002 0.03336666666666667
00013 1569855600.4337668 0.03336666666666667
00014 1569855600.4671335 0.03336666666666667
00015 1569855600.5005002 0.03336666666666667
00016 1569855600.5338670 0.03336666666666667
00017 1569855600.5672336 0.03336666666666667
00018 1569855600.6006002 0.03336666666666667
00019 1569855600.6339670 0.03336666666666667
00020 1569855600.6673336 0.03336666666666667
00021 1569855600.7007003 0.03336666666666667
00022 1569855600.7340670 0.03336666666666667
....................................................
Why is the RMSE nan? Is it possible to use responseCalib with my data? I think you can solve my problem.
Is it possible to use responseCalib using my data?
No. Without varying the exposure, you can't study the effect it has on the images produced.
Hey @vannem95 , can you share the source code of how you managed to create the times.txt file? I am using a Blackfly GigE camera, which has its own SDK on ROS. I have successfully run DSO with it; now I want to integrate pcalib.txt, for which I need the times.txt file.
It would be great if someone could help me with this too. Thanks in advance!
Hey, I figured out how to do pcalib.txt, but my results are weird: I am getting values increasing above 255 and then coming back to 255, and the first value is -nan. Any ideas, guys? I really need some help here!
-nan 0.0304538314123542 0.0532701038711612 0.0973080141093104 0.156690146818502 0.220751005359101 0.292986692145073 0.373694556240523 0.45514403503921 0.541656394544099 0.632458457390751 0.728630655905973 0.828901594723918 0.953413774217377 1.17297301257366 1.53600174650937 1.88197897275722 2.10340276001252 2.27160208018437 2.4398599349328 2.63548934595587 2.86546655198378 3.09696328603177 3.33520747332946 3.58589287280338 3.86105776004936 4.17223260647411 4.51173712186661 4.84353455714835 5.14937245902436 5.4519650543975 5.77553287405983 6.11957715899787 6.44923027745495 6.76530323294952 7.09803427372118 7.39863090814452 7.71762282258766 8.04495203876262 8.38710655952352 8.72780427083243 9.10671068684482 9.46949923108282 9.83349431065804 10.220238101358 10.611935651745 11.0352694493227 11.4636251463296 11.9442236773381 12.3962763804501 12.8126684296959 13.2384394993186 13.6281013223075 14.0490866890382 14.4814688186822 14.8564161516929 15.2269598401561 15.60367172709 15.9649642683161 16.408531281038 16.9186887513718 17.5022932644952 17.9447663069619 18.3564640192061 18.7358918547836 19.092524425528 19.5125674601545 19.897162954508 20.3051000339393 20.7951631028953 21.3172244125528 21.8383373665387 22.3651908506346 22.8667609548543 23.2907159941569 23.760804328802 24.2594255947039 24.7805161565785 25.3275051877564 25.9525384219184 26.4911390961724 26.9823796479313 27.5757702473881 28.1322489267019 28.7058847762242 29.3362915378107 29.991848063365 30.5533996368009 31.1582597645561 31.745692817288 32.3172079031137 32.9218577054369 33.4956526017897 34.0305096307196 34.5946928789133 35.0960551450268 35.6074576693945 36.0876110769148 36.5806822059886 37.1483023547553 37.6253461616793 38.0135094143769 38.5231526445709 39.0340449592008 39.7131543687314 40.2626610098294 40.8763326083638 41.4778131969877 42.0608591615766 42.7491460848786 43.456065412856 44.164431114127 44.7532456733705 45.4141485049408 46.0695228624551 46.7075497705972 47.3643227895572 48.1539013924751 
48.8280201562931 49.4052937431112 50.0326498075791 50.6113680537477 51.1767379629253 51.99256602539 52.5930259639489 53.1979903534724 53.9019926631801 54.5719774364767 55.2047972588925 55.8441856028447 56.3083148832278 56.7409361200905 57.1666332269683 57.5690218308593 58.0595210396775 58.6007384762221 59.0882462694996 59.468659228219 60.0311789088329 60.5426653263757 61.2192491541708 61.9305164551431 62.5637948556103 63.2537273612489 63.9968746124892 64.6388905911771 65.5063227193005 66.3638657498418 67.2704986394642 68.2228566413857 69.1878893769369 70.0756048169885 70.9920013771768 71.874070369246 72.6146318745793 73.3215584314173 73.7503522141941 74.4130441792747 74.7564080084991 75.2648339414856 75.9795211050058 77.0510541839377 78.1593933960293 79.4724449253786 80.5730833695956 81.6525536808565 82.6160229282421 83.5511642891004 84.4782391397034 85.751711158446 87.2297414735214 88.8847880168709 90.6701298512634 92.6126709130491 94.8883228502796 97.0619043464824 99.3030244873319 101.531594547837 103.400931309888 105.268020912629 107.089067257538 108.95875845954 110.726662997724 112.633257399196 114.244427454058 116.173938913359 117.809953892498 119.423688829198 121.231780713085 123.151155297458 125.059717623742 126.845496126457 128.98655395307 130.795166793619 132.806559393196 135.025293264565 137.214409296383 139.549362684487 142.311252798293 145.04555881062 147.965817258618 151.140988054827 154.159001574072 157.02351484412 159.597694658102 162.210917153741 164.714560822248 167.128947068862 169.076912725747 171.673291322462 174.070684813342 176.083413895573 178.101105937311 180.412462875763 182.387206182984 184.78919759227 187.385053048075 189.646203894483 192.267596035472 194.64204877445 196.607099154976 199.104323452737 201.1992791157 203.388931374875 206.00367254833 208.770536874357 211.11806921911 213.69014547664 216.598613737221 219.217655497099 221.682757709313 224.278505473465 226.182347232814 228.613554053259 231.074436036107 233.570170801845 
236.508036129116 238.977394711186 241.481985836935 244.192216999467 246.767234936065 249.44863074699 251.896548525926 254.182044205669 256.674534248955 259.130646779944 261.639713481443 264.034724570419 265.80409533017 268.361309141733 269.053361603493 270.242024477419 269.610565146908 265.746692420149 260.373346210075 255
[I might be wrong here; this was my reasoning and what gave me results in practice]
The NaNs occur due to zero increase in illumination with increased shutter time, leading to some issues with numerical stability, so you can change this value to zero ('0'). Also, following the same logic, values greater than 255 don't make sense. You have two options [manual fix]: A. For values in your output which are greater than 255, replace them with 255 [this is what I used]. B. Rescale the values by the greatest number. This should be followed by a further clean-up to make the function monotonic (i.e., non-decreasing) [I wrote a small program to do this]. Once this clean-up is completed, use the result as the initial guess for the response calibration, and the program will refine it to be accurate.
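A minimal sketch of that clean-up (my interpretation of option A plus the monotonic fix; the NaN-to-zero handling follows the reasoning above):

```python
import math

def clean_pcalib(values):
    """Clean a raw inverse-response estimate:
    NaN -> 0, clamp to 255, then enforce a non-decreasing sequence."""
    out = []
    prev = 0.0
    for v in values:
        if math.isnan(v):
            v = 0.0          # NaNs come from unobserved low intensities
        v = min(v, 255.0)    # option A: values above 255 are not meaningful
        v = max(v, prev)     # enforce monotonicity (non-decreasing)
        out.append(v)
        prev = v
    return out

raw = [float("nan"), 0.03, 5.0, 270.0, 260.0, 255.0]
print(clean_pcalib(raw))  # [0.0, 0.03, 5.0, 255.0, 255.0, 255.0]
```

The cleaned 256-value sequence can then be written back out as pcalib.txt and used as the initial guess for the next responseCalib run.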
I did this process twice to get to the final value. This problem is due to the numerical instability of the convergence and will happen when we don't have the right initial guess for the pcalib (and relatively bad data from an unresponsive camera). It is solved by iterating over the guess of the pcalib, aka the response calibration.
One other trick I found when I had to deal with unresponsive cameras (I initially used a PiCam as well for this task) is to set an exposure, take some pictures and drop them before getting the target picture. [I used to take 4 pictures and throw them away, since I found that the exposure varied gradually even though the PiCam data logs claimed that the exposure had been updated.]
Hope this helps.
I am running vignetteCalib on my own dataset, just 30 images to test the approach. The program produces only a black vignette and the following output:
The output continues in the same fashion until the end.
Any idea what might be wrong with my dataset? Is it possible that it does not work because it is not composed from video capture? (I can only get exposure times for the third column of times.txt when shooting images one by one with the camera.)