wgrosche / MAGE

Calibration of multiple cameras, with omnidirectional camera support, for ground plane homography estimation.

Data format for calibration #1

Open MahejabeenNidhi opened 8 months ago

MahejabeenNidhi commented 8 months ago

It is really inspiring to see great recent work on multiview dataset construction.

I am planning to create a multiview dataset for my research and was wondering how to structure the data. I looked through the metadata_utils.py file but I am still not sure. Should I create a data directory in the repository root and then create a subdirectory for each camera? Within each camera's directory, should there be one directory holding the chessboard images for calibration and another containing the actual footage? From my understanding, the annotation tool only works after the calibration is complete, so would the images of my main dataset need to be in a separate directory?

Would it be possible to use the calibration tool if I don't have an omnidirectional camera?

Thank you for your consideration!

wgrosche commented 8 months ago

Hello, thank you for your interest in our little package. I've quickly updated calibration.md to add some information on how the data should be structured. If you have further questions, don't hesitate to ask.
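As a rough sketch of the idea (the directory and file names below are only illustrative; calibration.md has the authoritative names): one data folder in the project root, a subdirectory per camera, and within each camera a folder for the calibration chessboard images and a folder for the recorded footage, roughly along the lines you described:

```
data/
├── camera_0/
│   ├── calibration/      # chessboard images for this camera
│   └── footage/          # the actual multiview sequence
├── camera_1/
│   ├── calibration/
│   └── footage/
└── omni/                 # optional omnidirectional / moving camera
    ├── calibration/
    └── footage/
```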

Regarding your question on working without an omnidirectional camera: the point of the omnidirectional camera is to make feature matching easier in scenes where there is little overlap between the features detected by different cameras (large angles or distances between views).

You can try running the calibration without such footage; if the matches are poor, it may be worth adding footage from a moving camera that is brought close to the stationary ones.
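If you want a quick sanity check on whether a pair of static views shares enough features before deciding to add extra footage, something along these lines (plain OpenCV, not part of this package; the image paths are placeholders) gives a rough idea of match quality:

```python
import cv2

# Illustrative overlap check between two stationary views:
# count ratio-test-filtered ORB matches between one frame from each camera.
img1 = cv2.imread("data/camera_0/footage/0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("data/camera_1/footage/0001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

print(f"{len(good)} good matches out of {len(kp1)}/{len(kp2)} keypoints")
# A very low count suggests the views overlap too little, and intermediate
# footage from a moving camera would help bridge them.
```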