Hello Dario, you are correct, the issue with the projector offset is not easily solvable due to the way open-ar-sandbox is implemented. If you want to use open-ar-sandbox, the best option would be to change the hardware configuration.
That being said, for the aforementioned reason and others, we decided a while ago to redesign the sandbox as a client-server system. The client is written in C#/Unity and handles image acquisition, UI, calibration and projection in a much more performant and versatile way than we could in our Python implementation. The number crunching on the depth image, however, is done on the server, which can be Python, Java or anything you want, which should make the system much easier to extend and modify to your needs. You can find the first implementation of the system here: https://github.com/terranigma-solutions/cg3-ar-sandbox

Currently, the only thing sent to the client is the depth data, but we might soon extend it to also support ArUco markers, custom UI fields, sliders, etc. Let me know if you have any questions. The installation should also be much easier now.
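To give a rough idea of the server side, here is a minimal Python sketch of the concept: a loop that receives a raw depth frame over a socket, does some number crunching, and sends a result back. The port, message framing and data format here are just assumptions for illustration, not the actual protocol used in the repository linked above.

```python
# Minimal depth-processing server sketch (illustration only -- the real
# cg3-ar-sandbox protocol, port and message format will differ).
import socket
import numpy as np

HOST, PORT = "0.0.0.0", 5555          # assumed values, not taken from the repo
WIDTH, HEIGHT = 512, 424              # Kinect v2 depth resolution
FRAME_BYTES = WIDTH * HEIGHT * 2      # uint16 depth values in millimetres

def recv_exact(conn, n):
    """Read exactly n bytes from the socket or raise if the peer closes."""
    buf = bytearray()
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("client disconnected")
        buf.extend(chunk)
    return bytes(buf)

def process(depth_mm):
    """Example 'number crunching': turn the frame into 8-bit elevation bands."""
    d = depth_mm.astype(np.float32)
    d = (d - d.min()) / max(d.max() - d.min(), 1.0)    # normalise to 0..1
    return (np.floor(d * 10) * 25).astype(np.uint8)    # 10 discrete bands

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        try:
            while True:
                raw = recv_exact(conn, FRAME_BYTES)
                depth = np.frombuffer(raw, dtype=np.uint16).reshape(HEIGHT, WIDTH)
                result = process(depth)
                conn.sendall(result.tobytes())    # client turns this into the projected image
        except ConnectionError:
            pass                                  # client closed the connection
```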
Best regards,
Simon
Hello Simon
Thank you for your fast reply. Well, then we will rebuild our construction. Your implementation of the SensiLab AR Sandbox looks very promising; I will test it. Does this new implementation then essentially replace the Open AR Sandbox project?
Kind regards,
Dario
Dear Open AR-Sandbox Team
I work for the Institute of Geomatics at the University of Applied Sciences of Northwestern Switzerland (FHNW), and we have had an AR Sandbox with the UC Davis software for more than ten years. Now we want to switch to simpler and more extensible software, with which we can create our own visualizations and applications. So, thank you for the opportunity to do so and for the great work!
First, our setup: the projector is mounted at the border of the sandbox, with quite a large offset in the -y direction and a small offset in the -x direction relative to the Kinect, which is located directly above the center of the sandbox. This works well with the UC Davis software, as tie points are defined during the calibration process.
The problem: With the Open AR Sandbox software, the projection of an object is offset from the real object, and this offset depends on the height z. At the top of the sandbox, i.e. the level on which projector and Kinect are calibrated, the projected and real object match perfectly. Below this level, the projection is shifted increasingly in the +y direction, while above it the projection is shifted in the -y direction. The same happens in the x direction, but to a much smaller extent.
Interpretation: Because the projection offset depends on the height, I believe this happens because our projector is offset from the center, as illustrated in the sketch below.
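A quick back-of-the-envelope check supports this interpretation (the numbers below are made up, not our actual measurements): if the projector sits at height H above the calibration plane and is offset horizontally by d from a target point, a ray that hits the correct spot on the calibration plane misses by roughly d·z/H at elevation z, so the error grows linearly with height and flips sign below the plane.

```python
# Back-of-the-envelope parallax check (numbers are illustrative, not our real setup).
H = 1.8          # assumed projector height above the calibration plane [m]
d_y = 0.6        # assumed projector offset in -y relative to the target point [m]

def projection_offset(z, offset, height):
    """Offset of the projected point from the true point at elevation z
    above (positive) or below (negative) the calibration plane."""
    return -offset * z / height   # similar triangles along the projector ray

for z in (-0.10, 0.10):           # 10 cm below and 10 cm above the calibration plane
    print(f"z = {z:+.2f} m -> y-offset = {projection_offset(z, d_y, H):+.3f} m")
# z = -0.10 m -> y-offset = +0.033 m   (shifted towards +y, as we observe below the plane)
# z = +0.10 m -> y-offset = -0.033 m   (shifted towards -y above the plane)
```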
Question: With the UC Davis software, the projector can be offset from the Kinect and then calibrated by measuring tie points. Does this mean that with your calibration method, which simply aligns the borders of the Kinect image and the projected image, this offset of the projector relative to the Kinect cannot be calibrated out?
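To illustrate what I mean by tie-point calibration: as far as I understand it, the UC Davis software fits a full 3D-to-2D projection from measured tie points, which can absorb an off-axis projector, whereas a planar border alignment only fixes a 2D mapping at a single height. A rough numpy sketch of such a fit (a standard DLT, not the actual UC Davis code) would look like this:

```python
# Rough sketch of a tie-point calibration via DLT (standard textbook method,
# not the actual UC Davis implementation): fit a 3x4 projection matrix P that
# maps Kinect-space 3D points (x, y, z) to projector pixels (u, v).
import numpy as np

def fit_projection_matrix(world_xyz, projector_uv):
    """DLT fit of P from >= 6 tie points: [u, v, 1]^T ~ P @ [x, y, z, 1]^T.
    The tie points must not all lie in one plane, i.e. they should be measured
    at several heights (e.g. on blocks), so the z-dependence is recovered."""
    rows = []
    for (x, y, z), (u, v) in zip(world_xyz, projector_uv):
        X = [x, y, z, 1.0]
        rows.append([*X, 0, 0, 0, 0, *(-u * np.array(X))])
        rows.append([0, 0, 0, 0, *X, *(-v * np.array(X))])
    A = np.array(rows)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)   # singular vector of the smallest singular value

def project(P, xyz):
    """Map a 3D point to projector pixels; because P depends on z, a
    height-dependent projector offset is handled automatically."""
    u, v, w = P @ np.append(xyz, 1.0)
    return u / w, v / w
```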
Thanks for clarifying this. If this cannot be calibrated, we would need to rebuild the projector/Kinect construction.
Kind regards
Dario
P.S. We are looking forward to sharing our work here.