Closed: HyoKong closed this issue 8 months ago
Sorry, but without knowing the specific details of your situation, it's difficult for me to provide a tailored solution. One possible approach is to normalize the depth of each frame to the range [0, 1], which works as long as the depth values don't vary significantly during editing/dragging. This normalization helps keep the values consistent across frames and simplifies any further processing of the data.
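The per-frame min-max normalization suggested above could be sketched roughly as follows (a minimal NumPy sketch; the function name and the epsilon guard against division by zero are my own assumptions, not taken from the repository):

```python
import numpy as np

def normalize_depth(depth: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Min-max normalize a single frame's depth map to [0, 1].

    `eps` (an assumption, not from the original code) guards against
    division by zero when the frame has constant depth.
    """
    d_min = depth.min()
    d_max = depth.max()
    return (depth - d_min) / (d_max - d_min + eps)

# Usage: a toy 2x2 depth map in metric units.
depth = np.array([[2.0, 4.0],
                  [6.0, 10.0]])
norm = normalize_depth(depth)
# norm.min() is 0.0 and norm.max() is approximately 1.0
```

Note the caveat in the comment above: with per-frame normalization, the same metric depth maps to different normalized values in different frames, which is why it is only safe when depth varies little during the drag.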
Thank you for your quick reply. How do you handle depth when dragging 3D points in a single camera view? Is the depth (z) held fixed, with only x and y changing in the current camera coordinates?
Yes, that is our current implementation.
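Holding z fixed while updating x and y amounts to back-projecting the dragged pixel through a pinhole camera model at the point's original depth. A minimal sketch of that idea, assuming known intrinsics `fx`, `fy`, `cx`, `cy` (all names are hypothetical; this is my reading of the answer above, not the authors' actual code):

```python
import numpy as np

def drag_point_fixed_z(point_cam, new_pixel, fx, fy, cx, cy):
    """Update a 3D point after a 2D drag, keeping its depth fixed.

    point_cam: (x, y, z) in camera coordinates before the drag.
    new_pixel: (u, v) pixel location after the drag.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    """
    _, _, z = point_cam
    u, v = new_pixel
    # Back-project the dragged pixel at the ORIGINAL depth z:
    # the ray through (u, v) intersected with the plane z = const.
    x_new = (u - cx) * z / fx
    y_new = (v - cy) * z / fy
    return np.array([x_new, y_new, z])

# Usage: a point on the optical axis at depth 2 m, dragged 100 px right.
p = drag_point_fixed_z((0.0, 0.0, 2.0), (420.0, 240.0),
                       fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# p is approximately [0.4, 0.0, 2.0]
```

This keeps the point on the same depth plane, so the drag behaves like a translation parallel to the image plane; it does not resolve the monocular depth ambiguity, it simply sidesteps it.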
Thank you so much for your excellent work!
I'm encountering a challenge in a GUI application where users can drag points across the screen. Since we're using a monocular camera setup, there's an inherent depth ambiguity. How does the system recalibrate the depth of these points once they are dragged? Are there standard practices or algorithms for handling depth adjustments in such scenarios?
Any advice or pointers on this would be greatly appreciated. Thank you!