Closed sergiobd closed 4 years ago
Hi Sergio, we don't have ready-made Unity playback at the moment. We have plans to start working on a game-engine integration (Unity or Unreal), and potentially a creative framework as well (ofx or Cinder), but have not set a milestone for it yet.
For now, you can dump the files you have recorded in a synchronized manner (this will produce multiview depth and color images), and then use the extrinsic calibration information (in the dated folder under ./Data/Calibrations/) and each sensor's intrinsic parameters (device_repository.json) to create and merge/fuse the point clouds in Unity.
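The per-sensor fusion described above can be sketched roughly as follows: back-project each depth pixel through the sensor's pinhole intrinsics, transform the resulting point by that sensor's extrinsic pose, and concatenate the clouds. This is a minimal illustration, not the repository's actual code; the intrinsic values (fx, fy, cx, cy) and 4x4 pose matrix are placeholders standing in for what you would load from device_repository.json and the dated calibration folder, whose exact formats are not documented here.

```python
# Hedged sketch: fuse per-sensor depth maps into one world-space point cloud.
# Intrinsics and poses below are hypothetical placeholders for the values you
# would parse from device_repository.json and ./Data/Calibrations/<date>/.

def backproject(depth, width, height, fx, fy, cx, cy, pose):
    """Turn a row-major depth map (metres) into world-space points.

    pose is a 4x4 row-major matrix taking sensor coordinates to world
    coordinates (i.e. that sensor's extrinsic calibration).
    """
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0.0:  # skip invalid / missing depth pixels
                continue
            # pinhole back-projection into the sensor frame
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            # apply the extrinsic transform (rotation + translation)
            wx = pose[0][0] * x + pose[0][1] * y + pose[0][2] * z + pose[0][3]
            wy = pose[1][0] * x + pose[1][1] * y + pose[1][2] * z + pose[1][3]
            wz = pose[2][0] * x + pose[2][1] * y + pose[2][2] * z + pose[2][3]
            points.append((wx, wy, wz))
    return points


def fuse(clouds):
    """Naively concatenate per-sensor clouds; real fusion would also
    filter overlapping or noisy points."""
    merged = []
    for cloud in clouds:
        merged.extend(cloud)
    return merged
```

In Unity the same math maps naturally onto a compute shader or a procedurally built Mesh, sampling the matching color image per pixel to color each point.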
Hi,
I am mostly interested in using VCL3D to capture performers for experimental storytelling-based VR projects. I am currently in the process of setting up the capture system, but I would also like to test importing the captures into a 3D environment (I am currently using Unity). I have not found any indication in your wiki, though, of how your .cdv files are structured or how to parse them. Any pointers on this? Would you also be able to share some of your captures for testing?
Best,
Sergio