wrote Blender bpy code to take in trajectories and track data, then generate keyframes and objects to create a synthetic dataset
included some light augmentations, e.g. slight changes in line and light color, randomly using either the tiled floor or a random shade of grey, random light intensity, etc.
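The augmentation sampling can be sketched roughly like this. This is a hypothetical reconstruction, not the actual script: the parameter names and ranges are assumptions, and the sampled values would be assigned to bpy properties (e.g. a light's color/energy, the floor material) inside Blender.

```python
import random

def sample_augmentation(rng=random):
    """Sample one set of per-scene augmentation parameters (ranges are made up)."""
    grey = rng.uniform(0.2, 0.8)  # random shade of grey for the floor
    return {
        # slight tint around white; would go to light.data.color in Blender
        "light_color": [rng.uniform(0.8, 1.0) for _ in range(3)],
        # random intensity; would go to light.data.energy (watts)
        "light_energy": rng.uniform(200.0, 1000.0),
        # 50/50: keep the tiled floor texture, or a flat grey material
        "floor": "tiled" if rng.random() < 0.5 else (grey, grey, grey),
    }

params = sample_augmentation()
```

Keeping the sampling separate from the bpy calls like this also makes it easy to log the exact parameters of each generated frame alongside the dataset.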
left to do:
if there are troubles when trying to run inference on real images:
look into camera calibration; the current camera pose might not be very accurate (position measured by hand, rotation taken from CAD data)
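Since the pose is assembled from a hand measurement plus a CAD rotation, it helps to be able to rebuild the rotation matrix outside Blender to cross-check against a calibration result later. A minimal sketch, pure Python with no bpy; the 'XYZ' order matches Blender's default `rotation_euler` convention (X applied first, then Y, then Z):

```python
import math

def euler_xyz_to_matrix(rx, ry, rz):
    """3x3 rotation from XYZ Euler angles in radians (Blender 'XYZ' order: Rz @ Ry @ Rx)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return mul(Rz, mul(Ry, Rx))

# inside Blender the same pose would just be:
#   cam.location = (x, y, z); cam.rotation_euler = (rx, ry, rz)
R = euler_xyz_to_matrix(0.0, 0.0, math.pi / 2)  # 90 deg about Z maps +X to +Y
```

Comparing this matrix against the rotation recovered by a calibration tool (e.g. a checkerboard solve) would show how far off the CAD rotation actually is.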
camera intrinsics might be somewhat off as well?
for example: the focal length is 0.87mm but Blender only accepts 1mm or more, so I had to use FOV in degrees instead. I selected 132 deg, which seems to correspond to 0.87mm, but the camera is supposed to have a 200 deg FOV, so there are details I don't understand
I guess it'll be clearer when trying to run inference on real images
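A quick sanity check on the 132 deg number, assuming a simple pinhole (rectilinear) model; the ~3.91mm sensor width below is back-solved from the stated values, not a datasheet figure:

```python
import math

def rectilinear_fov_deg(focal_mm, sensor_mm):
    """FOV of an ideal pinhole camera: fov = 2 * atan(sensor / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def sensor_from_fov(focal_mm, fov_deg):
    """Back-solve the sensor width that makes a given focal length hit a given FOV."""
    return 2 * focal_mm * math.tan(math.radians(fov_deg) / 2)

sw = sensor_from_fov(0.87, 132.0)        # ~3.91 mm
fov = rectilinear_fov_deg(0.87, sw)      # recovers 132 deg, so the numbers are consistent
# note: atan saturates, so a rectilinear FOV can never reach 180 deg,
# let alone 200 -- a 200 deg spec implies a fisheye projection
```

That might be the detail behind the 132 vs 200 mismatch: a rectilinear camera model tops out below 180 deg, so a 200 deg lens is a fisheye, which Blender handles with a panoramic/fisheye camera rather than the standard perspective one (and Blender's perspective camera can also take the angle directly via `lens_unit = 'FOV'`).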