ArghyaChatterjee opened 11 months ago
@TontonTremblay, can you respond, please?
I am on vacation this week; I will answer next week.
Wow, this is some unknown territory. I am impressed by what you are trying to do, and I have always wanted to try something along these lines. Is this the tutorial you followed: https://blog.polyhaven.com/how-to-create-high-quality-hdri/ ? nvisii uses this model of HDR map. I am afraid I cannot help much more than that, tbh, but please share your results here.
I see there are questions in there :P
Good luck
Hello,
I was trying to generate a dataset for CenterPose using your DOPE pipeline. There are four problems that I am facing.
Normal (without changing anything in your script; the background is automatically zoomed in, which is a problem):
Camera eye changed (from `'eye': visii.vec3(0,0,0)` to `'eye': visii.vec3(0,0,-2)`; looks distorted):

Camera fov changed to 2 (from the default 0.78 to 2; looks distorted):
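A note on the fov distortion, in case it helps: as far as I can tell from the DOPE data-generation scripts, nvisii's field of view is given in radians, so the default 0.78 is roughly a normal 45° lens, while 2 is about 115°, i.e. an extreme wide-angle lens, so heavy distortion at the image edges is expected rather than a bug. A quick sanity check in plain Python (no nvisii needed):

```python
import math

# Assumption: nvisii's camera fov is in radians (the DOPE scripts
# use ~0.785, which would be a standard 45-degree lens).
default_fov_rad = 0.78
wide_fov_rad = 2.0

print(f"default fov: {math.degrees(default_fov_rad):.1f} degrees")  # ~44.7
print(f"changed fov: {math.degrees(wide_fov_rad):.1f} degrees")     # ~114.6
```

So if the goal is just to frame the object differently, moving the camera eye back while keeping the fov near the default should look much less distorted than widening the fov.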
In the original Objectron dataset that CenterPose is trained on, each annotated JSON file contains the 3D keypoints and the scale of the object. Since DOPE doesn't need that information, you haven't included it in the NViSII interface. Can you tell me how to generate that information for a CenterPose dataset? Here is how the JSON file looks for DOPE:
Here is how the JSON file looks for CenterPose:
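In case it's useful: both fields can usually be derived from the object's axis-aligned bounding box in its local frame. If I read the Objectron convention correctly, `scale` is the box extent along each axis, and `keypoints_3d` is the box center followed by its eight corners (9 points total). A minimal sketch in plain Python; the AABB values below are hypothetical, and in the nvisii pipeline you would take them from the object's mesh and then transform each point by the object pose before projecting to 2D:

```python
from itertools import product

def box_keypoints_and_scale(aabb_min, aabb_max):
    """Center + 8 corners of an axis-aligned box, plus its per-axis size.

    aabb_min/aabb_max: (x, y, z) of the object's local bounding box.
    Returns (keypoints_3d, scale); keypoint 0 is the box center, 1-8 are
    the corners (Objectron-style ordering is an assumption here).
    """
    center = [(lo + hi) / 2.0 for lo, hi in zip(aabb_min, aabb_max)]
    scale = [hi - lo for lo, hi in zip(aabb_min, aabb_max)]
    # Every (min|max) combination per axis -> the 8 box corners.
    corners = [list(p) for p in product(*zip(aabb_min, aabb_max))]
    return [center] + corners, scale

# Hypothetical 0.2 x 0.1 x 0.4 m object centered at the origin.
keypoints_3d, scale = box_keypoints_and_scale((-0.1, -0.05, -0.2),
                                              (0.1, 0.05, 0.2))
print(len(keypoints_3d))  # 9
print(scale)              # [0.2, 0.1, 0.4]
```

The exact corner ordering CenterPose expects may differ, so it is worth checking one Objectron sample against this output before generating a full dataset.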
When the pipeline generates the annotated dataset, why can't I see the segmented .exr images (they are blank white)? How do I get those? I can only see a depth.exr, which is a 32-bit image (and it looks more like a segmentation image than a depth image). Here is what the pipeline generates, with depth.exr looking like a binary image and seg.exr looking like a blank white image.
The depth image (.exr) looks like this:
The segmentation image (.exr) looks like this:
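A possible explanation for the blank-white seg.exr: if the segmentation pass stores raw entity IDs as floats (which is my understanding of how the nvisii render data works), then any ID greater than 1.0 clips to white in an ordinary image viewer, so the data may be fine and only the visualization is off. One way to check is to load the file losslessly (e.g. with OpenCV, `cv2.imread(path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)`) and remap the IDs into a visible range. The remapping itself is plain Python here so the sketch stays self-contained; `seg` stands in for the loaded ID array:

```python
def ids_to_grayscale(seg_ids):
    """Remap raw segmentation IDs (any value > 1.0 displays as white)
    to distinct gray levels in 0-255.

    seg_ids: 2D list of entity IDs as read from the .exr file.
    """
    unique_ids = sorted({v for row in seg_ids for v in row})
    # Spread the distinct IDs evenly over the displayable range.
    step = 255 // max(len(unique_ids) - 1, 1)
    lut = {uid: i * step for i, uid in enumerate(unique_ids)}
    return [[lut[v] for v in row] for row in seg_ids]

# Hypothetical 2x3 ID image: background 0, two objects with IDs 3 and 7.
seg = [[0.0, 3.0, 3.0],
       [0.0, 7.0, 7.0]]
print(ids_to_grayscale(seg))  # [[0, 127, 127], [0, 254, 254]]
```

The depth.exr likely has the same issue in reverse: metric distances compressed into a narrow range, so normalizing by the min/max depth before viewing should reveal the actual gradient.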