Open vahidEttehadiAniml opened 1 month ago
That's indeed a nice question; the code currently doesn't support unposed input or custom formats. Here is a loader example: the c2ws are in OpenCV format, and the scene is normalized within [-0.5, 0.5]. Please note that the current project doesn't support backgrounds, so you need to preprocess your images by masking out the background first.
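To make the masking step concrete, here is a minimal sketch (not from the repo; the RGBA layout and white background color are my assumptions, so adapt it to however your masks are stored):

```python
import numpy as np

def mask_background(rgba, bg_value=1.0):
    """Composite an RGBA image onto a constant background.

    rgba:     (H, W, 4) float array in [0, 1]; alpha=0 is background.
    bg_value: background color, e.g. 1.0 for white.
    Returns an (H, W, 3) RGB image with the background masked out.
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    return rgb * alpha + bg_value * (1.0 - alpha)
```

If your masks are binary images rather than an alpha channel, the same idea applies with `alpha` replaced by the mask.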
Thanks for your reply.
I followed your suggestion, but I'm still having issues.
Quick question: Is the scene/camera normalized within the range of [-0.5, 0.5], or are only the objects within [-0.5, 0.5] while the cameras are positioned outside this range?
Yes, only the objects (not the cameras) are normalized to [-0.5, 0.5].
BTW, after scaling the objects, you also need to align the cameras accordingly, as done here: https://github.com/autonomousvision/LaRa/blob/main/dataLoader/gobjverse.py#L58-L66
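As a rough sketch of what this alignment amounts to (my reconstruction of the idea, not the repo's exact code): the camera-to-world translations get the same shift and scale as the object, while the rotations are unchanged:

```python
import numpy as np

def normalize_scene(c2ws, bbox_min, bbox_max):
    """Shift/scale so the object's bounding box fits in [-0.5, 0.5]^3,
    and align the cameras with the same transform.

    c2ws: (N, 4, 4) camera-to-world matrices (OpenCV convention).
    bbox_min, bbox_max: (3,) object bounding box in world units.
    """
    center = (bbox_min + bbox_max) / 2.0
    scale = 1.0 / (bbox_max - bbox_min).max()   # longest side -> length 1

    c2ws = c2ws.copy()
    # a uniform scale + translation leaves rotations untouched;
    # only the camera positions move
    c2ws[:, :3, 3] = (c2ws[:, :3, 3] - center) * scale
    return c2ws, center, scale
```

For example, a camera at world position (2, 2, 2) with an object bounded by [1, 3]^3 ends up at the origin, with scale 0.5.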
me too.
Would you like to share the images and camera parameters of this example? Such reconstruction quality is not expected.
Thanks for your reply. I have sent them to your e-mail address.
Here are the results on the real-world images. The primary issue appears to be an improper setting of the scene center and scene scale. There is significant room for improvement with real-world inputs, such as training our model on real-world images.
Can you elaborate more? In my case I render images while looking at the object center, so I think the scene center should be fine, but I am not sure about the scale.
Could you please send me the data and your loader?
Hi @apchenstu,
If I want to use my own real data, how do I set the scale and the scene center?
The scene center is the object center, and the bounding box is [-0.5, 0.5], so you need to scale and shift the object so that it is roughly bounded by that box.
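If you have no mesh or 3D points to measure the bounding box from (common with real captures), one possible heuristic, purely my suggestion and not something from this repo: take the object center as the least-squares intersection of the cameras' optical axes, and, assuming the object roughly fills the frame, take its radius as the mean camera distance times tan(FOV/2):

```python
import numpy as np

def estimate_center_and_scale(c2ws, fov_deg):
    """Heuristic center/scale from poses alone (a rough assumption):
    cameras are assumed to look at the object, and the object is
    assumed to roughly fill the frame.

    c2ws: (N, 4, 4) camera-to-world, OpenCV convention (+z looks forward).
    """
    origins = c2ws[:, :3, 3]          # camera positions
    dirs = c2ws[:, :3, 2]             # viewing directions (+z columns)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

    # least-squares point closest to all viewing rays:
    # minimize sum_i || (I - d_i d_i^T)(x - o_i) ||^2
    eye = np.eye(3)
    A = sum(eye - np.outer(d, d) for d in dirs)
    b = sum((eye - np.outer(d, d)) @ o for d, o in zip(dirs, origins))
    center = np.linalg.solve(A, b)

    dist = np.linalg.norm(origins - center, axis=1).mean()
    radius = dist * np.tan(np.radians(fov_deg) / 2.0)
    scale = 0.5 / radius              # object radius maps to 0.5
    return center, scale
```

Note that the linear solve degenerates when all cameras share one viewing direction; the heuristic needs views from different angles, which an object-centric capture normally provides.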
Sorry for the late reply. I just sent a sample + dataloader to you. Thanks in advance.
I was wondering how I can feed my own captures to the model. Could you point me to some code to better understand the coordinate conventions and scaling of the input camera poses?
Thanks in advance.