Open · Dumeowmeow opened 1 week ago
Thank you for your interest in our work! Our Gaussian-related code is built upon GRM, which is currently unavailable for some reason. Unfortunately, I don't think we can release our copy of GRM before their official release, so we'll have to wait for that first.
I know. Thank you for your reply!
Sorry, I have another question. After obtaining the mesh, I tried to render it from the original view, camera_poses[0]: https://github.com/Lakonik/MVEdit/blob/445a11c0e0e38a581f94df57af48357bd1c5cd47/lib/apis/adapter3d.py#L696 But the result does not match the original picture.
rendered image:
original image:
There is an obvious angular offset between the two. Is this normal, and how should I fix it?
Both the GRM Adapter and MVEdit Adapter adopt Zero123++ as the multi-view model. Since Zero123++ automatically aligns the output with the gravity axis, the elevation angle of the input is ignored.
I assume you are using the MVEdit Adapter. If so, MVEdit does have an internal estimation of the elevation angle: https://github.com/Lakonik/MVEdit/blob/445a11c0e0e38a581f94df57af48357bd1c5cd47/lib/apis/adapter3d.py#L807-L813
You can modify the code so that the estimated pose is returned.
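For reference, a camera-to-world pose can be rebuilt from the estimated elevation roughly like this (a minimal sketch only; the helper name is made up, and the actual axis convention and camera distance used in adapter3d.py may differ):

```python
import numpy as np

def pose_from_elevation(elev_deg, azim_deg=0.0, distance=2.0):
    """Hypothetical helper: build a 4x4 camera-to-world matrix looking at
    the origin from the given elevation/azimuth (OpenGL convention: camera
    looks down -z, world up is +z). MVEdit's own convention may differ."""
    elev, azim = np.deg2rad(elev_deg), np.deg2rad(azim_deg)
    # Camera position on a sphere of radius `distance` around the origin.
    cam_pos = distance * np.array([
        np.cos(elev) * np.sin(azim),
        -np.cos(elev) * np.cos(azim),
        np.sin(elev),
    ])
    forward = -cam_pos / np.linalg.norm(cam_pos)   # unit vector toward origin
    right = np.cross(forward, [0.0, 0.0, 1.0])     # degenerate at elev = +/-90 deg
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, up, -forward
    pose[:3, 3] = cam_pos
    return pose
```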
Thank you for your reply! I got it. One more question: if my input image is 512x512, how can I render the mesh at 512x512 with the same scale as the original image? I found a lot of image-resizing operations in the code, which confused me.
I tried to render the resulting mesh with the following parameters, outputting a 288x288 image, but its scale does not match the 288x288 image sent to the model: https://github.com/Lakonik/MVEdit/blob/445a11c0e0e38a581f94df57af48357bd1c5cd47/lib/apis/adapter3d.py#L792-L804
The 288x288 image fed to the MVEdit pipeline in https://github.com/Lakonik/MVEdit/blob/445a11c0e0e38a581f94df57af48357bd1c5cd47/lib/apis/adapter3d.py#L790 vs. the image rendered from the resulting mesh using the parameters above: the object looks as if it has grown bigger. Is there something wrong with my camera settings? I expected the two results to be the same.
Is this due to up- and down-sampling in the network?
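For what it's worth, this is how I am measuring the scale difference between the two images (a rough sketch; it assumes the object sits on a white background, and the filenames are placeholders):

```python
import numpy as np
from PIL import Image

def fg_extent(path, thresh=250):
    """Width/height of the non-white foreground bounding box
    (assumes a plain white background)."""
    gray = np.asarray(Image.open(path).convert("L"))
    ys, xs = np.nonzero(gray < thresh)
    return xs.max() - xs.min(), ys.max() - ys.min()

# Placeholder filenames for the two 288x288 images being compared.
w_in, h_in = fg_extent("input_288.png")
w_re, h_re = fg_extent("rendered_288.png")
print("input  extent:", w_in, h_in)
print("render extent:", w_re, h_re)
print("scale ratio  :", w_re / w_in, h_re / h_in)
# A consistent ratio != 1 would suggest a focal-length / camera-distance
# mismatch rather than a wrong rotation.
```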
Hi, thank you for your great work! May I ask where the code for image-to-3D-Gaussians is in 3DAdapter?