apple / ml-hugs

Official repository of HUGS: Human Gaussian Splats (CVPR 2024)
https://machinelearning.apple.com/research/hugs

getting manual alignment coordinates #5

Open yashgarg98 opened 6 months ago

yashgarg98 commented 6 months ago

Hi, thanks for open-sourcing this code.

I have been trying to generate custom videos using different scenes with different humans. I'm having trouble setting the manual alignment coordinates (translation, rotation, and scale) so that the human is generated at my desired position in the scene. I followed the method explained in the NeuMan paper: I exported point clouds (.ply) of the scene and the human into Blender and positioned the human at my desired location. However, this method doesn't generate animated videos with the human at the location I want.

I'm stuck here. Can you explain the correct way to get the manual alignment coordinates?
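For what it's worth, one way to turn a placement made in Blender into alignment coordinates is to copy the object's 4x4 world matrix and decompose it into translation, rotation, and scale. This is a hedged sketch, not the repo's code; `decompose_transform` is an illustrative helper name, and it assumes a non-skewed transform:

```python
import numpy as np

def decompose_transform(M):
    """Split a 4x4 transform (e.g. copied from Blender's
    object.matrix_world) into translation, rotation, scale.
    Assumes no shear, so the upper-left 3x3 is rotation * scale."""
    M = np.asarray(M, dtype=np.float64)
    translation = M[:3, 3].copy()
    RS = M[:3, :3]
    scale = np.linalg.norm(RS, axis=0)   # per-axis scale = column norms
    rotation = RS / scale                # normalize columns -> pure rotation
    return translation, rotation, scale

# Example: identity rotation, uniform scale 2, translation (1, 2, 3)
M = np.diag([2.0, 2.0, 2.0, 1.0])
M[:3, 3] = [1.0, 2.0, 3.0]
t, R, s = decompose_transform(M)
```

The recovered `t`, `R`, and `s` would then be the candidate manual alignment values.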

mkocabas commented 6 months ago

Hi @yashgarg98,

I used the steps described here to align the human and the scene. You can then update the transformation parameters defined here: https://github.com/apple/ml-hugs/blob/main/hugs/datasets/neuman.py#L89.
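To make the effect of those transformation parameters concrete, here is a minimal sketch of applying a manual scale, rotation, and translation to a human point cloud. The function name and argument order are assumptions for illustration; check the linked `neuman.py` for the exact parameter names and composition order used by the repo:

```python
import numpy as np

def apply_manual_alignment(points, translation, rotation, scale):
    """Apply scale, then rotation, then translation to an (N, 3)
    point cloud -- i.e. the composition T * R * S.
    `rotation` is a 3x3 matrix; `scale` is a scalar or 3-vector."""
    pts = np.asarray(points, dtype=np.float64)
    return (pts * scale) @ np.asarray(rotation).T + np.asarray(translation)

# Quick check: scale a unit point by 2, rotate 90 degrees about z,
# then shift along x.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
out = apply_manual_alignment([[1.0, 0.0, 0.0]], [5.0, 0.0, 0.0], Rz, 2.0)
```

If the human ends up in the wrong place, the usual suspects are the composition order (T·R·S vs. S·R·T) and a coordinate-convention mismatch between Blender (z-up) and the dataset's camera frame.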

yashgarg98 commented 6 months ago

Got it, thanks. I have another question: when I add humans into my own trained background scene, the colors/texture on the humans look slightly transparent, as if their clothes are blended with the background pixel colors. Is there a hyperparameter or other way to control or resolve this? I can upload some sample images for reference if needed.

mkocabas commented 6 months ago

Could you share the sample images?

yashgarg98 commented 6 months ago

[attached sample image]

As the image shows, the lower body mostly merges with the background, and the clothing colors (black t-shirt) look translucent, as if blending with the background. Is there a hyperparameter to change this, or a way to render the human in front?
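The washed-out look is consistent with how Gaussian Splatting alpha-composites colors front to back: if the human's Gaussians accumulate opacity below 1 along a ray, the remaining transmittance mixes the background color in. A minimal sketch of that compositing math (the function name is illustrative, not from this repo):

```python
import numpy as np

def composite(colors, alphas, background):
    """Front-to-back alpha compositing, as used when rasterizing
    Gaussian splats. Whatever transmittance is left after the
    foreground Gaussians is filled by the background color."""
    out = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c, dtype=np.float64)
        transmittance *= (1.0 - a)
    return out + transmittance * np.asarray(background, dtype=np.float64)

black_shirt = [[0.0, 0.0, 0.0]]   # a black clothing Gaussian
bright_bg = [0.8, 0.8, 0.8]

opaque = composite(black_shirt, [0.99], bright_bg)      # stays near black
translucent = composite(black_shirt, [0.5], bright_bg)  # background bleeds in
```

So the translucency in the screenshot suggests the human Gaussians have learned low opacities (or the scene's Gaussians sit in front of them in depth order), rather than a texture problem per se.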