JojiJoseph / 3dgs-gradient-segmentation

https://jojijoseph.github.io/3dgs-segmentation/
14 stars 1 forks

About Loss #5

Open Luoyoooo opened 2 hours ago

Luoyoooo commented 2 hours ago

Hello, thank you for your work. When I reproduced the source code on the provided dataset, I found that both loss values (before and after) were 0. If the loss is 0, why is there a gradient?

JojiJoseph commented 2 hours ago

I guess you are referring to the following line.

```python
colors = colors_dc[:,0,:] * 0
```

It is just to show that the gradient (opacity * transmittance) is independent of color.
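For intuition, here is a minimal standalone sketch (the weights `w` are made up, standing in for the opacity × transmittance term; this is not the project's actual rasterizer): because a rendered pixel is linear in the colors, the gradient with respect to a color equals the weight, regardless of the color's value, even when the colors are zeroed out:

```python
import torch

# Hypothetical per-Gaussian weights, standing in for opacity * transmittance.
w = torch.tensor([0.3, 0.7])

# Colors zeroed out, as in `colors = colors_dc[:,0,:] * 0`.
c = torch.zeros(2, requires_grad=True)

# A pixel rendered as a weighted sum of colors is linear in c.
pixel = (w * c).sum()
pixel.backward()

print(c.grad)  # equals w: the gradient is independent of the color values
```

So zeroing the colors changes the rendered value, but not the gradient that flows back, which is exactly the quantity being extracted.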

Luoyoooo commented 1 hour ago

I mean

```python
target = output_for_grad[0] * torch.from_numpy(mask)[..., None].cuda().float()
loss = 1 * target.mean()
print("loss is ------------", loss)
loss.backward(retain_graph=True)
```

output:

```
loss is ------------ tensor(0., device='cuda:0', grad_fn=
```

JojiJoseph commented 1 hour ago

Oh, I got it.

I will explain why a loss of 0 can give a non-zero gradient with a small example.

Assume two variables $a=0$ and $b=5$, and let $y=ab$.

The numerical value of $y$ is $$y=0 \times 5 = 0$$

But $$\frac{\partial{y}}{\partial{a}} = b = 5, \qquad \frac{\partial{y}}{\partial{b}} = a = 0$$ One of the partial derivatives is non-zero, so a function that evaluates to zero can still have a non-zero gradient.

Of course, $y$ is not a proper loss, but the argument extends to any loss, since a loss is also just a function of the parameters and inputs.
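The same thing can be checked directly in PyTorch (a standalone toy example, not the project's actual loss):

```python
import torch

a = torch.tensor(0.0, requires_grad=True)
b = torch.tensor(5.0, requires_grad=True)

y = a * b  # evaluates to 0
y.backward()

print(y.item())       # 0.0
print(a.grad.item())  # 5.0 -- non-zero gradient despite y == 0
print(b.grad.item())  # 0.0
```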

Luoyoooo commented 1 hour ago

Thank you for your answer, very useful! Do you think adding depth attributes would improve segmentation accuracy? Would it be possible to build depth attributes during pre-training, or to directly add a depth-estimation loss?