-
Thank you for your work! But I can't find the code for backward propagation. Also, `style_loss` in `train.py` is a tensor with 4 items, which causes an error in `total_cd_pc += cd_pc_item`.
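Not a confirmed fix from the maintainers, but a minimal sketch of the usual workaround, assuming `cd_pc_item` is meant to be a running scalar: reduce the 4-element tensor before accumulating.
```python
import torch

# Hypothetical reproduction: style_loss comes back as a 4-element tensor, so it
# must be reduced to a scalar before being added to a running total. Whether
# mean() or sum() is correct depends on how the loss is defined in train.py.
style_loss = torch.rand(4)            # stand-in for the value produced in train.py
total_cd_pc = 0.0
cd_pc_item = style_loss.mean().item()  # collapse to a Python float
total_cd_pc += cd_pc_item
```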
-
We are trying to overload operators so that complex operations can be done in a single expression, something like:
```c++
Tensor a;
Tensor b;
Tensor c;
Tensor d;
Tensor e = a*b + c*d;
…
```
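For context, here is a minimal Python sketch of the same idea (illustrative only, not this project's `Tensor` class): defining `__mul__` and `__add__` is what lets `a*b + c*d` evaluate in a single expression.
```python
# Illustrative element-wise Tensor with overloaded operators (not the project's
# implementation). Python dispatches a*b + c*d to these methods automatically.
class Tensor:
    def __init__(self, data):
        self.data = list(data)

    def __mul__(self, other):
        return Tensor(x * y for x, y in zip(self.data, other.data))

    def __add__(self, other):
        return Tensor(x + y for x, y in zip(self.data, other.data))

a = Tensor([1.0, 2.0])
b = Tensor([3.0, 4.0])
c = Tensor([5.0, 6.0])
d = Tensor([7.0, 8.0])
e = a * b + c * d  # single expression: [38.0, 56.0]
```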
-
Hello, I have some questions:
1. Are the gradient calculation methods of WResNet18_A and WResNet18_C the same? Do these methods all work in this way: "grad_input = torch.add(torch.matmul(grad_L, matrix_L)…
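The truncated line doesn't show the full formula, but a generic sketch of a matmul-style backward in a custom `torch.autograd.Function` (illustrative names, not the repository's actual code) looks like this:
```python
import torch

class MatMulFn(torch.autograd.Function):
    """Illustrative y = x @ W with a hand-written backward (not WResNet18's code)."""
    @staticmethod
    def forward(ctx, x, W):
        ctx.save_for_backward(x, W)
        return x @ W

    @staticmethod
    def backward(ctx, grad_output):
        x, W = ctx.saved_tensors
        # A matmul of the upstream gradient with a weight matrix, the same
        # shape of computation as the torch.matmul(grad_L, matrix_L) snippet above.
        grad_input = grad_output @ W.t()
        grad_W = x.t() @ grad_output
        return grad_input, grad_W

x = torch.randn(3, 4, requires_grad=True)
W = torch.randn(4, 2, requires_grad=True)
MatMulFn.apply(x, W).sum().backward()
```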
-
I am interested in seeing where forward and backward propagation happen in the code. Can you point me to that specific portion of the code?
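As a general orientation (a generic PyTorch pattern, not this repository's code): the forward pass is usually the model call itself, and the backward pass is generated by autograd inside `loss.backward()`, so there is often no hand-written backward code to find.
```python
import torch
import torch.nn as nn

# Generic PyTorch training step: autograd derives the backward pass
# automatically from the forward computation graph.
model = nn.Linear(4, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs, target = torch.randn(8, 4), torch.randn(8, 2)
output = model(inputs)            # forward propagation
loss = criterion(output, target)
optimizer.zero_grad()
loss.backward()                   # backward propagation via autograd
optimizer.step()                  # parameter update
```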
-
Hi. I am new to Triton and CUDA. From my understanding, when we implement a customized PyTorch operator using CUDA, we need to define both the forward and backward functions so that the gradients are prop…
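That understanding matches how `torch.autograd.Function` works; a minimal sketch, with the kernel launches replaced by plain PyTorch ops since the repository's actual kernels are not shown here:
```python
import torch

class MyOp(torch.autograd.Function):
    """Sketch of a custom op: in a real Triton/CUDA extension, forward and
    backward would each launch a kernel instead of these PyTorch ops."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x               # stand-in for the forward kernel

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # stand-in for the backward kernel

x = torch.randn(4, requires_grad=True)
y = MyOp.apply(x).sum()
y.backward()  # gradients propagate through the custom backward
```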
-
This is a follow-up to an issue I mentioned in the ACTS developer's meeting. I don't think the material is properly accounted for in backward propagation.
The material is often mapped to the repres…
-
Hello, if I want to incorporate KAN into a deep learning model, will backward propagation in PyTorch be able to update the activation functions in KAN? I would appreciate it if you could help me.
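Yes, as long as the activation's coefficients are registered as `nn.Parameter`s, backpropagation computes gradients for them like any other weight. A toy sketch of a learnable activation (a stand-in, not the actual KAN spline code):
```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Toy stand-in for a KAN-style learnable activation: a weighted sum of
    fixed basis functions, with the weights updated by backpropagation."""
    def __init__(self):
        super().__init__()
        self.coeffs = nn.Parameter(torch.tensor([1.0, 0.5]))

    def forward(self, x):
        return self.coeffs[0] * torch.tanh(x) + self.coeffs[1] * x

act = LearnableActivation()
x = torch.randn(8)
loss = act(x).pow(2).sum()
loss.backward()
print(act.coeffs.grad)  # non-None: the activation's shape is trainable
```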
-
I am trying to understand the backward pass in this darknet code. It seems that in the convolution backward pass, the weights_update is related only to the layer the weights belong to, not related…
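That observation matches standard backprop: a layer's weight gradient is built from that layer's own input and the delta arriving from the layer above, so other layers' weights enter only indirectly through that delta. A small PyTorch check (not darknet code) makes this concrete:
```python
import torch
import torch.nn as nn

# Two stacked conv layers: conv1's weight gradient is computed from conv1's
# input and the delta backpropagated through conv2, so conv2's weights affect
# it only via that delta, mirroring darknet's convolution backward pass.
conv1 = nn.Conv2d(1, 4, 3, padding=1)
conv2 = nn.Conv2d(4, 1, 3, padding=1)
x = torch.randn(1, 1, 8, 8)
loss = conv2(conv1(x)).sum()
loss.backward()
print(conv1.weight.grad.shape)  # gradient for conv1's own weights: (4, 1, 3, 3)
```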
-
Hello!
Impressive work!
I'd like to know the meaning of the parameters in the function nt_transfer_step (in ntt.py):
```python
masked_g = grad(self.nt_transfer_loss)(student_net_params, masks, teacher_net_params…
```
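If `grad` here is JAX's `jax.grad`, which the call pattern suggests, it differentiates with respect to the first positional argument by default, so `masked_g` would be the gradient of `nt_transfer_loss` with respect to `student_net_params`, with `masks` and `teacher_net_params` treated as fixed inputs. A standalone illustration:
```python
import jax
import jax.numpy as jnp

def loss_fn(params, mask, target):
    return jnp.sum(((params * mask) - target) ** 2)

params = jnp.array([1.0, 2.0])
mask = jnp.array([1.0, 0.0])
target = jnp.array([0.5, 0.5])

# jax.grad differentiates w.r.t. argument 0 (params) by default.
g = jax.grad(loss_fn)(params, mask, target)
print(g)  # gradient only w.r.t. params; mask and target are held fixed
```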
-
I ran the example code exactly as provided:
```
from calflops import calculate_flops
from torchvision import models

model = models.alexnet()
batch_size = 1
input_shape = (batch_si…
```
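For reference, the complete example as I recall it from the calflops README (keyword arguments may differ between versions):
```python
from calflops import calculate_flops
from torchvision import models

model = models.alexnet()
batch_size = 1
input_shape = (batch_size, 3, 224, 224)  # standard AlexNet input
flops, macs, params = calculate_flops(model=model,
                                      input_shape=input_shape,
                                      output_as_string=True,
                                      output_precision=4)
print("AlexNet FLOPs:%s  MACs:%s  Params:%s" % (flops, macs, params))
```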