Closed · shlee782 closed this 1 year ago
I will try to update the library later so the code can do this without having to repeat a tensor explicitly, but for now the most trivial way to achieve it would be by replacing the offsets variable in the example code with:
```python
# One learnable offset per kernel position (requires_grad=True is the
# nn.Parameter default, so passing it explicitly is redundant but harmless)
offsets = nn.Parameter(torch.ones(1, 1, 1, kernel_size,
                                  device='cuda' if torch.cuda.is_available() else 'cpu',
                                  requires_grad=True))
# Broadcast the same learned offsets to every batch element and time step
offsets = offsets.repeat(batch_size, 1, length, 1)
```
If you want it inside a module class, then you put the first line in your `__init__` function (as `self.offsets = ...`) and the second line in your `forward` function.
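As a minimal sketch of that split (the class name `OffsetHolder` and the shape convention are just illustrative, following the example above):

```python
import torch
import torch.nn as nn

class OffsetHolder(nn.Module):
    def __init__(self, kernel_size):
        super().__init__()
        # nn.Parameter sets requires_grad=True by default
        self.offsets = nn.Parameter(torch.ones(1, 1, 1, kernel_size))

    def forward(self, x):
        batch_size, _, length = x.shape
        # expand the single learned offset set to every batch item / time step
        return self.offsets.repeat(batch_size, 1, length, 1)
```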
Just another comment: this will give you fixed offset parameters at inference time. If you want dynamic, input-dependent offsets, then this is already done for you; see the PackedDeformConv1d class.
Thank you for the prompt and friendly response.
What is the purpose of PackedDeformConv1d? (And what does 'packed' mean?) From my initial understanding, it seems to function as an offset subnetwork within your paper. If my interpretation is incorrect, please provide the accurate details.
Packed just means I have packaged together the offset computation (using conv layers) with the deformable convolutional layer so you don't need to implement your own method of computing the offsets. The packed class includes the offset subnetwork in the red box and the DD-Conv layer in Fig. 3 of my paper. Even if you don't want to use the same offset computation method as me, I would still use this class as a basis for what it sounds like you are trying to do.
Thank you! I'll give it a try.
I would like to embed your DeformConv1d model into my deep learning model as follows:
```python
class MyModel(nn.Module):
    def __init__(self):
        # DeformConv1d, Conv1d, ...

    def forward(self, x):
        x = DeformConv1d(x)
        x = Conv1d(x)
        ...
```
Given that the learnable offsets, represented with `nn.Parameter`, live outside the DeformConv1d class, I am having difficulty incorporating DeformConv1d into my model. Could you please provide guidance on how to achieve this? An illustrative code example would be greatly appreciated.
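One way to sketch this (not the library's official API): the wrapping module owns the offset parameter, and the offsets are re-expanded in `forward`. A plain `nn.Conv1d` stands in for `DeformConv1d` here so the skeleton runs standalone; in practice you would swap in the real layer and pass the offsets to its forward call.

```python
import torch
import torch.nn as nn

class DeformBlock(nn.Module):
    """Owns the learnable offsets so everything lives in one module.

    The real DeformConv1d is assumed to take (x, offsets) in forward;
    a plain Conv1d stand-in is used below so this sketch is runnable.
    """
    def __init__(self, channels, kernel_size):
        super().__init__()
        # one learnable offset per kernel position, broadcast over batch/length
        self.offsets = nn.Parameter(torch.ones(1, 1, 1, kernel_size))
        self.deform = nn.Conv1d(channels, channels, kernel_size,
                                padding="same")  # stand-in for DeformConv1d
        self.conv = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        batch_size, _, length = x.shape
        offsets = self.offsets.repeat(batch_size, 1, length, 1)
        # with the real layer this would be: x = self.deform(x, offsets)
        x = self.deform(x)
        x = self.conv(x)
        return x
```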