vsitzmann / siren

Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
MIT License

Making the frequency a trainable parameter instead of the constant 30? #62

Closed ivanstepanovftw closed 7 months ago

ivanstepanovftw commented 8 months ago

What about making it a trainable (learnable) parameter, such as

import torch
import torch.nn as nn

class Sin(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable frequency (a) and amplitude (b), randomly initialized
        self.a = nn.Parameter(torch.randn(1))
        self.b = nn.Parameter(torch.randn(1))

    def forward(self, input):
        return torch.sin(input * self.a) * self.b

?
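
For context, the SineLayer in this repo computes torch.sin(self.omega_0 * self.linear(input)) with omega_0 = 30 by default, and also uses omega_0 in its weight initialization. A minimal sketch of what I mean, with the frequency registered as a parameter (LearnableSineLayer is a hypothetical name, not part of the repo):

import numpy as np
import torch
import torch.nn as nn

class LearnableSineLayer(nn.Module):
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Register the frequency as a learnable parameter,
        # initialized to the paper's constant 30
        self.omega_0 = nn.Parameter(torch.tensor(omega_0))

        # SIREN-style weight initialization, computed with the initial omega_0
        with torch.no_grad():
            if is_first:
                self.linear.weight.uniform_(-1 / in_features, 1 / in_features)
            else:
                bound = np.sqrt(6 / in_features) / omega_0
                self.linear.weight.uniform_(-bound, bound)

    def forward(self, input):
        return torch.sin(self.omega_0 * self.linear(input))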

ivanstepanovftw commented 7 months ago

With a and b initialized to a fixed value of 1, the parameters did not move far from their initial state during training.

I then tried per-channel parameters:

import torch
import torch.nn as nn

class Sin2d(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Per-channel learnable frequency (a) and amplitude (b),
        # both initialized to 1
        self.a = nn.Parameter(torch.ones(num_channels))
        self.b = nn.Parameter(torch.ones(num_channels))

    def forward(self, input):
        # Reshape a and b so they broadcast against an NCHW input;
        # their shape becomes [1, num_channels, 1, 1]
        a = self.a.view(1, -1, 1, 1)
        b = self.b.view(1, -1, 1, 1)

        # Apply the sine element-wise; a and b broadcast
        # across the batch and spatial dimensions
        return torch.sin(input * a) * b
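
A quick sanity check of the broadcasting (hypothetical shapes):

x = torch.randn(8, 16, 32, 32)  # [batch, channels, height, width]
act = Sin2d(num_channels=16)
y = act(x)
print(y.shape)  # torch.Size([8, 16, 32, 32]); a and b broadcast over batch and spatial dims
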
ivanstepanovftw commented 7 months ago

I now understand the idea behind SIREN. Impressive.