Closed doronator closed 7 months ago
The design is clearly meant to allow choosing among various activations: https://github.com/automl/PFNs/blob/main/pfns/priors/simple_mlp.py#L29

But the simple MLP always ends up using tanh in the forward pass: https://github.com/automl/PFNs/blob/main/pfns/priors/simple_mlp.py#L57

The fix is simply to replace `x = torch.tanh(x)` with `x = self.activation(x)`.
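For illustration, here is a minimal sketch of the pattern (not the actual PFNs code; class and attribute names are hypothetical): the chosen activation is stored in `__init__`, and `forward` must call that stored attribute rather than hard-coding `torch.tanh`.

```python
import torch
import torch.nn as nn

class SimpleMLP(nn.Module):
    """Minimal MLP sketch with a configurable activation (hypothetical names)."""

    def __init__(self, in_dim, hidden_dim, out_dim, activation=torch.tanh):
        super().__init__()
        self.linear1 = nn.Linear(in_dim, hidden_dim)
        self.linear2 = nn.Linear(hidden_dim, out_dim)
        # The constructor stores the chosen activation...
        self.activation = activation

    def forward(self, x):
        x = self.linear1(x)
        # ...so the forward pass must use it here.
        # The bug was hard-coding `x = torch.tanh(x)` instead.
        x = self.activation(x)
        return self.linear2(x)

# With the fix, a non-default activation actually takes effect:
mlp = SimpleMLP(4, 8, 2, activation=torch.relu)
out = mlp(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```

With the hard-coded `torch.tanh`, passing `activation=torch.relu` above would silently have no effect, which is what made the bug easy to miss.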
Good catch! Should be fixed now.

Just tested, it's fixed now :)