# brainpy's current implementation (quoted from the source linked below):
def softplus(x, beta=1, threshold=20):
    x = x.value if isinstance(x, Array) else x
    return jnp.where(x > threshold, x * beta, 1 / beta * jnp.logaddexp(beta * x, 0))
The softplus activation in brainpy is not correct in the linear part above the threshold, whose slope should always be 1. The behavior can be reproduced with the following code:
import matplotlib.pyplot as plt
import brainpy as bp
import brainpy.math as bm

softplus = bp.dnn.Softplus(beta=0.1)  # try different beta values
scale = 5e1
x = bm.linspace(-scale, scale, 20001)
plt.plot(x, softplus(x))
plt.show()
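The mismatch can also be checked numerically without brainpy, using plain NumPy to mirror the formula quoted from the source above (a sketch; function names here are hypothetical, and the defaults follow the quoted signature):

```python
import numpy as np

def brainpy_softplus(x, beta=1.0, threshold=20.0):
    # Mirrors the implementation quoted above: the branch for x > threshold
    # returns x * beta, so its slope is beta rather than 1.
    return np.where(x > threshold, x * beta, 1 / beta * np.logaddexp(beta * x, 0))

def reference_softplus(x, beta=1.0, threshold=20.0):
    # PyTorch-style definition: revert to the identity (slope 1)
    # once beta * x exceeds the threshold.
    return np.where(beta * x > threshold, x, 1 / beta * np.logaddexp(beta * x, 0))

x = np.array([300.0, 400.0])  # deep in the linear region
beta = 0.1
slope_bp = np.diff(brainpy_softplus(x, beta)) / np.diff(x)
slope_ref = np.diff(reference_softplus(x, beta)) / np.diff(x)
print(slope_bp, slope_ref)  # slope comes out as beta (0.1) instead of 1
```

For beta = 0.1 the quoted implementation yields a slope of 0.1 in the linear region, while the reference definition yields 1.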
Hi, brainpy team:
Based on the source code (https://brainpy.readthedocs.io/en/latest/_modules/brainpy/_src/math/activations.html#softplus), the implementation quoted above returns x * beta in the x > threshold branch.
Based on the equation of softplus (https://en.wikipedia.org/wiki/Rectifier_(neural_networks)) and the usage of threshold in PyTorch (https://pytorch.org/docs/stable/generated/torch.nn.Softplus.html#torch.nn.Softplus), it might be corrected as follows:
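One possible fix, sketched under the assumption that brainpy wants to follow the PyTorch convention (switch to the identity when beta * x exceeds the threshold); this is a suggestion, not brainpy's actual patch:

```python
import jax.numpy as jnp

def softplus(x, beta=1, threshold=20):
    # Hypothetical correction: compare beta * x (not x) against the threshold,
    # and return x itself in the linear branch so its slope is 1.
    # brainpy's Array-unwrapping line is omitted here for brevity.
    return jnp.where(beta * x > threshold, x, 1 / beta * jnp.logaddexp(beta * x, 0))
```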
Alternatively, it might just use JAX's softplus directly, if it can be auto-differentiated by brainpy:
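Note that jax.nn.softplus takes no beta or threshold parameters, but the generalized form can be recovered by rescaling, assuming the standard identity softplus_beta(x) = softplus(beta * x) / beta (the wrapper name below is hypothetical):

```python
import jax.numpy as jnp
from jax import grad, nn

def softplus_beta(x, beta=1.0):
    # jax.nn.softplus is numerically stable on its own, so no explicit
    # threshold branch is needed; rescaling recovers the beta variant.
    return nn.softplus(beta * x) / beta

# Gradients flow through automatically under jax.grad:
print(grad(softplus_beta)(50.0, 0.1))  # sigmoid(beta * x), approaching 1 for large x
```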
Best, XiaoyuChen, SJTU