Is it possible to apply an arbitrary activation function between the hidden states of IndRNN in the TensorFlow framework? Currently I don't see any argument similar to `activation` among the keyword arguments (quoted below; a sketch of the update I have in mind follows them):
```
Keyword Arguments:
  kernel_initializer: (optional) the initializer to use for the input
    matrix weights. Defaults to `glorot_uniform`.
  recurrent_initializer: (optional) the initializer to use for the
    recurrent scale weights. Defaults to uniform random in [-0.5, 0.5].
    Note that this initialization scheme is different than in the original
    authors' implementation. See https://github.com/lmnt-com/haste/issues/7
    for details.
  bias_initializer: (optional) the initializer to use for the bias vector.
    Defaults to `zeros`.
  kernel_transform: (optional) a function with signature
    `(kernel: Tensor) -> Tensor` that transforms the kernel before it is
    used. Defaults to the identity function.
  recurrent_transform: (optional) a function with signature
    `(recurrent_scale: Tensor) -> Tensor` that transforms the recurrent
    scale vector before it is used. Defaults to the identity function.
  bias_transform: (optional) a function with signature
    `(bias: Tensor) -> Tensor` that transforms the bias before it is used.
    Defaults to the identity function.
  zoneout: (optional) float, sets the zoneout rate for Zoneout
    regularization. Defaults to 0.
  dtype: (optional) the data type for this layer. Defaults to `tf.float32`.
  name: (optional) string, the name for this layer.
```
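To be concrete, the update I mean is the standard IndRNN recurrence from Li et al. (2018), where an elementwise activation is applied at every step. Here is a minimal single-step sketch (my own illustration, not haste's internals; the shapes are my assumption):

```python
import tensorflow as tf

# IndRNN update: h_t = act(x_t @ W + u * h_{t-1} + b)
# `act` is the elementwise activation I'd like to be able to choose.
# Assumed shapes: x_t [batch, input_dim], W [input_dim, units],
# u and b [units], h_prev [batch, units].
def indrnn_step(x_t, h_prev, W, u, b, act=tf.nn.relu):
    return act(tf.matmul(x_t, W) + u * h_prev + b)
```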
It's not currently possible to apply arbitrary activations. I can point you to the code you'd need to change to add your desired activation function if you'd like.
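In the meantime, one workaround is to implement the recurrence directly in plain TensorFlow with whatever activation you want. A rough sketch (a stand-in for illustration, not haste's fused CUDA implementation, so expect it to be considerably slower; I'm assuming time-major input like haste's layers):

```python
import tensorflow as tf

class SimpleIndRNN(tf.keras.layers.Layer):
    """IndRNN with a configurable activation, written in plain TF ops."""

    def __init__(self, units, activation=tf.nn.relu, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = activation

    def build(self, input_shape):
        input_dim = int(input_shape[-1])
        self.kernel = self.add_weight(
            name='kernel', shape=(input_dim, self.units),
            initializer='glorot_uniform')
        # Per-unit recurrent scale, initialized in [-0.5, 0.5] to match
        # the scheme described in the docstring above.
        self.recurrent_scale = self.add_weight(
            name='recurrent_scale', shape=(self.units,),
            initializer=tf.keras.initializers.RandomUniform(-0.5, 0.5))
        self.bias = self.add_weight(
            name='bias', shape=(self.units,), initializer='zeros')

    def call(self, inputs):
        # inputs: [time, batch, features] (time-major layout).
        def step(h_prev, x_t):
            return self.activation(
                tf.matmul(x_t, self.kernel)
                + self.recurrent_scale * h_prev
                + self.bias)
        batch = tf.shape(inputs)[1]
        h0 = tf.zeros([batch, self.units], dtype=inputs.dtype)
        # Scan over the time axis, threading the hidden state through.
        return tf.scan(step, inputs, initializer=h0)  # [time, batch, units]
```

Usage with an arbitrary activation:

```python
x = tf.random.normal([50, 32, 128])              # [time, batch, features]
layer = SimpleIndRNN(64, activation=tf.math.softplus)
h = layer(x)                                     # [50, 32, 64]
```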