I invented a Mathematical Neuron model, Y = Hz(Σ(weight × input) + bias), with activation of the layers at frequencies of 8 Hz, 6 Hz and 4 Hz. The objective is to enable "scalar learning" (SLM). By reducing the gradient we obtain an exponential transformation of the data; we can even perform the matrix inversion diagonally and work with a sphere of the data volume. Deep Learning in exponential growth!
MIT License
Neural Network with Nonlinear Exponential Activation Function #10
Here's the code for a neural network using the nonlinear exponential function (y = e^(mx + b)) as an activation function in Python with TensorFlow and Keras:
```python
import tensorflow as tf
from tensorflow.keras import layers

# Define the non-linear exponential activation as a custom layer.
# The Dense layer that precedes it computes the affine part (weights
# and bias), so this layer only applies the element-wise exponential.
class ExponentialActivation(layers.Layer):
    def call(self, inputs):
        return tf.exp(inputs)

# Define the neural network
def create_model(input_dim, hidden_dim, output_dim):
    # Input layer
    inputs = layers.Input(shape=(input_dim,))
    # Hidden layer with exponential activation
    hidden = layers.Dense(hidden_dim)(inputs)
    hidden = ExponentialActivation()(hidden)
    # Output layer
    outputs = layers.Dense(output_dim)(hidden)
    # Create the model
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model

# Example usage
model = create_model(10, 5, 1)  # 10 input features, 5 hidden neurons, 1 output neuron

# Compile the model (define loss function and optimizer)
model.compile(loss='mse', optimizer='adam')

# Train the model (replace x_train, y_train with your training data)
model.fit(x_train, y_train, epochs=10)

# Make predictions (replace x_test with your test data)
predictions = model.predict(x_test)
```
Explanation:
Custom Activation Layer: We define a custom layer ExponentialActivation that inherits from layers.Layer. The preceding Dense layer supplies the affine transform (mx + b) via its weights and bias, and this layer applies the element-wise exponential (tf.exp) to the result, giving y = e^(mx + b).
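As a standalone illustration of the same computation (in plain NumPy rather than TensorFlow, with toy weight and bias values chosen purely for this example), the exponential activation composes an affine map with an element-wise exp:

```python
import numpy as np

# Hypothetical weights and bias, for illustration only
x = np.array([[1.0, 2.0]])           # one sample, two features
W = np.array([[0.1, 0.3],
              [0.2, 0.4]])           # 2 inputs -> 2 units
b = np.array([0.05, -0.05])

# Affine transform followed by element-wise exponential: y = e^(xW + b)
z = x @ W + b                        # pre-activations: [[0.55, 1.05]]
y = np.exp(z)                        # each entry is e^(pre-activation)
print(y)
```

Note that every output of this activation is strictly positive, since e^z > 0 for all real z.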
Model Creation: The create_model function defines the network architecture. It takes input dimension, hidden layer dimension, and output dimension as arguments.
Hidden Layer: The hidden layer uses the custom ExponentialActivation layer.
Output Layer: The output layer has output_dim neurons (a single neuron in the example).
Model Compilation: The model is compiled with a mean squared error (mse) loss function and the Adam optimizer.
Training and Prediction: Train the model with your training data (x_train, y_train) and then use it to make predictions on your test data (x_test).
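To make the training and prediction steps above concrete, here is a minimal end-to-end run in which random synthetic arrays stand in for real x_train, y_train, and x_test (the shapes and the use of tf.exp as the Dense activation are assumptions of this sketch, not part of the original code):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Small model matching the example: 10 inputs, 5 hidden units with an
# element-wise exponential activation, 1 output.
inputs = layers.Input(shape=(10,))
hidden = layers.Dense(5, activation=tf.exp)(inputs)
outputs = layers.Dense(1)(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(loss='mse', optimizer='adam')

# Synthetic stand-ins for real training/test data
rng = np.random.default_rng(0)
x_train = rng.normal(size=(64, 10)).astype('float32')
y_train = rng.normal(size=(64, 1)).astype('float32')
x_test = rng.normal(size=(8, 10)).astype('float32')

model.fit(x_train, y_train, epochs=2, verbose=0)
predictions = model.predict(x_test, verbose=0)
print(predictions.shape)  # one prediction per test sample
```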
Important Notes:
This is a simple example. Real-world applications might require more complex architectures and hyperparameter tuning.
The exponential function can be numerically unstable for large values. Consider alternative non-linear activations like leaky ReLU for better training behavior.
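The instability is easy to see concretely: e^x overflows 64-bit floats once x exceeds roughly 709. The sketch below (plain NumPy, with values chosen to trigger overflow) also shows one common mitigation, clipping pre-activations before exponentiating, which is an assumption here and not part of the original code:

```python
import numpy as np

# Large pre-activations overflow: e^1000 is not representable in float64
z = np.array([1.0, 100.0, 1000.0])
with np.errstate(over='ignore'):        # suppress the overflow warning
    y = np.exp(z)
print(y)                                # the last entry is inf

# Possible mitigation (hypothetical): clip pre-activations first
y_safe = np.exp(np.clip(z, None, 50.0))
print(np.isfinite(y_safe).all())
```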