Useful Math functions and libraries:
Matrix Operations:
- `DotProduct(a, b []float64) float64` – Computes the dot product of two vectors.
- `MatMul(a, b [][]float64) [][]float64` – Performs matrix multiplication.
- `Transpose(matrix [][]float64) [][]float64` – Transposes a matrix.
- `MatrixInverse(matrix [][]float64) [][]float64` – Inverts a square matrix (if invertible).
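A rough sketch of how a couple of these could look in Go; the `nnmath` package name is only a placeholder, and shape validation is left out to keep it short:

```go
package nnmath // hypothetical package name, for illustration only

// MatMul multiplies an m×n matrix by an n×p matrix, returning an m×p matrix.
// Shapes are assumed compatible; validation/error handling is omitted here.
func MatMul(a, b [][]float64) [][]float64 {
	m, n, p := len(a), len(b), len(b[0])
	out := make([][]float64, m)
	for i := 0; i < m; i++ {
		out[i] = make([]float64, p)
		for k := 0; k < n; k++ {
			for j := 0; j < p; j++ {
				out[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return out
}

// Transpose returns a new matrix with rows and columns swapped.
func Transpose(matrix [][]float64) [][]float64 {
	rows, cols := len(matrix), len(matrix[0])
	out := make([][]float64, cols)
	for j := 0; j < cols; j++ {
		out[j] = make([]float64, rows)
		for i := 0; i < rows; i++ {
			out[j][i] = matrix[i][j]
		}
	}
	return out
}
```

One design question to settle early: whether these helpers return fresh slices (as above) or mutate their inputs in place.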
Vector Operations:
- `AddVectors(a, b []float64) []float64` – Element-wise vector addition.
- `SubVectors(a, b []float64) []float64` – Element-wise vector subtraction.
- `ScaleVector(scalar float64, vector []float64) []float64` – Scales a vector by a scalar.
- `Norm(vector []float64) float64` – Computes the norm of a vector.
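Sketch for two of the vector helpers, assuming `Norm` means the Euclidean (L2) norm and that both vectors have equal length:

```go
package nnmath // hypothetical package name

import "math"

// AddVectors returns the element-wise sum of a and b.
func AddVectors(a, b []float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = a[i] + b[i]
	}
	return out
}

// Norm returns the Euclidean (L2) norm of the vector.
func Norm(vector []float64) float64 {
	var sum float64
	for _, v := range vector {
		sum += v * v
	}
	return math.Sqrt(sum)
}
```

`SubVectors` and `ScaleVector` would follow the same loop shape.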
Element-wise Operations:
- `ApplyFunction(tensor [][]float64, fn func(float64) float64) [][]float64` – Applies a function to each element in a tensor.
- `ElementWiseAdd(a, b [][]float64) [][]float64` – Element-wise addition for tensors.
- `ElementWiseMultiply(a, b [][]float64) [][]float64` – Element-wise multiplication for tensors.
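`ApplyFunction` could be the workhorse here; a possible shape:

```go
package nnmath // hypothetical package name

// ApplyFunction returns a new tensor with fn applied to every element.
func ApplyFunction(tensor [][]float64, fn func(float64) float64) [][]float64 {
	out := make([][]float64, len(tensor))
	for i, row := range tensor {
		out[i] = make([]float64, len(row))
		for j, v := range row {
			out[i][j] = fn(v)
		}
	}
	return out
}
```

`ElementWiseAdd` and `ElementWiseMultiply` are the same nested loop over two inputs.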
Reduction Operations:
- `Sum(tensor [][]float64, axis int) []float64` – Sums elements along a specific axis.
- `Mean(tensor [][]float64, axis int) []float64` – Computes mean along a specific axis.
- `ArgMax(tensor []float64) int` – Returns the index of the maximum value in a vector.
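For the reductions we need to agree on an axis convention; the sketch below assumes axis 0 reduces down the columns and axis 1 across the rows:

```go
package nnmath // hypothetical package name

// Sum reduces a 2-D tensor along the given axis.
// Assumed convention: axis 0 sums down each column, anything else sums across each row.
func Sum(tensor [][]float64, axis int) []float64 {
	rows, cols := len(tensor), len(tensor[0])
	if axis == 0 {
		out := make([]float64, cols)
		for i := 0; i < rows; i++ {
			for j := 0; j < cols; j++ {
				out[j] += tensor[i][j]
			}
		}
		return out
	}
	out := make([]float64, rows)
	for i := 0; i < rows; i++ {
		for j := 0; j < cols; j++ {
			out[i] += tensor[i][j]
		}
	}
	return out
}

// ArgMax returns the index of the largest value in the vector.
func ArgMax(tensor []float64) int {
	best := 0
	for i, v := range tensor {
		if v > tensor[best] {
			best = i
		}
	}
	return best
}
```

`Mean` is `Sum` divided by the length of the reduced axis.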
Common Functions:
- `Sigmoid(x float64) float64` – Computes the sigmoid activation.
- `ReLU(x float64) float64` – Rectified Linear Unit activation.
- `Tanh(x float64) float64` – Hyperbolic tangent activation.
- `Softmax(vector []float64) []float64` – Computes the softmax over a vector.
- `LeakyReLU(x, alpha float64) float64` – Computes Leaky ReLU.
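Possible versions of a few of these; the only subtle one is `Softmax`, which should subtract the max before exponentiating so that large logits do not overflow:

```go
package activations // hypothetical package name

import "math"

// Sigmoid computes 1 / (1 + e^-x).
func Sigmoid(x float64) float64 {
	return 1.0 / (1.0 + math.Exp(-x))
}

// ReLU returns max(0, x).
func ReLU(x float64) float64 {
	if x > 0 {
		return x
	}
	return 0
}

// Softmax subtracts the max value before exponentiating, which keeps the
// computation numerically stable for large inputs.
func Softmax(vector []float64) []float64 {
	maxV := vector[0]
	for _, v := range vector {
		if v > maxV {
			maxV = v
		}
	}
	out := make([]float64, len(vector))
	var sum float64
	for i, v := range vector {
		out[i] = math.Exp(v - maxV)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}
```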
Derivatives for Backpropagation:
- `SigmoidPrime(x float64) float64` – Derivative of sigmoid.
- `ReLUPrime(x float64) float64` – Derivative of ReLU.
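The derivatives can reuse the forward functions; assuming the same hypothetical `activations` package as above:

```go
package activations // hypothetical package name

// SigmoidPrime is the derivative of the sigmoid, written via Sigmoid itself:
// s(x) * (1 - s(x)).
func SigmoidPrime(x float64) float64 {
	s := Sigmoid(x)
	return s * (1 - s)
}

// ReLUPrime is 1 for positive inputs and 0 otherwise
// (the value at exactly 0 is a convention we would need to pick).
func ReLUPrime(x float64) float64 {
	if x > 0 {
		return 1
	}
	return 0
}
```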
Point-wise Losses:
- `MeanSquaredError(yTrue, yPred []float64) float64` – Mean squared error.
- `CrossEntropy(yTrue, yPred []float64) float64` – Cross-entropy loss.
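Possible loss implementations; the epsilon inside `CrossEntropy` is my assumption, to guard against log(0):

```go
package loss // hypothetical package name

import "math"

// MeanSquaredError averages the squared differences between predictions and targets.
func MeanSquaredError(yTrue, yPred []float64) float64 {
	var sum float64
	for i := range yTrue {
		d := yPred[i] - yTrue[i]
		sum += d * d
	}
	return sum / float64(len(yTrue))
}

// CrossEntropy computes -sum(yTrue * log(yPred + eps)).
func CrossEntropy(yTrue, yPred []float64) float64 {
	const eps = 1e-12
	var sum float64
	for i := range yTrue {
		sum += yTrue[i] * math.Log(yPred[i]+eps)
	}
	return -sum
}
```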
Gradients for Backpropagation:
- `MSEGradient(yTrue, yPred []float64) []float64` – Gradient of MSE loss.
- `CrossEntropyGradient(yTrue, yPred []float64) []float64` – Gradient of cross-entropy loss.
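And the matching gradients with respect to the predictions; the exact scaling (the 2/n factor on MSE) is a choice we would have to agree on:

```go
package loss // hypothetical package name

// MSEGradient is the derivative of MeanSquaredError w.r.t. each prediction:
// 2 * (yPred - yTrue) / n.
func MSEGradient(yTrue, yPred []float64) []float64 {
	n := float64(len(yTrue))
	grad := make([]float64, len(yTrue))
	for i := range yTrue {
		grad[i] = 2 * (yPred[i] - yTrue[i]) / n
	}
	return grad
}

// CrossEntropyGradient is the derivative of CrossEntropy w.r.t. each prediction:
// -yTrue / yPred (epsilon avoids division by zero).
func CrossEntropyGradient(yTrue, yPred []float64) []float64 {
	const eps = 1e-12
	grad := make([]float64, len(yTrue))
	for i := range yTrue {
		grad[i] = -yTrue[i] / (yPred[i] + eps)
	}
	return grad
}
```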
Gradient Operations:
- `ClipGradients(gradients [][]float64, clipValue float64) [][]float64` – Clips gradients to avoid exploding gradients.
- `UpdateWeights(weights [][]float64, gradients [][]float64, learningRate float64) [][]float64` – Updates weights using gradients.
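Sketches for the two gradient utilities:

```go
package optimizers // hypothetical package name

// ClipGradients limits every gradient entry to the range [-clipValue, clipValue].
func ClipGradients(gradients [][]float64, clipValue float64) [][]float64 {
	out := make([][]float64, len(gradients))
	for i, row := range gradients {
		out[i] = make([]float64, len(row))
		for j, g := range row {
			switch {
			case g > clipValue:
				out[i][j] = clipValue
			case g < -clipValue:
				out[i][j] = -clipValue
			default:
				out[i][j] = g
			}
		}
	}
	return out
}

// UpdateWeights applies one step of plain gradient descent: w = w - learningRate*g.
func UpdateWeights(weights, gradients [][]float64, learningRate float64) [][]float64 {
	out := make([][]float64, len(weights))
	for i, row := range weights {
		out[i] = make([]float64, len(row))
		for j, w := range row {
			out[i][j] = w - learningRate*gradients[i][j]
		}
	}
	return out
}
```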
Momentum & RMSProp:
- `MomentumUpdate(velocity, gradients [][]float64, learningRate, momentum float64) [][]float64`
- `RMSPropUpdate(cache, gradients [][]float64, learningRate, decayRate, epsilon float64) [][]float64`
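These two signatures return a single matrix, so how the optimizer state (velocity/cache) and the weight step are split between the return value and in-place mutation is an assumption on my part; one possible reading:

```go
package optimizers // hypothetical package name

import "math"

// MomentumUpdate returns the new velocity v = momentum*v - learningRate*g.
// The caller would then add the returned velocity to the weights.
func MomentumUpdate(velocity, gradients [][]float64, learningRate, momentum float64) [][]float64 {
	out := make([][]float64, len(velocity))
	for i, row := range velocity {
		out[i] = make([]float64, len(row))
		for j, v := range row {
			out[i][j] = momentum*v - learningRate*gradients[i][j]
		}
	}
	return out
}

// RMSPropUpdate updates cache in place (cache = decayRate*cache + (1-decayRate)*g^2)
// and returns the per-weight step -learningRate*g/(sqrt(cache)+epsilon).
func RMSPropUpdate(cache, gradients [][]float64, learningRate, decayRate, epsilon float64) [][]float64 {
	step := make([][]float64, len(cache))
	for i, row := range cache {
		step[i] = make([]float64, len(row))
		for j := range row {
			g := gradients[i][j]
			cache[i][j] = decayRate*cache[i][j] + (1-decayRate)*g*g
			step[i][j] = -learningRate * g / (math.Sqrt(cache[i][j]) + epsilon)
		}
	}
	return step
}
```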
Random Initialization:
- `RandomUniform(rows, cols int, min, max float64) [][]float64` – Generates a tensor with uniform random values.
- `RandomNormal(rows, cols int, mean, stdDev float64) [][]float64` – Generates a tensor with normally distributed random values.
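Random initialization maps directly onto `math/rand`:

```go
package utils // hypothetical package name

import "math/rand"

// RandomUniform fills a rows×cols matrix with values drawn uniformly from [min, max).
func RandomUniform(rows, cols int, min, max float64) [][]float64 {
	out := make([][]float64, rows)
	for i := range out {
		out[i] = make([]float64, cols)
		for j := range out[i] {
			out[i][j] = min + rand.Float64()*(max-min)
		}
	}
	return out
}

// RandomNormal fills a rows×cols matrix with values from a normal distribution
// with the given mean and standard deviation.
func RandomNormal(rows, cols int, mean, stdDev float64) [][]float64 {
	out := make([][]float64, rows)
	for i := range out {
		out[i] = make([]float64, cols)
		for j := range out[i] {
			out[i][j] = mean + rand.NormFloat64()*stdDev
		}
	}
	return out
}
```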
Sampling:
- `Shuffle(data [][]float64) [][]float64` – Shuffles a dataset.
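`Shuffle` can lean on `rand.Shuffle`; this version copies the outer slice so the caller's ordering is not disturbed:

```go
package utils // hypothetical package name

import "math/rand"

// Shuffle returns a copy of the dataset with its rows in random order.
func Shuffle(data [][]float64) [][]float64 {
	out := make([][]float64, len(data))
	copy(out, data)
	rand.Shuffle(len(out), func(i, j int) {
		out[i], out[j] = out[j], out[i]
	})
	return out
}
```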
Numerical Stability:
- `LogSafe(x float64) float64` – Computes the logarithm, avoiding negative infinities (log(x + epsilon)).
- `ExpSafe(x float64) float64` – Computes the exponential with clipping to prevent overflow.
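Something like the following; the epsilon and the clipping threshold are arbitrary choices on my part:

```go
package utils // hypothetical package name

import "math"

const epsilon = 1e-12

// LogSafe computes log(x + epsilon) so that x == 0 does not produce -Inf.
func LogSafe(x float64) float64 {
	return math.Log(x + epsilon)
}

// ExpSafe clips the argument before exponentiating; 700 is an arbitrary cut-off
// safely below math.Log(math.MaxFloat64) ≈ 709.
func ExpSafe(x float64) float64 {
	if x > 700 {
		x = 700
	}
	return math.Exp(x)
}
```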
Miscellaneous:
- `Clip(x, min, max float64) float64` – Clips a value between min and max.
- `Sign(x float64) float64` – Returns the sign of a value (-1, 0, or 1).
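These two are one-liners, sketched here only for completeness:

```go
package utils // hypothetical package name

// Clip restricts x to the closed interval [min, max].
func Clip(x, min, max float64) float64 {
	if x < min {
		return min
	}
	if x > max {
		return max
	}
	return x
}

// Sign returns -1, 0, or 1 depending on the sign of x.
func Sign(x float64) float64 {
	switch {
	case x > 0:
		return 1
	case x < 0:
		return -1
	default:
		return 0
	}
}
```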
Convolutions:
- `Conv2D(input, kernel [][]float64, stride, padding int) [][]float64` – Performs a 2D convolution.
- `MaxPooling2D(input [][]float64, poolSize, stride int) [][]float64` – Performs max pooling.
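A single-channel `Conv2D` sketch. This is the cross-correlation form used by most deep-learning libraries (the kernel is not flipped), and it assumes rectangular, non-empty input and kernel:

```go
package layers // hypothetical package name

// Conv2D slides the kernel over the zero-padded input with the given stride
// and returns the resulting feature map.
func Conv2D(input, kernel [][]float64, stride, padding int) [][]float64 {
	// Zero-pad the input.
	inH, inW := len(input), len(input[0])
	padded := make([][]float64, inH+2*padding)
	for i := range padded {
		padded[i] = make([]float64, inW+2*padding)
	}
	for i := 0; i < inH; i++ {
		for j := 0; j < inW; j++ {
			padded[i+padding][j+padding] = input[i][j]
		}
	}

	// Output size: (H + 2P - K)/S + 1 in each dimension.
	kH, kW := len(kernel), len(kernel[0])
	outH := (inH+2*padding-kH)/stride + 1
	outW := (inW+2*padding-kW)/stride + 1
	out := make([][]float64, outH)
	for i := 0; i < outH; i++ {
		out[i] = make([]float64, outW)
		for j := 0; j < outW; j++ {
			var sum float64
			for ki := 0; ki < kH; ki++ {
				for kj := 0; kj < kW; kj++ {
					sum += padded[i*stride+ki][j*stride+kj] * kernel[ki][kj]
				}
			}
			out[i][j] = sum
		}
	}
	return out
}
```

`MaxPooling2D` is the same sliding-window loop with a max instead of a multiply-accumulate.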
Eigenvalues & Decompositions (for advanced models like PCA):
- `EigenValues(matrix [][]float64) []float64` – Computes eigenvalues.
- `SVD(matrix [][]float64) (u, s, v [][]float64)` – Performs Singular Value Decomposition.

These could live in `math/linear.go`, `math/tensor.go`, `activations.go`, and `loss.go`. We could pick some useful ones to start off with.
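On the EigenValues/SVD point: a general implementation probably wants QR iteration or an external library, but a power-iteration estimate of the dominant eigenvalue is easy to sketch and already useful for a first PCA experiment. `DominantEigenvalue` is a hypothetical name, not one of the functions listed above:

```go
package linalg // hypothetical package name

import "math"

func dot(a, b []float64) float64 {
	var s float64
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}

// DominantEigenvalue estimates the largest-magnitude eigenvalue of a square
// matrix by power iteration with a Rayleigh-quotient estimate. Intended for
// symmetric matrices such as the covariance matrices used in PCA.
func DominantEigenvalue(matrix [][]float64, iters int) float64 {
	n := len(matrix)
	// Start from a unit-length vector.
	v := make([]float64, n)
	for i := range v {
		v[i] = 1 / math.Sqrt(float64(n))
	}
	var lambda float64
	for it := 0; it < iters; it++ {
		// w = matrix * v
		w := make([]float64, n)
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				w[i] += matrix[i][j] * v[j]
			}
		}
		// Rayleigh quotient: v has unit length, so lambda ≈ v · (A v).
		lambda = dot(v, w)
		// Renormalise for the next iteration.
		norm := math.Sqrt(dot(w, w))
		if norm == 0 {
			return 0
		}
		for i := range w {
			w[i] /= norm
		}
		v = w
	}
	return lambda
}
```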
https://www.youtube.com/watch?v=pauPCy_s0Ok&ab_channel=TheIndependentCode is a useful video. Here is my suggestion for the structure:
```mermaid
classDiagram
    class Tensor {
        +float[] data
        +int[] shape
        +apply(func: Function): Tensor
    }
```

```mermaid
classDiagram
    class Layer {
        <<abstract>>
        +forward(input: Tensor): Tensor
        +backward(grad: Tensor, optimizer: Optimizer): Tensor
    }
    class DenseLayer {
        -Tensor weights
        -Tensor biases
        +forward(input: Tensor): Tensor
        +backward(grad: Tensor, optimizer: Optimizer): Tensor
    }
    class ReLU {
        +forward(input: Tensor): Tensor
        +backward(grad: Tensor, optimizer: Optimizer): Tensor
    }
    class Conv2D {
        -Tensor[] kernels
        +forward(input: Tensor): Tensor
        +backward(grad: Tensor, optimizer: Optimizer): Tensor
    }
    class MaxPool {
        +forward(input: Tensor): Tensor
        +backward(grad: Tensor, optimizer: Optimizer): Tensor
    }
    Layer <|-- DenseLayer
    Layer <|-- ReLU
    Layer <|-- Conv2D
    Layer <|-- MaxPool
```

```mermaid
classDiagram
    class Optimizer {
        <<interface>>
        +step(params: Tensor, grads: Tensor): void
    }
    class SGD {
        -float learningRate
        +step(params: Tensor, grads: Tensor): void
    }
    Optimizer <|-- SGD
```
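One way the diagram could translate to Go, using interfaces in place of the abstract class; all names here are illustrative, not final:

```go
package nn // hypothetical package name

// Tensor is a flat data buffer plus a shape, as in the diagram above.
type Tensor struct {
	Data  []float64
	Shape []int
}

// Apply returns a new Tensor with fn applied to every element.
func (t Tensor) Apply(fn func(float64) float64) Tensor {
	out := Tensor{Data: make([]float64, len(t.Data)), Shape: append([]int(nil), t.Shape...)}
	for i, v := range t.Data {
		out.Data[i] = fn(v)
	}
	return out
}

// Optimizer updates parameters in place, given their gradients.
type Optimizer interface {
	Step(params, grads *Tensor)
}

// Layer plays the role of the abstract Layer class; Go has no abstract
// classes, so an interface is the natural fit.
type Layer interface {
	Forward(input Tensor) Tensor
	Backward(grad Tensor, opt Optimizer) Tensor
}

// SGD is the simplest Optimizer: params = params - learningRate * grads.
type SGD struct {
	LearningRate float64
}

func (s SGD) Step(params, grads *Tensor) {
	for i := range params.Data {
		params.Data[i] -= s.LearningRate * grads.Data[i]
	}
}
```

DenseLayer, ReLU, Conv2D and MaxPool would then each be a struct implementing `Layer`.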
Also, you can do this for the package layout:
```bash
mkdir -p neuralnet/pkg/{activations,layers,loss,metrics,models,optimizers,utils,training}
```