Open Kubha99 opened 2 years ago
LR algorithm is missing
This code represents a basic implementation of simple linear regression using gradient descent. Replace X and y with your dataset. The code iterates through multiple epochs, calculating predictions, errors, gradients, and updating weights to minimize the error.
# Simple Linear Regression using Gradient Descent
import numpy as np

# Sample data
X = np.array([1, 2, 3, 4, 5])  # Input feature
y = np.array([3, 4, 2, 5, 6])  # Target variable

# Hyperparameters
learning_rate = 0.01
epochs = 1000

# Initialize slope (weight) and intercept (bias)
m = 0.0  # Initial slope
c = 0.0  # Initial intercept
n = float(len(X))  # Number of samples

# Gradient Descent
for i in range(epochs):
    # Predictions
    y_pred = m * X + c
    # Mean squared error (tracked for monitoring; not used in the update)
    error = np.mean((y_pred - y) ** 2)
    # Gradients of the MSE with respect to m and c
    gradient_m = (-2 / n) * np.sum(X * (y - y_pred))
    gradient_c = (-2 / n) * np.sum(y - y_pred)
    # Update parameters
    m -= learning_rate * gradient_m
    c -= learning_rate * gradient_c

# Final slope and intercept
print("Slope:", m)
print("Intercept:", c)
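
As a sanity check (not part of the original snippet), the gradient-descent result can be compared against the closed-form least-squares fit, e.g. via `np.polyfit`. This sketch reuses the sample data above; the epoch count is raised to 10000 so the iterates converge close to the exact solution:

```python
import numpy as np

X = np.array([1, 2, 3, 4, 5])
y = np.array([3, 4, 2, 5, 6])

# Gradient descent, same update rule as in the snippet above
m, c = 0.0, 0.0
learning_rate, epochs = 0.01, 10000
n = float(len(X))
for _ in range(epochs):
    y_pred = m * X + c
    m -= learning_rate * (-2 / n) * np.sum(X * (y - y_pred))
    c -= learning_rate * (-2 / n) * np.sum(y - y_pred)

# Closed-form least-squares fit (degree-1 polynomial) for comparison
slope_exact, intercept_exact = np.polyfit(X, y, 1)

print("Gradient descent:", m, c)
print("Closed form:     ", slope_exact, intercept_exact)
```

For this data the exact fit is slope 0.7, intercept 1.9, and the gradient-descent estimates should agree to several decimal places, which is a quick way to confirm the update rule is correct.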
I saw that the Linear Regression algorithm was missing under the ML module. Will try to implement it.