Performed Simple Linear Regression from scratch, without using scikit-learn libraries.
Defined Hypothesis Function:
np.dot(X, w): This uses NumPy's np.dot function to perform matrix multiplication between the feature matrix X and the weight vector w, producing the vector of predictions.
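A minimal sketch of what this might look like, assuming X is the (m x n) feature matrix, w is the (n,) weight vector, and the function is named hypothesis to match the call below:

```python
import numpy as np

def hypothesis(X, w):
    # Linear model prediction: (m x n) matrix times (n,) vector -> (m,) predictions
    return np.dot(X, w)
```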
Defined Cost Function:
predictions = hypothesis(X, w): This line calls the hypothesis function to get the predicted y values for the given input data and weights.
errors = predictions - y: The per-example difference between the model's predictions and the actual target values.
(1 / (2 * m)) * np.sum(errors**2): This line calculates the cost, half of the mean squared error (MSE); the factor of 1/2 makes the gradient cleaner.
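Putting those lines together, a sketch of the cost function consistent with the description above (the exact signature is an assumption):

```python
def cost_function(X, y, w):
    m = len(y)                       # number of training examples
    predictions = hypothesis(X, w)   # predicted y values for the current weights
    errors = predictions - y         # residual for each training example
    return (1 / (2 * m)) * np.sum(errors**2)  # half of the mean squared error
```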
Defined a function to calculate gradient descent:
learning_rate: This is a hyperparameter that controls the step size of each weight update. It determines how far the weights move in each iteration along the negative gradient.
iterations: This is an integer value specifying the number of times the gradient descent loop will run.
cost_history = np.zeros(iterations): This line initializes a 1D NumPy array of zeros called cost_history, with one entry per iteration, used to store the cost value after each iteration of gradient descent.
w -= learning_rate * (1/m) * np.dot(X.T, errors): This line is the heart of gradient descent; it updates the weights (w) by stepping against the gradient, which is computed from the errors.
cost_history[iter] = cost_function(X, y, w): This line computes the cost for the current weights (w) using the cost_function defined above and stores it in cost_history at the current iteration index (iter).
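Assembled from the lines above, a sketch of the full gradient descent routine (the signature and return values are assumptions):

```python
def gradient_descent(X, y, w, learning_rate, iterations):
    m = len(y)
    cost_history = np.zeros(iterations)   # cost recorded after each update
    for iter in range(iterations):        # note: 'iter' shadows Python's built-in
        errors = hypothesis(X, w) - y     # residuals under the current weights
        # Gradient of the half-MSE cost: (1/m) * X^T (Xw - y)
        w -= learning_rate * (1/m) * np.dot(X.T, errors)
        cost_history[iter] = cost_function(X, y, w)
    return w, cost_history
```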
Adding Bias Terms, Initializing Weights and Setting Hyperparameters.
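A runnable example of that setup, using a small synthetic dataset (the data, learning rate, and iteration count below are illustrative assumptions):

```python
# Synthetic 1D data: y = 2 + 3x + noise (illustrative only)
rng = np.random.default_rng(0)
X = rng.random((100, 1))
y = 2.0 + 3.0 * X[:, 0] + rng.normal(0, 0.1, 100)

m = X.shape[0]
X = np.hstack([np.ones((m, 1)), X])  # prepend a column of ones as the bias term
w = np.zeros(X.shape[1])             # initialize all weights (bias included) to zero
learning_rate = 0.1                  # illustrative hyperparameters; tune for your data
iterations = 1000

w, cost_history = gradient_descent(X, y, w, learning_rate, iterations)
print(w)  # should land roughly near [2.0, 3.0] for this synthetic data
```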
@darshbaxi Please Review.