Closed Umartahir93 closed 6 years ago
```python
def gradientDescent(X, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)

    for i in range(iters):
        error = (X * theta.T) - y
```

Why are we using the loop above? It's a matrix multiplication, so I don't think we need to loop here; it will always give the same answer. Can you please tell me what the benefit of using a loop here is?
I understand now that it is used so that theta converges. Oops, my bad. It was easy.
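To make that concrete, here is a minimal runnable sketch of batch gradient descent with the update step filled in (the update line is an assumption based on the standard algorithm, not code from this repo, and it uses plain numpy arrays with `@` instead of the exercise's `np.matrix`). The key point: `theta` changes on every pass, so `error = X @ theta - y` produces a different value each iteration, which is exactly why the loop is needed.

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, iters):
    """Batch gradient descent sketch: theta is updated every pass,
    so the error must be recomputed inside the loop."""
    m = X.shape[0]
    cost = np.zeros(iters)
    for i in range(iters):
        error = X @ theta - y                    # depends on the *current* theta
        theta = theta - (alpha / m) * (X.T @ error)  # standard update rule
        cost[i] = (error ** 2).sum() / (2 * m)   # track cost to watch convergence
    return theta, cost

# tiny example: fit y = 2x (bias column + one feature)
X = np.c_[np.ones(5), np.arange(5.0)]
y = 2.0 * np.arange(5.0)
theta, cost = gradient_descent(X, y, np.zeros(2), alpha=0.1, iters=500)
```

If the loop were removed, `error` would be computed once from the initial `theta` and the parameters would never improve; with the loop, `cost` decreases each iteration and `theta` approaches `[0, 2]` here.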