Closed NormanBeta closed 4 years ago
[b, w], losses = gradient_descent(data, initial_b, initial_w, lr, num_iterations)
But the gradient_descent function only returns [b, w].
Got it! Thanks.
Train and optimize for 1000 iterations, then return the optimal w, b and the training loss history over the run.
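For reference, a minimal sketch of a `gradient_descent` that returns both the fitted `[b, w]` and the per-iteration loss history, matching the call signature in the snippet above. The MSE loss and the linear model `y = w*x + b` are assumptions here, since the thread does not show the function body:

```python
import numpy as np

def compute_mse(b, w, data):
    """Mean squared error of the line y = w*x + b over the data points."""
    x, y = data[:, 0], data[:, 1]
    return np.mean((y - (w * x + b)) ** 2)

def gradient_descent(data, initial_b, initial_w, lr, num_iterations):
    """Run gradient descent, recording the loss after every update,
    and return both the final parameters and the loss history."""
    b, w = initial_b, initial_w
    x, y = data[:, 0], data[:, 1]
    n = len(data)
    losses = []
    for _ in range(num_iterations):
        pred = w * x + b
        # Gradients of the MSE loss with respect to b and w
        grad_b = (2.0 / n) * np.sum(pred - y)
        grad_w = (2.0 / n) * np.sum((pred - y) * x)
        b -= lr * grad_b
        w -= lr * grad_w
        losses.append(compute_mse(b, w, data))
    # Returning the loss history alongside [b, w] makes the
    # unpacking `[b, w], losses = gradient_descent(...)` work.
    return [b, w], losses
```

With this version, `[b, w], losses = gradient_descent(data, initial_b, initial_w, lr, num_iterations)` unpacks cleanly, and `losses` can be plotted to visualize the training loss curve.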