Open Whatasmallship opened 3 months ago
https://whatasmallship.github.io/2024/06/17/Principles-of-Large-Scale-hhMachine-Learning-Lecture-4/
Lecture 4: Learning with Gradient Descent. Review: empirical risk minimization and gradient descent. Assign each predictor a d-dimensional parameter vector, i.e., each d-dimensional parameter vector corresponds to one predictor. Empirical risk minimization is then cast as an optimization problem: $\text{minimize } R(h_w)=\frac{1}{n}\sum_{i=1}^{n}L(h_w(x_i),y_i)$ over $w\in\mathbb{R}^d$.
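The excerpt above frames learning as minimizing the empirical risk over a d-dimensional parameter vector by gradient descent. A minimal sketch of that setup, assuming a linear predictor $h_w(x) = w \cdot x$ and squared loss (both illustrative choices, not fixed by the excerpt):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, steps=200):
    """Minimize R(h_w) = (1/n) * sum_i L(h_w(x_i), y_i) by gradient descent,
    with h_w(x) = w . x and L(a, b) = (a - b)^2 (illustrative choices)."""
    n, d = X.shape
    w = np.zeros(d)                       # one d-dimensional vector = one predictor
    for _ in range(steps):
        residual = X @ w - y              # h_w(x_i) - y_i for each example
        grad = (2.0 / n) * (X.T @ residual)  # gradient of the empirical risk
        w -= alpha * grad                 # gradient descent step
    return w

# Usage: recover the true weights on noiseless synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = gradient_descent(X, y)
```

With squared loss the empirical risk is convex in w, so a small enough constant step size drives the iterates to the minimizer.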