
Principles of Large-Scale hhMachine Learning [Lecture 4] | 干点啥好呢 #13

Open Whatasmallship opened 3 months ago

Whatasmallship commented 3 months ago

https://whatasmallship.github.io/2024/06/17/Principles-of-Large-Scale-hhMachine-Learning-Lecture-4/

Lecture 4: Learning with Gradient Descent. Review: empirical risk minimization and gradient descent. Assign each predictor a $d$-dimensional parameter vector, so that each $d$-dimensional parameter vector corresponds to one predictor, and treat empirical risk minimization as an optimization problem:

$$\text{minimize: } R(h_w)=\frac{1}{n}\sum_{i=1}^{n}L(h_w(x_i),y_i)\quad\text{over } w\in\mathbb{R}^d$$
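A minimal sketch of this setup, not taken from the post itself: it assumes a linear predictor $h_w(x)=w^\top x$ and squared loss, and minimizes the empirical risk $R(h_w)$ over $w\in\mathbb{R}^d$ with plain gradient descent. The function names and the step size `alpha` are illustrative choices, not anything specified in the lecture excerpt.

```python
import numpy as np

def empirical_risk(w, X, y):
    """Empirical risk R(h_w) = (1/n) * sum_i L(h_w(x_i), y_i) with squared loss."""
    residuals = X @ w - y
    return np.mean(residuals ** 2)

def risk_gradient(w, X, y):
    """Gradient of the empirical risk with respect to w for the linear/squared-loss case."""
    n = X.shape[0]
    return (2.0 / n) * X.T @ (X @ w - y)

def gradient_descent(X, y, alpha=0.1, steps=1000):
    """Plain gradient descent: w <- w - alpha * grad R(w), starting from w = 0."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = w - alpha * risk_gradient(w, X, y)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # n = 100 examples, d = 3
    w_true = np.array([1.0, -2.0, 0.5])      # hypothetical ground-truth parameters
    y = X @ w_true + 0.01 * rng.normal(size=100)
    w_hat = gradient_descent(X, y)
    print("estimated w:", w_hat, "empirical risk:", empirical_risk(w_hat, X, y))
```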

Whatasmallship commented 3 months ago

test