ssfc / PAT

program for PAT

2019-3-31 #43

Open · ssfc opened 5 years ago

ssfc commented 5 years ago

1. Andrew Ng's Machine Learning;
2. Ep 1-1: introduction; (2018-12-24)
3. Ep 1-2: applications of machine learning; (2018-12-24)
4. Ep 1-3: tools and algorithms; (2018-12-24) Reading: Tom Mitchell provides a more modern definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." (2019-3-23)
5. Ep 1-4: supervised learning; examples; house price prediction; (2018-12-24) Reading: In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output. Supervised learning problems are categorized into "regression" and "classification" problems. (2019-3-23)
6. Ep 1-5: unsupervised learning; cocktail party algorithm; (2019-3-22) Reading: We can derive this structure by clustering the data based on relationships among the variables in the data. (2019-3-23)

------------------------------- passed quiz --------------------------------

7. Ep 2-1: training set; (2018-12-26) Reading: regression problem; classification problem; (2019-3-22)
8. Ep 2-2: cost function; hypothesis; (2018-12-26) Reading: This function is otherwise called the "Squared error function", or "Mean squared error"; (see the code sketch after this list)
9. Ep 2-3: cost function 1: simplified example; (2019-3-18) Reading: The best possible line will be such that the average squared vertical distances of the scattered points from the line will be the least. (2019-3-23)
10. Ep 2-4: cost function 2; (2019-3-18) Reading: A contour plot is a graph that contains many contour lines. (2019-3-23)
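A minimal sketch of the squared-error cost function from item 8, assuming the univariate hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$; the function name `compute_cost` and the toy data are illustrative assumptions, not taken from the course materials.

```python
import numpy as np

def compute_cost(x, y, theta0, theta1):
    """Return J(theta0, theta1) = 1/(2m) * sum((h(x_i) - y_i)^2)."""
    m = len(y)                            # number of training examples
    predictions = theta0 + theta1 * x     # h(x) for every example
    squared_errors = (predictions - y) ** 2
    return squared_errors.sum() / (2 * m)

# Tiny made-up house-price-style example (illustrative data only):
x = np.array([1.0, 2.0, 3.0])            # feature, e.g. house size
y = np.array([2.0, 4.0, 6.0])            # target, e.g. price
print(compute_cost(x, y, 0.0, 2.0))      # perfect fit -> cost 0.0
print(compute_cost(x, y, 0.0, 1.0))      # worse fit -> larger cost
```

Plotting this cost over a grid of (theta0, theta1) values is what produces the contour plot mentioned in item 10.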

ssfc commented 5 years ago

11. Ep 2-5: gradient descent; gradient descent algorithm; correct/incorrect; (2019-3-18) Reading: The way we do this is by taking the derivative (the tangential line to a function) of our cost function. (2019-3-23)
12. Ep 2-6: gradient descent intuition; gradient descent algorithm; (2019-3-18) Reading: When the slope is negative, the value of $\theta_1$ increases, and when it is positive, the value of $\theta_1$ decreases. (2019-3-23)
13. Ep 2-7: gradient descent for linear regression; linear regression model; batch gradient descent; (2019-3-19) Reading: When specifically applied to the case of linear regression, a new form of the gradient descent equation can be derived. (2019-3-23) (See the code sketch after this list.)
14. Ep 3-1: matrices and vectors; matrix, rectangular array of numbers; matrix element; (2019-3-19)
15. Ep 3-2: addition and scalar multiplication; matrix addition; scalar multiplication; (2019-3-19)
16. Ep 3-3: matrix-vector multiplication; example; (2019-3-19)
17. Ep 3-4: matrix-matrix multiplication; having 3 competing hypotheses; (2019-3-19)
18. Ep 3-5: matrix multiplication properties; identity matrix; (2019-3-19)
19. Ep 3-6: inverse and transpose; matrix inverse; matrix transpose; (2019-3-19)
20. Ep 4-1: multiple features; hypothesis; (2019-3-19) Reading: Linear regression with multiple variables is also known as "multivariate linear regression". (2019-3-23)
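A minimal sketch of batch gradient descent applied to (multivariate) linear regression, as in items 13 and 20, using the vectorized update $\theta := \theta - \frac{\alpha}{m} X^T(X\theta - y)$; the function name, learning rate, and iteration count are illustrative assumptions, not from the course code.

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, num_iters=1000):
    """Run batch gradient descent on h(x) = X @ theta and return theta.

    X is an m x (n+1) design matrix whose first column is all ones
    (the intercept term); y is an m-vector of targets.
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(num_iters):
        predictions = X @ theta                    # h(x) for all m examples
        gradient = (X.T @ (predictions - y)) / m   # partial derivatives of J
        theta -= alpha * gradient                  # simultaneous update of all theta_j
    return theta

# Tiny usage example: y = 1 + 2*x, so theta should approach [1, 2].
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])   # add the intercept column of ones
y = 1.0 + 2.0 * x
print(gradient_descent(X, y, alpha=0.1, num_iters=5000))
```

The same vectorized update works unchanged for the multiple-feature case in item 20: only the number of columns in X (and hence the length of theta) grows.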