chenghong-lin-nu / blog

Personal tech blog; the posts are written as issues.

DLND-Week2 #2

Open chenghong-lin-nu opened 6 years ago

chenghong-lin-nu commented 6 years ago

Math used in Deep Learning

Matrix math and NumPy review

Matrix multiplication (matrix product)

Matrix transpose

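A quick NumPy sketch of both operations (the array names are just for illustration):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
B = np.array([[1, 2],
              [3, 4],
              [5, 6]])      # shape (3, 2)

# Matrix product: the inner dimensions must match; the result is (2, 2)
print(np.matmul(A, B))      # A @ B is equivalent for 2-D arrays

# Matrix transpose: rows become columns, shape (3, 2)
print(A.T)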

chenghong-lin-nu commented 6 years ago

Intro to neural networks

Logistic Regression


Neural network diagram


Perceptron

Weights

Weighted sum of the inputs

Computing the activation function's output

Heaviside step function (unit step function)


Bias term

Full perceptron formula

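In its standard form (inputs x_i, weights w_i, bias b, Heaviside activation f):

\hat{y} = f\Bigl(\sum_i w_i x_i + b\Bigr),
\qquad
f(h) = \begin{cases} 1 & \text{if } h \ge 0 \\ 0 & \text{otherwise} \end{cases}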

AND perceptron

OR perceptron

XOR perceptron (not linearly separable, so no single perceptron can represent it; it takes a multilayer network)
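A minimal sketch of an AND perceptron; the weights and bias below are just one of many choices that work (an OR perceptron only needs a less negative bias, e.g. -0.5, while no single choice of weights and bias realizes XOR):

import numpy as np

def heaviside(h):
    """Unit step activation: 1 if h >= 0, else 0."""
    return int(h >= 0)

# One choice of weights and bias that implements AND:
# the weighted sum plus bias only reaches 0 when both inputs are 1.
weights = np.array([1.0, 1.0])
bias = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    output = heaviside(np.dot(weights, x) + bias)
    print(x, '->', output)   # only (1, 1) -> 1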

chenghong-lin-nu commented 6 years ago

The simplest neural network

Sigmoid (logistic) activation function

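The sigmoid and its derivative (used by all the code below):

\sigma(x) = \frac{1}{1 + e^{-x}},
\qquad
\sigma'(x) = \sigma(x)\,\bigl(1 - \sigma(x)\bigr)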

Gradient Descent

Learning how to find the weights


Gradient

chenghong-lin-nu commented 6 years ago

Gradient descent: the math

  1. First, we need to choose a measure of the prediction error.

    • The simplest choice is actual target minus prediction, but then the sign of the error is inconsistent.
    • To make every error positive we can square it. (Why not use the absolute value? Because squaring penalizes outliers much more heavily, while small errors get only a small penalty.)
    • This is still only the error of a single prediction, though.
  2. The overall error over the whole dataset is the sum of squared errors (SSE); see the equations sketched after this list.

    • The SSE measures how well the neural network predicts: the higher it is, the worse the predictions; the lower, the better.
    • The factor of 1/2 is added only to simplify the derivative later on.
    • ŷ (y-hat) denotes the predicted value.
    • The weights adjust the prediction -> which changes the overall error -> so our goal is to find the weight values that minimize the error.
    • Picture the error surface as a bowl: we want the E at the bottom of the bowl (the w that gives the smallest E).
    • Each step moves in the direction opposite to the gradient (the slope).
    • The weight update Δw_i is proportional to the negative of the gradient, and the learning rate sets the size of each gradient descent step.
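Written out for a single output unit (μ indexes the data points, η is the learning rate), the error and the update rule described above are:

E = \frac{1}{2}\sum_{\mu}\bigl(y^{\mu} - \hat{y}^{\mu}\bigr)^{2},
\qquad
\hat{y}^{\mu} = f\Bigl(\sum_i w_i x_i^{\mu}\Bigr)

w_i \leftarrow w_i + \Delta w_i,
\qquad
\Delta w_i = \eta\,\delta\,x_i,
\qquad
\delta = (y - \hat{y})\,f'(h),
\qquad
h = \sum_i w_i x_i

This is exactly what the code in the next comments computes.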

Calculus derivatives: the chain rule

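Applied to the squared error, the chain rule gives the gradient that the update above follows:

\frac{\partial E}{\partial w_i}
= \frac{\partial E}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial h} \cdot \frac{\partial h}{\partial w_i}
= -\,(y - \hat{y})\, f'(h)\, x_i

so the downhill step is \Delta w_i = -\eta\,\partial E/\partial w_i = \eta\,(y - \hat{y})\,f'(h)\,x_i.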

chenghong-lin-nu commented 6 years ago

Gradient descent: code

Implementing one gradient descent step to update the weights


import numpy as np
# Gradient descent: code
# f(h) is the sigmoid

# Define the sigmoid activation function
def sigmoid(x):
    return 1/(1+np.exp(-x))

# Derivative of the sigmoid activation function
def sigmoid_prime(x):
    return np.exp(-x)*(1+np.exp(-x))**(-2)

# Input data
x = np.array([0.1, 0.3])
print(x)

# Target
y = 0.2

# Weights
weights = np.array([-0.8, 0.5])
print(weights)

# Learning rate for the weight update
learnrate = 0.5

# Linear combination of inputs and weights
h = np.dot(x, weights)

# Neural network output (y-hat)
nn_output = sigmoid(h)

# Output error
error = y - nn_output

# Output gradient (f'(h))
output_grad = sigmoid_prime(h)

# Error term (lowercase delta)
error_term = error * output_grad

# Gradient descent step (delta w_i)
# Calculate the change in weights
del_w = [learnrate * error_term * x[0],
         learnrate * error_term * x[1]]
print(del_w)
chenghong-lin-nu commented 6 years ago

Implementing gradient descent

Code

import numpy as np
from data_prep import features, targets, features_test, targets_test

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

# TODO: We haven't provided the sigmoid_prime function like we did in
#       the previous lesson to encourage you to come up with a more
#       efficient solution. If you need a hint, check out the comments
#       in solution.py from the previous lecture.

# Use the same seed to make debugging easier
np.random.seed(42)

n_records, n_features = features.shape
last_loss = None

# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)

# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5

for e in range(epochs):
    del_w = np.zeros(weights.shape)
    for x, y in zip(features.values, targets):
        # Loop through all records, x is the input, y is the target

        # Note: We haven't included the h variable from the previous
        #       lesson. You can add it if you want, or you can calculate
        #       the h together with the output

        # TODO: Calculate the output
        output = sigmoid(np.dot(x,weights))

        # TODO: Calculate the error
        error = y - output

        # TODO: Calculate the error term
        error_term = error * (1 - output) * output

        # TODO: Calculate the change in weights for this sample
        #       and add it to the total weight change
        del_w += error_term * x 

    # TODO: Update weights using the learning rate and the average change in weights
    weights += learnrate * del_w / n_records

    # Printing out the mean square error on the training set
    if e % (epochs / 10) == 0:
        out = sigmoid(np.dot(features, weights))
        loss = np.mean((out - targets) ** 2)
        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss

# Calculate accuracy on test data
tes_out = sigmoid(np.dot(features_test, weights))
predictions = tes_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
chenghong-lin-nu commented 6 years ago

Multilayer perceptron

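In matrix form, the forward pass that the code below performs (W^{i→h} maps inputs to hidden units, W^{h→o} maps hidden units to outputs):

\mathbf{h} = \sigma\!\bigl(X\,W^{i\to h}\bigr),
\qquad
\mathbf{o} = \sigma\!\bigl(\mathbf{h}\,W^{h\to o}\bigr)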

import numpy as np

def sigmoid(x):
    """
    calculate sigmoid
    """
    return 1/(1+np.exp(-x))

# Network size
N_input = 4
N_hidden = 3
N_output = 2

np.random.seed(42)
# Make some fake data
X = np.random.randn(4)

# scale is the standard deviation; the first argument (0) is the mean
# the last argument (size) is the output shape: N_input x N_hidden
# both weight matrices below are randomly initialized
weights_input_to_hidden = np.random.normal(0, scale=0.1, size=(N_input, N_hidden))
weights_hidden_to_output = np.random.normal(0, scale=0.1, size=(N_hidden, N_output))

# TODO: Make a forward pass through the network

hidden_layer_in = np.dot(X,weights_input_to_hidden)
hidden_layer_out = sigmoid(hidden_layer_in)

print('Hidden-layer Output:')
print(hidden_layer_out)

output_layer_in = np.dot(hidden_layer_out,weights_hidden_to_output)
output_layer_out = sigmoid(output_layer_in)

print('Output-layer Output:')
print(output_layer_out)
chenghong-lin-nu commented 6 years ago

Backpropagation


Steps

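The standard error terms being propagated backwards (one output unit, j indexes the hidden units, a_j is the hidden layer's output), which the worked example below computes step by step:

\delta^{o} = (y - \hat{y})\,f'(h^{o}),
\qquad
\delta^{h}_{j} = \delta^{o}\,w_{j}\,f'(h_{j})

\Delta w_{j} = \eta\,\delta^{o}\,a_{j},
\qquad
\Delta w_{ij} = \eta\,\delta^{h}_{j}\,x_{i}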

Worked example

Code

import numpy as np

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

x = np.array([0.5, 0.1, -0.2])
target = 0.6
learnrate = 0.5

weights_input_hidden = np.array([[0.5, -0.6],
                                 [0.1, -0.2],
                                 [0.1, 0.7]])

weights_hidden_output = np.array([0.1, -0.3])

## Forward pass
hidden_layer_input = np.dot(x, weights_input_hidden)
hidden_layer_output = sigmoid(hidden_layer_input)

#print(hidden_layer_output)
#print(weights_hidden_output)

output_layer_in = np.dot(hidden_layer_output, weights_hidden_output)
output = sigmoid(output_layer_in)

## Backwards pass
## TODO: Calculate output error
error = target - output

# TODO: Calculate error term for output layer
# f'(output_layer_in) = output * (1 - output), since output = sigmoid(output_layer_in)
output_error_term = error * output * (1 - output)

# TODO: Calculate error term for hidden layer
hidden_error_term = weights_hidden_output * output_error_term * hidden_layer_output * (1 - hidden_layer_output)
# print(hidden_error_term)
# print(str(weights_hidden_output)+","+str(output_error_term)+","+str(hidden_layer_output)+","+str(1-hidden_layer_output))

# TODO: Calculate change in weights for hidden layer to output layer
delta_w_h_o = learnrate * output_error_term * hidden_layer_output

# TODO: Calculate change in weights for input layer to hidden layer
# x.T has shape (3, 1) and hidden_error_term has shape (2,), so broadcasting
# yields a (3, 2) matrix that matches weights_input_hidden
x = x.reshape(1, 3)
delta_w_i_h = learnrate * hidden_error_term * x.T

print('Change in weights for hidden layer to output layer:')
print(delta_w_h_o)
print('Change in weights for input layer to hidden layer:')
print(delta_w_i_h)
chenghong-lin-nu commented 6 years ago

Implementing Backpropagation

Code

import numpy as np
from data_prep import features, targets, features_test, targets_test

np.random.seed(21)

def sigmoid(x):
    """
    Calculate sigmoid
    """
    return 1 / (1 + np.exp(-x))

# Hyperparameters
n_hidden = 2  # number of hidden units
epochs = 900
learnrate = 0.005

# m in the update formula is the number of data points,
# i.e. n_records below (used to average the weight updates)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
                                        size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
                                         size=n_hidden)

for e in range(epochs):
    del_w_input_hidden = np.zeros(weights_input_hidden.shape)
    del_w_hidden_output = np.zeros(weights_hidden_output.shape)
    for x, y in zip(features.values, targets):
        ## Forward pass ##
        # TODO: Calculate the output
        hidden_input = np.dot(x,weights_input_hidden)
        hidden_output = sigmoid(hidden_input)
        output = sigmoid(np.dot(hidden_output, weights_hidden_output))

        ## Backward pass ##
        # TODO: Calculate the network's prediction error
        error = y - output

        # TODO: Calculate error term for the output unit
        output_error_term = error * (1-output) * output

        ## propagate errors to hidden layer

        # TODO: Calculate the hidden layer's contribution to the error
        hidden_error = weights_hidden_output * output_error_term 

        # TODO: Calculate the error term for the hidden layer
        hidden_error_term = hidden_error * (1-hidden_output) * hidden_output

        # TODO: Update the change in weights
        del_w_hidden_output += output_error_term * hidden_output
        del_w_input_hidden += x[:,None] * hidden_error_term

    # TODO: Update weights
    weights_input_hidden += learnrate * del_w_input_hidden / n_records
    weights_hidden_output += learnrate * del_w_hidden_output / n_records

    # Printing out the mean square error on the training set
    if e % (epochs / 10) == 0:
        # Forward pass over the full training set
        hidden_output = sigmoid(np.dot(features, weights_input_hidden))
        out = sigmoid(np.dot(hidden_output,
                             weights_hidden_output))
        loss = np.mean((out - targets) ** 2)

        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss

# Calculate accuracy on test data
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))