
# DLND-Week3 #6


chenghong-lin-nu commented 6 years ago

## Intro to Tensorflow

### Session

```python
import tensorflow as tf

# Create TensorFlow object called hello_constant
hello_constant = tf.constant('Hello World!')

with tf.Session() as sess:
    # Run the tf.constant operation in the session
    output = sess.run(hello_constant)
    print(output)
```

### Feeding data into Tensorflow
#### tf.placeholder() & feed_dict
- The reason for using these is to allow a TensorFlow model to take in different datasets with different parameters.
![image](https://user-images.githubusercontent.com/16880879/36352015-2216575c-14ed-11e8-97d2-e509f126f743.png)
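A minimal sketch of the `tf.placeholder()` + `feed_dict` pattern (the string value is just illustrative):

```python
import tensorflow as tf

# x has no value until the session feeds one in via feed_dict
x = tf.placeholder(tf.string)

with tf.Session() as sess:
    # feed_dict maps each placeholder to the value it should take for this run
    output = sess.run(x, feed_dict={x: 'Hello World'})
    print(output)  # Hello World
```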

### TensorFlow Math
- Addition: `tf.add()` takes in two numbers, two tensors, or one of each, and returns their sum as a tensor.
```python
x = tf.add(5, 2)  # 7
```
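Subtraction and multiplication follow the same pattern; a quick sketch using the TF 1.x op names `tf.subtract` and `tf.multiply`:

```python
x = tf.subtract(10, 4)  # 6
y = tf.multiply(2, 5)   # 10
```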
chenghong-lin-nu commented 6 years ago

## Classification

### Supervised Classification

### Logistic Classifier (also called a Linear Classifier)


### tf.Variable()

```python
x = tf.Variable(5)
```

### tf.truncated_normal()
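`tf.truncated_normal()` draws random values from a truncated normal distribution, which is a common way to initialize weights; a sketch (the shapes are illustrative):

```python
n_features = 120
n_labels = 5

# Weights initialized with small random values centered on 0
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
```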

### tf.zeros()

```python
n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))  # [ 0.  0.  0.  0.  0.]
```
chenghong-lin-nu commented 6 years ago

## Activation Functions

### ReLU and Softmax Activation Functions

#### Rectified Linear Units

#### Softmax

#### TensorFlow Softmax

```python
x = tf.nn.softmax([2.0, 1.0, 0.2])
```
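To actually see the resulting probabilities, the op has to be run in a session; a minimal sketch:

```python
with tf.Session() as sess:
    # The three values sum to 1
    print(sess.run(x))  # approximately [0.652, 0.240, 0.108]
```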

### One-Hot Encoding

```python
import numpy as np
from sklearn import preprocessing

# Example labels
labels = np.array([1, 5, 3, 2, 1, 4, 2, 1, 3])

# Create the encoder
lb = preprocessing.LabelBinarizer()

# Here the encoder finds the classes and assigns one-hot vectors
lb.fit(labels)

# And finally, transform the labels into one-hot encoded vectors
lb.transform(labels)
# array([[1, 0, 0, 0, 0],
#        [0, 0, 0, 0, 1],
#        [0, 0, 1, 0, 0],
#        [0, 1, 0, 0, 0],
#        [1, 0, 0, 0, 0],
#        [0, 0, 0, 1, 0],
#        [0, 1, 0, 0, 0],
#        [1, 0, 0, 0, 0],
#        [0, 0, 1, 0, 0]])
```

### Categorical Cross-Entropy

### Cross Entropy in TensorFlow
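A sketch of cross entropy in TF 1.x using `tf.reduce_sum()` and `tf.log()` (the placeholder names and the data values are illustrative):

```python
softmax_data = [0.7, 0.2, 0.1]
one_hot_data = [1.0, 0.0, 0.0]

softmax = tf.placeholder(tf.float32)
one_hot = tf.placeholder(tf.float32)

# D(S, L) = -sum(L * log(S))
cross_entropy = -tf.reduce_sum(tf.multiply(one_hot, tf.log(softmax)))

with tf.Session() as sess:
    output = sess.run(cross_entropy,
                      feed_dict={softmax: softmax_data, one_hot: one_hot_data})
    print(output)  # approximately 0.357, i.e. -log(0.7)
```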

### Minimize Cross Entropy

### Measuring Performance

### SGD (Stochastic Gradient Descent)
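A toy sketch of the TF 1.x optimizer API (`tf.train.GradientDescentOptimizer`); the objective here is just illustrative, and the same `minimize()` op becomes SGD once it is run on mini-batches:

```python
import tensorflow as tf

# Toy objective: minimize (w - 3)^2 with plain gradient descent
w = tf.Variable(0.0)
loss = tf.square(w - 3.0)

train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(w))  # close to 3.0
```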

### Momentum & Learning Rate Decay
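A sketch of both ideas in TF 1.x, using `tf.train.exponential_decay` for the schedule and `tf.train.MomentumOptimizer` for momentum (all hyperparameter values are illustrative):

```python
global_step = tf.Variable(0, trainable=False)

# Decay the learning rate by a factor of 0.96 every 1000 steps
learning_rate = tf.train.exponential_decay(0.1, global_step,
                                           decay_steps=1000, decay_rate=0.96)

optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
# Passing global_step to minimize() advances the step counter each update:
# train_op = optimizer.minimize(loss, global_step=global_step)
```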

### Mini-batching
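A sketch of the usual mini-batching helper (pure Python, not tied to TensorFlow; the names and example data are illustrative):

```python
def batches(batch_size, features, labels):
    """Split features and labels into batches of at most batch_size samples."""
    assert len(features) == len(labels)
    output = []
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        output.append([features[start:end], labels[start:end]])
    return output

# 4 samples with batch_size 3 -> one batch of 3 and one leftover batch of 1
example_features = [['F11', 'F12'], ['F21', 'F22'], ['F31', 'F32'], ['F41', 'F42']]
example_labels = [['L11', 'L12'], ['L21', 'L22'], ['L31', 'L32'], ['L41', 'L42']]
print(len(batches(3, example_features, example_labels)))  # 2
```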

### Lab: TensorFlow Neural Network

chenghong-lin-nu commented 6 years ago

## Deep Neural Networks

### TensorFlow ReLUs
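A sketch of a ReLU hidden layer feeding a linear output layer (all shapes and values are illustrative):

```python
import tensorflow as tf

# Toy input and parameters
features = tf.constant([[1.0, -2.0, 3.0]])
hidden_weights = tf.Variable(tf.truncated_normal([3, 4]))
hidden_biases = tf.Variable(tf.zeros(4))
output_weights = tf.Variable(tf.truncated_normal([4, 2]))
output_biases = tf.Variable(tf.zeros(2))

# Hidden layer: linear transform followed by ReLU
hidden_layer = tf.nn.relu(tf.add(tf.matmul(features, hidden_weights), hidden_biases))

# Output layer stays linear (these are the logits)
logits = tf.add(tf.matmul(hidden_layer, output_weights), output_biases)
```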

### Deep Neural Network in TensorFlow

### Training a Deep Learning Network

### Save and Restore TensorFlow Models

### Saving Variables

```python
import tensorflow as tf

# The file path to save the data
save_file = './model.ckpt'

# Two Tensor Variables: weights and bias
weights = tf.Variable(tf.truncated_normal([2, 3]))
bias = tf.Variable(tf.truncated_normal([3]))

# Class used to save and/or restore Tensor Variables
saver = tf.train.Saver()

with tf.Session() as sess:
    # Initialize all the Variables
    sess.run(tf.global_variables_initializer())

    # Show the values of weights and bias
    print('Weights:')
    print(sess.run(weights))
    print('Bias:')
    print(sess.run(bias))

    # Save the model
    saver.save(sess, save_file)
```
### Loading Variables
```python
# Remove the previous weights and bias
tf.reset_default_graph()

# Two Variables: weights and bias
weights = tf.Variable(tf.truncated_normal([2, 3]))
bias = tf.Variable(tf.truncated_normal([3]))

# Class used to save and/or restore Tensor Variables
saver = tf.train.Saver()

with tf.Session() as sess:
    # Load the weights and bias
    saver.restore(sess, save_file)

    # Show the values of weights and bias
    print('Weight:')
    print(sess.run(weights))
    print('Bias:')
    print(sess.run(bias))

```

### Name Error
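TensorFlow auto-names Variables in creation order ('Variable:0', 'Variable_1:0', ...), so restoring into a graph whose Variables were created in a different order fails with a name mismatch; passing an explicit `name` avoids this. A sketch, assuming `save_file` from above:

```python
tf.reset_default_graph()

# Explicit names make the checkpoint independent of Variable creation order
bias = tf.Variable(tf.truncated_normal([3]), name='bias_0')
weights = tf.Variable(tf.truncated_normal([2, 3]), name='weights_0')

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, save_file)
    # Restoring matches Variables by name, not by creation order
    saver.restore(sess, save_file)
```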

### Regularization

#### L2 Regularization

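L2 regularization adds the sum of squared weights to the loss; in TF 1.x this is commonly written with `tf.nn.l2_loss()` (a sketch; `beta` and the stand-in data loss are assumptions):

```python
beta = 0.01  # regularization strength (illustrative)

weights = tf.Variable(tf.truncated_normal([2, 3]))
data_loss = tf.placeholder(tf.float32)  # stand-in for the cross-entropy term

# tf.nn.l2_loss(w) computes sum(w ** 2) / 2
loss = data_loss + beta * tf.nn.l2_loss(weights)
```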

### Dropout

#### TensorFlow Dropout


```python
# keep_prob is fed at run time so the drop rate can differ between train and test
keep_prob = tf.placeholder(tf.float32)  # probability to keep units

hidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])
hidden_layer = tf.nn.relu(hidden_layer)
hidden_layer = tf.nn.dropout(hidden_layer, keep_prob)

logits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])
```

- `tf.nn.dropout()` takes two parameters:
  1. `hidden_layer`: the tensor to which you would like **to apply dropout**
  2. `keep_prob`: **the probability of keeping (i.e. not dropping) any given unit** (keep_prob lets you adjust how many units are dropped)
- During **training**, a good starting value for keep_prob is 0.5.
- During **testing**, use a keep_prob value of 1.0 to keep all units and maximize the power of the model; see the sketch below.
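A sketch of feeding keep_prob per run, assuming `features` is a tf.placeholder and `train_features`/`test_features` are hypothetical input arrays:

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training: drop roughly half of the hidden units
    sess.run(logits, feed_dict={features: train_features, keep_prob: 0.5})

    # Testing: keep every unit
    sess.run(logits, feed_dict={features: test_features, keep_prob: 1.0})
```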