changtimwu opened this issue 6 years ago
Transfer learning works — see "Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning?"
Always start from the basics.
The magic of transfer learning is that lower layers that have been trained to distinguish between some objects can be reused for many recognition tasks without any alteration.
Bottleneck is an informal term we often use for the layer just before the final output layer that actually does the classification.
The reason retraining only the final layer can work on new classes is that the kind of information needed to distinguish between all 1,000 ImageNet classes often turns out to be useful for distinguishing new kinds of objects as well.
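The idea above can be sketched without TensorFlow at all: freeze the network, treat its bottleneck outputs as fixed feature vectors, and train only a small softmax classifier on top. The data here is random and the sizes (2048-dim features, 3 classes) are illustrative, not from the original script.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for bottleneck features of 30 images (Inception V3 emits 2048 floats).
X = rng.normal(size=(30, 2048))
y = rng.integers(0, 3, size=30)          # 3 hypothetical new classes
onehot = np.eye(3)[y]

# Only these weights are trained; the network that produced X stays frozen.
W = np.zeros((2048, 3))
b = np.zeros(3)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                      # plain gradient descent on cross-entropy
    p = softmax(X @ W + b)
    W -= 0.1 * (X.T @ (p - onehot)) / len(X)
    b -= 0.1 * (p - onehot).mean(axis=0)

probs = softmax(X @ W + b)
```

Because the bottleneck features are computed once and cached, this final-layer training is cheap compared with training the whole network.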
codelab https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/
testimonies
It's worth reading how they adjust their test set: "Then we run every image of our dataset (even the images we just used to train!) through the network and keep track of the images it classifies incorrectly or with low confidence. Then we go through each of those images and move them to their proper classes, if applicable."
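That review pass can be sketched as a small helper. The `predict` callable and the class-name dictionaries here are hypothetical stand-ins for whatever inference function the trained model exposes, not part of the retrain.py script.

```python
def flag_for_review(images, labels, predict, threshold=0.6):
    """Return images the model misclassified or was unsure about.

    predict(image) is assumed to return a dict mapping class name -> probability.
    """
    flagged = []
    for image, true_label in zip(images, labels):
        probs = predict(image)
        best = max(probs, key=probs.get)
        # Flag wrong predictions, and right ones made with low confidence.
        if best != true_label or probs[best] < threshold:
            flagged.append((image, true_label, best, probs[best]))
    return flagged

# Demo with a fake classifier that always says "cat".
fake_predict = lambda image: {"cat": 0.9, "dog": 0.1}
flagged = flag_for_review(["a.jpg", "b.jpg"], ["cat", "dog"], fake_predict)
```

Each flagged image is then inspected by hand and moved to its proper class directory before retraining.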
Early articles
Recent
What's in a bottleneck file
# mobilenet
bottleneck_tensor_name = 'MobilenetV1/Predictions/Reshape:0'
bottleneck_tensor_size = 1001
resized_input_tensor_name = 'input:0'
# inception V3
bottleneck_tensor_name = 'pool_3/_reshape:0'
bottleneck_tensor_size = 2048
resized_input_tensor_name = 'Mul:0'
It is the layer right before the final class output, i.e. the second-to-last layer, so the file contains bottleneck_tensor_size float numbers.
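A minimal sketch of the file format, assuming what retrain.py does: each cached bottleneck is one line of comma-separated floats, bottleneck_tensor_size of them. The tiny three-value demo file here is only for illustration.

```python
import tempfile

def read_bottleneck(path):
    # retrain.py caches each bottleneck as one line of comma-separated floats.
    with open(path) as f:
        return [float(x) for x in f.read().split(',')]

# Write a tiny fake bottleneck file to demonstrate the format;
# a real Inception V3 file would hold 2048 values.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write(','.join(str(v) for v in [0.1, 0.0, 2.5]))
    demo_path = f.name

values = read_bottleneck(demo_path)
```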
def add_final_training_ops(class_count, final_tensor_name, bottleneck_tensor,
                           bottleneck_tensor_size):
  """Adds a new softmax and fully-connected layer for training."""
Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning