You can use the 'simple_bind' function; see example/notebooks/simple_bind.ipynb and tutorial.ipynb.
Try the following script. You can use the debug_symbol() function to get all the forward blobs and visualize them. I also want to get the backward gradient information, but I have not figured that out yet. If you only want to visualize a particular layer, use net = net.get_internals()['fc2_output'] instead.
import mxnet as mx
from collections import OrderedDict

def debug_symbol(sym):
    '''Get internals values for blobs (forward only).'''
    args = sym.list_arguments()
    output_names = []  # sym.list_outputs()
    sym = sym.get_internals()
    blob_names = sym.list_outputs()
    sym_group = []
    for i in range(len(blob_names)):
        if blob_names[i] not in args:
            x = sym[i]
            if blob_names[i] not in output_names:
                x = mx.symbol.BlockGrad(x, name=blob_names[i])
            sym_group.append(x)
    sym = mx.symbol.Group(sym_group)
    return sym
def get_mlp():
    """
    multi-layer perceptron
    """
    data = mx.symbol.Variable('data')
    fc1 = mx.symbol.FullyConnected(data=data, name='fc1', num_hidden=128)
    act1 = mx.symbol.Activation(data=fc1, name='relu1', act_type="relu")
    fc2 = mx.symbol.FullyConnected(data=act1, name='fc2', num_hidden=64)
    act2 = mx.symbol.Activation(data=fc2, name='relu2', act_type="relu")
    fc3 = mx.symbol.FullyConnected(data=act2, name='fc3', num_hidden=10)
    mlp = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')
    return mlp

if __name__ == '__main__':
    input_shape = (128, 28*28)
    net = get_mlp()
    net = debug_symbol(net)  # for all the blobs information
    # net = net.get_internals()['fc2_output']  # for only the fc2 blob
    executor = net.simple_bind(mx.cpu(), data=input_shape)
    # in practice, copy your input batch (and trained weights) into
    # executor.arg_dict before calling forward()
    executor.forward()
    out_dict = OrderedDict(zip(net.list_outputs(), executor.outputs))
    print(out_dict)
@wushouchuan Thank you! But this example uses low-level mxnet APIs and I don't think I can use it because I'm new to mxnet. I am training my model with example/image-classification/train_imagenet.py, and it's too difficult for me to modify that example script. Thank you again.
@taoari Thank you!! But I want to know if it is possible to use get_internals() and simple_bind() in the example/image-classification/train_model.py file to save hidden layer outputs.
train_model.py uses mx.model.FeedForward and model.fit for training automatically, and I don't know how to modify it.
If it's possible, would you please give me an example? Thank you!
@albertyou2 MXNet is designed around a computational graph framework, so it automatically optimizes the internals and hides them from the user. I do not think it is suitable to retrieve the internal values from the high-level model API (e.g. by modifying the train_model.py file); you are trying to get something that is designed to be hidden from you. Your task is better suited to the Symbol API and simple_bind(), which is much easier.
You may also try the high-level Module API, which was recently added and is designed for imperative computation, but I have not tried it yet.
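An untested sketch of what that Module route might look like, reusing get_mlp() from the snippet above (the 'fc2_output' layer name and the (128, 784) data shape are just placeholders):

import mxnet as mx

# untested sketch: run a forward pass on an internal output via the Module API
net = get_mlp()                                    # symbol from the snippet above
fc2 = net.get_internals()['fc2_output']            # pick the internal blob

mod = mx.mod.Module(symbol=fc2, context=mx.cpu(),
                    data_names=['data'], label_names=None)
mod.bind(data_shapes=[('data', (128, 28 * 28))], for_training=False)
mod.init_params(initializer=mx.init.Xavier())
# in practice you would load trained weights here with mod.set_params(...)

batch = mx.io.DataBatch(data=[mx.nd.ones((128, 28 * 28))], label=None)
mod.forward(batch, is_train=False)
print(mod.get_outputs()[0].shape)                  # should be (128, 64) for fc2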
@albertyou2 If you only want to extract one particular conv layer output, you can try the last cell in
https://github.com/dmlc/mxnet/blob/master/example/notebooks/predict-with-pretrained-model.ipynb
@taoari OK! Thank you for your suggestion! I'll try this!
@taoari Thanks to your suggestion, I can successfully extract features of the 'conv5_2' layer! Thank you again!
The code is here:
......
data_shape = (args.rgb, args.shape, args.shape)
images = mx.io.ImageRecordIter(path_imgrec=args.data_file, mean_img=args.min_file,
                               rand_crop=False, rand_mirror=False,
                               data_shape=data_shape, batch_size=3)
model = mx.model.FeedForward.load(args.prefix, args.iter)
internals = model.symbol.get_internals()
fea_symbol = internals["conv5_2_output"]
feature_extractor = mx.model.FeedForward(ctx=mx.cpu(), symbol=fea_symbol, numpy_batch_size=1,
                                         arg_params=model.arg_params, aux_params=model.aux_params,
                                         allow_extra_params=True)
global_pooling_feature = feature_extractor.predict(images)
print(global_pooling_feature.shape)
The output is: (3, 512, 4, 4).
Could you please tell me: 0) What does (3, 512, 4, 4) mean? Does it mean "channel = 3 (color image), number = 512, kernel size = 4*4"?
1) How do I save the features, and how do I draw them? 2) Does mx.model.FeedForward(... numpy_batch_size=1 ...) mean this code only extracts features for one image?
Thank you very much!!!
I finally found out how to save them, closing this.
@albertyou2 Can you share how you save the features? Thank you!
But how do I extract features from an image?
import matplotlib.pyplot as plt
import mxnet as mx
import logging
import numpy as np
from skimage import io, transform
import argparse

parser = argparse.ArgumentParser(description='load a model trained before and give predictions for a .rec file')
parser.add_argument('--prefix', type=str, default='./saved_model/jd_50_vgg', help='load model structure by name which has a prefix')
parser.add_argument('--iter', type=int, default=318, help='load model weights from iteration')
parser.add_argument('--shape', type=int, default=50, help='data shape of .rec files')
parser.add_argument('--rgb', type=int, default=3, help='color, 1 for gray, 3 for color image')
parser.add_argument('--data-file', type=str, default='./data/pred_50.rec', help='prediction .rec file path')
parser.add_argument('--min-file', type=str, default='./data/pred_50.min', help='prediction images mean file')
parser.add_argument('--list-file', type=str, default='./data/pred.lst', help='prediction images lst file')
parser.add_argument('--layer', type=str, default='fc7', help='layer you want to visualize')
args = parser.parse_args()

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

data_shape = (args.rgb, args.shape, args.shape)
images = mx.io.ImageRecordIter(
    path_imgrec = args.data_file,
    mean_img = args.min_file,
    rand_crop = False,
    rand_mirror = False,
    shuffle = False,
    data_shape = data_shape,
    batch_size = 1)

model = mx.model.FeedForward.load(args.prefix, args.iter)
internals = model.symbol.get_internals()
layer = args.layer + '_output'
fea_symbol = internals[layer]
# allow_extra_params must be set to True
feature_extractor = mx.model.FeedForward(ctx=mx.cpu(), symbol=fea_symbol, numpy_batch_size=1,
                                         arg_params=model.arg_params, aux_params=model.aux_params,
                                         allow_extra_params=True)
global_pooling_feature = feature_extractor.predict(images)
a = np.asarray(global_pooling_feature)
# np.savetxt('./pred' + args.layer + '.csv', a)
NOTE: you have to create a .rec file for your images and have a pre-trained model (both the json symbol file and the weights file), then pass them to this program.
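One extra note on saving (just a sketch, reusing the variable names from the script above; the 'features.npy' / 'features.csv' file names are only examples): np.savetxt only handles 1-D/2-D arrays, so for a 4-D conv output you can either keep the full shape with np.save or flatten each sample first.

import numpy as np

feats = np.asarray(global_pooling_feature)   # e.g. (N, 512, 4, 4) for a conv layer
np.save('features.npy', feats)               # keeps the full 4-D shape
# np.savetxt expects 1-D/2-D data, so flatten each sample for CSV output:
np.savetxt('features.csv', feats.reshape(feats.shape[0], -1))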
I want to save the features of the final conv layer to train my SVM classifier, but I don't see any example of how to save these features step by step with Python.
Could someone give me an example? Thank you very much!
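In case it helps, here is a rough sketch (not tested on your data) of how features saved as above could be fed to a scikit-learn SVM; 'features.npy' and 'labels.npy' are placeholder file names, and the label array is assumed to have one entry per feature row.

import numpy as np
from sklearn.svm import SVC

feats = np.load('features.npy')          # e.g. (N, 512, 4, 4) conv features saved earlier
X = feats.reshape(feats.shape[0], -1)    # flatten each sample to a 1-D feature vector
y = np.load('labels.npy')                # shape (N,), one label per sample

clf = SVC(kernel='linear')
clf.fit(X, y)
print(clf.score(X, y))                   # training accuracy as a sanity check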