eddy-ilg closed this issue 10 years ago.
Yeah even I am interested in this. Could anyone give some idea on how to do this? Thanks :)
I managed to get them by pushing every top blob into net_output_blobs_ in net.cpp:
for (int topid = 0; topid < top_vecs_[i].size(); ++topid) {
  LOG(INFO) << "Top shape: " << top_vecs_[i][topid]->num() << " "
            << top_vecs_[i][topid]->channels() << " "
            << top_vecs_[i][topid]->height() << " "
            << top_vecs_[i][topid]->width() << " ("
            << top_vecs_[i][topid]->count() << ")";
  if (!in_place)
    memory_used += top_vecs_[i][topid]->count();
  net_output_blobs_.push_back(top_vecs_[i][topid]);  // ADD THIS LINE
}
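With that line added (and Caffe rebuilt), every layer's top blob becomes a network output, so a wrapper's forward call hands back one array per blob rather than just the final prediction. A minimal sketch of the Matlab side under that assumption, using the 2014-era matcaffe calls; the blob index below is purely illustrative:

```matlab
% Sketch only: assumes matcaffe was rebuilt with the net.cpp patch above,
% so 'forward' returns every top blob instead of only the last one.
caffe('init', 'imagenet_deploy.prototxt', 'caffe_reference_imagenet_model');
caffe('set_mode_cpu');
output = caffe('forward', {input_data});  % input_data: preprocessed image batch
% output is now a cell array with one entry per top blob, in network order;
% pick the index corresponding to the layer you want, e.g.
fc6_activations = output{14};  % 14 is hypothetical - count the blobs in your net
```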
Thanks @freiburg, I will try this out. Btw, just wanted to confirm: the layer DECAF6 mentioned in the experiments of the paper "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition" refers to fc6 after relu and dropout have been applied, right? Or is it without the relu/dropout?
I have used this instead:
const vector<string>& blob_names = net_->blob_names();
const vector<shared_ptr<Blob<float> > >& blobs = net_->blobs();
int id = 16;  // The blob number you are extracting - find the number by
              // searching through the names in blob_names; here I am
              // extracting 'relu7'
mxArray* mx_blob = mxCreateNumericMatrix(blobs[id]->count(), 1, mxSINGLE_CLASS, mxREAL);
float* layer_data_ptr = reinterpret_cast<float*>(mxGetPr(mx_blob));
switch (Caffe::mode()) {
case Caffe::CPU:
  memcpy(layer_data_ptr, blobs[id]->cpu_data(),
         sizeof(float) * blobs[id]->count());
  break;
case Caffe::GPU:
  cudaMemcpy(layer_data_ptr, blobs[id]->gpu_data(),
             sizeof(float) * blobs[id]->count(), cudaMemcpyDeviceToHost);
  break;
default:
  LOG(FATAL) << "Unknown Caffe mode.";
}
return mx_blob;
Added to the end of do_forward in matcaffe.cpp.
If anyone successfully got the outputs of the individual layers in the Matlab wrapper, could you explain more in detail? I tried to follow the comments, but I could not get it... Thank you!
Hi @dkkim930122
In the examples/imagenet folder, there's a file named imagenet_deploy.prototxt. You just have to remove whatever layers you do not want. For example, if you want the output at layer 6, remove the last 4 layers (just delete them from the prototxt file) and run the Matlab code. You will then get a 4096-dim output instead of the 1000-dim one.
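To make the truncation concrete, here is a hedged sketch of what the tail of such a prototxt might look like; the layer name and the V1 `layers { ... }` syntax are assumptions based on the stock imagenet_deploy.prototxt of that era, not an exact copy:

```
# ... all earlier layers stay unchanged ...
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param { num_output: 4096 }
}
# Everything after the last layer you keep is simply deleted; the last
# remaining top blob then becomes the network output that the Matlab
# wrapper returns (here a 4096-dim vector per image).
```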
Hope this helps.
Regards, Sharath Chandra Guntuku B.E. (Hons). Computer Science BITS-Pilani
@aybassiouny Is this hack working on the current master build?
Hi @sharathchandra92 I want to see the feature output after the first fully connected layer, once training is complete using all the layers. However, if I remove the layers after it in the prototxt file, won't the output change, since there will not be back-propagation from the layers after this?
Hi @anurikadisha
Feature extraction happens after the model has been trained. So it does not matter which layer you extract the features from, as the model being used is already trained. You are just doing a feed-forward pass to get the activations at the layer you are interested in.
Once the network is loaded, e.g.
net = caffe.Net(model.deployFile, model.caffemodel, 'test');
You can access the output of any layer with blobs(layer_name).get_data(), as follows:
function feature_maps = getLayerOutput(images, net, layerName)
  input_data = {images};
  net.forward(input_data);  % forward pass
  feature_maps = net.blobs(layerName).get_data();  % get the layer output
end
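A usage sketch for the helper above; the file names, the preprocessing routine, and the layer name 'conv5' are placeholders, so substitute whatever your deploy prototxt defines:

```matlab
% Hedged example: extract conv5 feature maps with the helper above.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', 'test');
images = prepare_image(im);                 % your own preprocessing routine
fm = getLayerOutput(images, net, 'conv5');  % width x height x channels x num, single
disp(size(fm));
```

Note that matcaffe returns blobs in column-major order, so the dimensions come back as width x height x channels x num rather than Caffe's internal num x channels x height x width.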
Hi,
I use ImageNet and would like to get the outputs of the individual layers in matlab. Is this possible? It seems caffe('get_weights'); only gets the filter masks.
Best,
Eddy