Closed: wangfudong closed this issue 6 years ago
Hi @wangfudong,
For the Pascal dataset, the feature points are provided. Suppose you have a set of points P1 from one image and a set P2 from another (P1 and P2 are 2×n matrices of [X; Y] point coordinates). Then you can use the following code to build the third-order tensor from these points:
nP1 = size(P1, 2);
nP2 = size(P2, 2);
% random permutation
seq = randperm(nP2);
P2(:,seq) = P2;
trueMatch = seq(1:nP1);
%% Tensor construction (same as in demo.m)
nT = nP1*nP2; % # of triangles in graph 1
t1=floor(rand(3,nT)*nP1);
while 1
probFound=false;
for i=1:3
ind=(t1(i,:)==t1(1+mod(i,3),:));
if(nnz(ind)~=0)
t1(i,ind)=floor(rand(1,nnz(ind))*nP1);
probFound=true;
end
end
if(~probFound)
break;
end
end
%generate features
t1=int32(t1);
[feat1,feat2] = mexComputeFeature(P1,P2,int32(t1),'simple');
%number of nearest neighbors used for each triangle (results can be bad if too low)
nNN = nT;
%find the nearest neighbors
[inds, dists] = annquery(feat2, feat1, nNN, 'eps', 10);
%build the tensor
[i j k]=ind2sub([nP2,nP2,nP2],inds);
tmp=repmat(1:nT,nNN,1);
indH3 = double(t1(:,tmp(:))'*nP2) + [k(:)-1 j(:)-1 i(:)-1];
coeff = 1; % kernel bandwidth scale; not defined in the demo.m snippet, set to 1 if unsure
valH3 = exp(-dists(:) / (coeff*mean(dists(:))));
% This was added by Quynh Nguyen (CVPR 2015), I'm not sure how it affects the accuracy
% remove duplicated tuples
indH3 = sort(indH3, 2);
[indH3 id1 id2] = unique(indH3, 'rows');
valH3 = valH3(id1);
%remove duplicated indices: (i,i,i), (i,i,j), (i,j,i), (i,j,j), etc
t1 = indH3(:, 1) - indH3(:, 2);
t2 = indH3(:, 1) - indH3(:, 3);
t3 = indH3(:, 2) - indH3(:, 3);
t4 = (t1 == 0) + (t2 == 0) + (t3 == 0);
indH3 = indH3(t4 == 0, :);
valH3 = valH3(t4 == 0);
% upperbound the number of nonzeros
maxL = min(5*10^6, length(valH3));
[v id] = sort(valH3, 'descend');
% id = randperm(length(valH3));
id = id(1:maxL);
valH3 = valH3(id);
indH3 = indH3(id, :);
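As a side note on the de-duplication step above: sorting each row of indH3 puts every tuple into a canonical order, so tuples that differ only by ordering become identical rows and unique(..., 'rows') collapses them; rows with repeated indices are then filtered out. A tiny standalone illustration of the same trick:

```matlab
% Toy illustration of the duplicate-removal trick used above.
% Rows (2,1,3) and (3,2,1) describe the same unordered triple.
ind = [2 1 3; 3 2 1; 1 1 2];
val = [0.9; 0.8; 0.5];
ind = sort(ind, 2);                % canonical ordering within each row
[ind, keep] = unique(ind, 'rows'); % keep one representative per triple
val = val(keep);
% rows with repeated indices, e.g. (1,1,2), are then dropped:
bad = (ind(:,1)==ind(:,2)) | (ind(:,1)==ind(:,3)) | (ind(:,2)==ind(:,3));
ind = ind(~bad, :);
val = val(~bad);
```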
Please let me know if you get any improvement. Also, I didn't understand what you meant by "two graphs X and Y were normalized into [0,1] divided by max(image_width,image_length)", but such normalization is not necessary.
Would you mind sharing what you compare ADGM with? Please keep in mind that in the paper we primarily compared the objective values, not the accuracy. If you are developing a method that optimizes the graph matching objective function, then it's fair to compare the objective values and not the accuracy. If you're developing a method that improves the matching accuracy, then the above third-order tensor construction is not appropriate (at least for the Pascal datasets). The above potentials only take into account the angle information and not the lengths. For the Pascal datasets (where there is virtually no difference in scale between the images), adding length information will significantly increase the matching accuracy, as we showed in the last pairwise experiment in the paper.
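For reference, one simple way to add length information is to compute the side lengths of each triangle and append them, suitably normalized, to the angle-based descriptor. The sketch below illustrates the idea only; it is not the implementation used in the paper, and `triangleLengths` is a hypothetical helper name:

```matlab
% Sketch: length features for triangles (assumption: P is 2 x n point
% coordinates, T is 3 x nT with 1-based point indices per column).
function F = triangleLengths(P, T)
    a = P(:, T(1,:)); b = P(:, T(2,:)); c = P(:, T(3,:));
    L = [sqrt(sum((a-b).^2, 1));   % side lengths of each triangle
         sqrt(sum((b-c).^2, 1));
         sqrt(sum((c-a).^2, 1))];
    F = L / mean(L(:));            % normalize to make the descriptor comparable across images
end
```

These length features could then be concatenated with the angle features before the nearest-neighbor search.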
Hi @netw0rkf10w Thank you so much for your detailed replies!
I first ran the code above with and without the part added by Quynh Nguyen. Here is the average accuracy on Cars (the number of outliers varies from 0 to 16 in steps of 2):

with Quynh Nguyen's code:
ADGM1: 0.6889 0.7554 0.6210 0.6885 0.6811 0.6520 0.4721 0.4226 0.4197
ADGM2: 0.7634 0.7818 0.7015 0.7477 0.6175 0.5991 0.4519 0.4344 0.5408

without Quynh Nguyen's code:
ADGM1: 0.7400 0.7222 0.5442 0.4745 0.4387 0.4643 0.4479 0.3501 0.2982
ADGM2: 0.7007 0.6763 0.5450 0.5160 0.4804 0.4831 0.4090 0.4150 0.4004

This result is better than the one I got yesterday, and is nearly the same as reported in your paper. The code added by Quynh Nguyen does indeed improve the accuracy.
I met another problem with this code: when I ran it over all the pairs of the Pascal Cars dataset, memory usage gradually increased with the number of repetitions (from 600 MB to 11.5 GB after 30×9 repetitions). Maybe there is a bug such as a memory leak in the MEX code (I only repeated this code and did not operate on the memory directly).
Regarding what I wrote, "two graphs X and Y were normalized into [0,1] divided by max(image_width,image_length)": I first compute the maximum of the image width and height, then divide the node coordinates by this maximum.
My main purpose is to compare the matching accuracy. A few months ago I became interested in the graph matching problem and constructed a "new" model (to my knowledge at least, I'm not completely sure, hahaha...) for the case where the graphs can be embedded in Euclidean space. The objective function of my model is a little different from that of previous methods (the pairwise affinity matrix is replaced by an explicit function of the assignment matrix, and I do not use the general quadratic form), so I'm not sure whether it is fair or necessary to compare the objective values. (At least I'm sure I should compare the matching accuracy, hahaha...)
Lastly, I'm a little confused that you didn't evaluate your method on synthetic data as many other papers do. Is there no need for this experiment? Or was it omitted due to space constraints, with the results left to the supplement?
@wangfudong To avoid memory explosion you can put the above code into a function, something like this:
function [indH3, valH3, trueMatch] = buildTensor(P1, P2)
nP1 = size(P1, 2);
nP2 = size(P2, 2);
% random permutation
seq = randperm(nP2);
P2(:,seq) = P2;
trueMatch = seq(1:nP1);
%% Tensor construction (same as in demo.m)
nT = nP1*nP2; % # of triangles in graph 1
t1=floor(rand(3,nT)*nP1);
while 1
probFound=false;
for i=1:3
ind=(t1(i,:)==t1(1+mod(i,3),:));
if(nnz(ind)~=0)
t1(i,ind)=floor(rand(1,nnz(ind))*nP1);
probFound=true;
end
end
if(~probFound)
break;
end
end
%generate features
t1=int32(t1);
[feat1,feat2] = mexComputeFeature(P1,P2,int32(t1),'simple');
%number of nearest neighbors used for each triangle (results can be bad if too low)
nNN = nT;
%find the nearest neighbors
[inds, dists] = annquery(feat2, feat1, nNN, 'eps', 10);
%build the tensor
[i j k]=ind2sub([nP2,nP2,nP2],inds);
tmp=repmat(1:nT,nNN,1);
indH3 = double(t1(:,tmp(:))'*nP2) + [k(:)-1 j(:)-1 i(:)-1];
coeff = 1; % kernel bandwidth scale; not defined in the demo.m snippet, set to 1 if unsure
valH3 = exp(-dists(:) / (coeff*mean(dists(:))));
% remove duplicated tuples
indH3 = sort(indH3, 2);
[indH3 id1 id2] = unique(indH3, 'rows');
valH3 = valH3(id1);
%remove duplicated indices: (i,i,i), (i,i,j), (i,j,i), (i,j,j), etc
t1 = indH3(:, 1) - indH3(:, 2);
t2 = indH3(:, 1) - indH3(:, 3);
t3 = indH3(:, 2) - indH3(:, 3);
t4 = (t1 == 0) + (t2 == 0) + (t3 == 0);
indH3 = indH3(t4 == 0, :);
valH3 = valH3(t4 == 0);
% upperbound the number of nonzeros
maxL = min(5*10^6, length(valH3));
[v id] = sort(valH3, 'descend');
% id = randperm(length(valH3));
id = id(1:maxL);
valH3 = valH3(id);
indH3 = indH3(id, :);
end
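A driver loop could then look like the sketch below (`pairList`, `loadPair`, and `runADGM` are placeholder names, not actual functions from the repository). Since the growth may also come from the MEX files themselves, calling `clear mex` between iterations can help release any memory they hold:

```matlab
% Sketch of a driver loop over image pairs; helper names are hypothetical
for p = 1:numel(pairList)
    [P1, P2] = loadPair(pairList(p));                 % hypothetical pair loader
    [indH3, valH3, trueMatch] = buildTensor(P1, P2);  % function defined above
    X = runADGM(indH3, valH3, size(P1,2), size(P2,2)); % hypothetical solver call
    clear mex; % unload MEX files so any memory they hold is released
end
```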
Let me know if it helps.
Concerning experiments on synthetic data, I don't think they are necessary because we already have a lot of real-world datasets. But if you want to please the reviewers then you should consider adding them to your paper ;)
Hello! I ran into some problems when I repeated your experiment on the Pascal datasets with the third-order model. I set some parameters in demo.m as follows:
I used the MATLAB function t1 = nchoosek(1:nP1,3)-1 to generate all the triangles of the fully connected graph X, and set nT = length(t1(1,:)) as the number of triangles. Features were generated by the MEX file mexComputeFeature after the two graphs X and Y were normalized into [0,1] (coordinates divided by max(image_width,image_length)). Then I set nNN = 300 as the number of nearest neighbors. All other parameters remained the same as in demo.m. In the end, the average accuracy on Cars and Motorbikes dropped by 3%~5%, so I'm wondering whether I used any of the parameters incorrectly. Thanks!
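One detail worth checking in the setup above: nchoosek(1:nP1,3) returns an nT×3 matrix (one triangle per row), whereas the demo code works with triangles as a 3×nT matrix (one per column), so nT = length(t1(1,:)) would be 3 rather than the triangle count. A transposed construction would be (a sketch only, with image_width/image_length as placeholders for the actual image dimensions):

```matlab
% All triangles of the fully connected graph, as a 3 x nT matrix of 0-based indices
t1 = nchoosek(1:nP1, 3)' - 1;  % transpose so each column is one triangle
nT = size(t1, 2);              % number of triangles
% normalization into [0,1] by the larger image dimension
P1 = P1 / max(image_width, image_length);
```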