Closed · Jingbx closed this issue 2 months ago
Sorry, I was careless and didn't check your dataset format carefully. Just change 'uv' to 'normal'.
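Concretely, the swap could look something like this in the loading code (a minimal sketch; scene_dir and idx are placeholder names, based only on the uv0000.png / normal0000.png filenames mentioned later in this thread):

import os

# Hypothetical sketch: switch the ground-truth map prefix from 'uv' to 'normal'.
# scene_dir and idx are placeholder values for illustration.
scene_dir = '/path/to/scenario'
idx = 0
uv_path = os.path.join(scene_dir, f'uv{idx:04d}.png')          # -> .../uv0000.png
normal_path = os.path.join(scene_dir, f'normal{idx:04d}.png')  # -> .../normal0000.png
print(uv_path, normal_path)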
Isn't 'normal' a normal map, rather than a UV map? It seems the normal maps cannot produce the correct matches for me 😭😭😭; I need the UV maps. Looking forward to your reply, and thank you again for your work!
I found an unexpected issue that is very important to me. I hope you can take a look; I look forward to your reply. Thank you from the bottom of my heart! 🙏🏻🙏🏻🙏🏻 I used the official weights of DALF_2048 to evaluate on deformation_2, and the results are as follows:
0.0017766137951518832, 0.004566783628866802, 0.1439962909306569 (ms, ma, rr, in the order the script writes them)
I don't know why these scores are so low, and I would like to ask for your advice.
# main
import json
import multiprocessing
import os

import numpy as np
from tqdm import tqdm

# parse(), load_benchmark() and eval_pair() are defined elsewhere in evaluate.py.

def main():
    args = parse()
    selected_pairs = load_benchmark(args.dataset)
    nproc = args.nproc
    with open(args.input, 'r') as f:
        predictions = json.load(f)
    outfile_path = args.output
    split = args.split

    # Create output directory if it doesn't exist
    output_dir = os.path.join(os.path.dirname(outfile_path), 'dalf_pairs')
    os.makedirs(output_dir, exist_ok=True)

    # Per-pair metric lists, under the names the script uses: ms, ma, rr
    metrics = {
        'ms': [],
        'ma': [],
        'rr': [],
    }

    if nproc > 1:
        # Evaluate pairs in parallel. Use a distinct name for the job list so
        # it does not shadow the parsed command-line arguments.
        jobs = [(pair, prediction, args.matching_th, args.plot, i)
                for i, (pair, prediction) in enumerate(zip(selected_pairs[split], predictions))]
        with multiprocessing.Pool(nproc) as pool:
            results = list(tqdm(pool.imap(eval_pair, jobs), total=len(jobs)))
        for result in results:
            metrics['ms'].append(result['ms'])
            metrics['ma'].append(result['ma'])
            metrics['rr'].append(result['rr'])
    else:
        for i, (pair, prediction) in enumerate(zip(selected_pairs[split], predictions)):
            result = eval_pair((pair, prediction, args.matching_th, args.plot, i))
            metrics['ms'].append(result['ms'])
            metrics['ma'].append(result['ma'])
            metrics['rr'].append(result['rr'])

    # Mean score over all evaluated pairs
    ms = np.mean(metrics['ms'])
    ma = np.mean(metrics['ma'])
    rr = np.mean(metrics['rr'])

    # Write the summary line first, then the per-pair lists (newlines added so
    # the entries do not run together in the output file).
    with open(outfile_path, 'w') as f:
        f.write(f"{ms},{ma},{rr}\n")
        f.write(f"ms : {metrics['ms']}\n")
        f.write(f"ma : {metrics['ma']}\n")
        f.write(f"rr : {metrics['rr']}\n")
Here is a visualization of the matches. I also made sure I followed the model when generating the JSON predictions, and passed the descriptors through the L2 normalization in the model.
In addition, I verified with the examples in your assert directory and found the results were correct. You used uv0000.png there; I don't know whether using normal0000.png is what caused the incorrect matching scores, or whether it is some other problem. When I computed the JSON data everything was correct, so the problem should be in the evaluation. The following is my inference result using the data in the assert directory. I really hope to get your reply. Thank you very much!
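For completeness, the L2 normalization I mean is row-wise over the descriptor matrix, along these lines (a minimal sketch assuming NumPy (N, D) descriptors; the function name is mine, not from the repo):

import numpy as np

def l2_normalize(desc, eps=1e-8):
    # Normalize each row (one descriptor) to unit length; eps guards
    # against division by zero for all-zero descriptors.
    norms = np.linalg.norm(desc, axis=1, keepdims=True)
    return desc / np.maximum(norms, eps)

desc = np.random.rand(4, 128)
print(np.linalg.norm(l2_normalize(desc), axis=1))  # ~1.0 for every row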
Hi @Jingbx! Thank you very much for your interest in our work. Sorry for the delay, but I hope it's not too late.
First of all, the reason you won't find the uv_*.png files for the test dataset is that we have a benchmark system that keeps the UVs of the test set hidden, at https://benchmark.eucadar.com. You can submit your JSON predictions there if you want. All results are private by default, and you can make them public at any time.
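A purely illustrative sketch of what one prediction entry might contain (the field names below are assumptions, not the benchmark's confirmed schema; check the benchmark site for the actual format):

import json

# Hypothetical sketch of one prediction entry; field names are assumptions.
prediction = {
    'keypoints_src': [[12.5, 40.1], [33.0, 7.8]],   # (x, y) in the source image
    'keypoints_dst': [[14.2, 41.0], [35.6, 9.1]],   # (x, y) in the target image
    'matches': [[0, 0], [1, 1]],                    # index pairs into the two lists
}
print(json.dumps(prediction))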
Secondly, about "I used the official weights of DALF_2048 to verify deformation_2": I only provided a sample for the deformation_3 split (which is not present in the benchmark test set). If you are trying to use the deformation_3 UVs on the deformation_2 images, it would not work.
And I see that you could reproduce the DALF results on the sample dataset (I got the same numbers), so I don't see a problem with the evaluation. If you can be more specific, maybe I can help you better.
Thank you very much for your reply! I think my problem is the test error caused by not finding uv_*.png, which has nothing to do with the DALF model. A few days ago I uploaded my test results (a JSON file) to https://benchmark.eucadar.com/submissions, but the result still shows PENDING. I don't know if I did something wrong on my end. I hope my question doesn't cause you any trouble. Thank you again for your reply! 🌹
Oh, I see. Sorry about that; our evaluation server is probably down. I'll look into it later today.
No rush! Take care of the important things first. Thank you again for your help and for the excellent work you’ve done.
Eval server is back up! Thanks for reaching out :)
First of all, thank you for your excellent work. When I run evaluate.py to test the JSON file I generated (containing keypoints and matches), I get the following error:
I suspect the cause is the following: there is no uv_*.png in the test_sequence024/scenario.../ directory of your test dataset, so the coordinate information cannot be obtained. How can I solve this problem? Looking forward to your reply, thank you 🙏
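A quick sanity check along these lines confirms the missing files (dataset_root is a placeholder path):

import glob
import os

# Throwaway check: count UV ground-truth maps under the test sequences.
dataset_root = '/path/to/test_dataset'  # placeholder
pattern = os.path.join(dataset_root, 'test_sequence*', '*', 'uv_*.png')
print(f'Found {len(glob.glob(pattern))} uv_*.png files')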