verlab / nonrigid-benchmark


can't open/read file: test_single_obj/test_sequence_024/scenario_003/uv_00000.png... #1

Closed Jingbx closed 2 months ago

Jingbx commented 2 months ago

First of all, thank you for your excellent work. When I run evaluate.py to evaluate the JSON file I generated (containing keypoints and matches), I get the following error:

╰─ python src/nonrigid_benchmark/evaluate.py ─╯
['/home/jingbx/nonrigid_benchmark/test_single_obj/test_sequence_024/scenario_003/rgba_00000.png', '/home/jingbx/nonrigid_benchmark/test_single_obj/test_sequence_024_deformed_timestep_00001/scenario_003/rgba_00000.png']
[ WARN:0@1.005] global loadsave.cpp:241 findDecoder imread_('/home/jingbx/nonrigid_benchmark/test_single_obj/test_sequence_024/scenario_003/uv_00000.png'): can't open/read file: check file path/integrity
[ WARN:0@1.012] global loadsave.cpp:241 findDecoder imread_('/home/jingbx/nonrigid_benchmark/test_single_obj/test_sequence_024_deformed_timestep_00001/scenario_003/uv_00000.png'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "/home/jingbx/keypoints/mypaper/nonrigid-benchmark/src/nonrigid_benchmark/evaluate.py", line 154, in <module>
    main()
  File "/home/jingbx/keypoints/mypaper/nonrigid-benchmark/src/nonrigid_benchmark/evaluate.py", line 139, in main
    result = eval_pair((pair, prediction, args.matching_th, args.plot))
  File "/home/jingbx/keypoints/mypaper/nonrigid-benchmark/src/nonrigid_benchmark/evaluate.py", line 65, in eval_pair
    projection_1to2 = warp_keypoints(
  File "/home/jingbx/keypoints/mypaper/nonrigid-benchmark/src/nonrigid_benchmark/compute.py", line 24, in warp_keypoints
    H, W, _ = coords1.shape
AttributeError: 'NoneType' object has no attribute 'shape'

I suspect the cause is here, in load_sample:

def load_sample(rgb_path: str, read_coords: bool = False, read_segmentation: bool = False):
    image = cv2.imread(rgb_path)
    mask = cv2.imread(rgb_path.replace('rgba', 'bgmask'), cv2.IMREAD_UNCHANGED)
    sample = {
        'image': image,
        'mask': mask,
    }
    if read_coords:
        sample['uv_coords'] = cv2.imread(rgb_path.replace('rgba', 'uv'), cv2.IMREAD_UNCHANGED)
        # print(sample['uv_coords'])  # debug: None
    if read_segmentation:
        sample['segmentation'] = cv2.imread(rgb_path.replace('rgba', 'segmentation'), cv2.IMREAD_UNCHANGED)
    return sample

That is, there is no uv_*.png in the test_sequence_024/scenario_.../ directories of your test dataset, so the coordinate information cannot be loaded. How can I solve this problem? Looking forward to your reply, thank you 🙏
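For context: cv2.imread does not raise on a missing file, it just returns None, which is why the error only surfaces later as the AttributeError inside warp_keypoints. A fail-fast variant of load_sample (just a sketch reusing the same calls, not the repository's code) would point straight at the missing path:

import cv2

def load_sample_checked(rgb_path: str, read_coords: bool = False, read_segmentation: bool = False):
    def imread_or_fail(path, flags=cv2.IMREAD_COLOR):
        img = cv2.imread(path, flags)
        if img is None:  # cv2.imread silently returns None on missing/corrupt files
            raise FileNotFoundError(f"can't open/read file: {path}")
        return img

    sample = {
        'image': imread_or_fail(rgb_path),
        'mask': imread_or_fail(rgb_path.replace('rgba', 'bgmask'), cv2.IMREAD_UNCHANGED),
    }
    if read_coords:
        sample['uv_coords'] = imread_or_fail(rgb_path.replace('rgba', 'uv'), cv2.IMREAD_UNCHANGED)
    if read_segmentation:
        sample['segmentation'] = imread_or_fail(rgb_path.replace('rgba', 'segmentation'), cv2.IMREAD_UNCHANGED)
    return sample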

Jingbx commented 2 months ago

Sorry, I was careless and didn't check your dataset format carefully. Simply changing 'uv' to 'normal' makes the files load.

But 'normal' is presumably a normal map, not a UV map? It seems the normal map cannot produce correct matches for me 😭😭😭; I need the UV map. T.T.... Looking forward to your reply, and thank you again for your work.
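To check which modalities actually exist next to a given rgba frame, a quick diagnostic like this helps (just a sketch; the path is one frame from my copy of the test set):

import os

rgb_path = 'test_single_obj/test_sequence_024/scenario_003/rgba_00000.png'
for modality in ('uv', 'normal', 'bgmask', 'segmentation'):
    candidate = rgb_path.replace('rgba', modality)
    print(f"{modality:12s} exists: {os.path.exists(candidate)}")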

Jingbx commented 2 months ago

I ran into an unexpected issue that is very important to me. I hope you can see this, and I look forward to your reply. Thank you from the bottom of my heart!!!!! 🙏🏻🙏🏻🙏🏻 I used the official DALF_2048 weights to evaluate the deformation_2 split, and the results are as follows:

0.0017766137951518832,0.004566783628866802,0.1439962909306569

I don't know why the scores are so low (the three values are ms, ma, and rr, in the order main() writes them below), and I would like to ask you for advice.

# main
# imports this excerpt needs (not shown in my original paste)
import json
import multiprocessing
import os

import numpy as np
from tqdm import tqdm

def main():
    args = parse()
    selected_pairs = load_benchmark(args.dataset)
    nproc = args.nproc

    predictions = json.load(open(args.input, 'r'))

    outfile_path = args.output
    split = args.split

    # Create output directory if it doesn't exist
    output_dir = os.path.join(os.path.dirname(outfile_path), 'dalf_pairs')
    os.makedirs(output_dir, exist_ok=True)

    metrics = {
        'ms': [],
        'ma': [],
        'rr': [],
    }

    if nproc > 1:
        with multiprocessing.Pool(nproc) as pool:
            # use a distinct name so the argparse namespace `args` is not shadowed
            pool_args = [(pair, prediction, args.matching_th, args.plot, i)
                         for i, (pair, prediction) in enumerate(zip(selected_pairs[split], predictions))]
            results = list(tqdm(pool.imap(eval_pair, pool_args), total=len(pool_args)))
            for result in results:
                metrics['ms'].append(result['ms'])
                metrics['ma'].append(result['ma'])
                metrics['rr'].append(result['rr'])
    else:
        for i, (pair, prediction) in enumerate(zip(selected_pairs[split], predictions)):
            result = eval_pair((pair, prediction, args.matching_th, args.plot, i))
            metrics['ms'].append(result['ms'])
            metrics['ma'].append(result['ma'])
            metrics['rr'].append(result['rr'])

    # mean scores
    ms = np.mean(metrics['ms'])
    ma = np.mean(metrics['ma'])
    rr = np.mean(metrics['rr'])

    with open(outfile_path, 'w') as f:
        # newlines keep the summary line separate from the per-pair lists
        f.write(f"{ms},{ma},{rr}\n")
        f.write(f"ms : {metrics['ms']}\n")
        f.write(f"ma : {metrics['ma']}\n")
        f.write(f"rr : {metrics['rr']}\n")

Here is a visualization of the matches. At the same time, I made sure I followed the model's inference procedure to produce the JSON, and that the descriptors were passed through the L2 norm in the model.

[image: match visualization]
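To be precise about the L2 norm above: I mean unit-normalizing each descriptor row before matching, as in this generic sketch (not any specific function from DALF):

import numpy as np

def l2_normalize(desc: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # scale each row (one descriptor) to unit Euclidean length
    return desc / np.maximum(np.linalg.norm(desc, axis=1, keepdims=True), eps)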

In addition, I used the examples in your assets directory to verify, and the results were correct. Your assets include uv_00000.png; I don't know whether using normal_00000.png instead is what caused the incorrect matching score, or whether it is some other problem. When I computed the JSON data everything was correct, so the problem should be in the evaluation. The following is the result of my inference using the data in assets. I really hope to get your reply. Thank you very much!!!!

[image: evaluation results on the assets sample data]
felipecadar commented 2 months ago

Hi @Jingbx! Thank you very much for your interest in our work. Sorry for the delay, but I hope it's not too late.

First of all, the reason you can't find the uv_*.png files for the test dataset is that our benchmark system keeps the test-set UVs hidden, at https://benchmark.eucadar.com. You can submit your JSON predictions there if you want. All results are private by default, and you can make yours public at any time.

Secondly, about that:

I used the official weights of DALF_2048 to verify deformation_2 and the results are as follows,

I only provided a sample for the deformation_3 split (which is not present in the benchmark test set). If you are trying to use the deformation_3 UVs on the deformation_2 images, it would not work.

And I see that you could reproduce the DALF results on the sample dataset (I got the same numbers), so I don't see what the problem with the evaluation is. If you can be more specific, maybe I can help you better.

Jingbx commented 2 months ago

Thank you very much for your reply! I think my problem was the test error caused by the missing uv_*.png files, which has nothing to do with the DALF model. A few days ago I uploaded my test results (a JSON file) to https://benchmark.eucadar.com/submissions, but the result still shows PENDING. I don't know if it is a problem with my submission. I hope my question doesn't cause you any trouble. Thank you again for your reply! 🌹

felipecadar commented 2 months ago

Oh, I see. Sorry about that, our evaluation server is probably down. I'll figure it out later today.

Jingbx commented 2 months ago

No rush! Take care of the important things first. Thank you again for your help and for the excellent work you’ve done.

felipecadar commented 2 months ago

Eval server is back up! Thanks for reaching out :)