andreazuna89 opened this issue 2 years ago.
Hi, this should be the semantic label. Are you using the pre-trained weights or weights you trained yourself?
I am using the pre-trained weights that you made available.
You should probably try visualizing the output with matplotlib instead of printing it. NumPy truncates the array when it prints to the terminal.
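For example, a minimal sketch of that kind of check; it assumes sem_out is the 2D array returned by the FCResBackbone call discussed below:

# sem_out, radial_out = FCResBackbone(model_list[keypoint_count], input_path, normalized_depth)
import matplotlib.pyplot as plt

# Visualize the raw semantic output instead of printing it, since the printed
# array is truncated by NumPy.
plt.imshow(sem_out, cmap="viridis")
plt.colorbar(label="raw semantic score")
plt.title("sem_out (min=%.4f, max=%.4f)" % (sem_out.min(), sem_out.max()))
plt.show()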
In my case, when I run sem_out, radial_out = FCResBackbone(model_list[keypoint_count], input_path, normalized_depth), the values in sem_out are well below 0.8, so the search for values greater than 0.8 finds nothing.
I would suggest double-checking the input image, pre-processing, and ckpt. The semantic masks should be straightforward. It should not fail.
The input image, the pre-processing and the ckpts are correct. The semantic mask contains values, but they are low for the ape model:
sem_out
array([[-0.01329617, -0.02820876, -0.02045768, ..., -0.01773728,
-0.02727482, 0.00527006],
[-0.00628132, 0.0007932 , 0.01419168, ..., -0.00127108,
0.00156223, 0.02728454],
[-0.01058791, 0.00766725, 0.02309231, ..., 0.0010717 ,
0.00357385, 0.01576756],
...,
[-0.02356877, 0.01472774, 0.0129464 , ..., 0.02446496,
0.02668547, 0.02482457],
[-0.02399805, 0.00741669, 0.00434565, ..., 0.02365991,
0.02466155, 0.02573763],
[ 0.02953274, 0.01905412, 0.01773933, ..., -0.01274869,
-0.01899906, 0.00153356]], dtype=float32)
Can you tell me what range of values you get in sem_out when you run: sem_out, radial_out = FCResBackbone(model_list[keypoint_count], input_path, normalized_depth)?
Thanks
[Screenshots attached: before thresholding / after thresholding.] This is what I get when I run the code. The array in the attached screenshot is truncated by NumPy's printing. Have you tried printing the output of np.where? Is it all zeros?
This is sem_out in my case before thresholding:
np.where gives all zeros. Any suggestions?
Another thing, I had to change the line here:
if os.path.splitext(filename)[0][5:].zfill(6) in test_list:
to:
if "LINEMOD/"+class_name+'/JPEGImages/'+filename in test_list:
because the original condition never matched and the script never entered the if statement (see the sketch after this message). Does that make sense to you? And does this change affect the rest of the code?
Thanks
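A minimal sketch of the two membership checks being compared above; the class name, the example filename and the split entry are hypothetical, and it assumes the split file lists full relative paths as described later in this thread:

import os

class_name = "ape"
filename = "000685.jpg"
# split entries as they appear in the downloaded val.txt, newlines stripped
test_list = ["LINEMOD/ape/JPEGImages/000685.jpg"]

original = os.path.splitext(filename)[0][5:].zfill(6)              # -> '000005'
replacement = "LINEMOD/" + class_name + "/JPEGImages/" + filename  # -> 'LINEMOD/ape/JPEGImages/000685.jpg'

print(original in test_list)     # False: a six-digit ID never matches a full-path entry
print(replacement in test_list)  # True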
I will download and try the one on OneDrive and get back to you.
Could you double-check your 'Split/test.txt'? It should be a list of numbers only, e.g. '000000 000001 000002 000003 000005 000006 000007'. It should not impact anything if only the list contents differ; the condition just distinguishes whether a sample is in the test list or not.
I also had to modify this line:
test_list = open(opts.root_dataset + "LINEMOD/"+class_name+"/" +"Split/val.txt","r").readlines()
to:
test_list = open(opts.root_dataset + "LINEMOD/"+class_name+"/" +"val.txt","r").readlines()
since the LINEMOD data folder does not contain any Split folders. The txt list files sit directly inside the class_name folders, and they are not lists of numbers only (in my val.txt I have: LINEMOD/ape/JPEGImages/000000.jpg ....). I have downloaded the LINEMOD dataset from your link. Is there something wrong there?
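A minimal sketch (not the repository's code; the helper name and paths are assumptions) that normalizes split entries to bare six-digit IDs, so both formats seen in this thread behave the same way:

import os

def load_split_ids(split_path):
    ids = set()
    with open(split_path, "r") as f:
        for line in f:
            for token in line.split():  # handles one-ID-per-line and space-separated lists alike
                stem = os.path.splitext(os.path.basename(token))[0]  # '000000' or '.../000000.jpg' -> '000000'
                ids.add(stem.zfill(6))
    return ids

# Usage, with paths assumed from this thread and plain numeric image names like '000685.jpg':
# test_ids = load_split_ids(opts.root_dataset + "LINEMOD/" + class_name + "/val.txt")
# if os.path.splitext(filename)[0].zfill(6) in test_ids: ...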
The split file format might have changed since it is hosted by PVNet. I will generate, upload, and update the split myself.
The split link is now available in the README (https://queensuca-my.sharepoint.com/:u:/g/personal/16yw113_queensu_ca/EUdgZviAX0RHo8y38PDWt-EBLjTEg6AxSWwJdjWFWhPR9w?e=bx8I62).
It seems there is some issue with the OneDrive share link for the ckpts. The performance has changed somehow. I am looking into it and will update here once it is resolved.
Ok. I have downloaded the split list you gave me, but I still do not understand the condition that checks whether the image is in the test list. For example, if filename='000685.jpg', os.path.splitext(filename)[0][5:].zfill(6) evaluates to '000005', which completely changes the name of the image being looked up (see the sketch below). Why is that?
By the way, even with the new lists, the sem_out output by the model still has no values greater than 0.8, and the np.where result is all zeros.
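A small sketch of how that expression behaves for the two filename conventions involved; the 'color' prefix is an assumption about the original-LINEMOD naming the condition expects:

import os

def to_id(filename):
    # the condition used in AccumulatorSpace: drop the extension, drop the
    # first five characters, then zero-pad to six digits
    return os.path.splitext(filename)[0][5:].zfill(6)

print(to_id("color685.jpg"))  # 'color685' -> '685' -> '000685' (original-LINEMOD-style name)
print(to_id("000685.jpg"))    # '000685'   -> '5'   -> '000005' (plain numeric name, hence the mismatch)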
You will need to download the ORIGINAL LINEMOD as well. The condition is based on the sample images in the original LINEMOD.
I have already downloaded it, but the if statement does not depend on ORIGINAL LINEMOD. Please fix the dataset download links and point them to exactly what you used to run the code. The links you provided are not correct.
OK, the link and the condition are fixed. I still have no idea why the sem failed on your side, since it works on both my local machine (Windows) and my remote server (Linux).
Sorry, the link to LINEMOD_ORIG is still not working.
Could you try now? It is supposed to work now. I also looked into the sem failure; it was probably due to my data-parallel multi-GPU training. Multiple GPUs are required to make it work properly. I will upload ckpts that work on a single GPU soon and update AccumulatorSpace to handle the sem failure properly.
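Not the author's fix, but a common PyTorch workaround that may be relevant here: when a checkpoint was saved from an nn.DataParallel model and is loaded on a single GPU, the saved state_dict keys carry a 'module.' prefix that a plain model does not expect. A hedged sketch, with the checkpoint filename assumed from the class_name+"_pt"+str(i)+".pth.tar" naming mentioned below:

import torch

checkpoint = torch.load("ape_pt1.pth.tar", map_location="cpu")
state_dict = checkpoint["state_dict"] if "state_dict" in checkpoint else checkpoint

# Strip the 'module.' prefix that nn.DataParallel adds to every parameter name.
cleaned = {k[len("module."):] if k.startswith("module.") else k: v
           for k, v in state_dict.items()}

# model = <the backbone class defined in this repo>
# model.load_state_dict(cleaned, strict=True)  # strict=True surfaces any remaining key mismatch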
There are still two typos in the AccumulatorSpace script: 1) line 492, model_path = opts.model_dir + class_name+"_pt"+str(keypoint_count)+".pth.tar" should be model_path = opts.model_dir + class_name+"_pt"+str(i)+".pth.tar", and 2) line 519, if filename in test_list: should be if filename[:6] in test_list: .
The sem_out problem still remains. Let us know when you solve it.
Thanks
Hi, did you solve the problem? I recently ran into the same problem you mentioned above.
Hi, while testing RCVPose I get a sem_out variable that is always all zeros at line 556 of AccumulatorSpace.py. To solve this, is it possible that we have to multiply the variable by 1000, like this: sem_out = np.where(sem_out*1000>0.8,1,0)? (See the diagnostic sketch below.)
Thanks
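A quick diagnostic sketch for that question (not a verified fix): check the raw range of sem_out before rescaling, since whether the x1000 factor makes sense depends on why the scores are near zero in the first place. sem_out is assumed to come from the FCResBackbone call earlier in this thread:

import numpy as np

# sem_out, radial_out = FCResBackbone(model_list[keypoint_count], input_path, normalized_depth)
print("sem_out range:", float(sem_out.min()), float(sem_out.max()))
print("pixels > 0.8:        ", int(np.count_nonzero(sem_out > 0.8)))
print("pixels > 0.8 (x1000):", int(np.count_nonzero(sem_out * 1000 > 0.8)))

sem_mask = np.where(sem_out > 0.8, 1, 0)  # thresholding as at AccumulatorSpace.py line 556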