Open tiger-bug opened 4 years ago
Not sure if this helps you at all, and I would like to know if you figure this out. But here is a snippet of the article:
"Each testing point cloud is sampled multiple times to make sure all the points are evaluated at least r (r = 10 in our experiments) times at testing time"
Can this maybe explain why there are repeats?
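If each point really is evaluated at least r times, the repeats could be merged by a majority vote over the labels predicted for the same original index. A minimal sketch of that idea (the arrays here are made up, not from the actual prediction files):

```python
import numpy as np

# Hypothetical repeated predictions: each row is (original point index, predicted label)
votes = np.array([[0, 1], [0, 1], [0, 2],
                  [1, 3], [1, 3], [1, 3]])

n_points = votes[:, 0].max() + 1
n_classes = votes[:, 1].max() + 1

# Count how often each label was predicted for each point, then take the argmax
counts = np.zeros((n_points, n_classes), dtype=np.int64)
np.add.at(counts, (votes[:, 0], votes[:, 1]), 1)
merged = counts.argmax(axis=1)
print(merged)  # -> [1 3]
```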
I am having a similar issue, have you made any progress on this?
Good morning everyone!
Thank you for the comments. Nicolai Mogensen, I did not recall that part of the article, but it does make sense why there are repeats. It seems I will have to use indices_split_to_full instead of data_num. I will work on this more. I went back and worked on some training code, so I stepped away from this for a while. I will post more if I figure it out. Thanks!
Please let me know if you are able to recover the indices from the original data with "indices_split_to_full". I am finding that the normalized blocks stored in "data" are correct, but when I try to map them back to the original data in the merge using "indices_split_to_full", the blocks are not making sense.
Sorry for the late response. Here's something I've come up with that may work (may need to check my work though):
# Import modules
import h5py
import numpy as np
# Load in h5 file (just as an example)
pred_file = '/path/to/h5-files/file_pred.h5'
# Load in h5 file read-only
data = h5py.File(pred_file, 'r')
img = data['data'][...]
data_num = data['data_num'][...]
indices = data['indices_split_to_full'][...]
label_seg = data['label_seg'][...]
confidence = data['confidence'][...]
max_ind = np.max(indices)  # Get max index into the original cloud
# Allocate max_ind + 1 slots so the index max_ind itself fits; an array of
# size max_ind with indices.flatten() - 1 avoids the out-of-range error but
# shifts every label onto the wrong point if the indices are 0-based
label_flat = -1 * np.ones(max_ind + 1, dtype=np.int32)  # I make it -1 since a label of '0' is an actual label
label_flat[indices.flatten()] = label_seg.flatten()
Not sure how to loop through the rest of them, but I think this is right?... Not sure. I'm also not sure how to add in confidence either.
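One way the confidence could be folded in: loop over the blocks and, for each original index, keep the label whose confidence is the highest seen so far. This is just a sketch with made-up toy arrays in the same shapes as the snippet above (blocks x points), not a tested drop-in:

```python
import numpy as np

# Hypothetical per-block arrays: indices maps each sampled point back to its
# index in the full cloud, label_seg holds predicted labels, confidence the
# per-point confidences
indices = np.array([[0, 1, 2], [1, 2, 3]])
label_seg = np.array([[1, 2, 2], [2, 3, 3]])
confidence = np.array([[0.9, 0.4, 0.7], [0.8, 0.6, 0.5]])

max_ind = np.max(indices)
label_flat = -1 * np.ones(max_ind + 1, dtype=np.int32)   # -1 marks "never seen"
conf_flat = np.zeros(max_ind + 1, dtype=np.float32)

# For each original index, keep the most confident prediction across blocks
for blk_idx, blk_lbl, blk_conf in zip(indices, label_seg, confidence):
    for i, lbl, c in zip(blk_idx, blk_lbl, blk_conf):
        if c > conf_flat[i]:
            conf_flat[i] = c
            label_flat[i] = lbl

print(label_flat)  # -> [1 2 2 3]
```

With real data you would also want to limit each block to its first data_num points, since the rest are padding.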
Good morning,
I'm trying to run some of my own data and I am at the evaluation section. I'm calculating a confusion matrix and the numbers seem to be a bit strange so I just want to make sure I'm doing it correctly or figure out what I'm doing wrong. Here is a snippet of my code.
Now, when I look at tot, max_ind, and np.sum(result), I would have thought that tot == max_ind == np.sum(result); however, tot > max_ind. Is this to be expected? Because max_ind is the maximum index point in the test set, so I'm not sure how the total points tested can be greater than that unless there are repeats.
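If the repeated evaluations are indeed the cause, the total will exceed the number of distinct points, so a quick sanity check is to compare against the number of unique indices rather than the raw total. The array below is illustrative, not from the actual test set:

```python
import numpy as np

# Illustrative indices with repeats, as overlapping test samples would produce
indices = np.array([0, 1, 2, 2, 3, 3, 3])

tot = indices.size                  # total evaluated points, counts repeats
n_unique = np.unique(indices).size  # distinct original points
max_ind = indices.max()             # highest original index

print(tot, n_unique, max_ind)  # -> 7 4 3
# tot > max_ind whenever points are evaluated more than once, so for the
# confusion matrix, deduplicate by index (or compare n_unique, not tot,
# against the original cloud size).
```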