Closed dardanbekteshi closed 5 years ago
I finally figured out what the problem was. When you change the training and test sets, you need to delete the previously generated lmdb folders (if any) before running the gen_lmdb.py script.
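For anyone hitting the same thing, a minimal sketch of that cleanup step. The folder names `train_lmdb` and `test_lmdb` are assumptions; substitute whatever paths your gen_lmdb.py actually writes to:

```python
import shutil
from pathlib import Path

# Hypothetical lmdb output folders -- adjust to match what gen_lmdb.py writes.
LMDB_DIRS = ["train_lmdb", "test_lmdb"]

def clean_stale_lmdb(root="."):
    """Delete previously generated lmdb folders so gen_lmdb.py starts fresh."""
    for name in LMDB_DIRS:
        path = Path(root) / name
        if path.is_dir():
            shutil.rmtree(path)
            print(f"removed stale lmdb: {path}")

clean_stale_lmdb()
```

Running this before every regeneration guarantees the lmdb always matches the current t5.txt / v5.txt.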
I'm sorry for the slow reply; we were very busy as the CVPR 2019 deadline was approaching. I think you are right: the lmdb won't overwrite itself when you rerun gen_lmdb.py.
No worries. And good luck with the conference! :smiley:
Training and evaluating the network on the already set-up training and testing sets (i.e. t5.txt and v5.txt) works fine. But when I try to change the contents of these files by adding/removing mesh indexes, the evaluation part fails, even though I do make sure to run the lmdb generation before eval-solve.py. I am getting the following error:
```
File "eval-solve.py", line 109, in
IndexError: index 17408 is out of bounds for axis 0 with size 17408
```

In particular, it is the following line that raises the error (I have changed some variable names, but that should not be a problem): https://github.com/yuangan/3D-Shape-Segmentation-via-Shape-Fully-Convolutional-Networks/blob/17235112b944bf1b85e03e0d5a803b5c2968f7dd/caffe-plier/eval-solve.py#L44
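The error itself is the standard NumPy out-of-bounds failure: an array with 17408 entries has valid indices 0 through 17407, so indexing position 17408 (one past the end) raises exactly the reported message. A tiny reproduction (the `labels` name is illustrative, not from the repo):

```python
import numpy as np

# An array with 17408 entries has valid indices 0 .. 17407.
labels = np.zeros(17408)

try:
    labels[17408]  # one past the end, exactly the reported failure
except IndexError as e:
    print(e)  # -> index 17408 is out of bounds for axis 0 with size 17408
```

This usually means the array was built from data of one size (e.g. an old lmdb) while the index came from data of another size (e.g. the updated t5.txt / v5.txt).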
Could changing the t5.txt and v5.txt files be causing the error?
I am not entirely sure what the following lines do: https://github.com/yuangan/3D-Shape-Segmentation-via-Shape-Fully-Convolutional-Networks/blob/17235112b944bf1b85e03e0d5a803b5c2968f7dd/caffe-plier/eval-solve.py#L31-L39
Any explanation or hint is more than welcome! ping @yuangan