Shathe / ML-Superpixels


Question about the percentage of sparse points #2

Closed JamesPatrick1014 closed 4 years ago

JamesPatrick1014 commented 4 years ago

@Shathe thanks a lot for including the source code for the over-segmentation algorithms; things are running smoothly now.

I did have a question about the number of sparse points generated for each dataset, though. In your two papers you mention sampling values like .01, .001, and .0001, which correspond to 1%, .1%, and .01% of the total number of pixels in an image. But in most of the tables both the percentage of pixels and an actual number in parentheses are provided (e.g. “Ours from 1% of pixels or 3,000”).

I’m a bit confused because the datasets used all have different dimensions, and none of them have dimensions where 1% of the total number of pixels is 3,000. I've probably missed something obvious, but I can’t quite figure out what it is. Would you mind clarifying?
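For concreteness, this is the conversion I have in mind (the dimensions below are just the commonly reported ones for CamVid, not a claim about what the paper used):

```python
# Sketch of how I'm converting a fraction of pixels into a sparse-point count.
datasets = {
    "CamVid (downsampled)": (480, 360),
    "CamVid (original)": (960, 720),
}

for name, (w, h) in datasets.items():
    total = w * h
    for frac in (0.01, 0.001, 0.0001):  # the 1%, 0.1%, 0.01% from the papers
        print(f"{name}: {frac:.2%} of {total} pixels = {total * frac:.0f} points")
```

None of these come out to 3,000, which is what threw me off.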

Thanks again for the repo! Glad I don’t need to create masks for images anymore 😀

JamesPatrick1014 commented 4 years ago

So I believe I found the issue.

I was trying to recreate the results in table 12, specifically for the CamVid dataset. Because all of the images are 480 x 360 pixels, I was taking the literal 0.1% of that, which is ~173, but the end results from the evaluation script were lower than those reported in the paper. Then I accidentally ran the code with the default parameter of 500 points (which is about 0.3%) and got the results I was looking for.
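The quick check I ran, in case it helps anyone else:

```python
# Sanity check for CamVid at 480x360, the resolution used in the repo.
total = 480 * 360              # 172,800 pixels per image
print(f"{total * 0.001:.0f}")  # 0.1% of pixels -> ~173 points (what I tried first)
print(f"{500 / total:.2%}")    # the default 500 points -> ~0.29% of pixels
```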

So, a slight difference, but 🤷‍♀️ closing for now.

Shathe commented 4 years ago

Hi there! As far as I can recall, I used 200 sparse labels for that experiment (to keep it a round number), which is about 0.116%.

In fact, the original CamVid dataset is 720x960, but I use the downsampled version because deep learning papers tend to report results at 480x360 resolution.
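For the record, the arithmetic at that resolution:

```python
# 200 sparse labels over the 480x360 images used here:
print(f"{200 / (480 * 360):.3%}")  # -> 0.116% of the pixels
```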

Have you tried comparing the results with the --gridlike parameter set to 0 versus 1?
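Roughly, the difference is between placing the sparse labels on a regular grid versus at random. This is only an illustrative sketch of that idea, not the actual code from this repo (the function name and details are made up):

```python
import numpy as np

def sample_sparse_labels(h, w, n_points, gridlike=True, seed=0):
    """Illustrative sketch: pick pixel coordinates for sparse labels
    either on a regular grid or uniformly at random."""
    if gridlike:
        # Regular grid: roughly n_points at evenly spaced positions,
        # with the grid aspect ratio matched to the image.
        rows = max(1, int(np.sqrt(n_points * h / w)))
        cols = max(1, n_points // rows)
        ys = np.linspace(0, h - 1, rows, dtype=int)
        xs = np.linspace(0, w - 1, cols, dtype=int)
        return [(y, x) for y in ys for x in xs]
    # Random placement: n_points pixels drawn without replacement.
    rng = np.random.default_rng(seed)
    flat = rng.choice(h * w, size=n_points, replace=False)
    return [divmod(int(i), w) for i in flat]

# Example: ~200 points on a 360x480 CamVid image, grid vs. random.
grid_pts = sample_sparse_labels(360, 480, 200, gridlike=True)
rand_pts = sample_sparse_labels(360, 480, 200, gridlike=False)
print(len(grid_pts), len(rand_pts))
```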

The code is not exactly the one from the paper and some minor things may have changed (e.g., the way I decrease the number of superpixels at each iteration), but it should give the same or very similar results.

Thanks for the question and interest!