erfannoury / SuperEdge

Supervised Edge Detection
MIT License

Evaluation Code #3

Closed. yassersouri closed this issue 9 years ago.

yassersouri commented 9 years ago

Hello

As I told you, we need a solid evaluation code. It does not matter which programming language it is written in, but it needs to be well done.

Please write this code with additional documentation.

Note that we are going to make our own datasets for boundary detection, so the evaluation code must specify how a dataset should be structured. And obviously, it must also specify how the predictions should be saved so that the evaluation code works without problems.

Special thanks to you!

erfannoury commented 9 years ago

The BSDS dataset comes with a complete evaluation code, implemented in MATLAB. I initially wanted to port it to Python, but after our discussions I realized that this is not necessary, and a reimplementation could be prone to bugs and hence produce wrong evaluation results. So for now we can use the stock evaluation code, though we could add extra metrics to our evaluation, such as the MSSIM similarity measure or other measures we come up with.
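For reference, a minimal sketch of how an MSSIM comparison between a predicted boundary probability map and a ground-truth map could look in Python (this assumes scikit-image and imageio are available; the file names are hypothetical, not part of our pipeline):

```python
# Sketch: MSSIM between a predicted boundary probability map and a
# ground-truth boundary map. Assumes scikit-image and imageio are
# installed; the file names are hypothetical.
import imageio.v2 as imageio
import numpy as np
from skimage.metrics import structural_similarity

# Load both maps as float arrays scaled to [0, 1].
pred = imageio.imread("pred_boundary.png").astype(np.float64) / 255.0
gt = imageio.imread("gt_boundary.png").astype(np.float64) / 255.0

# structural_similarity returns the mean SSIM over local windows,
# i.e. the MSSIM aggregate.
score = structural_similarity(pred, gt, data_range=1.0)
print(f"MSSIM(pred, gt) = {score:.4f}")
```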

"Note that we are going to make our own datasets for boundary detection. So the evaluation code must specify how the dataset should be made."

I didn't get this part; could you clarify?

"And obviously, it must also specify how the predictions should be saved so that the evaluation code works without problems."

For the default evaluation code, results should be saved as PNG images or .mat files. In this case, a result is the posterior probability map giving, for each pixel, the probability of it being a boundary pixel.
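As an illustration, a prediction could be saved in either format roughly like this (a sketch assuming NumPy, SciPy, and imageio; the variable and file names are hypothetical):

```python
# Sketch: saving a per-pixel boundary probability map in the two
# formats the evaluation code accepts. Variable and file names are
# hypothetical.
import numpy as np
import imageio.v2 as imageio
from scipy.io import savemat

# prob: H x W float array; each entry is the posterior probability
# that the corresponding pixel is a boundary pixel, in [0, 1].
prob = np.random.rand(240, 320)

# Option 1: 8-bit grayscale PNG (probabilities quantized to 0..255).
imageio.imwrite("image_0001_pred.png", (prob * 255).astype(np.uint8))

# Option 2: MATLAB .mat file, which keeps full float precision.
savemat("image_0001_pred.mat", {"prob": prob})
```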

yassersouri commented 9 years ago

Very well then.

I should take a look at the evaluation code myself.


erfannoury commented 9 years ago

Should I include the evaluation code in this repository?

yassersouri commented 9 years ago

Why not.


erfannoury commented 9 years ago

Alright, I'll add it too.

erfannoury commented 9 years ago

Everything related to this issue is done (at least for now), so I'm closing it.