ImageMonkey / imagemonkey-core

ImageMonkey is an attempt to create a free, public open source image dataset.
https://imagemonkey.io

feedback (showing another person) #267

Open dobkeratops opened 4 years ago

dobkeratops commented 4 years ago

OK, what I tried to do was show this dataset and site to someone else I was collaborating with. What happened was: [1] I showed him 'explore dataset'; [2] suggested 'type road to look for roads'; [3] he typed frog; [4] he told me "I see a picture of a girl :("; [5] I told him, "OK, it's got a frog and a girl; the idea is we annotate both" .. but basically this had given him the impression 'it's got wrong images', when really it was 'incomplete labelling'.

Suggestions: [1] make it default to searching for existing annotations, if you keep this explore view

[2] One of my other suggestions is to just merge explore with search+unified, e.g.

(i) 'explore' lets you see the annotations, but what if the annotations were always highlighted in any browse view (and you just happened to always have a filter: existing annotations - yes/no/maybe)?

(ii) explore also lets you query the data: that's been really useful for debugging - but how about just adding that to the unified toolbar? E.g. a [?] which you can press any time during unified annotation, and it'll show you the JSON annotation data, perhaps eventually with additional stats (% pixels covered, ..)
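A stat like "% pixels covered" could be derived straight from the annotation geometry. A minimal sketch, assuming a hypothetical rectangle-based annotation schema (dicts with `left`/`top`/`width`/`height`) rather than ImageMonkey's actual JSON format:

```python
# Hypothetical sketch: compute "% pixels covered" from rectangle annotations.
# The schema (left/top/width/height dicts) is an assumption for illustration,
# not ImageMonkey's real annotation format.

def percent_pixels_covered(image_w, image_h, rects):
    """Return the percentage of image pixels covered by at least one rect."""
    covered = [[False] * image_w for _ in range(image_h)]
    for r in rects:
        # Clamp each rect to the image bounds, then mark its pixels covered.
        for y in range(max(0, r["top"]), min(image_h, r["top"] + r["height"])):
            for x in range(max(0, r["left"]), min(image_w, r["left"] + r["width"])):
                covered[y][x] = True
    total = sum(row.count(True) for row in covered)
    return 100.0 * total / (image_w * image_h)

# e.g. one 50x50 rect on a 100x100 image covers 25% of the pixels
print(percent_pixels_covered(100, 100, [{"left": 0, "top": 0, "width": 50, "height": 50}]))
```

Rasterizing like this handles overlapping annotations correctly (overlapped pixels are only counted once), at the cost of O(width × height) memory; for polygon annotations you'd rasterize the polygon instead of a rect.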

As you know, the overall theme here is: I think it would be possible to simplify the presentation and navigation of the site by rolling many functions into Search+Unified, then keeping some simpler (automatic workflow) modes beside it.

bbernhard commented 4 years ago

oh, that's a tricky one :D

First of all, many thanks for the feedback. Getting feedback from a fresh pair of eyes is always really valuable!

As you correctly mentioned, the easiest solution would probably be to tick the "Annotations only" checkbox by default. The main reason why it isn't already that way is that we have far more labeled objects than annotated objects. So, if we tick the "Annotations only" checkbox by default, we might have some search expressions that return 0 results (although we have hundreds of labeled objects matching that expression).

So, I guess if we want to "show off" and impress new users, it might be better not to tick the "Annotations only" checkbox, as we get way more results that way. But I agree, it's currently a bit misleading. I don't hold any strong opinions on that one, so if you prefer it the other way round, we can also do that :)

The reason why I don't want to get rid of that view completely is that this view displays exactly the data that's then also used when training your own image classifier/object detector. So, if you want to train your own cat/dog classifier, you could first use the explore mode to quickly verify that the input data is sane (before you start training your own neural net). Another nice addition is that it shows you some metadata too (if you check the "Export" radio button).

Regarding merging the functionality into the unified mode: I think this should also be possible. But wouldn't we have the same problem there too? If you searched for frog in the unified browse-based mode, the picture with the girl and the frog would appear there too, no?

I think it's a great idea to extend the unified browse mode with more functionality! I am just wondering if the unified mode and the explore view can coexist (maybe rename the explore view or add some additional description to make it clearer?), or if we should get rid of the explore view altogether. Personally, I really like the unified mode... but I am not sure if it's the best entry point for new users. As the unified mode is more targeted towards power users (and will probably get more and more "pro functionality" in the future), I am a bit afraid that all the bells and whistles will overwhelm and put off ordinary users. But I'm really not sure about that... it's just a feeling :)

dobkeratops commented 4 years ago

The reason why I don't want to get rid of that view completely is that this view displays exactly the data that's then also used when training your own image classifier/object detector.

Right - and it's been useful for debugging; indeed we used it to track down mistaken labels before. I just wonder if eventually something like a truly unified search page with an [advanced +] options box might put everything in one place (we wouldn't kill the explore page, just absorb its features into another place with existing overlap). The tool has been through an evolutionary path.. take time to consider what the best cleanup is.

But wouldn't we have the same problem there too? If you searched for frog in the unified browse-based mode, the picture with the girl and the frog would appear there too, no?

One idea is an explore view that actually crops each shown image to the annotated areas (so it would have shown just the small frog region) - that might be useful for verification. It's just a case of perception for that other user.. he was expecting frogs, but maybe it just needs to be made more obvious to him (somehow) that the images hold many objects. There was no mistake; there was definitely a frog in the image.
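The crop could just be the union bounding box of the matching annotations, padded a little and clamped to the image. A rough sketch (the rectangle schema and `margin` default are assumptions for illustration):

```python
# Hypothetical sketch: pick a crop box that frames the annotated regions.
# Assumes rect annotations as left/top/width/height dicts (illustrative only).

def crop_box_for_annotations(image_w, image_h, rects, margin=10):
    """Union bounding box of all annotation rects, padded by `margin`
    and clamped to the image. Returns (left, top, right, bottom)."""
    if not rects:
        return (0, 0, image_w, image_h)  # nothing annotated: show the full image
    left = min(r["left"] for r in rects)
    top = min(r["top"] for r in rects)
    right = max(r["left"] + r["width"] for r in rects)
    bottom = max(r["top"] + r["height"] for r in rects)
    return (max(0, left - margin), max(0, top - margin),
            min(image_w, right + margin), min(image_h, bottom + margin))

# A small frog annotation in a large photo yields a tight crop around it:
box = crop_box_for_annotations(640, 480, [{"left": 100, "top": 50, "width": 40, "height": 30}])
print(box)  # (90, 40, 150, 90)
```

The resulting tuple could be fed straight into an image library's crop call (e.g. Pillow's `Image.crop`, which takes the same `(left, top, right, bottom)` box).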

but I am not sure if it's the best entry point for new users?

Hopefully, by showing the most important options first, but with an 'advanced..' box that can expand to show more, it can be simultaneously friendly to new users and fully featured for experienced ones. The experienced user can always use a hotkey, so they won't be inconvenienced by unfolding the UI.

I think you can get casual users into unified eventually, since it pretty much replicates the annotation and add-labels workflows; in this specific instance, seeing that the image has several labels would actually have made things clearer to that new user. Perhaps the main 'instruction' at the top can change: if you fire up an unlabelled image it will ask you "Add labels.."/"What objects do you see?", then once there's a label without an annotation it can inform you "Annotate..". Perhaps detecting the state where most of the screen is still un-annotated can give a hint - "add more labels" - or it could explicitly put some question marks on other areas of the screen ("What is here?").
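That state-driven prompt idea boils down to a tiny decision function over the image's current labels, annotations, and coverage. A sketch, with entirely made-up thresholds and wording (none of this is existing ImageMonkey behaviour):

```python
# Hypothetical sketch of the contextual instruction text described above.
# `labels` is the image's label list, `annotated` the subset that already has
# annotations, `covered_fraction` how much of the image annotations cover.
# All names/thresholds are illustrative assumptions.

def instruction_for(labels, annotated, covered_fraction):
    """Pick the headline prompt for the unified view from the image state."""
    if not labels:
        return "What objects do you see? Add labels.."
    unannotated = [label for label in labels if label not in annotated]
    if unannotated:
        return "Annotate '%s'.." % unannotated[0]
    if covered_fraction < 0.5:
        # Everything labelled so far is annotated, but most of the image
        # is still untouched - likely there are more objects to label.
        return "Most of the image is still unlabelled - add more labels?"
    return "Looks complete - thanks!"
```

So the frog/girl image with an unannotated 'frog' label would prompt "Annotate 'frog'..", and once both objects are annotated but coverage is still low, it would nudge towards adding more labels - which is exactly the hint that would have helped the confused new user.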

.. it might be possible to turn it into a kind of self-tutoring mode.. (actually, a spot question at a few random points might be an interesting way to close gaps in the label list generally.. there's a completely orthogonal idea of 'point labels' as a halfway house)