commons-app / apps-android-commons

The Wikimedia Commons Android app allows users to upload pictures from their Android phone/tablet to Wikimedia Commons
https://commons-app.github.io/
Apache License 2.0

Suggest depictions using image recognition #75

Open · nicolas-raoul opened this issue 8 years ago

nicolas-raoul commented 8 years ago

It would be great if the app proposed the item Elephant when I take a picture of an elephant.

There are some APIs for this; I am not sure whether any is usable for free. The API would provide a few words such as {elephant, zoo}, and we would perform a Wikidata item search on these words and add the resulting items to the list of suggestions.
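
As a sketch of the lookup step: the `wbsearchentities` endpoint below is Wikidata's real public search API, while the helper function and its use are illustrative:

```kotlin
import java.net.URL
import java.net.URLEncoder
import org.json.JSONObject

// Illustrative helper: search Wikidata for items matching one recognized word
// (e.g. "elephant") and return candidate item IDs such as "Q7378".
// Must be called off the main thread on Android.
fun searchWikidataItems(word: String): List<String> {
    val query = URLEncoder.encode(word, "UTF-8")
    val url = "https://www.wikidata.org/w/api.php" +
        "?action=wbsearchentities&search=$query&language=en&format=json"
    val response = JSONObject(URL(url).readText())
    val results = response.getJSONArray("search")
    return (0 until results.length()).map { results.getJSONObject(it).getString("id") }
}

// Usage idea: union the results for each recognized word, e.g. {elephant, zoo},
// and show the matched items as depiction suggestions.
```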

If using an online service, the feature should probably be opt-in, since the privacy policy of the API will most probably be incompatible with the Wikimedia privacy policy.

nicolas-raoul commented 7 years ago

A friend has left Google to create an AI company and is looking for people to test his library. He promises to open source it soon. Unlike Google libraries, it is usable offline.

This looks like a great opportunity to develop this feature, since no such library has existed so far (as far as I know). Anyone interested in working on this right now? I can send you the library. Thanks a lot!

misaochan commented 7 years ago

Sounds great, and yes, it should definitely be opt-in. I could chuck this into my IEG renewal proposal, but that probably won't be for another couple of months, so anyone who wants to work on it sooner is most welcome.

nicolas-raoul commented 6 years ago

There is a grant proposal to create an API for that: https://meta.wikimedia.org/wiki/Grants:Project/AICAT

misaochan commented 6 years ago

@nicolas-raoul Sounds very useful! How did you hear of it? I wanted to post an endorsement, but their community notifications section is still empty so I was hesitant. :)

nicolas-raoul commented 6 years ago

@misaochan I learned about it here: https://www.wikidata.org/wiki/Wikidata:Project_chat#Interesting_initiative I added an endorsement.

misaochan commented 6 years ago

I did the same. :) Even if the grant is approved though, it will probably be about a year before the API is usable (the grant is 8 months, and I believe the next Project Grant round starts in July).

alexeymorgunov commented 6 years ago

Thanks for the endorsement @nicolas-raoul! I am one of the guys behind the proposal. We welcome any suggestions and advice!

nicolas-raoul commented 6 years ago

Recent WMF blog post https://blog.wikimedia.org.uk/2018/02/structured-data-on-commons-is-the-most-important-development-in-wikimedias-usability/ :

show[...] the user images with suggested ‘fields’, allowing the user to then swipe left or right to say whether or not the image should be tagged with the suggested category. This would allow the community to help organise the uncategorised images on Commons much more efficiently.

This sounds very similar to the present issue. Categories will become Structured Commons properties in the future, but that does not make much difference from the point of view of this issue.

The idea of swiping left/right is interesting; let's gather the pros and cons.

Pros of swiping:

Cons of swiping:

The other new idea we can steal from this blog is that category suggestion could be used not only for the picture I just uploaded, but also for uncategorized pictures uploaded by other people.

aaronpp65 commented 6 years ago

Hi, my name is Aaron. I am interested in contributing to the Commons app for GSoC 2018, to allow users to browse. I was wondering if I could use image processing: when the user uses the camera to take a photo, the app scans the area and gives possible suggestions, which could include letting users see other people's work, etc. We could use TensorFlow Lite and an image classification model like Inception-v3. Inception-v3 has already been tested successfully in TensorFlow Lite; they say, and I quote, "the model is guaranteed to work out of the box". Do you think this could work? Looking forward to suggestions.

nicolas-raoul commented 6 years ago

@aaronpp65 basic questions about this solution:

Also, if I understand correctly, that library gives you a word like "leopard" or "container ship", right? How do you propose matching these strings to:

aaronpp65 commented 6 years ago

It's machine learning on the go, without the need for connectivity. TensorFlow Lite is < 300 KB in size when all operators are linked, and <= 200 KB when using only the operators needed for the standard supported models (MobileNet and Inception-v3). TensorFlow is open-source software, released under the Apache 2.0 license.

aaronpp65 commented 6 years ago

Yes, the library gives you a word like "leopard" or "container ship", but that is when we use a pre-trained Inception v3, which is trained on the ImageNet dataset. Instead of using a pre-trained model, we can train the Inception model on our own Wikimedia Commons dataset; then we will get strings similar to those used on Commons. We can then query the string in the Commons database and retrieve other people's work. But, as you asked before, we will need connectivity for the database-querying part.
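
For reference, a minimal sketch of what on-device classification with TensorFlow Lite could look like. The `Interpreter` API is real; the 224x224 float input shape, the normalization, and the label list are assumptions that depend on the exact model:

```kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder
import org.tensorflow.lite.Interpreter

// Sketch of a classifier wrapping a bundled .tflite model (e.g. MobileNet or
// Inception-v3 converted to TFLite) plus a parallel list of ImageNet labels.
class ImageClassifier(modelBytes: ByteBuffer, private val labels: List<String>) {
    private val interpreter = Interpreter(modelBytes)

    fun topLabels(bitmap: Bitmap, k: Int = 3): List<String> {
        val scaled = Bitmap.createScaledBitmap(bitmap, 224, 224, true)
        val input = ByteBuffer.allocateDirect(4 * 224 * 224 * 3).order(ByteOrder.nativeOrder())
        for (y in 0 until 224) for (x in 0 until 224) {
            val px = scaled.getPixel(x, y)
            // Normalize each RGB channel to [0, 1]; exact preprocessing is model-specific.
            input.putFloat(((px shr 16) and 0xFF) / 255f)
            input.putFloat(((px shr 8) and 0xFF) / 255f)
            input.putFloat((px and 0xFF) / 255f)
        }
        val scores = Array(1) { FloatArray(labels.size) }
        interpreter.run(input, scores)
        return scores[0].withIndex()
            .sortedByDescending { it.value }
            .take(k)
            .map { labels[it.index] }
    }
}
```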

nicolas-raoul commented 6 years ago

@aaronpp65 Very impressive, thanks! Requiring connectivity during training is no problem, of course. But using Commons as a training set unfortunately sounds difficult, because:

(gallery of thumbnails from a container-ship-related Commons category, with inconsistent file names such as port_of_salem_container_ship.tiff, 2016-08-05_frachtschiffreise_stockwerkbetten_auf_containerfeeder_ms_dornbusch, container_terminal_layout_nt, 2006container_fleet, container-ship-rates.svg, mv-ascension-routes, towage_numbering_system_by_lisa_staugaard.pdf, rock_near_sutro_baths, and mooring_boat_with_container_ship)

So I guess we'd be better off trying to map ImageNet categories to Commons or Wikidata:
https://opendata.stackexchange.com/questions/12541/mapping-between-imagenet-and-wikimedia-commons-categories
https://opendata.stackexchange.com/questions/12542/mapping-between-imagenet-and-wikidata-entities
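
If such a mapping existed, the app-side lookup could stay trivial: ImageNet classes are identified by WordNet synset IDs, so a small bundled table could translate classifier output into Wikidata items. A hypothetical sketch (the table entries are examples to illustrate the shape, not a real dataset):

```kotlin
// Hypothetical bundled mapping from WordNet synset IDs (used by ImageNet)
// to Wikidata item IDs. Example entries only; a real table would be generated
// from a curated ImageNet-to-Wikidata mapping.
val synsetToWikidata: Map<String, String> = mapOf(
    "n02084071" to "Q144",  // dog
    "n02503517" to "Q7378"  // elephant
)

fun suggestItems(predictedSynsets: List<String>): List<String> =
    predictedSynsets.mapNotNull { synsetToWikidata[it] }
```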

aaronpp65 commented 6 years ago

Yeah. So mapping ImageNet to Commons should do the trick.

aaronpp65 commented 6 years ago

@nicolas-raoul Could you please check my draft and give feedback? Thanks.

nicolas-raoul commented 6 years ago

@aaronpp65 Could you please post a link to your draft? Thanks!

aaronpp65 commented 6 years ago

https://docs.google.com/document/d/1am3EbhBrwaYn2_LLKAmnrXlzTGVWgttCdALAV4fy_NU/edit?usp=sharing @nicolas-raoul Here is the link to the draft. I should make one on Phabricator too, right?

nicolas-raoul commented 6 years ago

Yes, please post it on Phabricator, thanks :-)

nicolas-raoul commented 6 years ago

Could you please explain the following steps in more detail:

- Convert the model to the TensorFlow Lite file format.
- Integrate the converted model into the Android application.

Also, please add a step-by-step description of what the user will see, which screen they will go to, and which button they will click, so that we understand what this project will bring to the app. Feel free to include hand-drawn screens to make it clearer if necessary.

Thanks! :-)

aaronpp65 commented 6 years ago

@nicolas-raoul I have made the required changes and added a basic wireframe. Thanks for the feedback.

nicolas-raoul commented 6 years ago

@aaronpp65 Thanks! If I understand correctly, your idea would work like this:

  1. I take a picture of a butterfly
  2. I upload it to Commons via the app
  3. I go to the app's gallery and touch my picture
  4. In the details view that opens, pictures that are similar to my picture (other pictures of butterflies) are shown below my picture.

Is my understanding correct? Thanks!

aaronpp65 commented 6 years ago

@nicolas-raoul Actually, what I am suggesting is much simpler than that:

1. The user taps the camera icon.
2. The camera opens.
3. Before the picture is taken, the camera scans the scene and provides suggestions at the bottom of the screen.
4. The user scrolls through these suggestions (while still in the camera view) and takes an appropriate picture.

So the number of taps the user has to make is the same as in the current app, which makes it more user-friendly.

Does that make sense?
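
A minimal sketch of how such a live-suggestion loop could be wired with CameraX; the `Classifier` interface is hypothetical, standing in for whatever model the app would embed:

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Hypothetical classifier interface; a real one could wrap a TFLite model.
interface Classifier {
    fun topLabels(image: ImageProxy, k: Int): List<String>
}

fun buildLiveSuggestions(
    classifier: Classifier,
    onSuggestions: (List<String>) -> Unit
): ImageAnalysis {
    val analysis = ImageAnalysis.Builder()
        // Drop frames the classifier cannot keep up with, keeping the preview smooth.
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    analysis.setAnalyzer(Executors.newSingleThreadExecutor()) { image ->
        onSuggestions(classifier.topLabels(image, 3)) // update the suggestion strip
        image.close() // each frame must be closed before the next is delivered
    }
    return analysis
}
```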

nicolas-raoul commented 6 years ago

Oh, I see: when I point the camera towards a container ship, it will show me other pictures of container ships. Am I understanding correctly? Please note that most users don't use our app's camera; they use their favorite camera app instead, and then share to our app or select from the gallery.

aaronpp65 commented 6 years ago

Yep, exactly.

People take a picture using their camera app and upload it only later (correct me if I am wrong), for example when they are home or have good connectivity. But providing suggestions at that time would not be useful, because they probably are no longer at that location to retake the picture according to the suggestions we provide.

aaronpp65 commented 6 years ago

(screenshot) @nicolas-raoul This is what I have in mind. You can see the suggestions at the bottom, in small thumbnails.

nicolas-raoul commented 6 years ago

Here is a web-based tool that suggests categories for any image: https://youtu.be/Y9lvXVJCiyc?t=1932 It seems to work quite well, judging from the demo.

Image labelling and category suggester. Phab: https://phabricator.wikimedia.org/T155538 (not exactly all of the things this ticket wants). A user script that finds labels for the image and suggests categories. Niharika demoed a user script that uses a Google (?) image recognition API to detect the contents of an image and suggest possible categories. Works, but not perfect (hilarious skeleton example). You can play with it yourself: https://commons.wikimedia.org/wiki/User:NKohli_(WMF)/sandbox - https://commons.wikimedia.org/wiki/User:NKohli_(WMF)/imagery.js

If I understand correctly, the wiki page calls a MediaWiki API which in turn calls a third-party image recognition tool. Having MediaWiki in the middle means the user's IP address is not leaked to the third party, so I guess we could actually use this right now.
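
Schematically, the privacy benefit is that the app only ever talks to a Wikimedia-hosted endpoint, which forwards the request to the third-party recognizer. The endpoint and parameter below are placeholders, since the demo's actual Toolforge API is undocumented:

```kotlin
import java.net.URL
import java.net.URLEncoder

// Hypothetical proxy call: only the proxy's IP address is visible to the
// third-party recognition service, not the user's.
fun fetchSuggestionsViaProxy(imageUrl: String): String {
    val endpoint = "https://example.toolforge.org/suggest" // placeholder URL
    return URL("$endpoint?image=" + URLEncoder.encode(imageUrl, "UTF-8")).readText()
}
```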

whym commented 5 years ago

https://commons.wikimedia.org/wiki/User:NKohli_(WMF)/sandbox - https://commons.wikimedia.org/wiki/User:NKohli_(WMF)/imagery.js

It looks like this uses a Toolforge tool (https://tools.wmflabs.org/imagery/api.php) which is currently down(?); it returns a 500 error for a query from the script, for me. It has been a long time; I believe it was meant to be a proof of concept that was not going to be maintained.

nicolas-raoul commented 5 years ago

it was meant to be a proof of concept

I hope the source code is still available somewhere and someone turns it into a more permanent tool :-)

nicolas-raoul commented 5 years ago

My understanding is that we still need to find either a suitable web API or an on-device library.

The API or library must output either Commons category(ies) (for example: "the submitted image contains a https://commons.wikimedia.org/wiki/Category:Dogs") or Wikidata item(s) (for example: "the submitted image contains a https://www.wikidata.org/wiki/Q144").
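
That output contract could be modeled in the app as a small sealed type; the names below are illustrative:

```kotlin
// Illustrative model of the two acceptable recognizer outputs described above.
sealed class Suggestion {
    data class CommonsCategory(val name: String) : Suggestion() // e.g. "Dogs"
    data class WikidataItem(val qid: String) : Suggestion()     // e.g. "Q144"
}
```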

madhurgupta10 commented 5 years ago

@nicolas-raoul I agree that using a third-party API such as Azure would be a privacy concern. There is an alternative: https://wadehuang36.github.io/2017/07/20/offline-image-classifier-on-android.html

nicolas-raoul commented 5 years ago

Thanks @madhurgupta10! This seems to be the best fork of that project: https://github.com/mnnyang/tensorflow_mobilenet_android_example Unfortunately, I did not manage to build it; there must be some necessary step that I am not thinking of.

madhurgupta10 commented 5 years ago

@nicolas-raoul I managed to build it. If you would like, I have shared the APK file.

nicolas-raoul commented 5 years ago

Thanks! Did you modify any of that project's files? If yes please fork/commit and push your fork to GitHub, thanks :-)

Wow, 32 MB is very big. I am sure the TensorFlow libraries contain many unused classes, network types, etc. Ideally, image recognition should not add more than a few MB to the total size of our app's APK. Anyone willing to take on this issue for GSoC/Outreachy, please include that trimming task in your schedule, thanks!

madhurgupta10 commented 5 years ago

@nicolas-raoul Sure, I will add that to my proposal :) and will commit the files soon. Also, TF 2.0 is out, so it would be much more optimized and better than this example, which is pretty old.

nicolas-raoul commented 5 years ago

I believe the project above uses regular TensorFlow. Using TensorFlow Lite will certainly reduce the size a lot, but still not enough, I am afraid. Other things to try: https://developer.android.com/topic/performance/reduce-apk-size https://stackoverflow.com/questions/51784882/how-to-decrease-the-size-of-android-native-shared-libaries-so-files/51814290#51814290
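
The first link largely boils down to the standard Gradle levers; a sketch using the Kotlin DSL, assuming a recent Android Gradle Plugin (property names vary slightly across AGP versions):

```kotlin
// build.gradle.kts (app module) — typical APK size-reduction settings
android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true   // strip unused classes with R8/ProGuard
            isShrinkResources = true // drop unreferenced resources
        }
    }
    splits {
        abi {
            isEnable = true // ship one APK per ABI instead of a fat APK
            reset()
            include("armeabi-v7a", "arm64-v8a")
            isUniversalApk = false
        }
    }
}
```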

madhurgupta10 commented 5 years ago

@nicolas-raoul Thanks for the links, I will explore them :+1:

nicolas-raoul commented 5 years ago

There is a pre-built APK at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android#bazel I just tried it; the app itself is rather buggy, but when it works it is super fast and not too bad at recognizing things: the first 3 guesses contain the depicted object 50% of the time, so showing them as suggestions would be helpful. The APK is 23 MB, unfortunately.

maskaravivek commented 5 years ago

I will take a look at this and hopefully we can incorporate it into the app for category suggestions, and later maybe to suggest depicts statements for Wikidata.

nicolas-raoul commented 5 years ago

Most image classification implementations output WordNet 3.0 concepts.

I just wrote this query that shows the mapping between WordNet concepts, Wikidata items, and Commons categories. It takes a while to execute, so here is a screenshot:

(screenshot of the query results)

There are currently 474 mappings, and that number has not increased in a year. I will try to motivate people to add more mappings.
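
The query itself is not preserved here; a reconstruction of the idea against today's schema could look like the following, assuming the WordNet 3.1 synset ID property (P8814) and the Commons category property (P373):

```kotlin
import java.net.URL
import java.net.URLEncoder

// Reconstructed sketch: list Wikidata items that have both a WordNet synset ID
// (P8814 is an assumption; the original query may have used another property)
// and a Commons category (P373), via the public Wikidata Query Service.
val sparql = """
    SELECT ?item ?synset ?commonsCat WHERE {
      ?item wdt:P8814 ?synset ;
            wdt:P373 ?commonsCat .
    }
""".trimIndent()

fun runMappingQuery(): String {
    val url = "https://query.wikidata.org/sparql?format=json&query=" +
        URLEncoder.encode(sparql, "UTF-8")
    return URL(url).readText() // JSON bindings: item / synset / commonsCat
}
```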

nicolas-raoul commented 4 years ago

Good news: this is starting to get implemented on commons.wikimedia.org: https://commons.wikimedia.org/wiki/Commons:Structured_data/Computer-aided_tagging

nicolas-raoul commented 4 years ago

This page seems to do exactly what we want: https://commons.wikimedia.org/wiki/Special:SuggestedTags Everyone, please try it out and post here how often at least one useful suggestion is shown (for instance, 50% of the time). Other thoughts are welcome too, of course. Thanks! :-)

I have asked whether an API could be made for us: https://commons.wikimedia.org/wiki/Commons_talk:Structured_data/Computer-aided_tagging/Archive_2020#API_to_retrieve_%22depicts%22_suggestions_from_Commons_app? (no reply unfortunately)

maskaravivek commented 4 years ago

Wow! The suggestions look quite useful. For each of the first 5 images, I found at least 1 relevant tag suggested.

misaochan commented 4 years ago

Their algorithm is fantastic, IMO! It had at least 2 relevant tags for 3/3 of the photos I saw. @macgills, is this something that you and Mark have identified as a potential future task for you (after getting our SDC branch merged)?

macgills commented 4 years ago

I couldn't say! I will for sure discuss it with him at our next meeting on Monday.

misaochan commented 4 years ago

Awesome! Let us know how that goes. :)

sivaraam commented 4 years ago

Everyone please try it out, and post here how often at least one useful suggestion is shown (for instance 50% of the time, etc).

In my attempt at testing 6 or 7 images, the suggestions were mostly relevant. In some cases there were even 10 appropriate suggestions! Also, none of the suggestions could be called totally irrelevant. This looks great!

Their algorithm is fantastic IMO!!

Yeah, guess what they are using in the backend: Google Cloud Vision. 😎 [ref]

On a related note, the Wikipedia app is adding a new option in their Suggested edits feature that allows users to tag Commons images with suggested image tags [ref 1] [ref 2] [ref 3]. This is already in their alpha app. Not sure if it's in the production version, though. I suppose they're using an API related to the Special:SuggestedTags page.

nicolas-raoul commented 6 months ago

Ideally, in the future we could use on-device models to do this. That would remove the need to either call a web service or embed a bulky model in our APK.

https://ai.google.dev/tutorials/android_aicore :

The APIs for accessing Gemini Nano support text-to-text modality, with more modalities coming in the future. [apps using this can] provide a LoRA fine-tuning block to improve performance of the model for your use case.

Hopefully image-to-text will come soon.

sivaraam commented 6 months ago

Ideally in the future we could use on-device models to do this.

The idea is nice. I'm just unsure what the community consensus is about using machine assistance to edit depictions. Do you happen to be aware of any guidelines about it, Nicolas?

nicolas-raoul commented 6 months ago

@sivaraam I don't think there are any guidelines about this currently. The AICAT experiment was stopped due to some strong opposing voices, but I believe our app is a very different use case. In our app: