tue-robotics / tue_robocup

RoboCup challenge implementations
https://github.com/orgs/tue-robotics/projects/2

[EPIC] Object recognition demo #146

Closed Rayman closed 7 years ago

Rayman commented 8 years ago

- Demo
- Performance tests

LoyVanBeek commented 8 years ago

I was experimenting with https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html. In the gdocs TODO list, there's a little investigation into frameworks and Keras (a wrapper for Theano and/or Tensorflow) came out best for us to use.
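The core trick in that blog post is augmenting a small dataset on the fly so the network rarely sees the exact same pixels twice. A minimal sketch with the current Keras API (class and parameter names are today's, not necessarily those from the post; the random array stands in for real training images):

```python
import numpy as np
import tensorflow as tf

# A handful of fake 64x64 RGB "training images" standing in for a tiny dataset.
images = np.random.rand(8, 64, 64, 3).astype("float32")

# Randomly rotate/shear/zoom/flip each image on every pass -- the key idea
# for training with very little data.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)

batch = next(augmenter.flow(images, batch_size=8, shuffle=False))
print(batch.shape)  # (8, 64, 64, 3)
```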

Rayman commented 8 years ago

For TensorFlow there is also a (small) retraining how-to, and an image-recognition example:

https://www.tensorflow.org/versions/master/how_tos/image_retraining/index.html#bottlenecks
https://www.tensorflow.org/versions/master/tutorials/image_recognition/index.html

LoyVanBeek commented 8 years ago

@rokusottervanger has some annotated data, so we can extract the labeled image patches from that data. Then we use that to do transfer learning: take a fully trained object detection/recognition net and re-train the top few layers for our application.
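The retrain-the-top-layers idea, sketched with the current Keras API. The base network, patch size, and class count here are placeholders, not the net or data actually used; a real run would load pretrained weights (`weights="imagenet"`), while `weights=None` keeps the sketch runnable offline:

```python
import tensorflow as tf

PATCH_SHAPE = (96, 96, 3)  # hypothetical size the labeled patches are resized to
NUM_CLASSES = 10           # assumed number of object classes

# Convolutional base; a real transfer-learning run would use pretrained weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=PATCH_SHAPE, include_top=False, weights=None)
base.trainable = False  # freeze everything below the new top layers

# New top layers, trained on our labeled patches only.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Only the new head is trainable now: the Dense layer's kernel and bias.
print(len(model.trainable_weights))
```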

Rayman commented 8 years ago

Where is the data stored? We could try to put it on GitHub LFS: https://git-lfs.github.com/

LoyVanBeek commented 8 years ago

On his computer at home.

rokusottervanger commented 8 years ago

Annotated training data, available here for the next 7 days: https://we.tl/1Ag3iq9L6R

Rayman commented 8 years ago

Unfortunately, git-lfs has some limitations.

LoyVanBeek commented 8 years ago

I'm still doubtful that we really need to segment and classify these images manually. With the depth images, segmentation should be simpler: where there is a shape edge in both the depth and the color image, there is an object boundary. This will not work for all cases, but maybe it will for the objects in the closet. Then we bootstrap from there.
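That depth-plus-color edge idea in a few lines of numpy (toy single-channel frames and an ad-hoc gradient threshold, not the real .rgbd data):

```python
import numpy as np

def edge_mask(img, thresh):
    """Crude edge map: gradient magnitude above a threshold."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# Toy frames: a bright square that is also closer than the background.
color = np.zeros((32, 32)); color[8:24, 8:24] = 1.0
depth = np.full((32, 32), 2.0); depth[8:24, 8:24] = 1.0  # metres, say

# An object boundary is where BOTH modalities show an edge.
boundary = edge_mask(color, 0.2) & edge_mask(depth, 0.2)
print(boundary.any())  # True along the square's outline
```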

I think we might also be able to cluster the segmented objects by histogram (or something similar) and apply a label to each cluster's items.

@reinzor did you have a utility to easily split up the .rgbd-files into color and depth?
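And the cluster-by-histogram idea, as a self-contained sketch: one intensity histogram per segmented patch as the feature vector, then a tiny hand-rolled k-means. The patch data, bin count, and k=2 are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for segmented object patches: flat gray-value arrays from
# two "kinds" of objects, dark ones and bright ones.
patches = ([rng.normal(0.2, 0.05, 400).clip(0, 1) for _ in range(5)]
           + [rng.normal(0.8, 0.05, 400).clip(0, 1) for _ in range(5)])

# One intensity histogram per patch is the feature vector.
feats = np.array([np.histogram(p, bins=16, range=(0, 1), density=True)[0]
                  for p in patches])

# Tiny k-means (k=2): labelling each cluster once then covers all its members.
centers = feats[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([feats[labels == k].mean(0) for k in range(2)])

print(labels)  # dark patches end up in one cluster, bright ones in the other
```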

reinzor commented 8 years ago

Let's wait for Rokus, maybe he knows how all the tooling works ... Hopefully.

-Rein


reinzor commented 7 years ago

https://github.com/tue-robotics/tue_robocup/wiki/Tournament-start-up-guide