Rayman closed this issue 7 years ago
I was experimenting with https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html. In the Google Docs TODO list there is a short investigation into frameworks, and Keras (a wrapper around Theano and/or TensorFlow) came out as the best option for us.
For TensorFlow there is also a small tutorial and an example:
https://www.tensorflow.org/versions/master/how_tos/image_retraining/index.html#bottlenecks
https://www.tensorflow.org/versions/master/tutorials/image_recognition/index.html
@rokusottervanger has some annotated data, so we can extract the labeled image patches from it. Then we can use those patches for transfer learning: take a fully trained object detection/recognition net and re-train only the top few layers for our application.
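As a rough illustration, a transfer-learning setup along those lines could look like the sketch below in Keras. This is only a sketch: the VGG16 base, the input size, `num_classes`, and the `train_patches/` directory (one subfolder per object label) are assumptions for illustration, not anything we have set up yet.

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

num_classes = 5  # placeholder: number of object labels in the annotated data

# Pre-trained ImageNet network without its classification head.
base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

# Freeze the pre-trained layers; only the new top layers get trained.
for layer in base.layers:
    layer.trainable = False

# New classification head for our own object classes.
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# 'train_patches/' is a hypothetical directory with one subfolder per label,
# filled with the image patches extracted from the annotated data.
train = ImageDataGenerator(rescale=1. / 255).flow_from_directory(
    'train_patches', target_size=(150, 150), batch_size=32,
    class_mode='categorical')

model.fit_generator(train, steps_per_epoch=100, epochs=10)
```

Once the new head converges, the last few convolutional blocks could optionally be unfrozen and fine-tuned with a low learning rate, as the Keras blog post linked above suggests.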
Where is the data stored? We could try to put it on GitHub LFS: https://git-lfs.github.com/
On his computer at home.
Annotated training data, available here for the next 7 days: https://we.tl/1Ag3iq9L6R
Unfortunately, git-lfs has some limitations (GitHub imposes storage and bandwidth quotas, for example).
I'm still doubtful that we really need to segment and classify these images manually. With the depth images, segmentation should be simpler: where there is an edge in both the depth and the color image, there is likely an object boundary. This will not work for all cases, but maybe it will for the objects in the closet. Then we can bootstrap from there.
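A minimal sketch of that edge-combining idea with OpenCV, assuming the color and depth images have already been extracted from the .rgbd files to `color.png` / `depth.png` (hypothetical filenames; the Canny thresholds are guesses that would need tuning):

```python
import cv2

# color.png / depth.png are hypothetical filenames for one extracted frame.
color = cv2.imread('color.png')
depth = cv2.imread('depth.png', cv2.IMREAD_GRAYSCALE)

# Edges in the color image and in the depth image separately.
color_edges = cv2.Canny(cv2.cvtColor(color, cv2.COLOR_BGR2GRAY), 50, 150)
depth_edges = cv2.Canny(depth, 50, 150)

# Dilate slightly so edges from the two modalities can overlap despite small
# misalignments, then keep only edges present in both: these are likely real
# object boundaries rather than texture or depth noise.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
boundaries = cv2.bitwise_and(cv2.dilate(color_edges, kernel),
                             cv2.dilate(depth_edges, kernel))

# Connected regions between the boundaries are candidate object segments.
num_segments, segments = cv2.connectedComponents(cv2.bitwise_not(boundaries))
```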
I think we might also be able to cluster the segmented objects by histogram (or something similar) and apply a label to each cluster's items.
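For the clustering step, something like the sketch below could work, assuming the segmented crops get written to a `patches/` directory (hypothetical) and using hue/saturation histograms with k-means; both the descriptor and the cluster count are just first guesses:

```python
import glob

import cv2
import numpy as np
from sklearn.cluster import KMeans

# 'patches/*.png' is a hypothetical location for the segmented object crops.
files = sorted(glob.glob('patches/*.png'))
patches = [cv2.imread(f) for f in files]

def histogram(patch, bins=16):
    # Hue/saturation histogram as a cheap appearance descriptor.
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

features = np.array([histogram(p) for p in patches])

# Group similar-looking patches; each cluster then needs only one manual label.
kmeans = KMeans(n_clusters=10).fit(features)
for filename, label in zip(files, kmeans.labels_):
    print(label, filename)
```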
@reinzor do you have a utility to easily split the .rgbd files into color and depth images?
Let's wait for Rokus; maybe he knows how all the tooling works... hopefully.
-Rein
- Demo
- Pre-implementation
- Implementation
- Performance tests