hammerlab / SmartCount

Repository for collaboration on Celldom computer vision solutions
Apache License 2.0

Adding tools for automated annotation #73

Closed eric-czech closed 5 years ago

eric-czech commented 5 years ago

cc: @benjaminyellen + @jmotschman

This PR adds two things to help train new cell detection models:

  1. A CLI command for generating RectLabel-compatible datasets from an arbitrary set of single-apartment images. In the example below, all images matching data-file-patterns are run through a cell detection model already configured for another experiment, and those same images are then copied to a separate folder (example-annotation-dataset) alongside RectLabel XML annotations (an example of the annotation format follows the walkthrough below):
celldom run_annotator \
--experiment-config-path=/lab/repos/celldom/config/experiment/exp-20180614-G3-K562-imatinib-poc-01.yaml \
--data-file-patterns=/lab/data/dataset/dataset02/*.jpg \
--output-dir=/lab/data/tmp/example-annotation-dataset \
--copy-original-images=True
  2. An export feature in the app that does the same as the above for a single apartment. Currently, this looks like:

Pick an apartment and the "RectLabel Annotations" export type

[screenshot: selecting an apartment and the "RectLabel Annotations" export type]

This will generate data in a folder like this:

[screenshot: generated annotation dataset folder]

Point RectLabel at the above folder:

[screenshot: RectLabel opened on the exported folder]
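
For reference, the annotations written in both cases are one XML file per image in the PASCAL VOC-style layout that RectLabel imports. A hypothetical example of what one of the generated files might contain (the file name, label, and coordinates below are made up for illustration, not taken from this PR):

cat example-annotation-dataset/apt_001.xml
# <annotation>
#   <filename>apt_001.jpg</filename>
#   <size>
#     <width>360</width>
#     <height>360</height>
#     <depth>3</depth>
#   </size>
#   <object>
#     <name>cell</name>
#     <bndbox>
#       <xmin>120</xmin>
#       <ymin>80</ymin>
#       <xmax>140</xmax>
#       <ymax>102</ymax>
#     </bndbox>
#   </object>
# </annotation>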

This PR won't require a container rebuild, and it will work for datasets you've already generated. That said, I'm about to make some significant changes to the app to make it much faster, and to start publishing the container to Docker Hub so that you can just download the latest version (docker pull eczech/celldom:latest) instead of ever needing to build it. It's probably worth waiting for that unless you're eager to start training cell detection models now (in which case this commit should take minimal effort to use).
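
Once the image is on Docker Hub, getting the latest build should just be a pull; for example (the run flags below are assumptions about the exposed port and data mount, not a documented invocation):

docker pull eczech/celldom:latest
# Port and volume mappings here are assumptions, not the documented run command
docker run --rm -p 8050:8050 -v /lab/data:/lab/data eczech/celldom:latest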