Tyelab / specialk

Team Specialk metadata/data manipulation repo

Method for Generating "Eyeball" Masks with a Neural Network #5

Open jmdelahanty opened 2 years ago

jmdelahanty commented 2 years ago

This is for everyone! So @hadiviko, @samashathaya, @viviannvvn, @rpamintu.

Something that has been on the back burner for quite a while now is building something that can measure the size of the subject's eye and output that data on a frame-by-frame basis for a given 2P recording. This would give us a more continuous measure of eye size during the 2P experiment that can parse out wincing behaviors, blinks, and whatever else the mouse might be up to during the session. This is lower on the priority list overall, but it would be a nice way for you all to get your fingers moving on developing Python code for an automated processing pipeline that uses neural networks. The example we'd use here would be a super simple version.

There's a piece of software called Paintera that we could all use to paint the eyes of different subjects; it's pretty simple to install and use.

Ideally, we'd have something that goes into a given video and grabs frames around different timestamps of behavior (so sucrose delivery, airpuff delivery, etc.). From there, it would convert those examples into a format called N5, which, funnily enough, stands for "Not HDF5" and is a very popular file format for large datasets. After that, we'd paint some eyeballs; I can show you an example in the Cornflakes chat. Next, we'd output the masks/training data as Zarrs. Then we'd train a network, using the cluster to do so. Finally, we'd run predictions with it and see how it goes. There's a rough sketch of the first couple of steps below.
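To make the frame-grabbing and N5 steps concrete, here's a minimal sketch of what that could look like. The video path, event frame indices, window size, and dataset name are all placeholders I made up for illustration, and it assumes `opencv-python`, `numpy`, and `zarr` (v2, which ships an N5 store) are installed:

```python
# Sketch: pull frames around event timestamps and stash them in an
# N5 container so they can be opened and painted in Paintera.
import cv2
import numpy as np
import zarr

VIDEO_PATH = "subject01_2p_session.mp4"   # hypothetical path
EVENT_FRAMES = [1200, 4800, 9600]         # e.g., sucrose/airpuff delivery frames
WINDOW = 5                                # frames to grab on each side of an event


def grab_frames(video_path, frame_indices):
    """Read specific frames (as grayscale) from a video with OpenCV."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    for idx in frame_indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return np.stack(frames)


# Collect a small window of frames around each event timestamp
indices = sorted({i for e in EVENT_FRAMES for i in range(e - WINDOW, e + WINDOW + 1)})
stack = grab_frames(VIDEO_PATH, indices)

# Write the raw frames into an N5 container, chunked one frame at a time
store = zarr.N5Store("eye_training_data.n5")
root = zarr.group(store=store, overwrite=True)
root.create_dataset("raw", data=stack, chunks=(1,) + stack.shape[1:])
```

The painted masks coming back out of Paintera would then get written as a matching "labels" dataset (or exported as Zarr) alongside the raw frames, so the raw/label pairs line up frame for frame for training.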

This is an enormous amount of work to make happen, but if literally any of you are interested, we can get started. I already have examples, but they're not done in a very efficient way...