Lots of images and notes to add, but quick summary
FlowCam has an integrated PC, runs Windows
Does a lot of traditional, scikit-image-style computer vision analysis onboard, which you can't get at programmatically. Stores a lot of intermediate images which we never see, including raw camera output (very greedy low-threshold object detection, so researchers page through roughly two-thirds blank images before reaching useful collages)
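The blank-frame problem above could be triaged with a trivially cheap filter before anyone pages through a run. A minimal sketch, assuming frames arrive as flat lists of 8-bit grey values; the background level and 0.5% foreground fraction are illustrative guesses, not FlowCam settings:

```python
def is_blank(pixels, background=240, min_foreground_frac=0.005):
    """Flag a frame as blank when almost no pixels fall below the
    background grey level (i.e. nothing darker than the flow cell).
    Thresholds here are assumptions for illustration."""
    dark = sum(1 for p in pixels if p < background)
    return dark / len(pixels) < min_foreground_frac

blank = [245] * 10_000               # uniform bright frame
speck = [245] * 9_900 + [30] * 100   # 1% dark object pixels
print(is_blank(blank), is_blank(speck))   # → True False
```

Anything this cheap would run fine on the instrument's onboard PC; the open question is getting at the raw frames at all.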
The short-term source of analytical / cognitive drag is classification by hand and eye (high level, into about 6 genera of plankton populations)
A lot of effort goes into getting data and metadata off the instrument, all oriented to bigger goals (data sharing, serving as a network repository for freshwater plankton, external partners for model development, etc.) which aren't answering researchers' questions.
What are the odds of running a model directly on the FlowCam's OS, with a trial-and-error approach to doing it?
"hello world" proof of concept, can we build and run the simplest executable (cf the .Net work for the flow cytometer at Cefas for the build parts [add link here])
Usefully read some of the intermediate outputs, run Python, bundle scikit-image, avoid more than minimal UI development
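"Usefully read some of the intermediate outputs" could start with the per-particle metadata rather than the images themselves. A sketch with Python's stdlib `csv` module; the column names and layout here are assumptions for illustration, not the FlowCam's actual export format:

```python
import csv
import io

# Hypothetical per-particle export; real column names would come
# from inspecting the instrument's run directory.
SAMPLE = """Id,Area,Diameter,Image File
1,812,32.1,run01_000001.tif
2,140,13.4,run01_000001.tif
"""

def load_particles(fh):
    """Read per-particle rows into plain dicts for downstream analysis."""
    return [
        {"id": int(r["Id"]), "area": float(r["Area"]), "image": r["Image File"]}
        for r in csv.DictReader(fh)
    ]

particles = load_particles(io.StringIO(SAMPLE))
print(len(particles), particles[0]["area"])   # → 2 812.0
```

Staying stdlib-only for the metadata keeps the bundling problem confined to scikit-image and whatever touches the images.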
Run PyTorch; model choice influenced by what will run on the hardware (no transformers etc.). OR a sustainable way of offering a short-term proxy to a model API during recording sessions, which could be on-prem (ongoing discussion about security options)
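The proxy option could be very small: a stdlib HTTP endpoint the FlowCam PC posts images to, with the model living on the on-prem box rather than the instrument. A minimal sketch; the endpoint shape, label set, and stubbed-out model are all assumptions:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LABELS = ["cyclops", "daphnia", "detritus"]  # placeholder label set

class ClassifyHandler(BaseHTTPRequestHandler):
    """Accepts a POSTed image, returns a JSON classification.
    The 'model' here is a stub; a real deployment would hand the
    bytes to whatever model the on-prem box hosts."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        _image_bytes = self.rfile.read(length)  # forward these to the model
        body = json.dumps({"label": LABELS[0], "score": 0.5}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the session console quiet
        pass

# To serve during a recording session:
#   HTTPServer(("127.0.0.1", 8080), ClassifyHandler).serve_forever()
```

Keeping it plain HTTP on the local network is also where the security discussion bites: whether the recording PC may talk to anything at all.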