Great project. In terms of deployment, it would be very useful to support an inference accelerator attached to the Pi, such as the Intel Neural Compute Stick 2. That would offload inference from the Pi itself, avoiding the need for extra compute power and keeping the (Octo)Pi from being overloaded.
This might be worth looking into. I don't know how well the Compute Stick 2 performs, but from a quick search it looks like it could be a very viable way to deploy this locally.