Describe your Request
Right now, the implementation of PrintNanny Vision is embedded into the PrintNanny OS system image. PrintNanny OS bundles the whole WebRTC-based video streaming stack, camera drivers, and vision/detection applications (GStreamer pipelines).
We want to separate the vision components so they can exist as a stand-alone SDK for OEMs looking to integrate PrintNanny into their existing software stack.
Community Edition
tl;dr: Connect PrintNanny to any camera system using an open-source model.
demo: Included in PrintNanny OS
licensing: AGPL
Please take a look at inference step 4 below.
OEM Edition
tl;dr: Train a PrintNanny model customized for YOUR 3D printer hardware.
demo: TBD
licensing: Commercial
Plug PrintNanny into your existing camera system. The bare-bones interfaces needed to collect data, train, and deploy:
1. Data collection
Define an Arrow schema for raw Bayer sensor data (so we're agnostic to encoding stack), temperature sensor data, and filament flow rate sensors.
Collect one sample frame and histogram of temperature readings per Z-axis movement.
```python
import printnanny_vision

# Configure your API key
printnanny_vision.init(api_key="demo")

# Provide a name and schema for your dataset
SCHEMA = "/path/to/arrow/schema"
DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# Collect data samples until control+c interrupts this script
my_dataset = printnanny_vision.Dataset(schema=SCHEMA, name=DATASET_NAME)
try:
    print(f"PrintNanny is collecting samples for dataset {DATASET_NAME}. Press control+c to interrupt and upload dataset.")
    my_dataset.run_collector()
except KeyboardInterrupt:
    print(f"PrintNanny is uploading {DATASET_NAME}. This could take a while, you might want to grab a coffee ☕")
    # Upload dataset, and print upload progress to terminal
    my_dataset.upload(progress=True)
    print(f"PrintNanny finished uploading {DATASET_NAME}! You can view it at: {my_dataset.url}")
```
2. Labeling
Bounding box defective areas
Paint (segment) defective areas
TBD. I use a fork of VoTT for my labeling infrastructure, with a guidance model to speed up manual labeling.
We have the option of partnering with a data labeling service here.
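As a sketch of what the two label types could look like on disk, here's a hypothetical single-frame record loosely modeled on VoTT's JSON export. Field names are illustrative assumptions, not a spec:

```python
# Hypothetical label record for one frame, loosely modeled on VoTT's JSON
# export format. Region types, tag names, and coordinates are illustrative.
label = {
    "asset": "2023-05-08__Printer1__ModelFilename/frame_0042.png",
    "regions": [
        {
            "type": "RECTANGLE",      # bounding box around a defective area
            "tags": ["spaghetti"],    # defect class
            "boundingBox": {"left": 120.0, "top": 88.0, "width": 64.0, "height": 40.0},
        },
        {
            "type": "POLYGON",        # painted (segmented) defective area
            "tags": ["layer_shift"],
            "points": [{"x": 10, "y": 12}, {"x": 30, "y": 12}, {"x": 22, "y": 40}],
        },
    ],
}
```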
3. Training
EfficientDet backbone
BiFPN allows us to start with image data, then add additional feature extractor networks for temperature/flow
```python
import printnanny_vision

DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# First pass (image data only, no temperature/flow rate data): submit a training
# job via Google Cloud Platform's AutoML service (quick working prototype for
# ~$200, minimum 4,000 samples).
# See this blog post for an example: https://medium.com/towards-data-science/soft-launching-an-ai-ml-product-as-a-solo-founder-87ee81bbe6f6
printnanny_vision.train(dataset_name=DATASET_NAME, timeout="6h", backend="gcp-automl", model_name="2023-05-08_AutoML")

# Run a local EfficientDet training job, incorporating flow rate and temperature data
printnanny_vision.train(dataset_name=DATASET_NAME, timeout="6h", backend="printnanny-efficientdet", model_name="2023-05-08-efficientdet")
```
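To make the BiFPN fusion idea above concrete, here's a minimal NumPy sketch of one way the temperature/flow features could be combined with image features before the BiFPN layers: broadcast the sensor vector across the spatial grid and concatenate along the channel axis. This is an assumption about the fusion strategy, not the actual PrintNanny implementation:

```python
import numpy as np

# Image features from the EfficientDet backbone: (batch, channels, H, W).
image_feats = np.random.rand(1, 64, 40, 40)

# Scalar sensor features per frame: nozzle temp, bed temp, flow rate (batch, 3).
sensor_feats = np.array([[215.0, 60.0, 4.2]])

# Tile the sensor vector across the spatial grid, then concatenate on channels.
tiled = np.broadcast_to(sensor_feats[:, :, None, None], (1, 3, 40, 40))
fused = np.concatenate([image_feats, tiled], axis=1)  # shape (1, 67, 40, 40)
```

Because BiFPN fuses feature maps from multiple sources, the extra sensor channels can be introduced without retraining the image backbone from scratch.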
4. Inference
1 inference pass per Z-axis layer
Online (cloud) inference
Offline (air-gapped) inference remains available in PrintNanny OS as a reference implementation, and we'll work directly with vendors where air-gapped operation is a P0 requirement.
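The "one inference pass per Z-axis layer" trigger can be sketched in a few lines of Python. Here `run_inference` is a stand-in for the cloud inference call and the event stream is simulated printer telemetry; both names are assumptions for illustration:

```python
def layer_events(events, run_inference, eps=1e-3):
    """Call run_inference once each time the Z height increases by more than eps."""
    last_z = None
    results = []
    for event in events:
        z = event["z_mm"]
        if last_z is None or z - last_z > eps:
            results.append(run_inference(event))  # one pass per new layer
            last_z = z
    return results

# Simulated telemetry: three moves on the first layer, then a layer change.
events = [{"z_mm": 0.2}, {"z_mm": 0.2}, {"z_mm": 0.2}, {"z_mm": 0.4}]
passes = layer_events(events, run_inference=lambda e: e["z_mm"])
# Two inference passes total: one for the 0.2mm layer, one for the 0.4mm layer.
```

Triggering on layer changes rather than on a timer keeps inference cost proportional to print complexity, not wall-clock time.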
5. Feedback
Continuous feedback via a printnanny_vision.monitor() call. This gives us everything we need to train and deploy a pilot model.