CONNX is an abbreviation of the C language implementation of ONNX Runtime. It targets IoT devices such as the ESP32, Raspberry Pi 3/4, and FreeRTOS. CONNX can be used as an alternative to tflite.
# Run MNIST example.
connx/build$ poetry run ninja mnist
# Run MOBILENET example.
connx/build$ poetry run ninja mobilenet
# Run YOLOV4 example.
connx/build$ poetry run ninja yolov4
or use the connx executable
connx/build$ ./connx ../examples/mnist/ ../examples/mnist/test_data_set_1/input_0.data
or use the Python script
connx$ python3 bin/run.py examples/mnist/ examples/mnist/test_data_set_1/input_0.data
Notice: If you want to run on Raspberry Pi 3, please compile in Release mode (CMAKE_BUILD_TYPE=Release), as the sanitizer causes problems there.
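For reference, a Release-mode build might be configured roughly like this (a minimal sketch assuming the CMake + Ninja setup implied by the commands above; the exact options and targets may differ in your checkout):
# Hypothetical out-of-source build; adjust generator and targets to your environment
connx$ mkdir -p build && cd build
connx/build$ cmake -G Ninja -DCMAKE_BUILD_TYPE=Release ..
connx/build$ poetry run ninja mnist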
import connx
model = connx.load_model('examples/mnist')
input_data = connx.load_data('examples/mnist/test_data_set_0/input_0.data')
reference_data = connx.load_data('examples/mnist/test_data_set_0/output_0.data')
# Run model with input data
output_data = model.run([input_data])
# output_data is an array that contains output tensors
# Convert to numpy ndarray
reference_nparray = reference_data.to_nparray()
output_nparray = output_data[0].to_nparray()
# Check output with reference_data
assert reference_data.shape == output_data[0].shape
import numpy
assert numpy.allclose(reference_nparray, output_nparray)
# You can also convert a numpy.ndarray to a connx.Tensor
tensor = connx.Tensor.from_nparray(output_nparray)  # any numpy.ndarray works here
Please refer to bin/run.py for more information.
The ONNX compatibility tests have been moved to the onnx-connx project.
See CONTRIBUTING.md
CONNX is licensed under GPLv3. See LICENSE. If you need a license other than GPLv3 for proprietary use, or if you need professional support, please email us at contact at tsnlab dot com.