I'm doing some tests on pointnet2. See https://github.com/charlesq34/pointnet2.
The model uses some custom operators, see https://github.com/charlesq34/pointnet2/tree/master/tf_ops.
I want to run inference with tflite_runtime, but I always get the error "RuntimeError: Encountered unresolved custom op: FarthestPointSample.".
It seems I need to register these custom operators with TensorFlow Lite.
Do you know how to convert these TF custom operators so they can be used with tflite_runtime?
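For what it's worth, the stock tflite_runtime Python wheel has no way to resolve a custom op; the usual route is the TFLite C++ Interpreter with the op registered on the resolver via AddCustom. A minimal sketch of that wiring, assuming a model file name of pointnet2.tflite and leaving the actual kernel bodies (which would be ported from the code under tf_ops) as stubs:

```cpp
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Placeholder kernel: Prepare() would resize the int32 output tensor to
// (batch, npoint); Invoke() would run the actual sampling, ported from
// the CUDA/C++ implementation under tf_ops.
TfLiteStatus FpsPrepare(TfLiteContext* context, TfLiteNode* node) {
  // ... resize output from the input shape and the "npoint" attribute ...
  return kTfLiteOk;
}
TfLiteStatus FpsInvoke(TfLiteContext* context, TfLiteNode* node) {
  // ... farthest point sampling over the input point cloud ...
  return kTfLiteOk;
}

TfLiteRegistration* Register_FARTHEST_POINT_SAMPLE() {
  static TfLiteRegistration r = {/*init=*/nullptr, /*free=*/nullptr,
                                 FpsPrepare, FpsInvoke};
  return &r;
}

int main() {
  auto model = tflite::FlatBufferModel::BuildFromFile("pointnet2.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  // The name must match the custom op name recorded in the flatbuffer.
  resolver.AddCustom("FarthestPointSample", Register_FARTHEST_POINT_SAMPLE());
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();
  // ... fill input tensors, interpreter->Invoke(), read outputs ...
  return 0;
}
```

Each custom op in the graph (FarthestPointSample, QueryBallPoint, GroupPoint, ThreeNN, ThreeInterpolate, ...) would need its own AddCustom registration.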
Issue Type
Others
OS
Ubuntu
OS architecture
x86_64
Programming Language
C++
Framework
TensorFlowLite
Download URL for tflite file
https://github.com/charlesq34/pointnet2
https://github.com/charlesq34/pointnet2/tree/master/tf_ops
Convert Script
None
Description
Hi,
Relevant Log Output
Source code for simple inference testing
One of the TF custom operators:
REGISTER_OP("FarthestPointSample")
  .Attr("npoint: int")
  .Input("inp: float32")
  .Output("out: int32")
  .SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
    // batch_size * npoint * 3
    ::tensorflow::shape_inference::ShapeHandle dims1;
    TF_RETURN_IF_ERROR(c->WithRank(c->input(0), 3, &dims1));
    int npoint;
    TF_RETURN_IF_ERROR(c->GetAttr("npoint", &npoint));
    ::tensorflow::shape_inference::ShapeHandle output =
        c->MakeShape({c->Dim(dims1, 0), npoint});
    c->set_output(0, output);
    return OkStatus();
  });