@terminalh2t3 Thanks for your issue – it's a very good point. Unfortunately, objects are stored in memory as `Mat` objects, not buffers. Given how efficient the OpenCV operations themselves are, this still seems to me to be the best approach. It is, of course, possible to access a selected object from within the C++ library.
Therefore, after some thought, I have added (as you suggested) a new function to the API called `matToBuffer`. It can be used as follows:
```ts
matToBuffer(mat: Mat, type: 'uint8'): { cols: number; rows: number; channels: number; buffer: Uint8Array };
matToBuffer(mat: Mat, type: 'float32'): { cols: number; rows: number; channels: number; buffer: Float32Array };
```
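As a side note (not from the library docs), the returned `rows`, `cols`, and `channels` describe the layout of the flat buffer. A minimal sanity-check sketch, assuming `mat` is any `Mat` you already hold and that its data is continuous in the usual row-major, interleaved-channel (HWC) order:

```ts
// Sketch only: check that the buffer length matches the reported dimensions
// before handing it to a model. For HWC data this is rows * cols * channels elements.
const { rows, cols, channels, buffer } = OpenCV.matToBuffer(mat, 'float32');

const expectedLength = rows * cols * channels;
if (buffer.length !== expectedLength) {
  throw new Error(`Unexpected buffer length ${buffer.length}, expected ${expectedLength}`);
}
```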
The data can be converted to a `Uint8Array` or a `Float32Array`. Remember, however, that the current type of the data stored in the `Mat` (see the `DataTypes` enum) also matters. That's why I have also added support for the `convertTo` function:
```ts
invoke(name: 'convertTo', src: Mat, dst: Mat, rtype: DataTypes): void;
```
TFLite models usually expect a uint8 or float32 input tensor, so this should be enough.
Example of use:
```ts
// Convert the picked photo (base64) to a Mat.
const src = OpenCV.base64ToMat(photo.base64);

// Conversion needed because base64ToMat uses the Float32 data type by default.
// Required if we want the data back as a Uint8Array; skip it to get a Float32Array.
OpenCV.invoke('convertTo', src, src, DataTypes.CV_8U);

// Returns a Uint8Array when the second parameter is 'uint8', or a Float32Array when it is 'float32'.
const { buffer, cols, rows } = OpenCV.matToBuffer(src, 'uint8');

// The buffer can be used again as input, e.g. for frameBufferToMat.
const dst = OpenCV.frameBufferToMat(rows, cols, buffer);
```
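To connect this to the TFLite use case from the original question, here is a minimal sketch of feeding the resulting buffer to a model. It assumes the separate react-native-fast-tflite package (not part of this library) and a model whose uint8 input shape matches the Mat dimensions:

```ts
import { loadTensorflowModel } from 'react-native-fast-tflite';

async function classify(buffer: Uint8Array) {
  // Assumption: the model expects a uint8 tensor of rows x cols x channels elements;
  // resize or convert the Mat beforehand if it does not.
  const model = await loadTensorflowModel(require('./assets/model.tflite'));

  // The Uint8Array from matToBuffer can be passed directly as the input tensor,
  // avoiding the base64 round trip of toJSValue.
  const outputs = await model.run([buffer]);
  return outputs[0];
}
```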
The changes are already available in version 0.2.2.
Regards!
First of all, thanks for your effort in building this port. It looks cleaner than the previous OpenCV port.
I'm building a React Native app that needs image preprocessing before feeding the image data to TensorFlow. However, the current API only supports `toJSValue`, which returns base64 from a `Mat` (https://lukaszkurantdev.github.io/react-native-fast-opencv/apidetails#to-js-value); this is inefficient if we pass the output to TensorFlow or other modules. In C++, there is a way to pass the data directly to TensorFlow or PyTorch without copying new objects.
But I don't know how to do this in Fast OpenCV. Or should we support a new API like `OpenCV.matToBuffer(mat)`? Really appreciate your effort.