mrousavy opened 1 month ago
Name | Link
---|---
Latest commit | 57c1430615fc95b4e2720857b19b61ea31920876
Latest deploy log | https://app.netlify.com/sites/react-native/deploys/6638ba18f6701a0008c58747
Deploy Preview | https://deploy-preview-4105--react-native.netlify.app
Just a tiny detail; it doesn't really matter much, but I wanted to clarify this a bit.
There are two common pixel formats in video processing: YUV and RGB. RGB is always BGRA (4 bytes per pixel), and YUV is a bi-planar format with a Y plane of 1 byte per pixel and an interleaved UV plane at half the size of the image.
While VisionCamera implements optimizations to trim the buffers and uses YUV (or even compressed YUV) whenever possible, almost 90% of the time people need to use RGB because their ML models only work in RGB.

For 4K buffers, let's calculate the size of one RGB frame: 3,840 × 2,160 pixels × 4 bytes per pixel = 33,177,600 bytes (~33 MB). At 60 fps, that is 1,990,656,000 bytes (~2 GB) per second of data flowing through the Frame Processor.
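For reference, here's the same math as a short TypeScript sketch (plain arithmetic, not a VisionCamera API; the 60 fps figure is the assumption behind the per-second number):

```ts
// Back-of-the-envelope frame sizes for a 4K stream.
const width = 3840
const height = 2160
const fps = 60 // assumed frame rate
const pixels = width * height // 8,294,400

// RGB (BGRA): 4 bytes per pixel
const rgbFrameBytes = pixels * 4 // 33,177,600 bytes (~33 MB)

// Bi-planar YUV (e.g. NV12): full-size Y plane + half-size interleaved UV plane
const yuvFrameBytes = pixels + pixels / 2 // 12,441,600 bytes (~12 MB)

console.log(rgbFrameBytes * fps) // 1,990,656,000 bytes/s (~2 GB/s)
console.log(yuvFrameBytes * fps) // 746,496,000 bytes/s (~0.75 GB/s)
```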
Thanks to JSI, it does not matter how big the data is, because we only pass references, without making any copies or any serialization. This is the part that should be highlighted here.
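As a rough illustration (based on VisionCamera's public `useFrameProcessor` API; treat the exact property names as assumptions), the Frame Processor receives the frame as a JSI HostObject, so reading it never copies the pixel buffer:

```ts
import { useFrameProcessor } from 'react-native-vision-camera'

// Inside a component:
const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // `frame` is a JSI HostObject wrapping the native buffer. Accessing its
  // properties reads from the native side directly; the ~33 MB of pixel
  // data is never copied or serialized into the JS world.
  console.log(`Frame: ${frame.width}x${frame.height} (${frame.pixelFormat})`)
}, [])
```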