Closed: mrousavy closed this issue 1 year ago
I think I'm gonna make the Skia thing iOS only for now, it's just insanely difficult on Android and I don't wanna block the release. Can be an addition later on
@mrousavy We would be glad to support more — can we discuss a bit on Twitter? Currently I cannot send you a message.
my DMs should be open - but I just sent you a message!
> I think I'm gonna make the Skia thing iOS only for now, it's just insanely difficult on Android and I don't wanna block the release. Can be an addition later on

I think that's a great idea. I can also contribute a bit more and help test. I have a big, very nice update to a mobile app waiting just for this. But my problem is actually with the worklets, as mentioned on the PR (IDK if it's related)
> But my problem is actually with the worklets
can you explain more? Is there something not working with v2?
@mrousavy Can you tell us when you can release a version where the iOS-only part works and only those features are missing on Android?
I can do another RC next week. I need to update a few parts here and there, but I am actually still unsure about the Worklet situation. I would love to use Reanimated's Worklet implementation, but I really don't want to depend on a UI animation library just to have the Worklets feature (users shouldn't need to be on REA v3 just to use Frame Processors). Ideally a separate core library just for the Worklets infra makes sense; obviously this means more maintenance effort.
Maybe I can find a solution together with @tomekzaw
@mrousavy, based on my understanding of your response to my previous question, it seems that there are limitations when using multiple types of worklets within a single RN app. Therefore, it would be ideal for the entire RN ecosystem to have a unified worklet library that caters to various functionalities such as animation, camera, and gesture handling. Although we currently lack such a library, Reanimated worklets serve as the closest alternative. I believe it's acceptable to make short-sighted decisions for now, with the anticipation that worklet-related breaking changes will eventually be introduced once a standardized worklet library arrives (maybe the one separated from Reanimated, as you suggested?).
Additionally, I noticed that you didn't mention the Reanimated worklet's missing features required by RNVC in your previous comment. I'm curious to know if these features are now supported by reanimated worklets.
> But my problem is actually with the worklets

> can you explain more? Is there something not working with v2?
It's what I've mentioned here https://github.com/mrousavy/react-native-vision-camera/pull/1466#issuecomment-1553381626, it just crashes.
@mrousavy Will VisionCamera fix this issue in V3? `ReferenceError: Property '_setGlobalConsole' doesn't exist`
> But my problem is actually with the worklets

> can you explain more? Is there something not working with v2?
It's what I've mentioned here #1466 (comment), it just crashes.
I noticed that with the newer worklets library, unlike the Reanimated version, I was getting errors when calling the function on the same line as declaring it. In the Reanimated version of worklets you could do something like this:

```js
runOnJS(somefunction)(args);
```

In the newer version I found I had to do this:

```js
const somejsfunction = Worklets.createRunInJsFn(somefunction);
somejsfunction(args);
```
Does that help resolve the errors you are seeing?
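For anyone skimming, the difference between the two calling conventions can be simulated in plain TypeScript. This is only a shape-level sketch: the real `runOnJS` and `Worklets.createRunInJsFn` marshal calls across threads, while both wrappers here execute synchronously.

```typescript
// Shape-level simulation of the two calling conventions discussed above.
// NOTE: the real libraries hop between threads; these run synchronously.
type AnyFn = (...args: any[]) => any;

// Reanimated-style: wrap and call in the same expression.
function runOnJS<F extends AnyFn>(fn: F) {
  return (...args: Parameters<F>): ReturnType<F> => fn(...args);
}

// react-native-worklets-style: create the wrapper once, then call it later.
function createRunInJsFn<F extends AnyFn>(fn: F) {
  return (...args: Parameters<F>): ReturnType<F> => fn(...args);
}

const log = (msg: string) => `logged: ${msg}`;

// Reanimated v2 pattern - declare and call on the same line:
const a = runOnJS(log)("hello");

// Newer worklets pattern - declare first, call separately:
const logInJs = createRunInJsFn(log);
const b = logInJs("hello");
```

The point of the second pattern is that the wrapper is created once, ahead of time, and then reused for every subsequent call.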
```js
// load the two Tensorflow Lite models - those can be swapped out at runtime
// and hot-reloaded - just like images :)
const faceDetection = loadModel(require("../assets/face_detection.tflite"))
const handDetection = loadModel(require("../assets/hand_detection.tflite"))
```
@mrousavy I want to test the hot-reload function. Is `loadModel` a function supposed to be exported by react-native-vision-camera? I'm getting an undefined function when running this import: `import { loadModel } from "react-native-vision-camera";`
Okay so here's a full explanation on the Worklet dilemma for everyone who missed it.
Currently there are two Worklet implementations:
> @mrousavy, based on my understanding of your response to my previous question, it seems that there are limitations when using multiple types of worklets within a single RN app. Therefore, it would be ideal for the entire RN ecosystem to have a unified worklet library that caters to various functionalities such as animation, camera, and gesture handling. Although we currently lack such a library, Reanimated worklets serve as the closest alternative. I believe it's acceptable to make short-sighted decisions for now, with the anticipation that worklet-related breaking changes will eventually be introduced once a standardized worklet library arrives (maybe the one separated from Reanimated, as you suggested?).
@bglgwyng yes, I 100% agree. It would be much better for libraries like VisionCamera if there was a unified core library for Worklet functionality. react-native-worklets could be a core dependency for VisionCamera, Reanimated, GestureHandler, and potentially more libraries like audio processing (something coming soon?) as said above in point 2.
The Software Mansion team is doing a great job at building the Worklet part of Reanimated, but unfortunately it is really hard for me to depend on Reanimated (a pretty big animation + sensors library) just for the Worklet functionality. Some people don't use REA, some people are on REA 2 and some on REA 3. It's pretty much impossible to support all of that in VisionCamera.
I submitted a bunch of PRs to the Reanimated 2 repo a while ago to expose some Worklet APIs to the C++ side so I can use them from VisionCamera, but with the Reanimated 3 update some of those APIs broke or don't work anymore. So now I have to update VisionCamera and add extra ifs or version checks. So from time to time VisionCamera just broke because of some changes in Reanimated, or if the user doesn't use Reanimated, or if the .zip extraction failed, or if the RN version is unknown, or if they changed something in their build.gradle, or ........
> Additionally, I noticed that you didn't mention the Reanimated worklet's missing features required by RNVC in your previous comment. I'm curious to know if these features are now supported by reanimated worklets.
Unfortunately those features are still not all supported. Here's what I need in VisionCamera:
C++ side:
JS side:
Those are the 6 APIs that I currently use in VisionCamera V3 with react-native-worklets and it works perfectly fine (see the react-native-worklets USAGE docs), but unfortunately Reanimated 3 doesn't expose the last 3 APIs.
I don't know about any ETA, but I talked to @tomekzaw and they said that they are working on that.
So in short:
Not sure what the best approach would be right now, I hate blocking VisionCamera V3 because of that but that's how it is 😓
> I'm wanting to test the hot reload function. Is the loadModel a function supposed to be exported by react-native-vision-camera? Getting an undefined function when running this import.
@pvedula7 this was a proof of concept. It's not on GitHub, but it could be easily exposed via a C++ host object or a separate library.
@mrousavy Thanks for the detailed explanation. I believe Reanimated 3 is not that far from what you guys need. How about setting up a call so we can all take a look at the code, better understand what's missing and plan some further actions?
Hi, is that library available to be used? Where can I install it from?
I am interested in accessing the Frame byte array from JavaScript.
Thanks
Hi guys, is there a release date yet?
Btw I created a discussion in the Tensorflow JS repo about adding VisionCamera bindings to it - this will allow you to use any Tensorflow model without ever touching native code. https://github.com/tensorflow/tfjs/issues/7773
@mrousavy First of all, I want to thank you for this amazing work in RNVC. It is such a strong contribution to the RN ecosystem. Really, very much appreciated.
Regarding the decision on worklet library dependency and the RN ecosystem, I want to quickly share my pain and thoughts:
I personally have found that it is ridiculously hard to create a consistent stack with all the major libraries working reliably next to each other. Just thinking of react-navigation plus some bottom-tab-bar variation, react-native-reanimated v2 vs v3, gesture-handler, some bottom-sheet library...
So far RN has been continuously a painful experience regarding libs compatibility and maintenance status of seemingly important libraries.
After one year, and even though I have years of experience with TS and React and a bunch of internal code that I know inside out, I made an attempt to switch to Flutter (even though Flutter seems quite verbose and never really attracted me, and I finally decided to give RN another chance). I would love to see RN become more stable and reliable. I actually believe that stability of the ecosystem will be a deciding factor in RN's future success.
From that point of view, we definitely need a dedicated worklet library which all other major libs depend on. Depending on worklets inside of REA would just continue on the road of weird dependency trees and unclear compatibility and it would WEAKEN the ecosystem in the long run. Also RNVC would suffer from bugs introduced by changes in worklet code inside of REA. I find that rather unacceptable.
I believe we "REALLY NEED MORE STANDARD LIBRARIES" within the RN ecosystem. Not just for worklets, but definitely for worklets. Worklets is clearly a core feature: processing data outside of the main thread. It has absolutely nothing to do with animations, audio or vision by itself.
So the only SANE solution would be to make REA use the same worklet library as RNVC. Even if that means that for a while there would be just another worklet library out there. But someone has to make the first move.
How difficult would it be to make REA adopt react-native-worklets? Do we need to send them flowers and gift cards? Is there anything that REA's internal worklet code can do which react-native-worklets cannot?
Risking to repeat myself: WE NEED MORE ACCEPTED STANDARD LIBS FOR REACT-NATIVE ECOSYSTEM.
RNVC could make a move towards more standardization and stability - cooperating with all possible client libs like REA to make react-native-worklets provide what they need.
> Do we need to send them flowers and gift cards?
yes, please 🌹
Thanks for your take on this @bennidi, I 100% agree with you. Libraries should try to be as stable as possible with as few dependencies as possible. A dedicated worklet library is exactly what would solve this problem; however, it makes things a bit more difficult for the Reanimated team, as they would then work across two projects instead of one. I guess the decision is purely up to them at this point 😄
> How difficult would it be to make REA adopt react-native-worklets? Do we need to send them flowers and gift cards? Is there anything that REA's internal worklet code can do which react-native-worklets cannot?
So at this point we worked on react-native-worklets and SWM worked on Reanimated at the same time, SWM made their Worklet implementation faster, we made our worklet implementation into a better third-party API.
I think if we combine the fast implementation with the easy-to-use API, we've got a gold standard for Worklets. There's even a JS-based API in react-native-worklets which basically makes my react-native-multithreading library redundant.
> yes, please 🌹
@tomekzaw I'll bring u some next time I'm in Krakow 😂
Since the task 'Convert it to a TurboModule/Fabric' in the to-do list is still unchecked, my understanding is that RNVC's API does not currently have a JSI interface. In simpler terms, once RNVC is converted to TurboModule/Fabric, will APIs such as takePhoto have a JSI interface written in C++? Am I understanding this correctly?
To provide some context: I was attempting to test 'react-native-jsi-image' with RNVC, but I encountered a hurdle when I discovered that the implementation of takePhoto is solely in Swift.
@bglgwyng yea this is kinda like the last priority right now, because TurboModules/Fabric are still maturing in RN Core. Ideally I want JSI-Image to work with takePhoto, but this requires lots of custom bindings, and other parts are more performance critical at the moment. The V3 checklist is really huge 😅
@mrousavy, great initiative can't wait to give it a go.
Update 3.7.2023:
The Tensorflow Lite Plugin is working perfectly. 😍 Even I was a bit surprised when I was successfully running an object detector and drawing red boxes around it in VisionCamera / JavaScript at >60 FPS 🤯. It's perfectly smooth and even faster in release builds! Check it out & try it here: https://github.com/mrousavy/react-native-vision-camera/pull/1633
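For context on what "drawing red boxes" involves on the JS side: object detectors typically return boxes in normalized coordinates, which you scale to the frame's pixel size before drawing. A plain-TypeScript sketch of that scaling step (the `Detection` shape here is illustrative, not the plugin's actual output format):

```typescript
// Object detectors usually return boxes in normalized [0..1] coordinates.
// To draw them, scale each box to the frame's pixel dimensions.
// The Detection shape below is illustrative, not the plugin's real format.
interface Detection {
  x: number; // left edge, 0..1
  y: number; // top edge, 0..1
  w: number; // width, 0..1
  h: number; // height, 0..1
}

function toPixelRect(d: Detection, frameWidth: number, frameHeight: number) {
  return {
    left: Math.round(d.x * frameWidth),
    top: Math.round(d.y * frameHeight),
    width: Math.round(d.w * frameWidth),
    height: Math.round(d.h * frameHeight),
  };
}

// e.g. a detection covering part of a 1000x800 frame:
const rect = toPixelRect({ x: 0.5, y: 0.25, w: 0.2, h: 0.1 }, 1000, 800);
```

The resulting rect is what you would hand to Skia (or any overlay view) to draw the box.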
I might build the Tensorflow Lite plugin into a separate general purpose module, with the added integration for VisionCamera. So then this can also run all kinds of different things, not only Camera stuff.
I talked to @tomekzaw and we came to the conclusion that we will try to use Reanimated V3 Worklets for VisionCamera V3. They will try to expose some APIs to make the API compatible; some things like nested Worklets are still missing, and at the moment it does not run at all for VisionCamera. In my opinion this is a temporary solution, as a standalone core Worklet library is the most stable solution for the RN ecosystem. But we'll see :)
I'll update all dependencies in the V3 branch today, as there were some changes in RN Skia and I need to make sure that it still builds. We will need to introduce a build pipeline to RN Skia at some point so that changes in that lib don't break VisionCamera - wdyt @chrfalch?
I'll try to investigate RN Skia on Android again as this was still unfinished. I will not focus on performance too much at first, then re-investigate perf improvements later.
Thanks everyone who's sponsoring me, I'm working hard on getting V3 out!! ❤️
I'm happy that you're taking React Native beyond what we could imagine before. Thanks for all your hard work, sir... Waiting for Android support.
Tbh you should work for Meta for your contributions to the React Native ecosystem
Hey Marc, I was wondering if V3 will support AVMultiCam and concurrent camera streaming? I read in another one of your comments that accomplishing it would require a large rewrite due to how the package was designed; I thought it might have been made possible to implement as part of V3.
Thanks for the awesome package!
@mrousavy Hi Marc! Can you make a new RC today? You promised it one or two weeks ago. Maybe if it builds and works with the new dependency versions, you can release it. Fingers crossed. I'm asking because I am currently using V2 and it breaks with the latest React Native 0.72
Gonna do a new RC today, y'all want the Tensorflow Lite integration in there or not? Just for playing around - this will be a separate package later
@mrousavy We do not need the Tensorflow integration just yet. Another question: is the only thing missing from the Android implementation the direct GPU-based drawing to Skia? So can we still access the byte array, recreate a Skia Image from it, and use that in a separate Skia view? Performance is not a priority for us just yet. I just want it to work.
> @mrousavy We do not need the Tensorflow integration just yet. Another question: is the only thing missing from the Android implementation the direct GPU-based drawing to Skia? So can we still access the byte array, recreate a Skia Image from it, and use that in a separate Skia view? Performance is not a priority for us just yet. I just want it to work.
@metrix-hu bro, if you want it ready, just pay him so he will finish it for you. We should remember that open-source maintainers do this in their free time - no one pays him, so don't force him to do as you wish, just enjoy the process..
🤞peace
@timotismjntk Bro I am already supporting him, and not forcing anything, just asking questions...
VisionCamera 3.0.0-rc.3 is out! 🥳🎉 Test it:
```sh
yarn add react-native-vision-camera@3.0.0-rc.3
yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets#3ac2fbb
yarn add @shopify/react-native-skia@0.1.197
```

What you can play around with in this release:

- TensorFlow Lite plugin to load any `.tflite` model!! ✨ (see this PR for more info)
- More stable Worklet Runtime (copying in objects from outside)
- New Frame Processor Plugin API (object oriented now in native code)
- Updated Skia version, potentially performance improvements
- `Frame.toArrayBuffer()` if you want to convert the Frame to an ArrayBuffer cc @metrix-hu

What still needs to be done:

- Making the RN Skia dependency entirely optional so you can also opt out of it and it doesn't affect people who are not using it.
- Improving the Device/Format selection API on Android (still thinking about a CameraX -> Camera2 rewrite?)
- Improving orientation issues
- Figuring out what the best approach for the Worklet problem is - I would love to maintain a separate core Worklet library together with Software Mansion if that works for them. Right now we have two libs, react-native-worklets and Reanimated, and they might not always be compatible.
- Saving the Skia-rendered Frame to the output video
- Saving the Skia-rendered Frame to the output photo
- Extracting the TensorFlow Lite thing to a separate library
- Cleanups and looooooots of testing (this is where I need you guys!!)
I wonder how complicated it would be to add an AR layer to VisionCamera 🤔
Example API:

```jsx
return (
  <Camera ...>
    {/* and then you can add one of those 3 children */}
    <Camera.Preview />
    <Camera.SkiaPreview />
    <Camera.ARPreview />
  </Camera>
);
```
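To make the `Frame.toArrayBuffer()` mention above concrete: once the pixel data is an ArrayBuffer, you can process it with ordinary typed-array code. A plain-TypeScript sketch that computes mean brightness, assuming a single-channel (grayscale) layout — the real pixel format depends on the camera configuration:

```typescript
// Once you have the frame as an ArrayBuffer, plain typed-array code works.
// Assumes a single-channel (grayscale) layout; the actual pixel format
// depends on the camera configuration.
function averageBrightness(buffer: ArrayBufferLike): number {
  const pixels = new Uint8Array(buffer);
  if (pixels.length === 0) return 0;
  let sum = 0;
  for (let i = 0; i < pixels.length; i++) sum += pixels[i];
  return sum / pixels.length;
}

// In a real frame processor the buffer would come from frame.toArrayBuffer();
// here any ArrayBuffer demonstrates the idea:
const brightness = averageBrightness(new Uint8Array([0, 255, 128, 129]).buffer);
```

Keep in mind the GPU -> CPU copy cost mentioned elsewhere in this thread: pulling the buffer into JS is not free, so do it only when you actually need the raw bytes.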
> VisionCamera 3.0.0-rc.3 is out! 🥳🎉 Test it:
>
> ```sh
> yarn add react-native-vision-camera@3.0.0-rc.3
> yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets#3ac2fbb
> yarn add @shopify/react-native-skia@0.1.197
> ```
>
> What you can play around with in this release:
>
> - TensorFlow Lite plugin to load any `.tflite` model!! ✨ (see this PR for more info)
> - More stable Worklet Runtime (copying in objects from outside)
> - New Frame Processor Plugin API (object oriented now in native code)
> - Updated Skia version, potentially performance improvements
> - `Frame.toArrayBuffer()` if you want to convert the Frame to an ArrayBuffer cc @metrix-hu
>
> What still needs to be done:
>
> - Making the RN Skia dependency entirely optional so you can also opt out of it and it doesn't affect people who are not using it.
> - Improving the Device/Format selection API on Android (still thinking about a CameraX -> Camera2 rewrite?)
> - Improving orientation issues
> - Figuring out what the best approach for the Worklet problem is - I would love to maintain a separate core Worklet library together with Software Mansion if that works for them. Right now we have two libs, react-native-worklets and Reanimated, and they might not always be compatible.
> - Saving the Skia-rendered Frame to the output video
> - Saving the Skia-rendered Frame to the output photo
> - Extracting the TensorFlow Lite thing to a separate library
> - Cleanups and looooooots of testing (this is where I need you guys!!)
@mrousavy Wow, this sounds awesome. I will definitely try it today.
@mrousavy Does it have Reanimated v3 support? I am trying to upgrade my RN to 0.72, but Reanimated v2 doesn't support it, and when I update Reanimated v2 to v3, it doesn't support VisionCamera v2... What is the solution for that?
hey folks, do you know if the Fabric module is compatible with Expo? Will v3 be compatible after migration to the new architecture?
@xts-bit I'm here to help answer your question: VisionCamera v2 is more compatible with Reanimated v2.
If you try to use Reanimated v3 with VisionCamera v2, it works, but you can't use frame processors.
So why not give Reanimated v3 + VisionCamera v3 a chance?
@timotismjntk Thank you for your answer. But will VisionCamera v3 + Reanimated v3 support frame processors and RN 0.72?
@dimaportenko yes of course bro, but it will be a long wait for us
@xts-bit I hope so, brother. Just stay tuned for updates on this great library
Hello,
I wanted to test the rc3 version, but on iOS I got a build error:

```
TensorFlowLiteObjC/TFLTensorFlowLite.h file not found
```

How should I proceed?
Thanks for your help.
and on Android:

```
A problem occurred configuring project ':react-native-vision-camera'.
> Could not determine the dependencies of null.
> Could not resolve all task dependencies for configuration ':react-native-vision-camera:classpath'.
> Could not find com.android.tools.build:gradle:.
  Required by:
      project :react-native-vision-camera
```
Hi @timotismjntk,
Could you guide me or suggest how I can run Reanimated 3 with vision camera v2?
I tried to disable the frame processors but it doesn't work:

```
GCC_PREPROCESSOR_DEFINITIONS = (
  "DEBUG=1",
  "VISION_CAMERA_DISABLE_FRAME_PROCESSORS=1",
  "$(inherited)",
);
```
Thank you
Hey @rocket13011 - my bad, you need to add

```ruby
pod 'TensorFlowLiteObjC', :subspecs => ['Metal', 'CoreML']
```

to your Podfile.
Regarding Android: Android does not work in this RC yet; it is a work in progress.
I was originally getting the same error as @rocket13011; now I'm getting the error that `'include/core/SkBlendMode.h' file not found`. I'm using react-native-skia 0.1.197. When I run pod install, it says Skia is disabled?
@mrousavy Even without Skia support, I actually wanted to try this RC on Android too. If you can guide me: how can I test the current version, for example which branch should I be on? Is there an example app inside the VisionCamera repo, or do I have to link it to my real app somehow? Then I would be glad to help make it work. Maybe we can discuss the details on Twitter or somewhere else. Drop me a PM if you are interested.
Thanks @mrousavy, but now I have the same issue as @willburstein ;)
I'm still getting the error `TensorFlowLiteObjC/TFLTensorFlowLite.h file not found` even with the pod added.
I am having this error https://github.com/mrousavy/react-native-vision-camera/issues/1283 on an Apple M2, and I can't figure out how to resolve it. My CMake 3.22.1 is working fine.
We at Margelo are planning the next major version for react-native-vision-camera: VisionCamera V3 ✨
For VisionCamera V3 we target one major feature and a ton of stability and performance improvements:

- `runAtTargetFps(fps, ...)` to run code at a throttled FPS rate inside your Frame Processor
- `runAsync(...)` to run code on a separate thread for background processing inside your Frame Processor. This can take longer without blocking the Camera.
- A reworked `getAvailableCameraDevices()` so external devices can become plugged in/out during runtime
- Frame Processor Plugins receive the `FrameHostObject` instance (errors are forwarded to `mContext.handleException(..)`)
- New properties on the Frame object:
  - `toByteArray()`: Gets the Frame data as a byte array. The type is `Uint8Array` (`TypedArray`/`ArrayBuffer`). Keep in mind that Frame buffers are usually allocated on the GPU, so this comes with the performance cost of a GPU -> CPU copy operation. I've optimized it a bit to run pretty fast :)
  - `orientation`: The orientation of the Frame, e.g. `"portrait"`
  - `isMirrored`: Whether the Frame is mirrored (e.g. in selfie cams)
  - `timestamp`: The presentation timestamp of the Frame

Of course we can't just put weeks of effort into this project for free. This is why we are looking for 5-10 partners who are interested in seeing this become reality by funding the development of those features. On top of seeing this become reality, we will also create a sponsors section for your company logo in the VisionCamera documentation/README, and we will test the new VisionCamera V3 version in your app to ensure its compatibility for your use case. If you are interested in that, reach out to me over Twitter: https://twitter.com/mrousavy or email: me@mrousavy.com
Demo
Here's the current proof of concept we built in 3 hours:
Progress
So far I've spent around 60 hours improving that proof of concept and creating the demos above. I also refined the iOS part a bit, created some fixes, did some research, and improved the Skia handling.
Here is the current Draft PR: https://github.com/mrousavy/react-native-vision-camera/pull/1345
Here's a TODO list:

- `runAtTargetFps`
- `runAsync`
- `toByteArray()`, `orientation`, `isMirrored` and `timestamp`
- `onFrame`
- `orientation`
- `toFrame`
I reckon this will be around 500 hours of effort in total.
Update 15.2.2023: I just started working on this here: feat: ✨ V3 ✨ #1466. No one is paying me for this, so I am doing it all in my free time. I decided to just ignore issues/backlash so that I can work as productively as I can. If someone is complaining, they should either offer a fix (PR) or pay me. If I listened to every issue, the library would never get better :)