mrousavy / react-native-vision-camera

📸 A powerful, high-performance React Native Camera library.
https://react-native-vision-camera.com
MIT License
6.72k stars · 1k forks

‼️‼️‼️‼️ ✨ VisionCamera V3 ‼️‼️‼️‼️‼️ #1376

Closed mrousavy closed 9 months ago

mrousavy commented 1 year ago

We at Margelo are planning the next major version for react-native-vision-camera: VisionCamera V3 ✨

For VisionCamera V3 we target one major feature and a ton of stability and performance improvements:

  1. Write-back frame processors. We are introducing a new feature where you can simply draw on the Frame in a Frame Processor using RN Skia. This allows you to draw face masks, filters, overlays, color shading, shaders, Metal, etc.
    • Uses a hardware accelerated Skia layer for showing the Preview
    • Some cool examples like inverted colors shader filter, VHS filter (inspired by Snapchat's VHS + distortion filter), and a realtime text/bounding box overlay
    • Realtime face blurring or license plate blurring
    • Easy to write color correction, beauty filters
    • All in simple JS (RN Skia): no native code, with hot reload, while still maintaining pretty much native performance!
  2. Sync Frame Processors. Frame Processors will now be fully synchronous and run on the same thread the Camera runs on (see the sketch after this list).
    • Pretty much on-par with native performance now.
    • Run frame processing without any delay - everything until your function returns is the latest data.
    • Use runAtTargetFps(fps, ...) to run code at a throttled FPS rate inside your Frame Processor
    • Use runAsync(...) to run code on a separate thread for background processing inside your Frame Processor. This can take longer without blocking the Camera.
  3. Migrate VisionCamera to RN 0.71. Benefits:
    • Much simpler build setup. The CMakeLists/build.gradle files will be simplified as we will use prefabs, and a ton of annoying build errors should be fixed.
    • Up to date with latest React Native version
    • Prefabs support on Android
    • No more Boost/Glog/Folly downloading/extracting
  4. Completely redesigned declarative API for device/format selection (resolution, fps, low-light, ...)
    • Control exactly what FPS you want to record at
    • Know exactly if a desired format is supported and be able to fall back to a different one
    • Control the exact resolution and know what is supported (e.g. higher than 1080, but no higher than 4k, ...)
    • Control settings like low-light mode, compression, recording format H.264 or H.265, etc.
    • Add a reactive API for getAvailableCameraDevices() so external devices can be plugged in/out during runtime
    • Add zero-shutter lag API for CameraX
  5. Rewrite the native Android part from CameraX to Camera2
    • Much more stability as CameraX just isn't mature enough yet
    • Much more flexibility with devices/formats
    • Slow-motion / 240 FPS recording on Android
  6. Use a custom Worklet Runtime instead of Reanimated
    • Fixes a ton of crashes and stability issues in Frame Processors/Plugins
    • Improves compilation time as we don't need to extract Reanimated anymore
    • Doesn't break with a new Reanimated version
    • No longer requires Reanimated v2 or higher
  7. ML Models straight from JavaScript. With the custom Worklet Runtime, you can use outside HostObjects and HostFunctions. This allows you to just use things like TensorFlow Lite or PyTorch Live in a Frame Processor and run ML Models fully from JS without touching native code! (See proof of concept PR: https://github.com/facebookresearch/playtorch/pull/199 and working PR for Tensorflow: https://github.com/mrousavy/react-native-vision-camera/pull/1633)
  8. Improve Performance of Frame Processors by caching FrameHostObject instance
  9. Improve error handling by using default JS error handler instead of console.error (mContext.handleException(..))
  10. More access to the Frame in Frame Processors (also shown in the sketch after this list):
    • toByteArray(): Gets the Frame data as a byte array. The type is Uint8Array (TypedArray/ArrayBuffer). Keep in mind that Frame buffers are usually allocated on the GPU, so this comes with a performance cost of a GPU -> CPU copy operation. I've optimized it a bit to run pretty fast :)
    • orientation: The orientation of the Frame. e.g. "portrait"
    • isMirrored: Whether the Frame is mirrored (eg in selfie cams)
    • timestamp: The presentation timestamp of the Frame
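To illustrate points 2 and 10, here's a minimal sketch of how a single Frame Processor could combine runAtTargetFps, runAsync and the new Frame properties. This is based on the APIs proposed above, not final code:

import { useFrameProcessor, runAtTargetFps, runAsync } from 'react-native-vision-camera'

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // New Frame properties (point 10)
  console.log(`${frame.orientation} frame at ${frame.timestamp} (mirrored: ${frame.isMirrored})`)

  // Throttle expensive work to a lower FPS (point 2)
  runAtTargetFps(5, () => {
    'worklet'
    const bytes = frame.toByteArray() // Uint8Array - involves a GPU -> CPU copy, so use sparingly
    // ... lightweight analysis of the raw pixels ...
  })

  // Offload long-running work to a separate thread so the Camera isn't blocked (point 2)
  runAsync(frame, () => {
    'worklet'
    // ... run an expensive ML model here ...
  })
}, [])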

Of course we can't just put weeks of effort into this project for free. This is why we are looking for 5-10 partners who are interested in seeing this become reality by funding the development of those features. On top of seeing this become reality, we will also create a sponsors section for your company logo in the VisionCamera documentation/README, and we will test the new VisionCamera V3 version in your app to ensure its compatibility for your use-case. If you are interested in that, reach out to me over Twitter: https://twitter.com/mrousavy or email: me@mrousavy.com


Demo

Here's the current proof of concept we built in 3 hours:

import { Skia } from '@shopify/react-native-skia'
import { Camera, useFrameProcessor } from 'react-native-vision-camera'

// SkSL runtime effect that inverts the colors of every pixel
const runtimeEffect = Skia.RuntimeEffect.Make(`
  uniform shader image;
  half4 main(vec2 pos) {
    vec4 color = image.eval(pos);
    return vec4((1.0 - color).rgb, 1.0);
  }
`);

// paintWithRuntimeEffect is a helper from this proof of concept (not a public RN Skia API)
const paint = paintWithRuntimeEffect(runtimeEffect)

function App() {
  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    // draw the paint (and therefore the shader) over the whole Frame
    frame.drawPaint(paint)
  }, [])

  return <Camera frameProcessor={frameProcessor} />
}





Progress

So far I've spent around 60 hours improving that proof of concept and creating the demos above. I also refined the iOS part a bit, created some fixes, did some research, and improved the Skia handling.

Here is the current Draft PR: https://github.com/mrousavy/react-native-vision-camera/pull/1345

Here's a TODO list:

I reckon this will be around 500 hours of effort in total.

Update 15.2.2023: I just started working on this here: feat: ✨ V3 ✨ #1466. No one is paying me for this, so I am doing it all in my free time. I decided to just ignore issues/backlash so that I can work as productively as I can. If someone is complaining, they should either offer a fix (PR) or pay me. If I listen to all issues the library will never get better :)

mrousavy commented 11 months ago

I think I'm gonna make the Skia thing iOS only for now, it's just insanely difficult on Android and I don't wanna block the release. Can be an addition later on

metrix-hu commented 11 months ago

@mrousavy We would be glad to support more, can we discuss a bit on Twitter? Currently I can't send you a message.

mrousavy commented 11 months ago

my DMs should be open - but I just sent you a message!

obernardovieira commented 11 months ago

I think I'm gonna make the Skia thing iOS only for now, it's just insanely difficult on Android and I don't wanna block the release. Can be an addition later on

I think that's a great idea. I can also contribute a bit more and help test. I have a big very nice update on a mobile app waiting just for this. But my problem is actually with the worklets, as mentioned on the PR (IDK if it's related)

mrousavy commented 11 months ago

But my problem is actually with the worklets

can you explain more? Is there something not working with v2?

metrix-hu commented 11 months ago

@mrousavy Can you tell us when you can release a version where the iOS-only part works and only the Android features are missing?

mrousavy commented 11 months ago

I can do another RC next week. I need to update a few parts here and there, but I am still unsure about the Worklet situation. I would love to use Reanimated's Worklet implementation, but I really don't want to depend on a UI animation library just to have the Worklets feature (users shouldn't need to be on REA v3 just to use Frame Processors). Ideally a separate core library just for the Worklets infra makes sense, but obviously that means more maintenance effort.

Maybe I can find a solution together with @tomekzaw

bglgwyng commented 11 months ago

@mrousavy, based on my understanding of your response to my previous question, it seems that there are limitations when using multiple types of worklets within a single RN app. Therefore, it would be ideal for the entire RN ecosystem to have a unified worklet library that caters to various functionalities such as animation, camera, and gesture handling. Although we currently lack such a library, reanimated worklets serve as the closest alternative. I believe it's acceptable to make short-sighted decisions for now, with the anticipation that worklet-related breaking changes will eventually be introduced once a standardized worklet library comes along (maybe the one separated from Reanimated, as you suggested?).

Additionally, I noticed that you didn't mention the Reanimated worklet's missing features required by RNVC in your previous comment. I'm curious to know if these features are now supported by reanimated worklets.

obernardovieira commented 11 months ago

But my problem is actually with the worklets

can you explain more? Is there something not working with v2?

It's what I've mentioned here https://github.com/mrousavy/react-native-vision-camera/pull/1466#issuecomment-1553381626, it just crashes.

xts-bit commented 11 months ago

@mrousavy Will VisionCamera fix this issue in V3: ReferenceError: Property '_setGlobalConsole' doesn't exist?

rkmackinnon commented 11 months ago

But my problem is actually with the worklets

can you explain more? Is there something not working with v2?

It's what I've mentioned here #1466 (comment), it just crashes.

I noticed that with the newer worklets library, unlike the reanimated version, I was getting errors when calling the function in the same line as declaring it. In the reanimated version of worklets you could do something like this:

runOnJS(somefunction)(args);

In the newer version I found I had to do this:

const somejsfunction = Worklets.createRunInJsFn(somefunction);
somejsfunction(args);
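For context, here's a minimal sketch of how that looks inside a Frame Processor (detectObjects stands in for a hypothetical Frame Processor Plugin, purely for illustration):

const onDetected = Worklets.createRunInJsFn((objects) => {
  // this part runs back on the normal JS thread
  console.log(`Detected ${objects.length} objects`)
})

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  const objects = detectObjects(frame) // hypothetical plugin call
  onDetected(objects)
}, [onDetected])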

Does that help resolve the errors you are seeing?

pvedula7 commented 11 months ago
// load the two Tensorflow Lite models - those can be swapped out at runtime and hot-reloaded - just like images :)
const faceDetection = loadModel(require("../assets/face_detection.tflite"))
const handDetection = loadModel(require("../assets/hand_detection.tflite"))

@mrousavy I'm wanting to test the hot reload function. Is loadModel a function supposed to be exported by react-native-vision-camera? I'm getting an undefined function when running this import: import { loadModel } from "react-native-vision-camera";

mrousavy commented 11 months ago

Okay so here's a full explanation on the Worklet dilemma for everyone who missed it.


Currently there are two Worklet implementations:

  1. react-native-reanimated: They invented Worklets for React Native. They are working great in REA, but if third-party libraries (like VisionCamera) want to use that, it's getting really complicated.
  2. react-native-worklets: Built by @chrfalch and us @margelo - a core library with the sole purpose of providing Worklet APIs to third-party libraries. Libs like Reanimated, VisionCamera, GestureHandler, and more can use react-native-worklets in their code to spawn Runtimes and use Worklets. The benefit of that would be that we can focus on performance and API stability and the code would be separated.

@mrousavy, based on my understanding of your response to my previous question, it seems that there are limitations when using multiple types of worklets within a single RN app. Therefore, it would be ideal for the entire RN ecosystem to have a unified worklet library that caters to various functionalities such as animation, camera, and gesture handling. Although we currently lack such a library, reanimated worklets serve as the closest alternative. I believe it's acceptable to make short-sighted decisions for now, with the anticipation that worklet-related breaking changes will eventually be introduced once a standardized worklet library comes along (maybe the one separated from Reanimated, as you suggested?).

@bglgwyng yes, I 100% agree. It would be much better for libraries like VisionCamera if there was a unified core library for Worklet functionality. react-native-worklets could be a core dependency for VisionCamera, Reanimated, GestureHandler, and potentially more libraries like audio processing (something coming soon?) as said above in point 2.

The Software Mansion team is doing a great job at building the Worklet part in Reanimated, but unfortunately it is really hard for me to depend on Reanimated (a pretty big animation + sensors library) just for the Worklet functionality. Some people don't use REA, some people are on REA 2 and some on REA 3. It's pretty much impossible to support all of that in VisionCamera.

I submitted a bunch of PRs to the Reanimated 2 repo a while ago to expose some Worklet APIs to the C++ side so I can use them from VisionCamera, but with the Reanimated 3 update some of those APIs broke or don't work anymore. So now I have to update VisionCamera and add extra ifs or version checks. So from time to time VisionCamera just broke because of some changes in Reanimated, or if the user doesn't use Reanimated, or if the .zip extraction failed, or if the RN version is unknown, or if they changed something in their build.gradle, or ........

Additionally, I noticed that you didn't mention the Reanimated worklet's missing features required by RNVC in your previous comment. I'm curious to know if these features are now supported by reanimated worklets.

Unfortunately those features are still not all supported. Here's what I need in VisionCamera:

C++ side:

  1. API to create a new RuntimeManager from C++ (this will internally also create a new JS Runtime) (see current code here)
  2. On that RuntimeManager, I need a way to create a Worklet from a given jsi::Function (see current code here)
  3. I want to cache that Worklet and call it in my Camera frame callback (see current code here)

JS side:

  1. API to create a new RuntimeManager from JS (this will internally also create a new JS Runtime) (see current code here)
  2. On that RuntimeManager, I need a way to create a Worklet from a given jsi::Function (see current code here)
  3. I want to be able to call that Worklet from a different Worklet context (aka nested Worklets), so JS -> Camera Thread 1 -> Camera Thread 2 (see current code here)

Those are the 6 APIs that I currently use in VisionCamera V3 with react-native-worklets and it works perfectly fine (see the react-native-worklets USAGE docs), but unfortunately Reanimated 3 doesn't expose the last 3 APIs.
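For reference, a rough sketch of what the JS side of that looks like with react-native-worklets. (Identifiers like createContext and createRunInContextFn are illustrative and may not match the exact react-native-worklets API at this commit, so treat them as assumptions.)

import { Worklets } from 'react-native-worklets'

// 1. Create a separate Worklet context (internally this spawns its own JS Runtime)
const context = Worklets.createContext('VisionCamera')

// 2. Turn a JS function into a Worklet that runs inside that context
const runOnCameraContext = Worklets.createRunInContextFn((value) => {
  'worklet'
  // ... heavy per-frame work on the Camera context ...
  return value
}, context)

// 3. Call back into the normal JS world from a Worklet
const log = Worklets.createRunInJsFn((message) => console.log(message))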

I don't know about any ETA, but I talked to @tomekzaw and they said that they are working on that.


So in short;

  1. I want to use react-native-worklets because it is a core library focusing only on Worklet functionality. But for this to work in the ecosystem, the Software Mansion team also needs to agree to use that library so we can maintain and improve it together. I'm willing to dedicate free time to that.
  2. If REA doesn't want to depend on react-native-worklets, it's better for VisionCamera to depend on REA, as otherwise there will be two separate Worklet libraries in the ecosystem. But for me to depend on REA, the Software Mansion team needs to expose the three APIs in REA, and we also need to somehow agree on not making any breaking changes to those APIs.

Not sure what the best approach would be right now, I hate blocking VisionCamera V3 because of that but that's how it is 😓

mrousavy commented 11 months ago

I'm wanting to test the hot reload function. Is the loadModel a function supposed to be exported by react-native-vision-camera? Getting an undefined function when running this import.

@pvedula7 this was a proof of concept. It's not on GitHub, but it could be easily exposed via a C++ host object or a separate library.

tomekzaw commented 11 months ago

@mrousavy Thanks for the detailed explanation. I believe Reanimated 3 is not that far from what you guys need. How about setting up a call so we can all take a look at the code, better understand what's missing and plan some further actions?

AmrElsersy commented 11 months ago

Hi, is this library available to use yet? Where can I install it from?

I am interested in accessing the Frame byte array from javascript

Thanks

Jorge-Luis-Rangel-Peralta commented 11 months ago

Hi guys, is there a release date yet?

mrousavy commented 11 months ago

Btw I created a discussion in the Tensorflow JS repo about adding VisionCamera bindings to it - this will allow you to use any Tensorflow model without ever touching native code. https://github.com/tensorflow/tfjs/issues/7773

bennidi commented 11 months ago

@mrousavy First of all, I want to thank you for this amazing work in RNVC. It is such a strong contribution to the RN ecosystem. Really, very much appreciated.

Regarding the decision on worklet library dependency and the RN ecosystem, I want to quickly share my pain and thoughts:

I personally have found that it is ridiculously hard to create a consistent stack with all the major libraries working reliably next to each other. Just thinking of react-navigation plus some bottom-tab-bar variation, react-native-reanimated V2 vs V3, gesture-handler, some bottom-sheet library...

So far RN has been continuously a painful experience regarding libs compatibility and maintenance status of seemingly important libraries.

After one year, and even though I have years of experience with TS and React and a bunch of internal code that I know inside out, I attempted to switch to Flutter (even though Flutter seems quite verbose and never really attracted me, and I finally decided to give RN another chance). I would love to see RN become more stable and reliable. I actually believe that stability of the ecosystem will be a deciding factor in RN's future success.

From that point of view, we definitely need a dedicated worklet library which all other major libs depend on. Depending on worklets inside of REA would just continue on the road of weird dependency trees and unclear compatibility and it would WEAKEN the ecosystem in the long run. Also RNVC would suffer from bugs introduced by changes in worklet code inside of REA. I find that rather unacceptable.

I believe we "REALLY NEED MORE STANDARD LIBRARIES" within the RN ecosystem. Not just for worklets, but definitely for worklets. Worklets is clearly a core feature: processing data outside of the main thread. It has absolutely nothing to do with animations, audio or vision by itself.

So the only SANE solution would be to make REA use the same worklet library as RNVC. Even if that means that for a while there would be just another worklet library out there. But someone has to make the first move.

How difficult would it be to make REA adopt react-native-worklets? Do we need to send them flowers and gift cards? Is there anything that REA's internal worklet code can do which react-native-worklets cannot?

Risking to repeat myself: WE NEED MORE ACCEPTED STANDARD LIBS FOR REACT-NATIVE ECOSYSTEM.

RNVC could make a move towards more standardization and stability - cooperating with all possible client libs like REA to make react-native-worklets provide what they need.

tomekzaw commented 11 months ago

Do we need to send them flowers and gift cards?

yes, please 🌹

mrousavy commented 11 months ago

Thanks for your take on this @bennidi, I 100% agree with you. Libraries should try to be as stable as possible with as few dependencies as possible. A dedicated worklet library is exactly what would solve this problem, however it makes things a bit more difficult for the Reanimated team as they would then work across two projects instead of one. I guess the decision is purely up to them at this point 😄

How difficult would it be to make REA adopt react-native-worklets? Do we need to send them flowers and gift cards? Is there anything that REA's internal worklet code can do which react-native-worklets cannot?

So at this point we've worked on react-native-worklets while SWM worked on Reanimated at the same time: SWM made their Worklet implementation faster, and we made our Worklet implementation into a better third-party API.

I think if we combine the fast implementation with the easy-to-use API, we've got a gold standard for Worklets. There's even a JS-based API in react-native-worklets which basically makes my react-native-multithreading library redundant.

yes, please 🌹

@tomekzaw I'll bring u some next time I'm in Krakow 😂

bglgwyng commented 11 months ago

Since the task 'Convert it to a TurboModule/Fabric' in the to-do list is still unchecked, my understanding is that RNVC's API does not currently have a JSI interface. In simpler terms, once RNVC is converted to a TurboModule/Fabric, will APIs such as takePhoto have a JSI interface written in C++? Am I understanding this correctly?

To provide some context: I was attempting to test 'react-native-jsi-image' with RNVC, but I encountered a hurdle when I discovered that the implementation of takePhoto is solely in Swift.

mrousavy commented 11 months ago

@bglgwyng yea this is kinda like the last priority right now, because TurboModules/Fabric are still maturing in RN Core. Ideally I want JSI-Image to work with takePhoto, but this requires lots of custom bindings, and other parts are more performance critical at the moment. The V3 checklist is really huge 😅

rukundob451 commented 11 months ago

@mrousavy, great initiative, can't wait to give it a go.

mrousavy commented 11 months ago

Update 3.7.2023:

Thanks everyone who's sponsoring me, I'm working hard on getting V3 out!! ❤️

timotismjntk commented 11 months ago

I'm happy that you're taking React Native beyond what we could imagine before, thanks for all your hard work sir... Waiting for Android support.

Tbh you should be working at Meta for your contributions to the React Native ecosystem

finnholland commented 11 months ago

Hey Marc, I was wondering if V3 will support AVMultiCam and concurrent camera streaming? I read in another one of your comments that accomplishing it would require a large rewrite due to how the package was designed, so I thought it might have been made possible (or implemented) as part of V3.

Thanks for the awesome package!

metrix-hu commented 11 months ago

@mrousavy Hi Marc! Can you make a new RC today? You promised it one or two weeks ago. Maybe if it builds and works with the new dependency versions, you can make it. Fingers crossed. I'm asking because I am currently using V2 and it breaks with the latest React Native 0.72.

mrousavy commented 11 months ago

Gonna do a new RC today - do y'all want the TensorFlow Lite integration in there or not? Just for playing around - this will be a separate package later

metrix-hu commented 11 months ago

@mrousavy We do not need the TensorFlow integration just yet. Another question: is the only thing missing from the Android implementation the direct GPU-based drawing with Skia? So can we still access the byte array, recreate a Skia Image from it, and use it in a separate Skia view? Performance is not a priority for us just yet. I just want it to work.

timotismjntk commented 11 months ago

@mrousavy We do not need the TensorFlow integration just yet. Another question: is the only thing missing from the Android implementation the direct GPU-based drawing with Skia? So can we still access the byte array, recreate a Skia Image from it, and use it in a separate Skia view? Performance is not a priority for us just yet. I just want it to work.

@metrix-hu bro, if you want it ready, just pay him so he'll finish it for you. We should remember that open-source maintainers do this in their free time; no one is paying him, so don't force him to do as you wish, just enjoy the process..

🤞peace

metrix-hu commented 11 months ago

@timotismjntk Bro I am already supporting him, and not forcing anything, just asking questions...

mrousavy commented 11 months ago

VisionCamera 3.0.0-rc.3 is out! 🥳🎉 Test it:

yarn add react-native-vision-camera@3.0.0-rc.3
yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets#3ac2fbb
yarn add @shopify/react-native-skia@0.1.197

What you can play around with in this release:

  • TensorFlow Lite plugin to load any .tflite model!! ✨ (see this PR for more info)
  • More stable Worklet Runtime (copying in objects from outside)
  • New Frame Processor Plugin API (object oriented now in native code)
  • Updated Skia version, potentially performance improvements
  • Frame.toArrayBuffer() if you want to convert the Frame to an ArrayBuffer cc @metrix-hu

What still needs to be done:

  • Making the RN Skia dependency entirely optional so you can also opt out of it and it doesn't affect people who are not using it.
  • Improving Device/Format selection API on Android (still thinking about CameraX -> Camera2 rewrite?)
  • Improving orientation issues
  • Figuring out what the best approach for the Worklet problem is - I would love to maintain a separate core Worklet library together with Software Mansion if that works for them. Right now we have two libs, react-native-worklets and Reanimated and they might not always be compatible.
  • Saving Skia rendered Frame to output Video
  • Saving Skia rendered Frame to output Photo
  • Extracting the TensorFlow Lite thing to a separate library
  • Cleanups and looooooots of testing (this is where I need you guys!!)

mrousavy commented 11 months ago

I wonder how complicated it would be to add an AR layer to VisionCamera 🤔

Example API:

return (
  <Camera ...>
    {/* and then you can add one of those 3 children */}
    <Camera.Preview />
    <Camera.SkiaPreview />
    <Camera.ARPreview />
  </Camera>
);

metrix-hu commented 11 months ago

VisionCamera 3.0.0-rc.3 is out! 🥳🎉 Test it:

yarn add react-native-vision-camera@3.0.0-rc.3
yarn add react-native-worklets@https://github.com/chrfalch/react-native-worklets#3ac2fbb
yarn add @shopify/react-native-skia@0.1.197

What you can play around with in this release:

  • TensorFlow Lite plugin to load any .tflite model!! ✨ (see this PR for more info)
  • More stable Worklet Runtime (copying in objects from outside)
  • New Frame Processor Plugin API (object oriented now in native code)
  • Updated Skia version, potentially performance improvements
  • Frame.toArrayBuffer() if you want to convert the Frame to an ArrayBuffer cc @metrix-hu

What still needs to be done:

  • Making the RN Skia dependency entirely optional so you can also opt out of it and it doesn't affect people who are not using it.
  • Improving Device/Format selection API on Android (still thinking about CameraX -> Camera2 rewrite?)
  • Improving orientation issues
  • Figuring out what the best approach for the Worklet problem is - I would love to maintain a separate core Worklet library together with Software Mansion if that works for them. Right now we have two libs, react-native-worklets and Reanimated and they might not always be compatible.
  • Saving Skia rendered Frame to output Video
  • Saving Skia rendered Frame to output Photo
  • Extracting the TensorFlow Lite thing to a separate library
  • Cleanups and looooooots of testing (this is where I need you guys!!)

@mrousavy Wow, this sounds awesome. I will definitely try it today.

xts-bit commented 11 months ago

@mrousavy Does it have Reanimated v3 support? I am trying to upgrade my RN to 0.72, but Reanimated v2 doesn't support it, and when I update Reanimated v2 to v3, it doesn't support VisionCamera v2... What is the solution for that?

dimaportenko commented 11 months ago

hey folks, do you know if the Fabric module is compatible with Expo? Will v3 be compatible after migration to the new architecture?

timotismjntk commented 11 months ago

@xts-bit I'd like to help answer your question: VisionCamera v2 is more compatible with Reanimated v2...

If you try to use Reanimated v3 with VisionCamera v2 it works, but you can't use frameProcessor.

So why not give Reanimated v3 + VisionCamera v3 a chance?

xts-bit commented 11 months ago

@timotismjntk Thank you for your answer. But will VisionCamera V3 + Reanimated V3 support frame processors with RN 0.72?

timotismjntk commented 11 months ago

@dimaportenko yes of course bro, but it will be a long wait for us

timotismjntk commented 11 months ago

@xts-bit I hope so brother, just stay tuned and wait for updates on this great library

rocket13011 commented 11 months ago

Hello,

I wanted to test the rc3 version, but on iOS I got a build error: TensorFlowLiteObjC/TFLTensorFlowLite.h file not found

how to proceed?

Thanks for your help.

rocket13011 commented 11 months ago

and on Android:


A problem occurred configuring project ':react-native-vision-camera'.
> Could not determine the dependencies of null.
   > Could not resolve all task dependencies for configuration ':react-native-vision-camera:classpath'.
      > Could not find com.android.tools.build:gradle:.
        Required by:
            project :react-native-vision-camera

animatereactnative commented 11 months ago

Hi @timotismjntk,

Could you guide me or suggest how I can run Reanimated 3 with vision camera v2?

I tried to disable the frame processor but it doesn't work.

GCC_PREPROCESSOR_DEFINITIONS = ( "DEBUG=1", "VISION_CAMERA_DISABLE_FRAME_PROCESSORS=1", "$(inherited)", );

Thank you

mrousavy commented 11 months ago

Hey @rocket13011 - my bad, you need to add

pod 'TensorFlowLiteObjC', :subspecs => ['Metal', 'CoreML']

to your Podfile.

Regarding Android; Android does not work in this RC and is work in progress.

willburstein commented 11 months ago

I was originally getting the same error as @rocket13011; now I'm getting the error that 'include/core/SkBlendMode.h' file not found. I'm using react-native-skia 0.1.197. When I run pod install, it says Skia is disabled?

metrix-hu commented 11 months ago

@mrousavy Even without Skia support, I actually wanted to try this RC on Android too. If you can guide me on how to test the current version (for example, which branch should I be on? Is there an example app inside the VisionCamera repo, or do I have to link it to my real app somehow?), then I would be glad to help make it work. Maybe we can discuss the details on Twitter or somewhere else. Drop me a PM if you are interested.

rocket13011 commented 11 months ago

Thanks @mrousavy, but now I have the same issue as @willburstein ;)

timothygorer commented 11 months ago

I'm still getting the error TensorFlowLiteObjC/TFLTensorFlowLite.h file not found even with the pod added

ivosabev commented 11 months ago

I am having this error https://github.com/mrousavy/react-native-vision-camera/issues/1283 on an Apple M2, and I can't figure out how to resolve it. My CMake 3.22.1 is working fine.