marcusx2 opened 3 years ago
It seems that everything is displayed on a canvas... can you guys find a way to put the data into a WebCamTexture or Texture2D? That way we can code the UI in Unity, display particles, etc. Otherwise I don't see the point if we can't use Unity's UI or particles.
Hey, thank you! What do you mean by "video feed"? At the moment, everything is implemented in Unity, so you can create your custom UI and particles from Unity. We did a complete example with the Crash AR Demo, and we have shown the full process in this video: https://www.youtube.com/watch?v=bcw7mwjXgpE
I see!! My mistake then.
What do you mean by "video feed"?
By video feed I mean the data from the camera. If everything were displayed on a canvas overlay and not transferred to Unity, we wouldn't be able to use Unity's UI or particles. But I can see that is not the case, wonderful <3.
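For anyone landing here with the same question: the general pattern being described (camera data available inside Unity rather than on an HTML canvas overlay) can be sketched with Unity's standard APIs. This is a minimal illustration using `WebCamTexture`, not the plugin's actual implementation, which may source its frames differently:

```csharp
using UnityEngine;

// Hypothetical sketch: pull the device camera into a WebCamTexture and copy
// each new frame into a Texture2D. Once the feed lives in a Unity texture,
// regular Unity UI (e.g. a RawImage) and particle systems can render over it.
public class CameraFeedToTexture : MonoBehaviour
{
    WebCamTexture webcam;
    Texture2D frame;

    void Start()
    {
        webcam = new WebCamTexture();
        webcam.Play();
    }

    void Update()
    {
        // Only copy when the camera actually produced a new frame.
        if (!webcam.didUpdateThisFrame) return;

        // (Re)allocate the target texture if the feed size changed.
        if (frame == null || frame.width != webcam.width || frame.height != webcam.height)
            frame = new Texture2D(webcam.width, webcam.height, TextureFormat.RGBA32, false);

        frame.SetPixels32(webcam.GetPixels32());
        frame.Apply();
        // "frame" can now be assigned to a RawImage.texture or a material.
    }
}
```

In practice you can often skip the `Texture2D` copy entirely and assign the `WebCamTexture` itself to a material or `RawImage`, since `WebCamTexture` is already a `Texture`; the explicit copy is only needed when you want CPU-side pixel access.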
@doJester13 I can't wait for image tracking support! Any ETA? Of all AR experiences, image tracking makes up 90% of the use cases; the other 10% are world tracking (plane detection) and face tracking. Pretty please put image recognition ahead in the queue! Also, how much does a simple experience like this weigh? 30MB?
I am also curious about Plane Detection. :) Keep up your work. It's amazing!!!
Hello there! This is just a thread to show my appreciation for your efforts. What you're doing is huge and I can't wait until image tracking is available. If you can put that feature ahead in the queue hahaha. Also just curious, the video feed that is showing, is it in a canvas overlay or is it actually inside a Texture2D in Unity?