Closed — smxthereisonlyone closed this issue 5 years ago
It's not possible to run both at the same time.
Any idea why or if it's going to be changed?
Realtime neural networks require a lot of processing power, and Apple probably decided to only allow one of these at a time.
It will almost certainly be changed when there are more efficient networks and/or more powerful hardware.
Given that these networks only run on the latest iOS devices, I would not count on current generation phones to be able to run both at once.
Somebody did it: https://twitter.com/mechpil0t/status/1148150238206578688
How was it achieved then? @sridevipavithra @sam598
Without knowing exactly what they did, that effect can be achieved using only hip and head position. I do not see any segmentation or depth estimation in that example.
Another example from them is doing a similar effect but only using segmentation: https://twitter.com/mechpil0t/status/1146657374771351554
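To illustrate the point above: the mosaic effect in that video doesn't require a segmentation mask at all — two 2D pose keypoints (head and hip) are enough to define the region to pixelate. A minimal sketch of that idea, using NumPy on a raw frame (the function name, padding, and block size are illustrative assumptions, not anything from the original demo):

```python
import numpy as np

def mosaic_region(frame, head, hip, block=16, pad=40):
    """Pixelate the axis-aligned region spanned by the head and hip
    keypoints (given as (x, y) pixel coords), padded outward by `pad`
    pixels. No segmentation mask or depth estimate is needed."""
    h, w = frame.shape[:2]
    x0 = max(0, min(head[0], hip[0]) - pad)
    x1 = min(w, max(head[0], hip[0]) + pad)
    y0 = max(0, min(head[1], hip[1]) - pad)
    y1 = min(h, max(head[1], hip[1]) + pad)
    out = frame.copy()
    # Replace each block-sized tile inside the region with its mean color.
    for y in range(y0, y1, block):
        for x in range(x0, x1, block):
            tile = frame[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1))
    return out
```

In an ARKit/ARFoundation app you would feed this the 2D screen-space joint positions from body tracking each frame; the pixels outside the head-to-hip box are left untouched, which matches what the video shows.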
lots of ARKit demos you see on twitter are smoke and mirrors. that could easily have been the person holding their phone just using a touch on the screen to keep the mosaic'd portion in place.
OK, but even if it's only using hip and head position, which SDK is it using then?
Might be, but I wanna get to the bottom of it :)
actually... just looked at it again... uh... there is nothing to suggest they are using segmentation in that video
You could be right — the effect doesn't suggest it, but the caption implies it.
Just to confirm: that video is mine and it's just using 2D pose tracking. I mislabelled it as human segmentation because I wasn't paying attention >.< (at the time I thought all the body tracking was "human segmentation" in ARFoundation speak)
no "smoke" but there is one mirror :)
Thanks for the explanation!
Hey,
is it theoretically possible to run Human Segmentation and Human Body Pose at the same time?
If not, when can we expect an update that allows it?
Thanks very much