When switching to landscape mode, the detections appear erratic. Is there something we can do to support landscape mode better in the current version?
Can you add a screenshot or a video showing what you mean by "detections appear erratic"?
There you go. It's probably because the model expects images with upright faces. I tried calling frame.rotate(90, 0, 0) in the frame processor before passing the frame to the face detector, but it doesn't seem to have any effect.
Okay, can you please check if frame.orientation changes when you rotate your device? Currently I'm using the frame's orientation to determine the MLKit orientation.
My app was locked in portrait mode, so I just unlocked it. Oddly, frame.orientation still never changes. Orientation changes do get captured with expo-screen-orientation, though (see recording below). So I guess it is a react-native-vision-camera issue? Is it possible that expo or the screen-orientation lib interferes with frame.orientation, @mrousavy?
What would be awesome (and maybe the simpler fix) would be the ability to manually pass the screen orientation to detectFaces. That would also make it possible to lock the UI in portrait mode while still processing frames in their actual orientation for vision computing, the way Instagram, TikTok, etc. do it. What do you think?
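Something like this, purely as a hypothetical API shape (the options argument does not exist in the library today):

```ts
// Hypothetical: pass the current screen orientation explicitly,
// instead of relying on frame.orientation.
const faces = detectFaces(frame, { orientation: 'landscape-left' })
```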
Okay, I definitely need to update the documentation on frame.orientation, as a lot of people seem to misunderstand what it actually means and I get asked about this almost daily.
frame.orientation is probably not what most people expect it to be. Maybe I can find a different name for it. But it always describes the way you need to read the Frame to make sure it is upright. That's because camera sensors are often mounted in landscape orientation, which means you need to rotate the Frame by 90° to display it upright. This is already handled for preview, photo, and video, but in Frame Processors you need to do it yourself.

If I always rotated the Frame into the upright position, that would introduce a huge latency and performance bottleneck in the pipeline. So instead, ML models work by internally just reading the Frame rotated; that way I don't have to actually rotate any buffers around and it's much faster. Hope this explains why it stays the same.
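To make that concrete, here's a minimal sketch (assuming react-native-vision-camera v3+ and a plain frame processor) showing that frame.orientation reflects the sensor mounting rather than how the device is currently held:

```ts
import { useFrameProcessor } from 'react-native-vision-camera'

const frameProcessor = useFrameProcessor((frame) => {
  'worklet'
  // Typically logs the same value (e.g. "landscape-left") no matter how
  // the phone is rotated - it describes how to read the sensor buffer
  // so it appears upright, not the current device/UI orientation.
  console.log(`${frame.width}x${frame.height} - ${frame.orientation}`)
}, [])
```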
I'll do some tests and try to implement this feature ASAP. The problem is I'm busy with another project right now so I can't say when I'll have time to do this.
Anyway, if you want to try it yourself: orientation is handled in these lines on the native side. A good starting point is to set fixed values on the native side and check if it works:
```swift
// iOS
let image = VisionImage(buffer: frame.buffer)
image.orientation = .up // or .left, .right, etc.
```

```kotlin
// Android
val image = InputImage.fromMediaImage(frame.image, 0) // or 90, 180, etc.
```
Maybe we can add an orientation prop to the config, so that when the orientation changes it will recreate the face detector:
```ts
const orientation = someOrientationHook()
const { detectFaces } = useFaceDetector({
  // ...other props
  orientation,
})
```
Or maybe get the device's orientation on the native side instead?
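For the JS-side variant, someOrientationHook could be implemented roughly like this (just a sketch using expo-screen-orientation, which was mentioned above; the hook name and the exact enum-to-prop mapping are up for discussion):

```ts
import { useEffect, useState } from 'react'
import * as ScreenOrientation from 'expo-screen-orientation'

// Sketch of the hypothetical someOrientationHook() from the snippet above.
function useScreenOrientation() {
  const [orientation, setOrientation] = useState<ScreenOrientation.Orientation>(
    ScreenOrientation.Orientation.PORTRAIT_UP
  )

  useEffect(() => {
    // Read the initial orientation, then subscribe to changes.
    ScreenOrientation.getOrientationAsync().then(setOrientation)
    const subscription = ScreenOrientation.addOrientationChangeListener((event) =>
      setOrientation(event.orientationInfo.orientation)
    )
    return () => ScreenOrientation.removeOrientationChangeListener(subscription)
  }, [])

  return orientation
}
```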
@gwendall I just released version 1.7.0. Can you please check if it really fixes this issue?