OpenFTC / EasyOpenCV

Finally, a straightforward and easy way to use OpenCV on an FTC robot!

Manually set camera exposure? #7

Closed nch0w closed 4 years ago

nch0w commented 4 years ago

We need a way to set the camera exposure (i.e. brightness of the image) on the internal camera. Currently, it seems that the OpenCvInternalCamera API does not contain something we can use to do this (it only has settings like flashlight). Is there a workaround?

Windwoes commented 4 years ago

To quote the readme:

> Internal camera support is currently provided via the Android Camera v1 API. This means that manual focus/exposure/ISO control is not possible. However, the architecture of this library has been designed such that it would be straightforward to integrate an alternate implementation that used the Camera v2 API.

However, the Camera v1 API does support "exposure compensation" i.e. a limited range of adjustment from the automatic exposure value. If this would satisfy your requirement, I can work on implementing support for it into the API.

nch0w commented 4 years ago

The problem with exposure compensation is that it is relative. I.e. if the exposure is already good, we don't want to increase/decrease it.

You're looking at the features in the v1 API in the "Camera features" section here, right? In that case, I think focus areas would be most useful in our situation. We want to make sure the stones are well lit regardless of background lighting, so we'd want to focus (recalculate the optimal exposure) on wherever the stones are in the image. Do you think you could implement focus areas easily?

Windwoes commented 4 years ago

> The problem with exposure compensation is that it is relative. I.e. if the exposure is already good, we don't want to increase/decrease it.

Yeah, I know, but there's also an ability to lock the automatic exposure. So, if you were able to set the exposure compensation during pre-match setup, then lock the exposure with a button press or something, that might work.
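
For reference, a minimal sketch of the raw Camera v1 controls being discussed here, written against android.hardware.Camera directly rather than any EasyOpenCV API (which did not expose these at the time of this comment). The class and method names outside the Android SDK are illustrative only, and the camera instance is assumed to already be open:

    // Sketch only: raw Android Camera v1 exposure controls, not an EasyOpenCV API.
    import android.hardware.Camera;

    public class ExposureSketch {
        // 'camera' is assumed to be an already-opened android.hardware.Camera instance
        static void setCompensationAndLock(Camera camera, int requestedSteps, boolean lock) {
            Camera.Parameters params = camera.getParameters();

            // Compensation is a limited range of discrete steps around the auto-exposure value
            int min = params.getMinExposureCompensation();
            int max = params.getMaxExposureCompensation();
            params.setExposureCompensation(Math.max(min, Math.min(max, requestedSteps)));

            // After dialing in a value during pre-match setup, freeze the auto-exposure routine
            if (params.isAutoExposureLockSupported()) {
                params.setAutoExposureLock(lock);
            }

            camera.setParameters(params);
        }
    }

Note that the compensation value is expressed in camera-specific steps around the metered exposure, which is why it is only a limited adjustment rather than an absolute exposure setting.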

> You're looking at the features in the v1 API in the "Camera features" section here, right?

Yes

> I think focus areas would be most useful in our situation. We want to make sure the stones are well lit regardless of background lighting, so we'd want to focus (recalculate the optimal exposure) on wherever the stones are in the image. Do you think you could implement focus areas easily?

Hmm, I don't like how that API works. I think I would want to remap the coordinate system not only to account for rotation, but also to match the X and Y of the image rather than the abstract -1000 to 1000 range that the raw API uses.
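
To make that remapping concrete, here is a rough sketch (not anything that shipped in the library) of translating a pixel-space rectangle into the Camera.Area coordinate space that setMeteringAreas()/setFocusAreas() expect. The helper names and the single-area usage are assumptions for illustration, and rotation handling is deliberately omitted:

    // Sketch: mapping a pixel-space ROI into the Camera v1 Area coordinate space.
    import android.graphics.Rect;
    import android.hardware.Camera;
    import java.util.Collections;

    public class MeteringAreaSketch {
        // Camera.Area coordinates run from (-1000,-1000) to (1000,1000) across the field of view
        static Camera.Area pixelRectToArea(Rect pixelRect, int frameWidth, int frameHeight) {
            int left   = pixelRect.left   * 2000 / frameWidth  - 1000;
            int top    = pixelRect.top    * 2000 / frameHeight - 1000;
            int right  = pixelRect.right  * 2000 / frameWidth  - 1000;
            int bottom = pixelRect.bottom * 2000 / frameHeight - 1000;

            // Weight 1000 gives this region maximum influence on the metering decision
            return new Camera.Area(new Rect(left, top, right, bottom), 1000);
        }

        static void meterOnRegion(Camera camera, Rect pixelRect, int frameWidth, int frameHeight) {
            Camera.Parameters params = camera.getParameters();
            if (params.getMaxNumMeteringAreas() > 0) {
                params.setMeteringAreas(Collections.singletonList(
                        pixelRectToArea(pixelRect, frameWidth, frameHeight)));
                camera.setParameters(params);
            }
        }
    }

The extra wrinkle referred to above is that this coordinate space is defined relative to the sensor orientation and is not affected by display rotation, so a portrait-mounted phone would also need the rectangle rotated before being handed to the API.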

nch0w commented 4 years ago

I'm worried that even with locking the exposure, you can't be sure what the actual exposure will be. EDIT: I misread your comment; actually, that would work. Anyway, I might try to add support for focus areas and submit a PR if I get anything working.

Windwoes commented 4 years ago

One thing I will say is that I use the pretty trash front camera on our Nexus 5 for SkyStone position detection (without any kind of exposure compensation or anything), and it has been 100% accurate both at home and in competition. I simply have 3 hardcoded sample regions (one positioned over the middle of each stone) and choose the region with the highest average Cb value from the YCrCb color space.

nch0w commented 4 years ago

OK. The way our phone is mounted, outside light can really interfere with automatic exposure and make all three stones an indistinguishable dark gray. I'm getting our mechanical team to tilt the phone toward the stones to reduce outside light, but obviously there is a programmatic fix as well.

Windwoes commented 4 years ago

FYI, since I was going to do a 1.3.2 release anyway, I went ahead and added exposure compensation support.

nch0w commented 4 years ago

Actually, the 1.3.2 release ended up working perfectly for our team. For other teams, here is what we ended up using to control it (a/b decrease/increase the exposure compensation; x toggles the exposure lock). We run this in the init loop, before the match starts. Thanks for the release!

    boolean exposureLocked = false;
    boolean xPressed = true;
    int exposureCompensation = 0;

    while(!isStarted()) {
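        // Rising-edge detection on X: toggle the exposure lock once per press, not every loop iteration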
        if (gamepad2.x) {
            if (!xPressed)
                exposureLocked = !exposureLocked;
            xPressed = true;
        } else {
            xPressed = false;
        }

        if (!exposureLocked) {
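            // While unlocked, a/b step the compensation value, clamped to the camera's supported range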
            if (gamepad2.a) {
                exposureCompensation = Math.max(exposureCompensation - 1, phoneCam.getMinSupportedExposureCompensation());
            } else if (gamepad2.b) {
                exposureCompensation = Math.min(exposureCompensation + 1, phoneCam.getMaxSupportedExposureCompensation());
            }
        }
    ...
Windwoes commented 4 years ago

Glad that it works for you!

nch0w commented 4 years ago

@FROGbots-4634 how would you implement your detection pipeline? Like take a rectangle in the matrix and measure its average Cb value?

Windwoes commented 4 years ago

@nchowder you can make a submat for each sample region, then take the average Cb value of each submat, and then simply choose the one with the highest value.
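
Not code from this thread, but a minimal sketch of that approach as an EasyOpenCV pipeline; the class name and region coordinates are placeholders that depend entirely on how the phone is mounted:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.Rect;
    import org.opencv.imgproc.Imgproc;
    import org.openftc.easyopencv.OpenCvPipeline;

    public class StoneCbPipeline extends OpenCvPipeline {
        // Placeholder sample regions, one over the middle of each stone
        private final Rect[] regions = {
                new Rect(30, 100, 40, 40),
                new Rect(120, 100, 40, 40),
                new Rect(210, 100, 40, 40)
        };

        private final Mat ycrcb = new Mat();
        private final Mat cb = new Mat();

        // Index of the region with the highest average Cb (the least yellow stone)
        public volatile int detectedPosition = -1;

        @Override
        public Mat processFrame(Mat input) {
            // Convert to YCrCb and pull out the Cb channel (index 2)
            Imgproc.cvtColor(input, ycrcb, Imgproc.COLOR_RGB2YCrCb);
            Core.extractChannel(ycrcb, cb, 2);

            double bestValue = -1;
            for (int i = 0; i < regions.length; i++) {
                Mat sample = cb.submat(regions[i]);        // view into 'cb', no pixel copy
                double avgCb = Core.mean(sample).val[0];   // average Cb over the region
                sample.release();

                if (avgCb > bestValue) {
                    bestValue = avgCb;
                    detectedPosition = i;
                }
            }
            return input;
        }
    }

Comparing the Cb channel rather than raw RGB makes the decision fairly robust to overall brightness changes, since chroma is mostly separated from luma in YCrCb.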

nch0w commented 4 years ago

Yeah I figured it out.