iTestAndroid opened this issue 3 years ago
Hi @iTestAndroid ,
Thanks for the backing and sorry about the delay in response here. Normally we'd be a lot faster but we're inundated with shipping-status emails after Brexit forced some 1,300 shipments to be relabeled while in Sweden on their way to customers throughout Europe, in addition to a slew of other Brexit + pandemic related hiccups.
Anyway, unfortunately, we're behind on writing examples. We do have some things along these lines, though, and will circle back to share them.
In the meantime, what code are you running when requesting JPEG output? If you are running `depthai_demo.py`, then likely the issue is that the `jpegout` stream needs to be enabled when running this. So for example, running `depthai_demo.py -s previewout metaout jpegout` will enable capturing JPEG.
Will circle back with the other examples soon here.
Thanks, Brandon
Hi again @iTestAndroid ,
So here is what we currently have for the license plate recognition.
For doing license plate detection itself:
python3 depthai_demo.py -cnn vehicle-license-plate-detection-barrier-0106
So as you mentioned this does not do the OCR on the license plate itself. We are meaning to write a specific example for this. The pieces are there to allow use of the Gen2 Pipeline Builder to build it in the meantime. We've just been swamped.
So in terms of detection and then OCR, here is an example for more general application:
So very likely this can be used as-is, with the first-stage EAST text detector being replaced with the vehicle-license-plate-detection-barrier-0106
instead. We have been meaning to write this as well, but have just been behind. We will catch up though!
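To make the two-stage idea concrete, here is a minimal sketch of the detect-then-OCR pattern: a first-stage detector returns plate bounding boxes, each box is cropped out of the frame, and the crop is fed to a second-stage text-recognition model. The `detect_plates` and `recognize_text` functions below are stand-ins for the real networks (they are not depthai API calls), so only the crop-and-forward structure is meant literally:

```python
# Two-stage pipeline sketch: stage 1 finds plate regions, stage 2 reads
# text from each cropped region. Both stage functions are stand-ins.

def detect_plates(frame):
    """Stand-in for vehicle-license-plate-detection-barrier-0106:
    returns normalized (xmin, ymin, xmax, ymax) boxes."""
    return [(0.25, 0.50, 0.75, 0.80)]

def recognize_text(crop):
    """Stand-in for the second-stage text-recognition network."""
    return "ABC123"

def run_two_stage(frame, width, height):
    results = []
    for xmin, ymin, xmax, ymax in detect_plates(frame):
        # Convert normalized box coordinates to pixels and crop.
        x0, y0 = int(xmin * width), int(ymin * height)
        x1, y1 = int(xmax * width), int(ymax * height)
        crop = [row[x0:x1] for row in frame[y0:y1]]
        results.append(((x0, y0, x1, y1), recognize_text(crop)))
    return results

# Tiny fake "frame" (nested list) standing in for an image buffer.
frame = [[0] * 100 for _ in range(100)]
print(run_two_stage(frame, 100, 100))  # [((25, 50, 75, 80), 'ABC123')]
```

With the Gen2 Pipeline Builder the cropping and hand-off between networks would happen on-device rather than in host Python, but the data flow is the same.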
And in the meantime, if you would like to see the whole license plate OCR, plus plate region recognition, plus vehicle type pipeline running, we had hard-coded it as a static pipeline early-on where we were architecting the Gen2 Pipeline Builder system. So the link for that is below, but note it is compiled for Ubuntu 18.04/x86, so is quite limited at this time:
And to see the overall performance, see here:
Note that this example (and the example we will make with the Gen2 Pipeline Builder) uses some models that were trained by Intel and used in the OpenVINO example repository. In this case, they were trained on what look to be Chinese plates. So these may not be immediately applicable in the US or other regions. We have asked for the training scripts but haven't heard back yet. Will ask our contact at OpenVINO as well to see if he can help.
Thoughts?
And sorry about the delay here.
Hi again @iTestAndroid ,
So I forgot to mention the pedestrian re-identification example:
So this one is currently slower than it should be as a result of some unintended consequence of upgrading our underlying API. That said, it does uniquely identify each person, and so can re-identify when the person re-enters the frame. We don't currently display a count of each time the person is in the frame, but it should be relatively easy to add, so we'll plan on doing so.
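The "count of each time the person is in the frame" piece is mostly host-side bookkeeping. Here is a rough sketch of one way to do it, assuming each detection comes with a re-identification embedding vector: compare it (cosine similarity) against people seen so far, bump the count on a match, otherwise register a new person. The threshold and vectors are illustrative, not values from the actual example:

```python
import math

SIM_THRESHOLD = 0.7  # illustrative; tune for the real re-ID model

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ReIDCounter:
    """Tracks known embeddings and how many times each was seen."""

    def __init__(self):
        self.people = []  # list of (embedding, count)

    def observe(self, emb):
        # Match against known people; first close-enough wins.
        for i, (known, count) in enumerate(self.people):
            if cosine(known, emb) >= SIM_THRESHOLD:
                self.people[i] = (known, count + 1)
                return i, count + 1  # re-identified person i
        self.people.append((emb, 1))
        return len(self.people) - 1, 1  # new person

counter = ReIDCounter()
print(counter.observe([1.0, 0.0, 0.0]))  # (0, 1): new person
print(counter.observe([0.9, 0.1, 0.0]))  # (0, 2): re-identified, "Seen 2 times"
print(counter.observe([0.0, 1.0, 0.0]))  # (1, 1): different person
```

The real example would get the embedding from the re-identification network's output layer instead of a hard-coded list, but the counting logic would look much like this.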
Thoughts?
Thanks, Brandon
And as a quick update @iTestAndroid , here is the Github issue for the license-plate-specific flow, https://github.com/luxonis/depthai-experiments/issues/47
In doing so, I did test the text detection + optical character recognition example on some license plates and it does do something, but is clearly not tuned for working with license plates:
So doing the specific example will definitely be beneficial.
Thanks, Brandon
Thanks for the detailed information and samples. As for running OCR and doing extra processing, I need to be able to capture a still when there is a detection. I tried to call the function to grab a still image, a JPEG shot, but I couldn't do that. Can you please show me an example where I can get a high quality, high res (say 12MP with the camera that supports 12MP) when there is a detection say for a pedestrian and/or car plate?
As for pedestrian detection, I just need to see a sample where you re-identify the person and where I can get the still shot/JPEG. Would really appreciate an example for this. Thanks @Luxonis-Brandon
Thanks @iTestAndroid . So yes we can make examples of saving a JPEG when a class is detected. I just brought this up internally.
So if you would like to do so as well, here is the reference code for having depthai encode JPEG, send them to the host, and save them to disk: https://docs.luxonis.com/projects/api/en/latest/samples/VideoEncoder/rgb_full_resolution_saver/
So you could use one of these other examples, set up the JPEG encoder, and then just set an if statement (say, for a car or a plate, plus other conditionals as you choose, such as time/duration/number of pictures to save) to save the JPEG when those cases are met.
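As a rough sketch of that if-statement, here is the gating logic on its own, with the label set, confidence floor, minimum interval, and save cap all as illustrative placeholders (the actual JPEG bytes would come from the encoder stream as in the linked example; `should_save` just decides when to write them to disk):

```python
import time

TARGET_LABELS = {"car", "plate"}  # illustrative class names
MIN_CONFIDENCE = 0.6
MIN_INTERVAL_S = 2.0   # don't save more often than this
MAX_SAVES = 100        # cap the number of files written

class JpegSaveGate:
    """Decides whether a detection should trigger saving a JPEG."""

    def __init__(self):
        self.last_save = 0.0
        self.saved = 0

    def should_save(self, label, confidence, now=None):
        now = time.monotonic() if now is None else now
        if label not in TARGET_LABELS or confidence < MIN_CONFIDENCE:
            return False
        if self.saved >= MAX_SAVES or now - self.last_save < MIN_INTERVAL_S:
            return False
        self.last_save = now
        self.saved += 1
        return True

gate = JpegSaveGate()
print(gate.should_save("car", 0.9, now=10.0))     # True: first matching detection
print(gate.should_save("car", 0.9, now=10.5))     # False: within the interval
print(gate.should_save("person", 0.9, now=20.0))  # False: label not targeted
print(gate.should_save("plate", 0.8, now=20.0))   # True: interval has passed
```

In a real script you would call `gate.should_save(...)` inside the detection loop and, when it returns True, write the latest frame from the JPEG encoder queue to disk.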
We'll make an example as well, but just wanted to let you know you can do this now if you'd like.
Thanks, Brandon
Hi,
I backed the project and got two devices, an OAK-D and an OAK-1.
On the Kickstarter page I saw (and still see) that there will be samples of reading plates and returning the actual plate value, and of identifying and re-identifying a person with something like "Seen X times".
I tried samples, demos, and different models, but I couldn't get the `request_jpeg` function to work. I get this error: (E: [global] [ 444666] [Scheduler00Thr] dispatcherEventSend:53 Write failed (header) (err -4) | event XLINK_RESET_REQ)
Can you help me out here? What am I missing? Thanks