luxonis / depthai-experiments

Experimental projects we've done with DepthAI.
MIT License

No USB device #50

Open yoich opened 3 years ago

yoich commented 3 years ago

Hi, I bought an OAK-1. The depthai_demo.py worked fine, then I tried to run depthai-experiments. I ran people-tracker main.py, but I got the message No USB device [03e7:2485], still looking... 10.045s NOT FOUND, err code 5. The full log is below. I'm working on Windows 10 and I saw this issue: https://github.com/luxonis/depthai-experiments/issues/36. So I used virtualenv and created a separate environment each for depthai_demo.py and people-tracker/main.py. What do I need to do to resolve this?


python .\main.py
XLink initialized.
Sending internal device firmware
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started
Successfully opened stream config_d2h with ID #0!

Closing stream config_d2h: ... Closing stream config_d2h: DONE. EEPROM data: invalid / unprogrammed D:\home\iprediction\depthAI\depthai-experiments\people-tracker\model\config.json depthai: Calibration file is not specified, will use default setting; config_h2d json: {"_board":{"calib_data":[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],"mesh_left":[0.0],"mesh_right":[0.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"metaout"},{"name":"previewout"}]},"ai":{"NCEs":1,"NN_config":{"NN_family":"mobilenet","confidence_threshold":0.5,"output_format":"detection"},"blob0_size":2290560,"blob1_size":0,"calc_dist_to_bb":false,"camera_input":"rgb","cmx_slices":7,"keep_aspect_ratio":true,"num_stages":1,"shaves":7},"app":{"sync_sequence_numbers":false,"sync_video_meta_streams":false,"usb_chunk_KiB":64},"board":{"clear-eeprom":false,"left_fov_deg":69.0,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.03500000014901161,"name":"","override-eeprom":false,"revision":"","rgb_fov_deg":69.0,"stereo_center_crop":false,"store-to-eeprom":false,"swap-left-and-right-cameras":false},"camera":{"mono":{"fps":30.0,"resolution_h":720,"resolution_w":1280},"rgb":{"fps":30.0,"resolution_h":1080,"resolution_w":1920}},"depth":{"depth_limit_mm":10000,"lr_check":false,"median_kernel_size":7,"padding_factor":0.30000001192092896,"warp_rectify":{"edge_fill_color":-1,"mirror_frame":true,"use_mesh":false}},"ot":{"confidence_threshold":0.5,"max_tracklets":20}} size of input string json_config_obj to config_h2d is ->1589 size of json_config_obj that is expected to be sent to config_h2d is ->1048576 Attempting to open stream config_h2d Successfully opened stream config_h2d with ID #1! Writing 1048576 bytes to config_h2d !!! XLink write successful: config_h2d (1048576) Closing stream config_h2d: ... Closing stream config_h2d: DONE. Creating observer stream host_capture: ... Attempting to open stream host_capture Successfully opened stream host_capture with ID #0! Creating observer stream host_capture: DONE. Read: 2290560 Attempting to open stream inBlob Successfully opened stream inBlob with ID #1! Writing 2290560 bytes to inBlob !!! XLink write successful: inBlob (2290560) Closing stream inBlob: ... Closing stream inBlob: DONE. depthai: done sending Blob file D:\home\iprediction\depthAI\depthai-experiments\people-tracker\model\model.blob Attempting to open stream outBlob Successfully opened stream outBlob with ID #2! Closing stream outBlob: ... Closing stream outBlob: DONE. Input layer : Name: data Index: 0 Element type: uint8 Element size: 1byte Offset: 0 byte Dimensions: [Batch : 1, Channel : 3, Height : 320, Width : 544]

Output layer :
Name: detection_out
Index: 0
Element type: float16
Element size: 2 bytes
Offset: 0 byte
Dimensions: [Batch : 1, Channel : 1, Height : 200, Width : 7]

CNN to depth bounding-box mapping: start(0, 0), max_size(0, 0)
Host stream start:metaout
Opening stream for read: metaout
Attempting to open stream metaout
Successfully opened stream metaout with ID #3!
Starting thread for stream: metaout
Host stream start:previewout
Opening stream for read: previewout
Attempting to open stream previewout
Started thread for stream: metaout
Successfully opened stream previewout with ID #4!
Starting thread for stream: previewout
depthai: INIT OK!
Started thread for stream: previewout
XLink initialized.
No USB device [03e7:2485], still looking... 10.045s NOT FOUND, err code 5
depthai: Error initializing xlink
device is not initialized
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Python38\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Python38\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Python38\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Python38\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Python38\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Python38\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\home\iprediction\depthAI\depthai-experiments\people-tracker\main.py", line 13, in <module>
    d = DepthAI()
  File "D:\home\iprediction\depthAI\depthai-experiments\people-tracker\depthai_utils.py", line 21, in __init__
    raise RuntimeError("Error creating a pipeline!")
RuntimeError: Error creating a pipeline!

User8395 commented 3 years ago

Maybe it's a faulty USB port; try other ones. If that doesn't work, try restarting your PC. If the restart doesn't work, try running pip install -r requirements.txt in people-tracker.

Luxonis-Brandon commented 3 years ago

Thanks for the assist @qaqak. And sorry about the delay @yoich.

So when you ran the other example, did you run the python3 -m pip install -r requirements.txt?

Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.

So I think this may be the issue, but I'm not sure (we're way behind because of shipping issues).

User8395 commented 3 years ago

> Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.

What do you mean by containerize?

yoich commented 3 years ago

Hi qaqak and Luxonis-Brandon. Thank you for your advice. Some demo programs (face detection) ran and I could see images, so I think the USB port is fine. https://ibb.co/cgn38gq

I did pip install -r requirements.txt after activating the venv. Is there any difference between pip install -r requirements.txt and python3 -m pip install -r requirements.txt?

I just started. Sorry.

Luxonis-Brandon commented 3 years ago

> Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.

> What do you mean by containerize?

Sorry, by that I mean that each demo pins the API version it was written for in its own requirements, so those requirements should be installed when running that demo.

Luxonis-Brandon commented 3 years ago

> Hi qaqak and Luxonis-Brandon. Thank you for your advice. Some demo programs (face detection) ran and I could see images, so I think the USB port is fine. https://ibb.co/cgn38gq

> I did pip install -r requirements.txt after activating the venv. Is there any difference between pip install -r requirements.txt and python3 -m pip install -r requirements.txt?

> I just started. Sorry.

So @realWadim, could you please help explain the difference (if any) here? I don't know.

realWadim commented 3 years ago

It's safer to use python3 -m pip install -r requirements.txt, since it explicitly uses python3 as the interpreter. If you're using a virtual environment where python3 is the default, you can simply use pip install -r requirements.txt.
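
For example (illustrative only, assuming a Python 3 venv), you can check which interpreter and environment pip would install into before running either command:

# Illustrative check: confirm the active interpreter before installing a demo's requirements.
# "python3 -m pip install -r requirements.txt" installs into this interpreter's
# site-packages; a bare "pip" may belong to a different Python on the PATH.
import sys

print(sys.executable)  # path of the interpreter (inside the venv when it is active)
print(sys.prefix)      # environment root; points into the venv when it is active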

yoich commented 3 years ago

Hi, I did python -m pip install -r requirements.txt again, but nothing changed. This is the result of pip freeze.

> pip freeze
depthai==0.3.0.0+aeda4a9fdef6edc9f826b7dc354a123d1611a7c6
numpy==1.19.5
opencv-python==4.5.1.48
scipy==1.4.1

User8395 commented 3 years ago

Try python -m pip uninstall -r requirements.txt then python -m pip install -r requirements.txt

Thanks, Qasim

yoich commented 3 years ago

Hi qaqak. Thank you for your advice, but it did not work. Now I tried another example, "people-counter", and I got an error and could not run it. At the moment I can only run depthai_demo.py. What am I doing wrong?

yoich commented 3 years ago

Hi, I could run 'gen2-human-pose' with python main.py -cam.

TannerGilbert commented 3 years ago

I'm actually facing the same issue, even though most other scripts seem to work without any problems.

yoich commented 3 years ago

Hi,

I still could not run 'people-tracker/main.py'

But I could run 'people-counter/main.py' by adding if __name__ == '__main__':

Just for your information.
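
Roughly what that change looks like (a sketch only; the helper names are taken from the tracebacks above and the loop body is a placeholder, not the demo's exact code). On Windows, multiprocessing uses the spawn start method, which re-imports main.py in the child process; without the guard the DepthAI pipeline gets created a second time and the device is no longer available:

from depthai_utils import DepthAI  # helper class named in the tracebacks above

def run():
    # Create the pipeline only in the real entry point, not at import time.
    d = DepthAI()
    # ... demo loop would go here ...

if __name__ == '__main__':
    # Skipped when multiprocessing's spawn start method re-imports this module,
    # so only one DepthAI pipeline is ever created.
    run()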

Luxonis-Brandon commented 3 years ago

Thanks. Not sure on this one; bringing this fix up internally. And thanks @yoich!

magallardo commented 3 years ago

I am having the same issue when trying the coronamask sample. I was able to run the hello world demo example, which uses the latest version of the API, but somehow when I try the coronamask sample (after creating a virtual environment for its requirements), I get the device-not-found error.
Is there any plan for these examples to be upgraded to the latest DepthAI API?

Thanks

Luxonis-Brandon commented 3 years ago

Hi @magallardo, sorry about the trouble. ArduCam actually beat us to the punch on making an updated version using the Gen2 API: https://github.com/OAKChina/depthai-examples/tree/master/face_mask

That said, I haven't checked if these have been upgraded to the latest stable release of Gen2, as they are a month or two old now, so I'm thinking not.

So I'll ask the team if we can go through and do pull requests on all of them to get them to the latest DepthAI API.

And we are also in the process of migrating all of those over to this GitHub repository as well, to update the face mask example. Sorry again about the trouble.

-Brandon

magallardo commented 3 years ago

@Luxonis-Brandon Thanks for the prompt response.

I have a question regarding the Gen2 API. I was looking at the requirements file on the ArduCam link you provided for face_mask example and it is using depthai==0.0.2.1+ab14564b91fd7cdd98a70ccda438cf1482839cdd

However, in the requirements file for the coronamask in the luxonis/depthai-experiments repository, the depthai used is: depthai==0.3.0.0+aeda4a9fdef6edc9f826b7dc354a123d1611a7c6

So, I am a little bit confused with the versioning scheme. Which version is supposed to be newer or Gen2?

Thanks again. Marcelo

Luxonis-Brandon commented 3 years ago

Hi Marcelo,

Sorry about the confusion. And agreed the numbering was quite confusing WRT Gen1 and Gen2 prior to the formal Gen2 release.

So ArduCam wrote those a bit ago (thanks to them for being really early adopters of Gen2) so they're actually using a pre-release version of the Gen2 API, which used that confusing 0.0.2.x format.

So now that Gen2 is formally released, the version numbering is WAY clearer.

Gen1 is now 1.x, e.g. here. Gen2 is now 2.x, e.g. here for Python and here for C++.

Those are formal releases, but you can always just pull the latest from those respective repositories.

Anyway, going forward there won't be any of that odd 0.0.2.x numbering; it was just a temporary stop-gap while we were developing the Gen2 API and it was not yet stable. And I just struck up a conversation internally at Luxonis and with ArduCam about Luxonis helping to update all those examples to the formal Gen2 2.x release.

And besides that exception, anything with version <2 is Gen1, and anything 2 and above is Gen2. (Ideally any code with 0.0.2.x references will be replaced over time as we catch stuff like that. The outdated examples there were my fault, as I only remembered them earlier this week when a Luxonis engineer reminded me about them.)
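
As a quick illustrative check (assuming the installed depthai package exposes __version__, as the releases do), you can confirm which generation a given environment has:

# Illustrative version check for the environment a demo runs in.
import depthai

print(depthai.__version__)
# 1.x     -> Gen1 API
# 2.x     -> Gen2 API (formal release)
# 0.0.2.x -> old Gen2 pre-release, to be replaced by 2.x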

Thoughts?

Thanks, Brandon

magallardo commented 3 years ago

@Luxonis-Brandon Thanks again for the pointers.

I have another question. While running these samples, I have found that when first running a sample with one version of the API and then running a sample with version 2, the applications start giving the no-device-found error. I was able to recover from that by restarting the device (in my case an OAK-D). Is that expected, and how can I get around this issue without having to restart the device?

Thanks again, Marcelo

User8395 commented 3 years ago

Does it give the no device error with version 2, version 1, or both versions?

Thanks, Qasim

magallardo commented 3 years ago

@qaqak The error occurs if you first run a sample with the Gen1 API and then try to run a sample with the Gen2 API.

I also tried to run the Gen1 sample again after that and got the same error. At that point nothing was working, so I decided to restart the device (OAK-D), and then I was able to run the Gen2 sample.

Thanks

User8395 commented 3 years ago

@magallardo, what is your computer model? And the error doesn't occur with the Gen 1 API?

Thanks, Qasim

magallardo commented 3 years ago

@qaqak I am running the samples on an RPi3 and the OAK-D is connected via USB.

Both versions work OK after restarting the OAK-D device. However, if I first run a Gen1 sample and then a Gen2 sample, I get the device-can't-be-found error. Similarly, if I first run a Gen2 sample followed by a Gen1 sample, I get the error while running the second sample.

I am getting around this issue by restarting the OAK-D device when switching between versions of the API.

Thanks, Marcelo

User8395 commented 3 years ago

So it's like this?

1. Run Gen 1 sample: OK. Run Gen 2 sample: Error.

Restart device

2. Run Gen 2 sample: OK. Run Gen 1 sample: Error.

Restart device...

Also, does the error occur when you run a Gen 1 sample followed by another Gen 1 sample, and a Gen 2 sample followed by another Gen 2 sample?

Thanks, Qasim

magallardo commented 3 years ago

@qaqak That is correct. The error happens when changing the API.

Also, the error does not occur when running multiple samples using the same version of the API.

Thanks, Marcelo

Luxonis-Brandon commented 3 years ago

Very interesting. We'll try to reproduce. Thanks for iterating to make this clear. @cafemoloko would you mind seeing about reproducing this?

Thanks again, Brandon

User8395 commented 3 years ago

How about a replacement OAK-D? Also @magallardo, just choose one version of the API and stick with that?

magallardo commented 3 years ago

@qaqak I am able to get around it now by restarting the OAK-D, which is not terrible but is annoying.

Also, are you suggesting the OAK-D device is defective?

Unfortunately I have been mixing Gen1 and Gen2 samples, because not all of the samples I am interested in (coronamask) have been upgraded to the Gen2 API.

Thanks, Marcelo

User8395 commented 3 years ago

It might be defective. By restarting the device, you mean re-plugging it?

Thanks, Qasim

magallardo commented 3 years ago

@qaqak Yes. I unplug the power, wait a few seconds, and plug it in again.

After that the samples for Gen1 or Gen2 work, unless you mix the APIs from one sample to another.

Thanks, Marcelo

Luxonis-Brandon commented 3 years ago

If the problem only occurs when switching between the Gen1 and Gen2 APIs without a power cycle, and it works fine otherwise, it is extremely unlikely to be a bad OAK-D. We have actually not seen a single bad OAK-D in the field yet. We've seen 3x failed cables, however.

Anyway, I am asking offline to see if @cafemoloko can look into reproducing this.

Thanks, Brandon

magallardo commented 3 years ago

@Luxonis-Brandon Are you planning to upgrade the coronamask sample to the Gen2 API? That is the only reason I need to switch to Gen1 and hit these errors.

I would appreciate any help upgrading the coronamask sample to the Gen2 API.

Thanks, Marcelo

Luxonis-Brandon commented 3 years ago

Hi @magallardo ,

Ah yes good to know!

Yes, definitely. I think @VanDavv is already working on this. In the meantime, ArduCam did write a Gen2 example: https://github.com/OAKChina/depthai-examples/tree/master/face_mask

But it is actually on an older version of Gen2, so we will be updating it when we integrate it into our examples.

Thoughts?

Thanks, Brandon

magallardo commented 3 years ago

@Luxonis-Brandon I have tried running the ArduCam example, but unfortunately I am getting errors when trying to install their requirements on my RPi3. scipy fails to build, with the following error: ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly.

Thanks, Marcelo

User8395 commented 3 years ago

Also, I use my OAK-D without the power cable. Can you try using yours without the cable, @magallardo?

magallardo commented 3 years ago

@qaqak Very interesting: it works going from a Gen1 sample to a Gen2 sample. However, when I try running the Gen1 sample after the Gen2 sample, I get the error again.

Summary:

1) cd coronamask // set the virtualenv python3 main.py

Works

2) cd ../gen2-human-pose // set the virtualenv python3 main.py -vid ./input.mp4

Works

3) cd ../coronamask //set the virtualenv python3 main.py

Does not WORK!! Gives the following error:

python3 main.py
2021-03-19 11:28:14,623 - root - INFO - Logging system initialized, kept in file /home/pi/workspace/github/depthai-experiments/coronamask/camera.log...
2021-03-19 11:28:14,625 - __main__ - INFO - Setting up debug run...
XLink initialized.
Sending internal device firmware
Failed to boot the device: 1.1.3-ma2480, err code 3
depthai: Error initializing xlink
2021-03-19 11:28:16,731 - depthai_utils - INFO - Creating DepthAI pipeline...
device is not initialized
Traceback (most recent call last):
  File "main.py", line 45, in <module>
    MainDebug().run()
  File "main.py", line 14, in __init__
    self.depthai = self.depthai_class(MODEL_LOCATION, 'people')
  File "/home/pi/workspace/github/depthai-experiments/coronamask/depthai_utils.py", line 58, in __init__
    super().__init__(*args, **kwargs)
  File "/home/pi/workspace/github/depthai-experiments/coronamask/depthai_utils.py", line 27, in __init__
    'blob_file_config': str(Path(model_location, 'config.json').absolute())
  File "/home/pi/workspace/github/depthai-experiments/coronamask/depthai_utils.py", line 18, in create_pipeline
    raise RuntimeError("Pipeline was not created.")
RuntimeError: Pipeline was not created.
Exception ignored in: <function DepthAIDebug.__del__ at 0x6b0f6930>
Traceback (most recent call last):
  File "/home/pi/workspace/github/depthai-experiments/coronamask/depthai_utils.py", line 78, in __del__
    self.fps.stop()
AttributeError: 'DepthAIDebug' object has no attribute 'fps'

Hope this helps.

Thanks, Marcelo

cafemoloko commented 3 years ago

@magallardo, @qaqak, @Luxonis-Brandon

I've run a few examples on a PC with Ubuntu 20.04. Switching between Gen1 and Gen2 examples gives me the following error:

(exp) karolina@karolina:~/experiments/depthai-experiments/people-tracker$ python3 main.py 
XLink initialized.
Sending internal device firmware
Failed to boot the device: 2-ma2480, err code 3
depthai: Error initializing xlink
device is not initialized
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    d = DepthAI()
  File "/home/karolina/experiments/depthai-experiments/people-tracker/depthai_utils.py", line 21, in __init__
    raise RuntimeError("Error creating a pipeline!")
RuntimeError: Error creating a pipeline!

I unplugged the device and plugged it in again, and got it working:

(exp) karolina@karolina:~/experiments/depthai-experiments/people-tracker$ python3 main.py 
XLink initialized.
Sending internal device firmware
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started 
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
EEPROM data: invalid / unprogrammed
/home/karolina/experiments/depthai-experiments/people-tracker/model/config.json
depthai: Calibration file is not specified, will use default setting;

The coronamask example runs on my PC, however the image is upside down and barely moving:

2021-03-19 19:40:28,437 - depthai_utils - INFO - Creating DepthAI pipeline...
2021-03-19 19:40:28,847 - depthai_utils - INFO - Pipeline created.
2021-03-19 19:40:28,847 - __main__ - INFO - Setup complete, parsing frames...
2021-03-19 19:41:38,053 - depthai_utils - INFO - [INFO] elapsed time: 69.21
2021-03-19 19:41:38,053 - depthai_utils - INFO - [INFO] approx. FPS: 0.30

Log:

2021-03-19 20:16:19,976 - root - INFO - Logging system initialized, kept in file /home/karolina/experiments/depthai-experiments/coronamask/camera.log...
2021-03-19 20:16:19,976 - __main__ - INFO - Setting up debug run...
XLink initialized.
Sending internal device firmware
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started 
Successfully opened stream config_d2h with ID #0!
Closing stream config_d2h: ...
Closing stream config_d2h: DONE.
EEPROM data: invalid / unprogrammed
2021-03-19 20:16:21,061 - depthai_utils - INFO - Creating DepthAI pipeline...
/home/karolina/experiments/depthai-experiments/coronamask/models/mask-detector/config.json
depthai: Calibration file is not specified, will use default setting;
config_h2d json:
{"_board":{"calib_data":[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],"mesh_left":[0.0],"mesh_right":[0.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"previewout"},{"name":"metaout"}]},"ai":{"NCEs":1,"NN_config":{"NN_family":"mobilenet","confidence_threshold":0.5,"output_format":"detection"},"blob0_size":13490432,"blob1_size":0,"calc_dist_to_bb":false,"camera_input":"rgb","cmx_slices":7,"keep_aspect_ratio":true,"num_stages":1,"shaves":7},"app":{"sync_sequence_numbers":false,"sync_video_meta_streams":false,"usb_chunk_KiB":64},"board":{"clear-eeprom":false,"left_fov_deg":69.0,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.03500000014901161,"name":"","override-eeprom":false,"revision":"","rgb_fov_deg":69.0,"stereo_center_crop":false,"store-to-eeprom":false,"swap-left-and-right-cameras":false},"camera":{"mono":{"fps":30.0,"resolution_h":720,"resolution_w":1280},"rgb":{"fps":30.0,"resolution_h":1080,"resolution_w":1920}},"depth":{"depth_limit_mm":10000,"lr_check":false,"median_kernel_size":7,"padding_factor":0.30000001192092896,"warp_rectify":{"edge_fill_color":-1,"mirror_frame":true,"use_mesh":false}},"ot":{"confidence_threshold":0.5,"max_tracklets":20}}
size of input string json_config_obj to config_h2d is ->1590
size of json_config_obj that is expected to be sent to config_h2d is ->1048576
Attempting to open stream config_h2d
Successfully opened stream config_h2d with ID #1!
Writing 1048576 bytes to config_h2d
!!! XLink write successful: config_h2d (1048576)
Closing stream config_h2d: ...
Closing stream config_h2d: DONE.
Creating observer stream host_capture: ...
Attempting to open stream host_capture
Successfully opened stream host_capture with ID #0!
Creating observer stream host_capture: DONE.
Read: 13490432
Attempting to open stream inBlob
Successfully opened stream inBlob with ID #1!
Writing 13490432 bytes to inBlob
!!! XLink write successful: inBlob (13490432)
Closing stream inBlob: ...
Closing stream inBlob: DONE.
depthai: done sending Blob file /home/karolina/experiments/depthai-experiments/coronamask/models/mask-detector/model.blob
Attempting to open stream outBlob
Successfully opened stream outBlob with ID #2!
Closing stream outBlob: ...
Closing stream outBlob: DONE.
Input layer : 
Name: image_tensor
Index: 0
Element type: uint8
Element size:  1byte
Offset: 0 byte
Dimensions: [Batch : 1, Channel : 3, Height : 300, Width : 300]

Output layer : 
Name: DetectionOutput
Index: 0
Element type: float16
Element size:  2 bytes
Offset: 0 byte
Dimensions: [Batch : 1, Channel : 1, Height : 100, Width : 7]

CNN to depth bounding-box mapping: start(0, 0), max_size(0, 0)
Host stream start:metaout
Opening stream for read: metaout
Attempting to open stream metaout
Successfully opened stream metaout with ID #1!
Starting thread for stream: metaout
Host stream start:previewout
Opening stream for read: previewout
Attempting to open stream previewout
Started thread for stream: metaout
Successfully opened stream previewout with ID #2!
Starting thread for stream: previewout
depthai: INIT OK!
Started thread for stream: previewout
2021-03-19 20:16:21,498 - depthai_utils - INFO - Pipeline created.
2021-03-19 20:16:21,499 - __main__ - INFO - Setup complete, parsing frames...
E: [global] [    384966] [python3] addEvent:262 Condition failed: event->header.flags.bitField.ack != 1
E: [global] [    384966] [python3] addEventWithPerf:276  addEvent(event) method call failed with an error: 3
E: [global] [    384966] [python3] XLinkReadData:156    Condition failed: (addEventWithPerf(&event, &opTime))
Device get data failed: 7
Closing stream previewout: ...
E: [global] [    384966] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_READ_REL_REQ

E: [xLink] [    384966] [Scheduler00Thr] sendEvents:1027        Event sending failed
E: [global] [    384966] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_CLOSE_STREAM_REQ
Closing stream previewout: DONE.
Thread for previewout finished.

E: [xLink] [    384966] [Scheduler00Thr] sendEvents:1027        Event sending failed
E: [global] [    384967] [python3] addEvent:262 Condition failed: event->header.flags.bitField.ack != 1
E: [global] [    384967] [python3] addEventWithPerf:276  addEvent(event) method call failed with an error: 3
E: [global] [    384967] [python3] XLinkReadData:156    Condition failed: (addEventWithPerf(&event, &opTime))
Device get data failed: 7
Closing stream metaout: ...
Closing stream metaout: DONE.
Thread for metaout finished.
E: [global] [    384967] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_CLOSE_STREAM_REQ

E: [xLink] [    384967] [Scheduler00Thr] sendEvents:1027        Event sending failed
watchdog triggered 
=== New data in observer stream host_capture, size: 4
Writing 4 bytes to host_capture
E: [global] [    391068] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_WRITE_REQ

E: [xLink] [    391068] [Scheduler00Thr] sendEvents:1027        Event sending failed
!!! XLink write successful: host_capture (4)
Stopping threads: ...
Stopping threads: DONE 0.000s.
Closing all observer streams: ...
Closing stream host_capture: ...
Closing stream host_capture: DONE.
Closing all observer streams: DONE.
Reseting device: 0.
E: [global] [    391068] [Scheduler00Thr] dispatcherEventSend:53        Write failed (header) (err -4) | event XLINK_RESET_REQ

E: [xLink] [    391068] [Scheduler00Thr] sendEvents:1027        Event sending failed
Reseting: DONE.
Luxonis-Brandon commented 3 years ago

Thanks. So I think what will resolve this is to just update the CoronaMask example, which shouldn't be hard to do.

@Erol444 - would you mind working with @cafemoloko to quickly crank out the coronavirus mask example in Gen2?

It's a single-stage network, so it should just be a couple of lines of changes from, say, the MobileNetSSD examples.
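
Roughly along these lines (just an illustrative sketch, not the final code that will land in the PR; the blob path, preview size, and queue settings here are placeholders):

import depthai as dai

# Minimal Gen2 pipeline: color camera preview -> MobileNet-SSD detector -> host queue.
pipeline = dai.Pipeline()

cam = pipeline.createColorCamera()
cam.setPreviewSize(300, 300)        # assumed to match the mask detector's input size
cam.setInterleaved(False)

nn = pipeline.createMobileNetDetectionNetwork()
nn.setConfidenceThreshold(0.5)
nn.setBlobPath("models/mask-detector.blob")  # placeholder path
cam.preview.link(nn.input)

xout = pipeline.createXLinkOut()
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        # Each message carries a list of detections with label, confidence and bbox.
        for det in q.get().detections:
            print(det.label, det.confidence)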

Thoughts?

Thanks, Brandon

magallardo commented 3 years ago

@cafemoloko Any timeline on the upgrade to gen2?

Thanks in advance

Luxonis-Brandon commented 3 years ago

We should be able to get it out this week.

Luxonis-Brandon commented 3 years ago

This is now implemented in https://github.com/luxonis/depthai-experiments/pull/102, @magallardo.