Yes, this is an issue that came with the drawing of motion contours when publish_image: true.
I fixed it in my local branch, but unfortunately it conflicts with some other changes that I am not quite done with yet.
I am building a new version right now, but it is taking forever to compile some new dependencies on the RPi. I will publish this to dev tomorrow. It will include some changes which are not quite finished yet, so there might be some untested code, but nothing that should break existing setups.
I have pushed all images to the dev tag, so if you grab this one the error should be resolved.
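For reference, grabbing the dev tag is just a pull of the :dev variant (image names as they appear elsewhere in this thread):

$ docker pull roflcoopter/viseron:dev
$ docker pull roflcoopter/viseron-vaapi:dev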
The configuration validators might be a bit stricter now, so let me know if you bump into any problems.
I will release a beta soon where all these changes are explained.
Thank you, I appreciate it.
I tried pulling :dev and running it, but there are just too many regressions (object_detection debugging, area no longer in percent, etc.) for me to really test it, so I've disabled the MQTT image push for the time being. I did pull :latest and there were updates there as well, so we'll see how this one goes. This is really exciting software to work with; I'm very happy I'm able to test it out!
Strange, something went wrong with the push to Docker Hub, so you got an older version.
I probably asked you before but can't remember: which image are you using?
I'm using roflcoopter/viseron-vaapi:latest; docker images shows the ID as 993e09059cd6 (not sure if that's a repo commit number or something unique to my system).
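As an aside, the ID shown by docker images is a local image ID, not a repo commit; the registry-side identifier is the digest. One way to see the digest for comparison against Docker Hub:

$ docker images --digests roflcoopter/viseron-vaapi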
Yeah, I verified it; the other three images were pushed correctly, but the vaapi one was not. It is pushed now. One thing to note, though, is that I have changed the default model from YOLOv3-tiny to YOLOv3 if you are using Darknet. This means the CPU usage will be higher, but the detection accuracy is much better.
I will release a beta later tonight, so you might want to wait for that.
Is there a delay between when you push to docker.io and when we can pull it? I just did a docker pull roflcoopter/viseron:latest and it said I'm already up to date (also tried viseron-vaapi:latest and the :dev variants of both).
re: YOLOv3 vs tiny: no worries. I'm curious why OpenCL isn't using the (i5's built-in) GPU. I see

/root/opencv-master/modules/dnn/src/dnn.cpp (1404) setUpNet DNN: OpenCL target is not supported with current OpenCL device (tested with GPUs only), switching to CPU.

the first time the object detector starts up.
> Is there a delay between when you push to docker.io and when we can pull it? I just did a docker pull roflcoopter/viseron:latest and it said I'm already up to date (also tried viseron-vaapi:latest and the :dev variants of both).
The latest tag is only updated with stable releases, not when I push to the dev tag. There shouldn't be any delay.
> re: YOLOv3 vs tiny: no worries. I'm curious why OpenCL isn't using the (i5's built-in) GPU. I see
> /root/opencv-master/modules/dnn/src/dnn.cpp (1404) setUpNet DNN: OpenCL target is not supported with current OpenCL device (tested with GPUs only), switching to CPU.
> the first time the object detector starts up.
Hmm, interesting. Is that new, or has it always been like that for you?
It's always been like that. This is a Dell OptiPlex 9020 with 16GB of RAM and an i5-4570S.
What does your docker run/docker-compose look like?
docker run --rm -v /home/andrew/visdata/recordings:/recordings -v /home/andrew/visdata/config:/config -v /etc/localtime:/etc/localtime:ro --name viseron --device /dev/dri roflcoopter/viseron-vaapi:dev
The vaapi stuff is working fine with ffmpeg. Output from vainfo below, if it is helpful:
$ docker exec -it 4a94c332e821 vainfo
error: XDG_RUNTIME_DIR not set in the environment.
error: can't connect to X server!
libva info: VA-API version 1.1.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_1
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.1 (libva 2.9.0.pre1)
vainfo: Driver version: Intel i965 driver for Intel(R) Haswell Desktop - 2.1.0
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Simple : VAEntrypointEncSlice
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointEncSlice
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
VAProfileH264Main : VAEntrypointVLD
VAProfileH264Main : VAEntrypointEncSlice
VAProfileH264High : VAEntrypointVLD
VAProfileH264High : VAEntrypointEncSlice
VAProfileH264MultiviewHigh : VAEntrypointVLD
VAProfileH264MultiviewHigh : VAEntrypointEncSlice
VAProfileH264StereoHigh : VAEntrypointVLD
VAProfileH264StereoHigh : VAEntrypointEncSlice
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileNone : VAEntrypointVideoProc
VAProfileJPEGBaseline : VAEntrypointVLD
OpenCL and VA-API are two separate things.
Can you show me the output of this command?
docker exec -it 4a94c332e821 clinfo
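They use the same /dev/dri device nodes but completely separate userspace stacks, so one can work while the other fails: vainfo probes VA-API and clinfo probes OpenCL. As a side check, ffmpeg can also list the hardware accelerators it was built with:

$ ffmpeg -hwaccels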
The output of that is curious:
$ docker exec -it 4a94c332e821 clinfo
Number of platforms 0
If I run clinfo on the host (NOT in the container) I get all kinds of output. That must mean I'm not passing whatever OpenCL needs into the container, right?
Interesting! Yes, exactly. What's the output of ls -al /dev/dri on the host?
$ ls -al /dev/dri
total 0
drwxr-xr-x 3 root root 100 Sep 28 17:20 .
drwxr-xr-x 21 root root 4320 Sep 29 00:43 ..
drwxr-xr-x 2 root root 80 Sep 28 17:20 by-path
crw-rw-rw-+ 1 root root 226, 0 Sep 28 17:20 card0
crw-rw-rw-+ 1 root video 226, 128 Sep 28 17:20 renderD128
What's interesting is that ffmpeg is using /dev/dri/renderD128 for all of its stuff (encode and decode) without spitting out any errors; is OpenCL perhaps expecting exclusive access to the device?
No, it's not; I'm running ffmpeg with VA-API and OpenCV with OpenCL with no issues. Something else must be acting up.
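For what it's worth, an OpenCL application inside a container generally needs three things: the /dev/dri device nodes, an ICD registration file under /etc/OpenCL/vendors, and the vendor runtime library that the ICD file points at. A quick way to check the latter two in the running container (container ID from earlier in this thread):

$ docker exec -it 4a94c332e821 ls /etc/OpenCL/vendors/
$ docker exec -it 4a94c332e821 sh -c 'cat /etc/OpenCL/vendors/*.icd'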
With a little luck I'll have a coral.ai PCIe board installed; it'll be interesting to see whether the kernel driver automatically creates the device that OpenCL can use natively (re: #41).
What does your docker run or docker-compose look like? Are you running -vaapi? I noticed that if I run the viseron (not -vaapi) container, then vainfo still gives correct output, but clinfo isn't found at all.
I also found this on https://github.com/pkienzle/opencl_docker:

> The official Intel drivers do not recognize the Intel GPU from within the container. The open source beignet driver on Ubuntu is able to see it if the /dev/dri device is forwarded to the docker container. It is not included in this container because it interferes with the other drivers on the system.
Which GPU do you use?
No, I am running the cuda image, but that one also installs the OpenCL packages.
And when I run the vaapi image, OpenCL works for me as well, so I'm not sure I trust the statement from the repository you linked. This is hard for me to assist with since it seems hardware-related.
Just as a test, if you try the beignet image linked in that repository, does clinfo work?
It works with chihchun/opencl-beignet:
$ docker run --rm --device /dev/dri chihchun/opencl-beignet clinfo
Number of platforms 1
Platform Name Intel Gen OCL Driver
Platform Vendor Intel
Platform Version OpenCL 1.2 beignet 1.1.2
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_khr_icd
Platform Extensions function suffix Intel
Platform Name Intel Gen OCL Driver
Number of devices 1
Device Name Intel(R) HD Graphics Haswell GT2 Desktop
Device Vendor Intel
Device Vendor ID 0x8086
Device Version OpenCL 1.2 beignet 1.1.2
Driver Version 1.1.2
Device OpenCL C Version OpenCL C 1.2 beignet 1.1.2
Device Type GPU
Device Profile FULL_PROFILE
Max compute units 20
Max clock frequency 1000MHz
Device Partition (core)
Max number of sub-devices 1
Supported partition types None, None, None
Max work item dimensions 3
Max work item sizes 512x512x512
Max work group size 512
Preferred work group size multiple 16
Preferred / native vector sizes
char 16 / 8
short 8 / 8
int 4 / 4
long 2 / 2
half 0 / 8 (n/a)
float 4 / 4
double 0 / 2 (n/a)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals No
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (n/a)
Address bits 32, Little-Endian
Global memory size 2147483648 (2GiB)
Error Correction support No
Max memory allocation 1073741824 (1024MiB)
Unified memory for Host and Device Yes
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Global Memory cache type Read/Write
Global Memory cache size 8192
Global Memory cache line 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 65536 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 8192x8192 pixels
Max 3D image size 8192x8192x2048 pixels
Max number of read image args 128
Max number of write image args 8
Local memory type Global
Local memory size 65536 (64KiB)
Max constant buffer size 134217728 (128MiB)
Max number of constant args 8
Max size of kernel argument 1024
Queue properties
Out-of-order execution No
Profiling Yes
Prefer user sync for interop Yes
Profiling timer resolution 80ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels Yes
SPIR versions 1.2
printf() buffer size 1048576 (1024KiB)
Built-in kernels __cl_copy_region_align4;__cl_copy_region_align16;__cl_cpy_region_unalign_same_offset;__cl_copy_region_unalign_dst_offset;__cl_copy_region_unalign_src_offset;__cl_copy_buffer_rect;__cl_copy_image_1d_to_1d;__cl_copy_image_2d_to_2d;__cl_copy_image_3d_to_2d;__cl_copy_image_2d_to_3d;__cl_copy_image_3d_to_3d;__cl_copy_image_2d_to_buffer;__cl_copy_image_3d_to_buffer;__cl_copy_buffer_to_image_2d;__cl_copy_buffer_to_image_3d;__cl_fill_region_unalign;__cl_fill_region_align2;__cl_fill_region_align4;__cl_fill_region_align8_2;__cl_fill_region_align8_4;__cl_fill_region_align8_8;__cl_fill_region_align8_16;__cl_fill_region_align128;__cl_fill_image_1d;__cl_fill_image_1d_array;__cl_fill_image_2d;__cl_fill_image_2d_array;__cl_fill_image_3d;
Device Available Yes
Compiler Available Yes
Linker Available Yes
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_spir cl_khr_icd
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) Intel Gen OCL Driver
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [Intel]
clCreateContext(NULL, ...) [default] Success [Intel]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name Intel Gen OCL Driver
Device Name Intel(R) HD Graphics Haswell GT2 Desktop
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name Intel Gen OCL Driver
Device Name Intel(R) HD Graphics Haswell GT2 Desktop
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.9
ICD loader Profile OpenCL 2.1
This works whether Viseron is running or not.
I wonder if this is because the docker image is built on my machine, and thus the installed OpenCL is not supported on your machine.
Is this an older CPU? From what I could gather, Intel's OpenCL (which is included in the image) is not supported on Haswell-based GPUs. Beignet, however, is an open-source OpenCL alternative, albeit a bit slower than Intel's variant.
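If the image is Ubuntu-based (the apt output below suggests it is), the Beignet ICD can be installed alongside for a quick test; a sketch, assuming apt can reach the archives from inside the container:

$ docker exec -it 4a94c332e821 sh -c 'apt-get update && apt-get install -y beignet-opencl-icd'
$ docker exec -it 4a94c332e821 clinfo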
That's a very good question. I'm no stranger to building software, but am an utter newbie when it comes to Docker. The system is an ultra-small-form-factor Dell OptiPlex 9020 with 16GB of RAM and a 2.9GHz i5-4570S, so it's Haswell, but OpenCL outside of the container detects it fine. The version of the OpenCL ICD inside and outside the container appears to be the same as well.
$ apt show ocl-icd-libopencl1
Package: ocl-icd-libopencl1
Version: 2.2.11-1ubuntu1
$ apt show clinfo
Package: clinfo
Version: 2.2.18.03.26-1
$ clinfo -v
clinfo version 2.2.18.03.26
$ docker exec -it 8e2818866320 apt show ocl-icd-libopencl1
Package: ocl-icd-libopencl1
Version: 2.2.11-1ubuntu1
$ docker exec -it 8e2818866320 apt show clinfo
Package: clinfo
Version: 2.2.18.03.26-1
$ docker exec -it 8e2818866320 clinfo -v
clinfo version 2.2.18.03.26
That's really only the version of clinfo, which is not tied to the OpenCL version.
OpenCL is compiled from source in the image and probably overrides the one from apt. This answer on Stack Overflow sums up the different versions quite well.
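A quick way to see which libOpenCL actually wins inside the container (container ID from the output above):

$ docker exec -it 8e2818866320 ldconfig -p | grep -i opencl
$ docker exec -it 8e2818866320 sh -c 'ldd "$(which clinfo)" | grep -i opencl'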
I wonder what happens if you try installing it yourself. Can you exec into the container and run the following to reinstall the Intel OpenCL runtime packages and see what happens?
mkdir /opencl &&\
cd /opencl && \
wget https://github.com/intel/compute-runtime/releases/download/19.31.13700/intel-gmmlib_19.2.3_amd64.deb --progress=bar:force:noscroll && \
wget https://github.com/intel/compute-runtime/releases/download/19.31.13700/intel-igc-core_1.0.10-2364_amd64.deb --progress=bar:force:noscroll && \
wget https://github.com/intel/compute-runtime/releases/download/19.31.13700/intel-igc-opencl_1.0.10-2364_amd64.deb --progress=bar:force:noscroll && \
wget https://github.com/intel/compute-runtime/releases/download/19.31.13700/intel-opencl_19.31.13700_amd64.deb --progress=bar:force:noscroll && \
wget https://github.com/intel/compute-runtime/releases/download/19.31.13700/intel-ocloc_19.31.13700_amd64.deb --progress=bar:force:noscroll && \
dpkg -i *.deb && \
rm -R /opencl
And maybe, just as a test, can you run the official Intel OpenCL image and see if this works?
docker run -it --device /dev/dri:/dev/dri --rm docker.io/intelopencl/intel-opencl:ubuntu-18.04-ppa clinfo
If not, I'll have to create a separate container for older CPUs like yours.
I think you're on to something:
$ docker run -it --device /dev/dri:/dev/dri --rm docker.io/intelopencl/intel-opencl:ubuntu-18.04-ppa clinfo
Number of platforms 0
I had to install the latest .debs a little differently since the container does not have wget, and I couldn't install it via apt for some reason. I just downloaded them on the host, ran the container with -v /path/to/clinfo:/clinfo, and then dpkg -i'd them in the container. Same result: 0 platforms.
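Concretely, the workaround looked something like this (host path hypothetical):

$ docker run -it --rm --device /dev/dri -v /path/to/clinfo:/clinfo docker.io/intelopencl/intel-opencl:ubuntu-18.04-ppa bash
# then, inside the container:
# dpkg -i /clinfo/*.deb && clinfo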
I think you've correctly identified the issue: the official Intel OpenCL does not seem to work with older GPUs inside a container. It does, however, seem to work fine outside of the container, which is weird.
What's the clinfo output on the host?
This is the output from the host:
Number of platforms 1
Platform Name Intel(R) CPU Runtime for OpenCL(TM) Applications
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 2.1 LINUX
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint
Platform Host timer resolution 1ns
Platform Extensions function suffix INTEL
Platform Name Intel(R) CPU Runtime for OpenCL(TM) Applications
Number of devices 1
Device Name Intel(R) Core(TM) i5-4570S CPU @ 2.90GHz
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.1 (Build 0)
Driver Version 18.1.0.0920
Device OpenCL C Version OpenCL C 2.0
Device Type CPU
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 4
Max clock frequency 2900MHz
Device Partition (core)
Max number of sub-devices 4
Supported partition types by counts, equally, by names (Intel)
Max work item dimensions 3
Max work item sizes 8192x8192x8192
Max work group size 8192
Preferred work group size multiple 128
Max sub-groups per work group 1
Preferred / native vector sizes
char 1 / 32
short 1 / 16
int 1 / 8
long 1 / 4
half 0 / 0 (n/a)
float 1 / 8
double 1 / 4 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 16721399808 (15.57GiB)
Error Correction support No
Max memory allocation 4180349952 (3.893GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing Yes
Atomics Yes
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 0 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 65536 (64KiB)
Global Memory cache type Read/Write
Global Memory cache size 262144 (256KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 480
Max size for 1D images from buffer 261271872 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 64 bytes
Pitch alignment for 2D image buffers 64 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 480
Max number of write image args 480
Max number of read/write image args 480
Max number of pipe args 16
Max active pipe reservations 65535
Max pipe packet size 1024
Local memory type Global
Local memory size 32768 (32KiB)
Max number of constant args 480
Max constant buffer size 131072 (128KiB)
Max size of kernel argument 3840 (3.75KiB)
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Local thread execution (Intel) Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 4294967295 (4GiB)
Max size 4294967295 (4GiB)
Max queues on device 4294967295
Max events on device 4294967295
Prefer user sync for interop No
Profiling timer resolution 1ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels Yes
Sub-group independent forward progress No
IL version SPIR-V_1.0
SPIR versions 1.2
printf() buffer size 1048576 (1024KiB)
Built-in kernels
Device Extensions cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [INTEL]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
The error that this issue was originally about is now fixed in the latest beta.
I'm seeing this pop up in the log. It doesn't cause the system to stop, but it doesn't look like it should be happening either: