openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Bug]: Apple M1 with Docker ubuntu 22.04 image NETWORK_NOT_READ Please check that model format: xml is supported and the model is correct. #22396

Closed Apidcloud closed 8 months ago

Apidcloud commented 8 months ago

OpenVINO Version

2023.1

Operating System

macOS on Apple Silicon, but building a Docker image for aarch64

Device used for inference

CPU

Framework

None

Model used

No response

Issue description

I have a custom model that runs fine on linux.

I'm now trying to build a docker image on my Mac M1 that doesn't rely on linux/amd64 emulation because I was getting qemu errors.

Everything seems to be OK until the point where I need to actually load the model, and I get the error:

Log: Exception occurred while loading model: Exception from src/inference/src/core.cpp:100:
[ NETWORK_NOT_READ ] Unable to read the model: /tmp/production/production/saved_model.xml Please check that model format: xml is supported and the model is correct. Available frontends: 

When I try the googlenet-v1 model I get the same error, so I'm assuming I'm either building it wrong or some things are still not supported on ARM.

For reference, I tried our custom model with both int8 and float32.

Any ideas?

Is there maybe a better way (better than what I tried with googlenet-v1) to check if openvino is correctly working in this docker image (in turn running in my mac m1)?
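As a general sanity check (a sketch, not from the original report; paths assume the openvino-2023.1 source-build layout described below), the device query exercises the core runtime, while compiling any IR model end to end exercises the frontends and the CPU plugin:

```shell
# sketch: two-step smoke test for an OpenVINO build inside the container
# (paths assume the openvino-2023.1 source build described below)
cd openvino-2023.1/bin/aarch64

# 1) core runtime + CPU plugin respond
./hello_query_device

# 2) model reading + compilation end to end, with any converted IR model
./benchmark_app -m public/googlenet-v1/FP32/googlenet-v1.xml -d CPU -niter 1
```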

Step-by-step reproduction

```dockerfile
ARG DEBIAN_FRONTEND=noninteractive

FROM ubuntu:22.04 as build

WORKDIR /app
ARG DEBIAN_FRONTEND

# Apt config
RUN apt-get update
RUN apt-get install -y build-essential git cmake wget unzip curl

RUN git clone --recursive --branch releases/2023/1 https://github.com/openvinotoolkit/openvino.git openvino-2023.1
RUN cd openvino-2023.1 && git submodule update --init --recursive \
    && ./install_build_dependencies.sh \
    && pip install -r src/bindings/python/src/compatibility/openvino/requirements-dev.txt \
    && mkdir build && cd build \
    && cmake -DENABLE_PYTHON=OFF -DENABLE_OPENCV=OFF -DCMAKE_INSTALL_PREFIX=/usr .. \
    && make -j$(nproc) && make install
```

I then cd into openvino-2023.1/bin/aarch64, execute ./hello_query_device, and get:

 [ INFO ] Build ................................. 2023.1.0-12268-fef09f046fd-releases/2023/1
[ INFO ] 
[ INFO ] Available devices: 
[ INFO ] CPU
[ INFO ]        SUPPORTED_PROPERTIES: 
[ INFO ]                Immutable: AVAILABLE_DEVICES : ""
[ INFO ]                Immutable: RANGE_FOR_ASYNC_INFER_REQUESTS : 1 1 1
[ INFO ]                Immutable: RANGE_FOR_STREAMS : 1 8
[ INFO ]                Immutable: FULL_DEVICE_NAME : ARM CPU
[ INFO ]                Immutable: OPTIMIZATION_CAPABILITIES : FP32 FP16 INT8 BIN EXPORT_IMPORT
[ INFO ]                Mutable: NUM_STREAMS : 1
[ INFO ]                Mutable: AFFINITY : CORE
[ INFO ]                Mutable: INFERENCE_NUM_THREADS : 0
[ INFO ]                Mutable: PERF_COUNT : NO
[ INFO ]                Mutable: INFERENCE_PRECISION_HINT : f32
[ INFO ]                Mutable: PERFORMANCE_HINT : LATENCY
[ INFO ]                Mutable: EXECUTION_MODE_HINT : PERFORMANCE
[ INFO ]                Mutable: PERFORMANCE_HINT_NUM_REQUESTS : 0
[ INFO ]                Mutable: ENABLE_CPU_PINNING : YES
[ INFO ]                Mutable: SCHEDULING_CORE_TYPE : ANY_CORE
[ INFO ]                Mutable: ENABLE_HYPER_THREADING : YES
[ INFO ]                Mutable: DEVICE_ID : ""
[ INFO ]                Mutable: CPU_DENORMALS_OPTIMIZATION : NO
[ INFO ]                Mutable: CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE : 1

I then install the dev tools and download and convert a test model:

```sh
pip install openvino-dev==2023.1.0
export PATH=$PATH:~/.local/bin

omz_downloader --print_all
omz_downloader --name googlenet-v1
omz_converter --name googlenet-v1
```

```sh
cd openvino-2023.1/bin/aarch64
./classification_sample_async -m public/googlenet-v1/FP32/googlenet-v1.xml -i img24.bmp
```

and get the output:

[ INFO ] Build ................................. 2023.1.0-12268-fef09f046fd-releases/2023/1
[ INFO ] 
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     img24.bmp
[ INFO ] Loading model files:
[ INFO ] public/googlenet-v1/FP32/googlenet-v1.xml
[ ERROR ] Exception from src/inference/src/core.cpp:100:
[ NETWORK_NOT_READ ] Unable to read the model: public/googlenet-v1/FP32/googlenet-v1.xml Please check that model format: xml is supported and the model is correct. Available frontends: ir paddle onnx tf pytorch tflite 

This is similar to the output I get when trying to load my model:

Log: Exception occurred while loading model: Exception from src/inference/src/core.cpp:100:
[ NETWORK_NOT_READ ] Unable to read the model: /tmp/production/production/saved_model.xml Please check that model format: xml is supported and the model is correct. Available frontends: 
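A note on the message (a hedged aside, not from the original report): the "Available frontends:" list at the end of the error enumerates the frontend plugin libraries the runtime managed to load. An empty list, as in the log just above, suggests the runtime could not locate libraries such as libopenvino_ir_frontend.so next to libopenvino.so; when the list does show ir but reading still fails, a missing or mismatched .bin next to the .xml is a common cause. A quick check, assuming the /usr install prefix from the cmake line above:

```shell
# sketch: confirm the IR frontend library is discoverable
# (the /usr prefix is an assumption based on CMAKE_INSTALL_PREFIX=/usr above)
ls /usr/lib/libopenvino_ir_frontend* || ldconfig -p | grep openvino
```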

Relevant log output

lscpu
Architecture:         aarch64
CPU op-mode(s):       64-bit
Byte Order:           Little Endian
CPU(s):               8
On-line CPU(s) list:  0-7
Vendor ID:            0x00
Model:                0
Thread(s) per core:   1
Core(s) per cluster:  8
Socket(s):            -
Cluster(s):           1
Stepping:             0x0
BogoMIPS:             48.00
Flags:                fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
Vulnerabilities:
  Gather data sampling: Not affected
  Itlb multihit:        Not affected
  L1tf:                 Not affected
  Mds:                  Not affected
  Meltdown:             Not affected
  Mmio stale data:      Not affected
  Retbleed:             Not affected
  Spec rstack overflow: Not affected
  Spec store bypass:    Vulnerable
  Spectre v1:           Mitigation; __user pointer sanitization
  Spectre v2:           Not affected
  Srbds:                Not affected
  Tsx async abort:      Not affected


ilya-lavrenov commented 8 months ago

Hi @Apidcloud, have you tried a prebuilt version of OpenVINO? OpenVINO supports arm64 in prebuilt form for both Linux and macOS platforms.

Apidcloud commented 8 months ago

You mean a docker image? The ones available were all linux/amd64. But I know for a fact that installing OpenVINO directly on the Mac M1 works with the same model.

Apidcloud commented 8 months ago

Added the lscpu output to the issue on top. I also tried to install OpenVINO 2023.1 directly via apt (apt-get install openvino-2023.1.0) but ran into unmet dependency issues, hence building from scratch.

ilya-lavrenov commented 8 months ago

> you mean a docker image? The ones available were all linux/amd64. But i know for a fact that installing openvino directly on the mac m1 works with the same model.

Apidcloud commented 8 months ago

The archive ARM runtime partially did it! I think it was because I was missing the source /opt/intel/openvino_2023/setupvars.sh part. That's required, right? (It was crashing at read_model without sourcing it.)

Now it's not crashing when doing read_model but rather on compile_model:

```cpp
// prints CPU
cout << "[ORBextractor]: model path: " << _core.get_available_devices()[0] << endl;

// before, it would crash directly here at read_model
_model = _core.read_model(modelPath);
_model->reshape({{1, ov::Dimension(64, 512), ov::Dimension(64, 512), 3}});

// now it's crashing here
ov::CompiledModel compiledModel = _core.compile_model(_model);
_inferRequest = compiledModel.create_infer_request();
```

Here's the log:

[13:03:43.4958]D[plugin.cpp:281][AUTO] deviceNameWithID:CPU, defaultDeviceID:, uniqueName:CPU_
[13:03:43.4982]I[plugin.cpp:541][AUTO] device:CPU, config:PERFORMANCE_HINT=LATENCY
[13:03:43.4982]I[plugin.cpp:541][AUTO] device:CPU, config:PERFORMANCE_HINT_NUM_REQUESTS=0
[13:03:43.4982]I[plugin.cpp:541][AUTO] device:CPU, config:PERF_COUNT=NO
[13:03:43.4983]I[plugin.cpp:543][AUTO] device:CPU, priority:0
[13:03:43.4992]I[schedule.cpp:17][AUTO] scheduler starting
[13:03:43.4993]I[auto_schedule.cpp:131][AUTO] select device:CPU
bad err=11 in Xbyak::Error

[13:03:43.5742]ERROR[auto_schedule.cpp:381][AUTO] load failed, CPU:[ GENERAL_ERROR ] internal error
[13:03:43.5743]I[schedule.cpp:303][AUTO] scheduler ending
Exception from src/inference/src/core.cpp:116:
[ GENERAL_ERROR ] Exception from src/plugins/auto/src/auto_schedule.cpp:384:
[AUTO] compile model failed, CPU:[ GENERAL_ERROR ] internal error;

ilya-lavrenov commented 8 months ago

ov::CompiledModel compiledModel = _core.compile_model(_model);

Could you please try the CPU device explicitly?

I think it was because I was missing the source /opt/intel/openvino_2023/setupvars.sh part. That's required right?

It should not be required.
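That said, if sourcing setupvars.sh does turn out to matter in practice (as reported later in this thread), exporting the library search path directly is a possible alternative sketch; the path below assumes the default archive layout and is not from the original comments:

```shell
# sketch: make the archive's runtime libraries discoverable without setupvars.sh
# (path assumes the default /opt/intel/openvino_2023 archive layout on aarch64)
export LD_LIBRARY_PATH=/opt/intel/openvino_2023/runtime/lib/aarch64:$LD_LIBRARY_PATH
```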

Apidcloud commented 8 months ago

Got the same when setting it to CPU explicitly (i.e., _core.compile_model(_model, "CPU")):

bad err=11 in Xbyak::Error
Exception from src/inference/src/core.cpp:116:
[ GENERAL_ERROR ] internal error

And for reference, if I try to download googlenet-v1 and run the classification async example, I also get the same err=11 Xbyak error:

```sh
pip install openvino-dev==2023.2
export PATH=$PATH:~/.local/bin
omz_downloader --name googlenet-v1

pip install protobuf==3.20

omz_converter --name googlenet-v1

./opt/intel/openvino_2023/samples/cpp/build_samples.sh

# download img24.bmp and then execute the example
./opt/intel/openvino_2023/samples/cpp/classification_sample_async -m public/googlenet-v1/FP32/googlenet-v1.xml -i img24.bmp
```

It logs the same error (scroll a bit down):

[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ] 
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino_2023/runtime/lib/aarch64/img24.bmp
[ INFO ] Loading model files:
[ INFO ] /opt/intel/openvino_2023/runtime/lib/aarch64/public/googlenet-v1/FP32/googlenet-v1.xml
[ INFO ] model name: GoogleNet
[ INFO ]     inputs
[ INFO ]         input name: data
[ INFO ]         input type: f32
[ INFO ]         input shape: [1,3,224,224]
[ INFO ]     outputs
[ INFO ]         output name: prob
[ INFO ]         output type: f32
[ INFO ]         output shape: [1,1000]
[ INFO ] Read input images
[ WARNING ] Image is resized from (227, 227) to (224, 224)
[ INFO ] Set batch size 1
[ INFO ] model name: GoogleNet
[ INFO ]     inputs
[ INFO ]         input name: data
[ INFO ]         input type: u8
[ INFO ]         input shape: [1,224,224,3]
[ INFO ]     outputs
[ INFO ]         output name: prob
[ INFO ]         output type: f32
[ INFO ]         output shape: [1,1000]
[ INFO ] Loading model to the device CPU
bad err=11 in Xbyak::Error
[ ERROR ] Exception from src/inference/src/core.cpp:116:
[ GENERAL_ERROR ] internal error
ilya-lavrenov commented 8 months ago

Could you please use OpenVINO 2023.3? I've provided links to that release above. The 2023.3 release contains a fix to xbyak_aarch64 (https://github.com/openvinotoolkit/openvino/pull/21762), which enables docker containers on arm64 macOS.

Apidcloud commented 8 months ago

The URL seems wrong (I can't download it): https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.0.0-13770-9b52171d290/l_openvino_toolkit_ubuntu18_2023.3.0.13775.ceeafaf64f3_arm64.tgz

I'll try the 32-bit build meanwhile; it seems to be downloadable.

Edit: the 32-bit build causes 64/32-bit compatibility issues (the linker seems to be looking for the library on the aarch64 path rather than armv7l):

SOLINK_MODULE(target) Release/obj.target/reco-addon.node
#19 4.832 /usr/bin/ld: skipping incompatible /usr/lib/gcc/aarch64-linux-gnu/11/../../../../lib/libopenvino.so when searching for -lopenvino
#19 4.832 /usr/bin/ld: skipping incompatible /lib/../lib/libopenvino.so when searching for -lopenvino
#19 4.832 /usr/bin/ld: skipping incompatible /usr/lib/../lib/libopenvino.so when searching for -lopenvino
#19 4.832 /usr/bin/ld: skipping incompatible /usr/lib/gcc/aarch64-linux-gnu/11/../../../libopenvino.so when searching for -lopenvino
#19 4.832 /usr/bin/ld: skipping incompatible /lib/libopenvino.so when searching for -lopenvino

Edit: I was able to get an arm64 runtime here: https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/

Trying it now.

Apidcloud commented 8 months ago

Yesssss, it seems to work now! :D I'll try a bit more tomorrow (and post the whole docker configuration here for those who need it). Thank you so much @ilya-lavrenov

ilya-lavrenov commented 8 months ago

@Apidcloud thank you for confirming that it finally works! Closing the issue; please feel free to add more comments.

Apidcloud commented 8 months ago

I can also confirm that it only works when running setupvars.sh; it will otherwise hang (get stuck).

Here's how I got it to work with docker:

1. Create another .sh file (e.g., entrypoint.sh):

```sh
#!/bin/bash

# Save the original arguments, as doing 'source' below will flush $@
original_args=("$@")

# Source the OpenVINO environment variables (similar to adding it to /etc/rc.local)
source /opt/intel/openvino_2023/setupvars.sh

echo "Incoming arguments: "
echo "${original_args[@]}"

# Finally, execute the command initially passed to the Docker container
exec "${original_args[@]}"
```

2. Then, in the Dockerfile used to build the image, copy the file and use it as the entrypoint (so that it gets called when using docker run or docker compose):

```sh
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

# this was my CMD but you can have something else
CMD ["npm", "run", "start:dev"]
```