google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0
27.81k stars · 5.18k forks

Problems in building #5536

Open francoamato opened 4 months ago

francoamato commented 4 months ago

OS Platform and Distribution

Linux RHEL 9

Compiler version

g++ 13.2.1

Programming Language and version

C++ 17

Installed using virtualenv? pip? Conda? (if python)

no

MediaPipe version

No response

Bazel version

No response

XCode and Tulsi versions (if iOS)

No response

Android SDK and NDK versions (if android)

No response

Android AAR (if android)

No

OpenCV version (if running on desktop)

4.10

Describe the problem

I cannot build MediaPipe.

Complete Logs

My CPU model is:
Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
Running lscpu reports these flags:
    Flags:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 smep bmi2 invpcid rdseed adx smap clflushopt clwb arat md_clear flush_l1d arch_capabilities

I need to build MediaPipe with the most general configuration possible to prevent my program from crashing when I link it to the MediaPipe library.
The other libraries of my software are built with the following flags:
-march=x86-64 -mtune=generic -Wno-padded
I cannot reproduce the same configuration when building MediaPipe.
I tried the most hardware-independent build I could, as follows:

$ bazel build --explain=log.txt \
    --define MEDIAPIPE_DISABLE_GPU=1 \
    --define=mediapipe_xnnpack_disabled=true \
    --define=tflite_with_xnnpack=false \
    --copt=-march=x86-64 --copt=-mtune=generic --copt=-Wno-padded \
    --copt=-mno-avx512f --copt=-mno-avx512bw --copt=-mno-avx512cd \
    --copt=-mno-avx512dq --copt=-mno-avx512vl --copt=-mno-avx512vbmi \
    --copt=-mno-avx512ifma --copt=-mno-avx5124fmaps --copt=-mno-avx5124vnniw \
    --copt=-mno-avxvnni --copt=-mno-avx2 --copt=-mno-avx512bf16 --copt=-mno-avx512er \
    --verbose_failures \
    //mediapipe/modules/selfie_segmentation:selfie_segmentation_cpu
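As a side note, one way to keep such a long invocation reproducible is to collect the flags into a named Bazel config in a .bazelrc next to the WORKSPACE file. A sketch, under the assumption that a user .bazelrc is picked up there; the config name generic_x86 is mine, not a MediaPipe convention:

```
# .bazelrc (sketch) -- invoke with: bazel build --config=generic_x86 <target>
build:generic_x86 --define=MEDIAPIPE_DISABLE_GPU=1
build:generic_x86 --define=mediapipe_xnnpack_disabled=true
build:generic_x86 --define=tflite_with_xnnpack=false
build:generic_x86 --copt=-march=x86-64 --copt=-mtune=generic --copt=-Wno-padded
build:generic_x86 --copt=-mno-avx2 --copt=-mno-avx512f
```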

But it gives me many errors related to XNNPACK, even though I supposedly deactivated it with --define=mediapipe_xnnpack_disabled=true --define=tflite_with_xnnpack=false.
An example error:
external/org_tensorflow/tensorflow/lite/delegates/xnnpack/xnnpack_delegate.cc: In constructor 'tflite::xnnpack::{anonymous}::Delegate::Delegate(const TfLiteXNNPackDelegateOptions*, xnn_workspace_t, TfLiteContext*)':
external/org_tensorflow/tensorflow/lite/delegates/xnnpack/xnnpack_delegate.cc:508:55: error: 'class tflite::CpuBackendContext' has no member named 'get_xnnpack_threadpool'
  508 |           CpuBackendContext::GetFromContext(context)->get_xnnpack_threadpool();
      |                                                       ^~~~~~~~~~~~~~~~~~~~~~
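One way to see why XNNPACK sources are still being compiled despite those defines is to ask Bazel which dependency path pulls them in. A hedged sketch: the @XNNPACK//:XNNPACK label is what TF Lite's build files normally use, but it may differ across versions, and this must be run inside the MediaPipe workspace:

```
# Sketch: find a dependency path from the target to XNNPACK.
# If this prints a path, the --define flags did not actually prune the
# dependency, which would match the xnnpack_delegate.cc errors above.
bazel query "somepath(//mediapipe/modules/selfie_segmentation:selfie_segmentation_cpu, @XNNPACK//:XNNPACK)"
```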

I would appreciate some help with the build.
kuaashish commented 1 week ago

Hi @francoamato,

Could you please confirm if this issue is resolved or any further assistance is needed from our end?

Thank you!!

francoamato commented 1 week ago

Hi, I have a problem compiling MediaPipe on a Xeon processor. The problem is related to XNNPACK; I tried to disable it, without success. What should I do?
