google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://mediapipe.dev
Apache License 2.0
26.27k stars 5.05k forks

RuntimeError: Unable to initialize runfiles: ERROR: external/bazel_tools/tools/cpp/runfiles/runfiles.cc(120): cannot find runfiles (argv0="") #4220

Closed RTL8710 closed 1 year ago

RTL8710 commented 1 year ago

OS Platform and Distribution

Windows 7

Compiler version

No response

Programming Language and version

Python

Installed using virtualenv? pip? Conda?(if python)

No response

MediaPipe version

0.9.2

Bazel version

No response

XCode and Tulsi versions (if iOS)

No response

Android SDK and NDK versions (if android)

No response

Android AAR (if android)

None

OpenCV version (if running on desktop)

No response

Describe the problem

detector = vision.ObjectDetector.create_from_options(options)

Complete Logs

Traceback (most recent call last):
  File "D:/AI/mediaPipeExample/object_detector.py", line 17, in <module>
    detector = vision.ObjectDetector.create_from_options(options)
  File "D:\AI\mediaPipeExample\venv\lib\site-packages\mediapipe\tasks\python\vision\object_detector.py", line 207, in create_from_options
    return cls(
  File "D:\AI\mediaPipeExample\venv\lib\site-packages\mediapipe\tasks\python\vision\core\base_vision_task_api.py", line 66, in __init__
    self._runner = _TaskRunner.create(graph_config, packet_callback)
RuntimeError: Unable to initialize runfiles: ERROR: external/bazel_tools/tools/cpp/runfiles/runfiles.cc(120): cannot find runfiles (argv0="")
kuaashish commented 1 year ago

Hi @RTL8710, could you let us know the complete steps you followed and share standalone code to reproduce the issue, so we can look into the root cause and a possible solution?

RTL8710 commented 1 year ago

```python
# STEP 1: Import the necessary modules.
import cv2
import numpy as np
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from google.colab.patches import cv2_imshow

IMAGE_FILE = 'cat_and_dog.jpg'

# STEP 2: Create an ObjectDetector object.
base_options = python.BaseOptions(model_asset_path='efficientdet_lite2_uint8.tflite')
options = vision.ObjectDetectorOptions(base_options=base_options, score_threshold=0.5)
detector = vision.ObjectDetector.create_from_options(options)
detector = vision.ObjectDetector.create_from_model_path("efficientdet_lite2_uint8.tflite")

# STEP 3: Load the input image.
image = mp.Image.create_from_file(IMAGE_FILE)

# STEP 4: Detect objects in the input image.
detection_result = detector.detect(image)
mp_drawing = mp.solutions.drawing_utils

# STEP 5: Process the detection result. In this case, visualize it.
image_copy = np.copy(image.numpy_view())
annotated_image = visualize(image_copy, detection_result)
mp_drawing.draw_detection(image_copy, detection_result)
rgb_annotated_image = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
cv2.imshow('face_mesh_annotated_image', rgb_annotated_image)
```

kuaashish commented 1 year ago

@RTL8710, thank you for bringing this to our notice. This issue is known to us and we are working on a fix. As a workaround, you can load the model file manually and use model_asset_buffer: https://github.com/google/mediapipe/blob/master/mediapipe/tasks/python/core/base_options.py#L47
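The suggested workaround could be sketched like this (a sketch, not official code; it assumes the `.tflite` model file sits next to the script): read the model file into memory yourself and pass its raw bytes through `model_asset_buffer`, bypassing the runfiles-based path resolution that fails on Windows.

```python
def read_model_bytes(model_path):
    # model_asset_buffer expects the raw file contents (bytes), not a path string.
    with open(model_path, 'rb') as f:
        return f.read()

def create_detector(model_path, score_threshold=0.5):
    # Imports kept local so the sketch can be read without mediapipe installed.
    from mediapipe.tasks import python as mp_python
    from mediapipe.tasks.python import vision

    base_options = mp_python.BaseOptions(
        model_asset_buffer=read_model_bytes(model_path))
    options = vision.ObjectDetectorOptions(base_options=base_options,
                                           score_threshold=score_threshold)
    return vision.ObjectDetector.create_from_options(options)

# Usage: detector = create_detector('efficientdet_lite2_uint8.tflite')
```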

RTL8710 commented 1 year ago

Thanks, the problem has been resolved!

RTL8710 commented 1 year ago

May I ask a new question: when porting the MediaPipe object detection model to an ARM embedded system, how can the model's output be converted efficiently and succinctly into C-language structures? (screenshot attached)

kuaashish commented 1 year ago

@RTL8710, good to hear that the issue has been resolved. However, this question seems to be about a different error. To keep the context of this thread as relevant to the original issue as possible, may I suggest you close this issue and open another one? This helps the community and us search issues in a more concise manner. Thank you.

RTL8710 commented 1 year ago

ok

kuaashish commented 1 year ago

@RTL8710, thank you for the confirmation; closing this as resolved. Please raise a new issue for the other query mentioned here.

google-ml-butler[bot] commented 1 year ago

Are you satisfied with the resolution of your issue?

xiaolinpeter commented 1 year ago

> @RTL8710, thank you for bringing this to our notice. This issue is known to us and we are working on a fix. As a workaround, you can load the model file manually and use model_asset_buffer: https://github.com/google/mediapipe/blob/master/mediapipe/tasks/python/core/base_options.py#L47

Thanks for your reply. First: I tried it as you suggested, but got the following error (screenshot attached).

The code is as follows:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
@project : mediapipe
@file    : mediapipe_task.py
@author  : xiaolin_peter
@email   : zheng.xiaolin@tslsmart.com
@Date    : 2023/3/5 15:28
'''

# STEP 1: Import the necessary modules.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from baseOptions import BaseOptions

# STEP 2: Create a GestureRecognizer object.
base_options = python.BaseOptions(model_asset_buffer='gesture_recognizer.task')
options = vision.GestureRecognizerOptions(base_options=base_options)
recognizer = vision.GestureRecognizer.create_from_options(options)

images = []
results = []
IMAGE_FILENAMES = ["images/img.png"]
for image_file_name in IMAGE_FILENAMES:
    # STEP 3: Load the input image.
    image = mp.Image.create_from_file(image_file_name)

    # STEP 4: Recognize gestures in the input image.
    recognition_result = recognizer.recognize(image)

    # STEP 5: Process the result. In this case, visualize it.
    images.append(image)
    top_gesture = recognition_result.gestures[0][0]
    hand_landmarks = recognition_result.hand_landmarks
    results.append((top_gesture, hand_landmarks))
    print(hand_landmarks)
```

Second: I also tried it as follows (screenshot attached), with the same error. Finally, my Python version is 3.8 and my MediaPipe version is 0.9.2.1 on Windows 10. I hope you can give me some suggestions, thanks.
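One likely cause of the error above (an assumption, since the screenshot is not visible in this thread): `model_asset_buffer` takes the model file's contents as `bytes`, while `model_asset_path` takes a filename. Passing the string `'gesture_recognizer.task'` as a buffer will not work. A sketch of a corrected STEP 2 under that assumption:

```python
def make_recognizer(task_path='gesture_recognizer.task'):
    # Imports kept local so the sketch can be read without mediapipe installed.
    from mediapipe.tasks import python as mp_python
    from mediapipe.tasks.python import vision

    # Read the .task bundle into memory first, then hand the raw
    # bytes (not the path string) to model_asset_buffer.
    with open(task_path, 'rb') as f:
        buffer = f.read()

    base_options = mp_python.BaseOptions(model_asset_buffer=buffer)
    options = vision.GestureRecognizerOptions(base_options=base_options)
    return vision.GestureRecognizer.create_from_options(options)
```

Alternatively, if file-path loading works in your environment, `python.BaseOptions(model_asset_path='gesture_recognizer.task')` avoids the manual read entirely.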