Closed nikparmar closed 2 years ago
Hi @nikparmar, we are currently looking into this issue.
Hi @akwrobel, any update on this?
@nikparmar Can you check the contents of /tmp/results.jsonl?
We have seen a similar issue in scenarios where the output file takes longer than usual to be created and vaclient
does not print output from the pipeline. The output is still generated in that case, but vaclient
doesn't report anything.
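As a quick sanity check on that file, here is a minimal sketch that counts the well-formed JSON lines in it (the /tmp/results.jsonl path comes from this thread; the helper name is hypothetical):

```python
import json
import os


def summarize_results(path="/tmp/results.jsonl"):
    """Count well-formed JSON lines in a results file (hypothetical helper)."""
    if not os.path.exists(path) or os.path.getsize(path) == 0:
        return 0
    count = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                json.loads(line)  # raises ValueError on a corrupt line
                count += 1
    return count
```

A return value of 0 matches the empty-file symptom reported later in this thread.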
Hi @nnshah1, let me try that and check.
Hi @nnshah1 I tried checking the results.jsonl
file, but it's empty.
As you can see in the screenshot below, the pipeline runs without any errors, and when I stopped it the FPS values were printed.
Can you check the same at your end, with this or any other model from the public repository?
Hi @nikparmar, could you please share the exact models you are using? I will test and get back to you.
Hi @tthakkal, here are the details of the models:
Thanks, I tried face detection with ultra-lightweight-face-detection-slim-320
and see no results. I will verify whether this model requires special model-proc settings and will let you know.
Also, @tthakkal, I see the same behavior with many of the public models available here, not just ultra-lightweight-face-detection-slim-320.
You can try any other face detection model and the result is the same.
Hi @nikparmar, I checked, and sorry to say we don't support those models. However, you can still use them by running inference with gvainference and creating detections with gvapython:
get the tensors from gvainference and process them in gvapython to create and add regions before sending the frame on for face mask classification.
" ! gvainference model=ultra-lightweight-face-detection-slim-320.xml name=detection",
" ! gvapython name=face-detection class=FaceDetection module=face_detection.py",
" ! gvaclassify model=face_mask.xml name=classification",
" ! gvapython name=face-classification class=FaceClassification module=face_classification.py",
Hi @tthakkal, I tried the script below but am unable to find any tensors on this video.
"""
* Copyright (C) 2021 Intel Corporation.
*
* SPDX-License-Identifier: BSD-3-Clause
"""
import traceback

from extensions.gva_event_meta import gva_event_meta
from vaserving.common.utils import logging


def print_message(message):
    print("", flush=True)
    print(message, flush=True)


logger = logging.get_logger("face_detection", is_static=True)


# class FaceDetection:
#     DEFAULT_EVENT_TYPE = "face-detection"
#     DEFAULT_DETECTION_CONFIDENCE_THRESHOLD = 0.0
#
#     # Caller supplies one or more zones via request parameter
#     def __init__(self, threshold=0, log_level="INFO"):
#         self._threshold = threshold
#         self._logger = logger
#         self._logger.log_level = log_level


def process_frame(frame):
    try:
        width = frame.video_info().width
        height = frame.video_info().height
        for tensor in frame.tensors():
            dims = tensor.dims()
            data = tensor.data()
            object_size = dims[-1]
            for i in range(dims[-2]):
                image_id = data[i * object_size + 0]
                label_id = data[i * object_size + 1]
                confidence = data[i * object_size + 2]
                x_min = int(data[i * object_size + 3] * width + 0.5)
                y_min = int(data[i * object_size + 4] * height + 0.5)
                x_max = int(data[i * object_size + 5] * width + 0.5)
                y_max = int(data[i * object_size + 6] * height + 0.5)
                if image_id != 0:
                    break
                if confidence < 0.5:
                    continue
                frame.add_region(x_min, y_min, x_max - x_min,
                                 y_max - y_min, str(label_id), confidence)
    except Exception:
        print_message("Error processing frame: {}".format(traceback.format_exc()))
    return True
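For reference, the parsing in process_frame above assumes an SSD-style output where each detection is a 7-value record [image_id, label_id, confidence, x_min, y_min, x_max, y_max]. A standalone sketch of that decoding, using synthetic data and no VA Serving dependency (the function name is illustrative):

```python
def parse_detections(data, dims, width, height, threshold=0.5):
    """Decode SSD-style detection records into pixel-space boxes.

    `data` is the flat output tensor; `dims[-1]` is the record length (7)
    and `dims[-2]` is the maximum number of detections.
    """
    object_size = dims[-1]
    boxes = []
    for i in range(dims[-2]):
        rec = data[i * object_size:(i + 1) * object_size]
        image_id, label_id, confidence = rec[0], rec[1], rec[2]
        if image_id != 0:          # end-of-detections sentinel
            break
        if confidence < threshold:
            continue
        x_min = int(rec[3] * width + 0.5)
        y_min = int(rec[4] * height + 0.5)
        x_max = int(rec[5] * width + 0.5)
        y_max = int(rec[6] * height + 0.5)
        # (x, y, w, h, confidence), matching frame.add_region's arguments
        boxes.append((x_min, y_min, x_max - x_min, y_max - y_min, confidence))
    return boxes
```

This layout does not match ultra-lightweight-face-detection-slim-320's outputs, which is consistent with no regions being added.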
A few points on compatible models
@nikparmar I tried with the same video and I see tensor data:
uri=https://github.com/intel-iot-devkit/sample-videos/raw/master/classroom.mp4 ! gvainference model=/home/video-analytics-serving/models/face_detection/ultra_lightweight/FP32/ultra-lightweight-face-detection-slim-320.xml ! gvapython module=face_detect.py ! fakesink
<<snip>>
[1, 4420, 4]
[9.1916602e-04 5.3762719e-03 2.2127293e-02 ... 4.1273904e-01 1.2037479e+00
1.2570462e+00]
[1, 4420, 2]
[0.89466214 0.10533787 0.8947022 ... 0.03718159 0.9683689 0.03163114]
New clock: GstSystemClock
[1, 4420, 4]
[0.00148836 0.00500072 0.0228582 ... 0.4127335 1.203156 1.2570627 ]
[1, 4420, 2]
[0.89466614 0.10533387 0.89470595 ... 0.03728213 0.9683611 0.03163888]
[1, 4420, 4]
[0.00135019 0.00493123 0.0226246 ... 0.41368762 1.2044996 1.2576498 ]
<<snip>>
def process_frame(frame):
    for tensor in frame.tensors():
        dims = tensor.dims()
        data = tensor.data()
        print(dims)
        print(data)
    return True
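Those two outputs ([1, 4420, 2] per-anchor scores and [1, 4420, 4] boxes) have to be combined by hand, which is why the SSD-style parsing earlier finds nothing. A hedged NumPy sketch, assuming column 1 of the score tensor is the face probability and that the boxes are already decoded to normalized [x_min, y_min, x_max, y_max]; some exports of this model emit raw anchor offsets instead, which would first need anchor decoding and NMS:

```python
import numpy as np


def select_faces(scores, boxes, threshold=0.7):
    """Pair per-anchor scores [N, 2] with boxes [N, 4] and keep likely faces.

    Returns (kept_boxes, kept_probabilities). Column layout and coordinate
    decoding are assumptions, not confirmed by this model's documentation.
    """
    scores = np.asarray(scores, dtype=np.float32).reshape(-1, 2)
    boxes = np.asarray(boxes, dtype=np.float32).reshape(-1, 4)
    keep = scores[:, 1] > threshold   # assumed: column 1 = face probability
    return boxes[keep], scores[keep, 1]
```

The kept boxes could then be passed to frame.add_region (scaled by frame width/height) inside a gvapython process_frame.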
@nikparmar Were you able to get it working?
@tthakkal I will update you soon; I am engaged in something else at the moment.
Hi @tthakkal, apologies for the late reply. This still isn't working at my end; I'm not getting any output for some reason.
@nikparmar Could you please provide more information on what changes you tried? Were you able to see results replacing your process_frame call with the one @tthakkal provided?
@nikparmar can you provide any updates on what changes you tried? Is this no longer an issue?
Here is the pipeline I am testing with. The pipeline runs without any errors, and no output is logged for the following command:
command
sudo ./vaclient/vaclient.sh run object_classification/face_mask_classification https://github.com/intel/video-analytics-serving/blob/master/samples/classroom.mp4?raw=true
pipeline.json
I think this is because I have not added the model-proc file for the models?
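For context, a model-proc is a JSON file that tells the inference elements how to pre-process the input and convert output tensors into detections; without a suitable one, no regions are attached even though the pipeline runs. The skeleton below is purely hypothetical — the converter name and label list are illustrative, and as discussed above the built-in converters do not support this model's two-tensor output, which is why the gvainference + gvapython route was suggested instead:

```json
{
  "json_schema_version": "2.0.0",
  "input_preproc": [],
  "output_postproc": [
    {
      "converter": "tensor_to_bbox_ssd",
      "labels": ["background", "face"]
    }
  ]
}
```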