luxonis / depthai-experiments

Experimental projects we've done with DepthAI.

[NeuralNetwork(7)] [error] Input tensor 'data' (0) exceeds available data range. Data size (3600B), tensor offset (0), size (10800B) - skipping inference #426

Open vipulkumar-developer opened 2 years ago

vipulkumar-developer commented 2 years ago

Hi, I'm trying to manipulate a black-and-white frame obtained from the right MonoCamera using the Script node.

In the script file I've tried these lines of code:

cfg = ImageManipConfig()
cfg.setFrameType(RawImgFrame.Type.RGB888p)

But the pipeline seems to ignore it, since the resulting error is the following: [14442C10310D57D700] [483.907] [NeuralNetwork(7)] [error] Input tensor 'data' (0) exceeds available data range. Data size (3600B), tensor offset (0), size (10800B) - skipping inference
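(For context: an ImageManipConfig created inside a Script node only takes effect once it is sent, together with a frame, to an ImageManip node's config input. A minimal sketch of that wiring, with illustrative stream names:)

import depthai as dai

pipeline = dai.Pipeline()

script = pipeline.create(dai.node.Script)
script.setScript("""
# Runs on-device: build a config and forward it with the frame to an ImageManip
while True:
    img = node.io['frame_in'].get()
    cfg = ImageManipConfig()
    cfg.setFrameType(RawImgFrame.Type.RGB888p)
    node.io['manip_cfg'].send(cfg)
    node.io['manip_img'].send(img)
""")

manip = pipeline.create(dai.node.ImageManip)
script.outputs['manip_cfg'].link(manip.inputConfig)
script.outputs['manip_img'].link(manip.inputImage)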

Erol444 commented 2 years ago

Hi @vipulkumar-developer, I am not sure why you need the Script node, but if you want to run a NN that expects 3 channels (RGB) on mono frames, please see line 42 of this example: https://docs.luxonis.com/projects/api/en/latest/samples/MobileNet/mono_mobilenet/#mono-mobilenetssd Thanks, Erik
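The relevant part of that example, paraphrased as a sketch (node names are illustrative; the blob path is assumed from the model zoo): an ImageManip converts the single-channel mono stream into the 3-channel 300x300 input the detector expects.

import blobconverter
import depthai as dai

pipeline = dai.Pipeline()

mono = pipeline.create(dai.node.MonoCamera)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
mono.setBoardSocket(dai.CameraBoardSocket.RIGHT)

manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(300, 300)
# Key line: force a 3-channel frame type so the NN gets RGB/BGR data
manip.initialConfig.setFrameType(dai.RawImgFrame.Type.BGR888p)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath(blobconverter.from_zoo(name="mobilenet-ssd", shaves=6))

mono.out.link(manip.inputImage)
manip.out.link(nn.input)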

vipulkumar-developer commented 2 years ago

Yes, that is exactly what I'm trying to do, but from the script file of the Script node. I'm working with the Script node since I'm modifying an existing repository.

Erol444 commented 2 years ago

@vipulkumar-developer I see, could you prepare a full MRE, please?

Erol444 commented 2 years ago

@vipulkumar-developer please see the documentation on how to prepare an MRE. Thanks, Erik

vipulkumar-developer commented 2 years ago

My objective is to use the gen2-face-recognition repository, but working with black-and-white frames instead of RGB.

In the main.py file I've modified the pipeline by inserting a MonoCamera and adding initialConfig.setFrameType(dai.RawImgFrame.Type.RGB888p) to some components:

import blobconverter
import depthai as dai

def create_pipeline():
    pipeline = dai.Pipeline()
    openvino_version = '2021.2'

    cam = pipeline.create(dai.node.MonoCamera)
    cam.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
    cam.setBoardSocket(dai.CameraBoardSocket.LEFT)

    host_face_out = pipeline.create(dai.node.XLinkOut)
    host_face_out.setStreamName('frame')
    cam.out.link(host_face_out.input)

    # Face detection input ImageManip (assumed from the original
    # gen2-face-recognition pipeline): resizes frames to the 300x300
    # RGB input that face-detection-retail-0004 expects
    face_det_manip = pipeline.create(dai.node.ImageManip)
    face_det_manip.initialConfig.setResize(300, 300)
    face_det_manip.initialConfig.setFrameType(dai.RawImgFrame.Type.RGB888p)

    # NeuralNetwork
    print("Creating Face Detection Neural Network...")
    face_det_nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
    face_det_nn.setConfidenceThreshold(0.5)
    face_det_nn.setBlobPath(blobconverter.from_zoo(
        name="face-detection-retail-0004",
        shaves=6,
        version=openvino_version
    ))

    # Link Face ImageManip -> Face detection NN node
    face_det_manip.out.link(face_det_nn.input)

    # Script node will take the output from the face detection NN as an input and set ImageManipConfig
    # to the 'age_gender_manip' to crop the initial frame
    script = pipeline.create(dai.node.Script)
    script.setProcessor(dai.ProcessorType.LEON_CSS)
    with open("script.py", "r") as f:
        script.setScript(f.read())

    face_det_nn.out.link(script.inputs['face_det_in'])
    # We are only interested in timestamp, so we can sync depth frames with NN output
    face_det_nn.passthrough.link(script.inputs['face_pass'])

    # ImageManip as a workaround to have more frames in the pool.
    # cam.preview can only have 4 frames in the pool before it will
    # wait (freeze). Copying frames and setting ImageManip pool size to
    # higher number will fix this issue.
    copy_manip = pipeline.create(dai.node.ImageManip)
    cam.out.link(copy_manip.inputImage)
    copy_manip.setNumFramesPool(20)
    copy_manip.setMaxOutputFrameSize(720*1280*3)

    copy_manip.out.link(face_det_manip.inputImage)
    copy_manip.out.link(script.inputs['preview'])

    print("Creating Head pose estimation NN")
    headpose_manip = pipeline.create(dai.node.ImageManip)
    headpose_manip.initialConfig.setResize(60, 60)
    headpose_manip.initialConfig.setFrameType(dai.RawImgFrame.Type.RGB888p)

    script.outputs['manip_cfg'].link(headpose_manip.inputConfig)
    script.outputs['manip_img'].link(headpose_manip.inputImage)

    return pipeline
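For reference, a minimal host-side loop to drive this pipeline could look like the sketch below (hypothetical, not part of the repo; 'frame' is the stream name set above):

import cv2
import depthai as dai

with dai.Device(create_pipeline()) as device:
    q_frame = device.getOutputQueue("frame", maxSize=4, blocking=False)
    while True:
        frame = q_frame.get().getCvFrame()  # mono frames arrive single-channel
        cv2.imshow("mono", frame)
        if cv2.waitKey(1) == ord('q'):
            break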

While in the scriptcopy.py file I've added cfg.setFrameType(RawImgFrame.Type.RGB888p) in the following piece of code:

        node.warn("Entering in the for")
        for det in face_dets.detections:
            bboxes.append(det) # For the rotation
            cfg = ImageManipConfig()
            correct_bb(det)
            cfg.setCropRect(det.xmin, det.ymin, det.xmax, det.ymax)
            cfg.setResize(60, 60)
            cfg.setFrameType(RawImgFrame.Type.RGB888p)
            cfg.setKeepAspectRatio(False)
            node.io['manip_cfg'].send(cfg)
            node.io['manip_img'].send(img)

The NN blob models used are the same as in the original repo.

When I run the script I get this error: [14442C10310D57D700] [483.907] [NeuralNetwork(7)] [error] Input tensor 'data' (0) exceeds available data range. Data size (3600B), tensor offset (0), size (10800B) - skipping inference

I suppose it is related to an ImageManip node that resizes the frame to 60x60 but outputs a single-channel image (60 * 60 = 3600 bytes), while the NN expects an RGB frame (3600 * 3 = 10800 bytes).
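The byte counts in the error message fit that reading exactly; a quick sanity check:

# Sizes from the error message: the manip emitted a single-channel 60x60
# frame, while the NN input tensor is sized for a planar 3-channel one
w, h = 60, 60
gray_bytes = w * h * 1   # GRAY8 frame actually produced  -> 3600 B
rgb_bytes  = w * h * 3   # RGB888p frame the NN expects   -> 10800 B
print(gray_bytes, rgb_bytes)  # 3600 10800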

Erol444 commented 2 years ago

@vipulkumar-developer please see the MRE tutorial again, provide everything in a single zip file, and reduce the script - head pose estimation has nothing to do with the issue you are having.

vipulkumar-developer commented 2 years ago

Here is the zip