610265158 / Peppa_Pig_Face_Landmark

A simple face detection and alignment method, which is easy and stable.
Apache License 2.0

When I run inference on a multi-face video with the tflite model, I get the following error #11

Closed changshuai5 closed 4 years ago

changshuai5 commented 4 years ago

    Traceback (most recent call last):
      File "/home/changshuai/code/Peppa_test/demo.py", line 165, in <module>
        video("/home/lianping/sunxiaohu/face-detection-tensorrt-yolov3-tiny/test_video/20191010155113.avi")
      File "/home/changshuai/code/Peppa_test/demo.py", line 30, in video
        boxes, landmarks, states = facer.run(image)
      File "/home/changshuai/code/Peppa_test/lib/core/api/facer.py", line 61, in run
        landmarks,states=self.face_landmark.batch_call(image,boxes)
      File "/home/changshuai/code/Peppa_test/lib/core/api/face_landmark.py", line 140, in batch_call
        self.model.set_tensor(self.input_details[0]['index'], images_batched)
      File "/home/lianping/develop_environment/anaconda2/envs/tensorflow_1.14_cuda9.0/lib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 197, in set_tensor
        self._interpreter.SetTensor(tensor_index, value)
      File "/home/lianping/develop_environment/anaconda2/envs/tensorflow_1.14_cuda9.0/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 136, in SetTensor
        return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)
    ValueError: Cannot set tensor: Dimension mismatch. Got 4 but expected 1 for dimension 0 of input 411.
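
A quick way to confirm what the interpreter expects (an illustrative check only, not code from this repository; the model path is a placeholder):

    import tensorflow as tf

    # Load the converted landmark model and inspect its input signature.
    interpreter = tf.lite.Interpreter(model_path="keypoints.tflite")  # placeholder path
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    print(input_details[0]['shape'])
    # Prints something like [1 H W 3]: dimension 0 is fixed at 1, so passing the
    # (4, H, W, 3) array `images_batched` triggers the dimension mismatch above.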

610265158 commented 4 years ago

Hi,

The tflite model is a static graph and its input shape is fixed: the batch dimension is set to 1. You can change the code to do batch inference.

And I will fix it later, when I get time. Thanks.
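
In case it helps, below is a minimal sketch of one way to "change the code for batch inference" on the TFLite side: resize the interpreter's input tensor to the actual batch size before allocating. This is only a sketch, not the maintainer's fix; it assumes the converted model tolerates a resized batch dimension (not every converted graph does), that `images_batched` is shaped (N, H, W, 3) with the model's expected height and width, and that the model path is a placeholder.

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="keypoints.tflite")  # placeholder path
    input_details = interpreter.get_input_details()

    # Replace the fixed batch-1 input shape with the real batch size, then re-allocate.
    new_shape = [images_batched.shape[0]] + list(input_details[0]['shape'][1:])
    interpreter.resize_tensor_input(input_details[0]['index'], new_shape)
    interpreter.allocate_tensors()

    # Feed the whole batch in one call; outputs can then be read with get_tensor() as usual.
    interpreter.set_tensor(input_details[0]['index'], images_batched.astype(np.float32))
    interpreter.invoke()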

1996scarlet commented 4 years ago


Change the line at https://github.com/610265158/Peppa_Pig_Face_Engine/blob/853994bf3fce4e00f27a5dbc14b0714a9abde8bf/lib/core/api/face_landmark.py#L53 to the following:

            if not self.tflite:
                # Non-TFLite path: call the TensorFlow model on the cropped face.
                res = self.model.inference(image_croped)

                landmark = res['landmark'].numpy().reshape(
                    (-1, self.keypoints_num, 2))
                states = res['cls'].numpy()
            else:
                # TFLite path: the converted graph has a fixed batch size of 1,
                # so feed a single cropped face per invoke() call.
                image_croped = image_croped.astype(np.float32)
                self.model.set_tensor(
                    self.input_details[0]['index'], image_croped)
                self.model.invoke()

                landmark = self.model.get_tensor(
                    self.output_details[2]['index']).reshape((-1, self.keypoints_num, 2))
                states = self.model.get_tensor(self.output_details[0]['index'])
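
For reference, one way the per-face TFLite path above could be wrapped when several faces are detected is to invoke the interpreter once per crop and stack the results. This is only a sketch under the assumption that `cropped_faces` (a name introduced here, not from the repository) already holds crops preprocessed to the model's expected (1, H, W, 3) shape:

    import numpy as np

    landmarks_list, states_list = [], []
    for image_croped in cropped_faces:
        image_croped = image_croped.astype(np.float32)
        self.model.set_tensor(self.input_details[0]['index'], image_croped)
        self.model.invoke()                      # one interpreter call per face
        landmarks_list.append(self.model.get_tensor(
            self.output_details[2]['index']).reshape((-1, self.keypoints_num, 2)))
        states_list.append(self.model.get_tensor(self.output_details[0]['index']))

    # Stack per-face results so callers see the same (N, keypoints, 2) layout
    # they would get from a true batched run.
    landmarks = np.concatenate(landmarks_list, axis=0)
    states = np.concatenate(states_list, axis=0)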