StevenBanama / C3AE

C3AE implementation
BSD 2-Clause "Simplified" License

ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3] #19

Open rakesh160 opened 4 years ago

rakesh160 commented 4 years ago

While running the command `python nets/test.py -g -v -se -m ./model/c3ae_model_v2_151_4.301724-0.962`, I am getting a value error:

ValueError: Input 0 of layer conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 64, 3] (screenshot attached)

Any help is much appreciated!

StevenBanama commented 4 years ago

Can you provide your environment (TensorFlow version)? I have tested it locally (TensorFlow 2.1), and it works well.

StevenBanama commented 4 years ago


From the error you can see that your input is invalid: the model needs shape (1, 64, 64, 3), but your input shows as (32, 64, 3).
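The shape mismatch can be fixed by adding a leading batch dimension with NumPy, as in this minimal sketch (the `img` variable here is illustrative, not from the repo):

```python
import numpy as np

# A single 64x64 RGB crop has shape (64, 64, 3); Keras Conv2D layers
# expect a 4-D batch of shape (batch, height, width, channels).
img = np.zeros((64, 64, 3), dtype=np.float32)

# Insert the leading batch dimension: (64, 64, 3) -> (1, 64, 64, 3)
batched = np.expand_dims(img, axis=0)
print(batched.shape)  # (1, 64, 64, 3)
```

A (32, 64, 3) input additionally suggests the face crop itself was sliced to the wrong size before batching, so it is worth checking the crop shape as well.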

rakesh160 commented 4 years ago

Thanks for the quick reply @StevenBanama .

I have tensorflow 2.3.

I am new to these things and just trying to test it on one of the test images.

You mentioned the inputs may be invalid: the model needs (1, 64, 64, 3), but mine show as (32, 64, 3). Can you please elaborate a little on how to resolve the issue?

StevenBanama commented 4 years ago

You can print the shape of `img` before line 102, like this, to check the inputs:

```python
print(img.shape)
```

StevenBanama commented 4 years ago


First, update your local repo, and then run it as below:

python nets/test.py -g -se -i assets/timg.jpg -m ./model/c3ae_model_v2_151_4.301724-0.962

StevenBanama commented 4 years ago


Have you resolved it? Feel free to update the issue~

KhizarAziz commented 3 years ago

I have found the solution: you just need to call np.expand_dims() on each image in the tri_imgs array, then pass tri_imgs to model.predict(). It will work fine.
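A minimal sketch of that fix, assuming `tri_imgs` holds three (64, 64, 3) face crops as in the repo's `predict()` (the zero arrays stand in for real crops):

```python
import numpy as np

# Stand-ins for the three 64x64 face crops produced in predict().
tri_imgs = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(3)]

# Give each crop a leading batch dimension so every model input is 4-D.
tri_imgs = [np.expand_dims(im, axis=0) for im in tri_imgs]

print([im.shape for im in tri_imgs])
# [(1, 64, 64, 3), (1, 64, 64, 3), (1, 64, 64, 3)]
```

After this, `model.predict(tri_imgs)` receives three (1, 64, 64, 3) tensors, matching what conv1 expects.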

xiangdeyizhang commented 1 year ago

```python
def predict(models, img, save_image=False):
    try:
        bounds, lmarks = gen_face(MTCNN_DETECT, img, only_one=False)
        ret = MTCNN_DETECT.extract_image_chips(img, lmarks, padding=0.4)
    except Exception as ee:
        ret = None
        print(img.shape, ee)
    if not ret:
        print("no face")
        return img, None
    padding = 200
    new_bd_img = cv2.copyMakeBorder(img, padding, padding, padding, padding, cv2.BORDER_CONSTANT)

    colors = [(0, 0, 255), (0, 0, 0), (255, 0, 0)]
    for pidx, (box, landmarks) in enumerate(zip(bounds, lmarks)):
        trible_box = gen_boundbox(box, landmarks)
        tri_imgs = []
        for bbox in trible_box:
            bbox = bbox + padding
            h_min, w_min = bbox[0]
            h_max, w_max = bbox[1]

            resized = cv2.resize(new_bd_img[w_min:w_max, h_min:h_max, :], (64, 64))
            cv2.imwrite("test2222.jpg", resized)
            tri_imgs.append(resized)

        for idx, pbox in enumerate(trible_box):
            pbox = pbox + padding
            h_min, w_min = pbox[0]
            h_max, w_max = pbox[1]
            new_bd_img = cv2.rectangle(new_bd_img, (h_min, w_min), (h_max, w_max), colors[idx], 2)

        # Initialize a list of model inputs
        k = []

        # Convert the image to a tensor with a leading batch dimension:
        # (64, 64, 3) -> (1, 64, 64, 3)
        img_tensor = np.expand_dims(resized, axis=0)
        print("shape", img_tensor.shape)

        # Append the tensor to the list (three inputs for the three-branch model)
        k.append(img_tensor)
        k.append(img_tensor)
        k.append(img_tensor)
        result = models.predict(k)
        age, gender = None, None
        if result and len(result) == 3:
            age, _, gender = result
            age_label, gender_label = age[-1][-1], "F" if gender[-1][0] > gender[-1][1] else "M"
        elif result and len(result) == 2:
            age, _ = result
            age_label, gender_label = age[-1][-1], "unknown"
        else:
            raise Exception("fatal result: %s" % result)
        cv2.putText(new_bd_img, '%s %s' % (int(age_label), gender_label),
                    (padding + int(bounds[pidx][0]), padding + int(bounds[pidx][1])),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (25, 2, 175), 2)
    if save_image:
        print(result)
        cv2.imwrite("igg.jpg", new_bd_img)
    return new_bd_img, (age_label, gender_label)
```

Replacing the corresponding function in the source file with this code solves the problem.