amdegroot / ssd.pytorch

A PyTorch Implementation of Single Shot MultiBox Detector
MIT License

IndexError: too many indices for array #224

Open mama110 opened 6 years ago

mama110 commented 6 years ago

File "/home/kun/Software/ssd-pytorch-master/data/voc0712.py", line 145, in pull_item img, boxes, labels = self.transform(img, target[:, :4], target[:, 4]) IndexError: too many indices for array

I want to train on the VOC dataset, but after I modified the path and ran train.py, this error shows up. Please help me. Thank you.

Eralaf commented 6 years ago

Hi @mama110,

I was facing this error too, but while training on my own dataset. The problem for me was an annotation file without any object in it.

I don't know if this will help you, but here is the script I used to check my annotation files:

import argparse
import sys
import cv2
import os

import os.path          as osp
import numpy            as np

if sys.version_info[0] == 2:
    import xml.etree.cElementTree as ET
else:
    import xml.etree.ElementTree  as ET

parser    = argparse.ArgumentParser(
            description='Single Shot MultiBox Detector Training With Pytorch')
train_set = parser.add_mutually_exclusive_group()

parser.add_argument('--root', help='Dataset root directory path')

args = parser.parse_args()

CLASSES = (  # always index 0
    'aeroplane', 'bicycle', 'bird', 'boat',
    'bottle', 'bus', 'car', 'cat', 'chair',
    'cow', 'diningtable', 'dog', 'horse',
    'motorbike', 'person', 'pottedplant',
    'sheep', 'sofa', 'train', 'tvmonitor')

annopath = osp.join('%s', 'Annotations', '%s.{}'.format("xml"))
imgpath  = osp.join('%s', 'JPEGImages',  '%s.{}'.format("jpg"))

def vocChecker(image_id, width, height, keep_difficult = False):
    # parse the XML and build [xmin, ymin, xmax, ymax, label_idx] rows,
    # skipping difficult objects (the same logic the dataset loader uses)
    target   = ET.parse(annopath % image_id).getroot()
    res      = []

    for obj in target.iter('object'):

        difficult = int(obj.find('difficult').text) == 1

        if not keep_difficult and difficult:
            continue

        name = obj.find('name').text.lower().strip()
        bbox = obj.find('bndbox')

        pts    = ['xmin', 'ymin', 'xmax', 'ymax']
        bndbox = []

        for i, pt in enumerate(pts):

            cur_pt = int(bbox.find(pt).text) - 1
            # scale height or width
            cur_pt = float(cur_pt) / width if i % 2 == 0 else float(cur_pt) / height

            bndbox.append(cur_pt)

        print(name)
        label_idx =  dict(zip(CLASSES, range(len(CLASSES))))[name]
        bndbox.append(label_idx)
        res += [bndbox]  # [xmin, ymin, xmax, ymax, label_ind]
        # img_id = target.find('filename').text[:-4]
    print(res)
    try:
        # an empty res makes the 2-D indexing fail, exactly as in training
        print(np.array(res)[:, 4])
        print(np.array(res)[:, :4])
    except IndexError:
        print("\nINDEX ERROR HERE !\n")
        exit(0)
    return res  # [[xmin, ymin, xmax, ymax, label_ind], ... ]

if __name__ == '__main__' :

    i = 0

    # there is exactly one annotation file per image
    for name in sorted(os.listdir(osp.join(args.root, 'Annotations'))):
        i += 1

        img    = cv2.imread(imgpath  % (args.root,name.split('.')[0]))
        height, width, channels = img.shape
        print("path : {}".format(annopath % (args.root,name.split('.')[0])))
        res = vocChecker((args.root, name.split('.')[0]), width, height)  # match the (width, height) order of the signature
    print("Total of annotations : {}".format(i))
ankitksharma commented 5 years ago

@Eralaf It worked like a charm. Thanks a lot! FYI, you forgot to call vocChecker in main()

I pasted the following code after height, width, channels = img.shape:

res = vocChecker((args.root, name.split('.')[0]), width, height)

Eralaf commented 5 years ago

@ankitksharma Indeed, it should work better ! haha

guiw629 commented 5 years ago

@Eralaf It helped me a lot.

charan1561 commented 5 years ago

I'm facing the same error while trying to train on my own dataset; my annotations are COCO-style JSON files... How can I solve this error?

img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])
IndexError: too many indices for array

barcahoo commented 5 years ago

@Eralaf I have tried your method in PyCharm, but this error shows up:

Traceback (most recent call last):
  File "C:/Users/Administrator/Desktop/ssd.pytorch-master/error.py", line 76, in <module>
    for name in sorted(os.listdir(osp.join(args.root,'Annotations'))):
TypeError: expected str, bytes or os.PathLike object, not NoneType

So could you tell me how to solve the problem? Thanks! (My torch version is 1.0.)

Eralaf commented 5 years ago

@barcahoo Did you call the script with the root argument ?

Let's say the script I used to check my annotations files is called check.py :

python check.py --root="Dataset root directory path"

In this root directory there must be an "Annotations" folder and a "JPEGImages" folder. If you don't want to call the script like this, you can replace args.root with a variable of your choice, like myPath = "Dataset root directory path" ;)
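As a side note, a small variation on the script's argument parsing (only a suggestion, not part of the original script) would make it fail with a clear usage message when --root is omitted, instead of leaving args.root as None and crashing later with that TypeError:

import argparse

parser = argparse.ArgumentParser(
    description='Single Shot MultiBox Detector Training With Pytorch')
# required=True makes argparse print a usage error and exit when --root is
# missing, rather than leaving args.root as None
parser.add_argument('--root', required=True, help='Dataset root directory path')
args = parser.parse_args()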

@charan1561 It's a bit late, sorry. When you say your annotations are the same as COCO annotations, do you mean your dataset is a subset of the COCO dataset, or that your annotation files have the same structure? Did you try checking your annotations in case there are empty ones (without objects)? If there are, the idea is to not use them :)
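For COCO-style JSON annotations, a similar check can be run directly on the annotation file. The sketch below is only an illustration: it assumes the standard COCO layout ("images" entries with "id"/"file_name", "annotations" entries with "image_id"), and instances_train.json is a placeholder file name.

import json

def find_images_without_annotations(json_path):
    with open(json_path) as f:
        coco = json.load(f)

    # ids of every image that has at least one annotation entry
    annotated_ids = {ann['image_id'] for ann in coco.get('annotations', [])}

    # images whose id never appears in "annotations" have no objects at all
    return [img['file_name'] for img in coco.get('images', [])
            if img['id'] not in annotated_ids]

if __name__ == '__main__':
    for name in find_images_without_annotations('instances_train.json'):
        print('no objects annotated for', name)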

barcahoo commented 5 years ago

@Eralaf thanks a lot ! it works

810250082 commented 5 years ago

@Eralaf Thanks! It works, but what puzzles me is this: I added a print just above that line, like

print("target {}".format(target))
img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])

Why didn't it print an empty target? Instead, it printed a target with shape (1, 5).

qinzhenyi1314 commented 5 years ago

I have this error too, and I used @Eralaf's method to check the XML files. I finally found the cause of the mistake:

if not self.keep_difficult and difficult:
    continue

The original code does not keep difficult instances in training, so an XML file whose only objects are marked difficult ends up with no labels and boxes after parsing. When execution then reaches

img, boxes, labels = self.transform(img, target[:, :4], target[:, 4])

it raises IndexError: too many indices for array.
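A minimal, self-contained reproduction of what is described above (plain NumPy, independent of the repository): once every object has been filtered out, the target list is empty, the resulting array is 1-D, and the 2-D indexing fails with exactly this message.

import numpy as np

# empty annotation: no [xmin, ymin, xmax, ymax, label] rows survive filtering
target = np.array([])                 # shape (0,), i.e. 1-D, not (N, 5)
try:
    boxes, labels = target[:, :4], target[:, 4]
except IndexError as err:
    print(err)                        # "too many indices for array ..."

# with at least one box left, the same indexing works as intended
target = np.array([[0.1, 0.2, 0.5, 0.6, 7]])
boxes, labels = target[:, :4], target[:, 4]
print(boxes.shape, labels.shape)      # (1, 4) (1,)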

RenzhiDaDa commented 4 years ago

I think that happens when there are no objects in the XML file, because this code does not handle that condition.

Kuuuo commented 3 years ago

I think that happens when there are no objects in the XML file, because this code does not handle that condition.

I feel the same way, but how to solve it?

Kuuuo commented 3 years ago

Excuse me, when we find annotation files that are empty, what should we do with them? Delete them? Hope to receive your reply!
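One way to act on Eralaf's earlier advice to simply not use the empty files, without deleting anything, is to rebuild the image-set list so those ids are excluded. This is only a rough sketch: it assumes the usual VOCdevkit layout (ImageSets/Main/trainval.txt next to Annotations/), and the paths and file names are placeholders. Following qinzhenyi1314's observation, files whose only objects are marked difficult are treated as empty when keep_difficult is False.

import os.path as osp
import xml.etree.ElementTree as ET

root      = '/path/to/VOCdevkit/VOC2007'                       # placeholder
split_in  = osp.join(root, 'ImageSets', 'Main', 'trainval.txt')
split_out = osp.join(root, 'ImageSets', 'Main', 'trainval_nonempty.txt')

def has_usable_object(image_id, keep_difficult=False):
    # usable = at least one <object>, optionally ignoring "difficult" ones
    xml = ET.parse(osp.join(root, 'Annotations', image_id + '.xml')).getroot()
    for obj in xml.iter('object'):
        diff_tag  = obj.find('difficult')
        difficult = diff_tag is not None and int(diff_tag.text) == 1
        if keep_difficult or not difficult:
            return True
    return False

with open(split_in) as f:
    ids = [line.strip() for line in f if line.strip()]

kept = [i for i in ids if has_usable_object(i)]
with open(split_out, 'w') as f:
    f.write('\n'.join(kept) + '\n')

print('kept {} of {} image ids'.format(len(kept), len(ids)))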

JJUAN-ART commented 4 months ago

The loss has stayed at exactly 2.000 from the beginning to the end of training. What could be the reason?