jimmyyhwu / pose-interpreter-networks

Real-time robotic object pose estimation with deep learning

About ‘end to end eval’ error #15

Closed dingshenglan closed 5 years ago

dingshenglan commented 5 years ago

I added a `get_filtered_cat_ids()` function in `datasets.py`:

```python
def get_filtered_cat_ids(coco, img_ids):
    # keep only annotations whose category id is one of the target objects
    object_instances = coco.loadAnns(coco.getAnnIds(imgIds=img_ids))
    def category_filter():
        return lambda x: x['category_id'] in [1, 2, 4, 5, 6]
    return [instance for instance in filter(category_filter(), object_instances)]
```
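For reference, pycocotools can do the same category filtering in one call via the `catIds` argument of `getAnnIds`; a minimal equivalent sketch (the category ids 1, 2, 4, 5, 6 are copied from the function above):

```python
def get_filtered_cat_ids(coco, img_ids, cat_ids=(1, 2, 4, 5, 6)):
    # getAnnIds accepts both image ids and category ids, so the
    # explicit filter() over loaded annotations is not needed
    ann_ids = coco.getAnnIds(imgIds=img_ids, catIds=list(cat_ids))
    return coco.loadAnns(ann_ids)
```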

Then I changed `__init__()` in the `EvalDataset` class:

```python
def __init__(self, data_root, ann_file, camera_name, object_names, transform):
    self.data_root = data_root
    self.coco = COCO(os.path.join(self.data_root, 'annotations', ann_file))

    img_ids = get_filtered_img_ids(self.coco, camera_name)
    self.object_instances = get_filtered_cat_ids(self.coco, img_ids)

    self.object_names_map = {cat['id']: cat['name'] for cat in self.coco.dataset['categories']}
    #self.object_indices_map = {object_name: i for i, object_name in enumerate(object_names)}
    self.object_indices_map = {'blue_funnel': 6, 'funnel': 4, 'oil_bottle': 1, 'fluid_bottle': 2, 'engine': 5}
    self.object_ids_map = {cat['name']: cat['id'] for cat in self.coco.dataset['categories']}
    #self.object_ids_map = self.object_indices_map

    self.transform = transform
```
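One detail worth flagging: the commented-out `object_indices_map` produces contiguous 0-based indices from `enumerate(object_names)`, whereas the hard-coded replacement uses the 1-based COCO category ids. A small, hypothetical illustration of the difference (the order of `object_names` here is made up):

```python
object_names = ['oil_bottle', 'fluid_bottle', 'funnel', 'engine', 'blue_funnel']

# commented-out original: contiguous indices 0..len(object_names)-1
indices_map = {name: i for i, name in enumerate(object_names)}
# {'oil_bottle': 0, 'fluid_bottle': 1, 'funnel': 2, 'engine': 3, 'blue_funnel': 4}

# hard-coded replacement: COCO category ids, 1-based and non-contiguous
ids_map = {'oil_bottle': 1, 'fluid_bottle': 2, 'funnel': 4, 'engine': 5, 'blue_funnel': 6}
```

If anything downstream indexes a tensor sized to `len(object_names)` with these values, ids 5 and 6 fall outside the valid range 0..4.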

With these changes, the 'end to end eval' notebook can read the dataset:

```
loading annotations into memory...
Done (t=0.72s)
creating index...
index created!
using camera: kinect2
```

But I get this error:

```
RuntimeError                              Traceback (most recent call last)
<ipython-input-...> in <module>()
      3 with torch.no_grad():
      4     for input, target, object_index, object_id in tqdm(val_loader):
----> 5         position_error, orientation_error = forward_batch(model, input, target, object_index, object_id)
      6         position_errors.extend(position_error)
      7         orientation_errors.extend(orientation_error)

<ipython-input-...> in forward_batch(model, input, target, object_index, object_id)
      5 
      6     position, orientation = model(input, object_index, object_id)
----> 7     print(target)
      8     position_error = (target[:, :3] - position).pow(2).sum(dim=1).sqrt()
      9     orientation_error = 180.0 / np.pi * pose_utils.batch_rotation_angle(target[:, 3:], orientation)

~/.conda/envs/poseIN/lib/python3.6/site-packages/torch/tensor.py in __repr__(self)
     55         # characters to replace unicode characters with.
     56         if sys.version_info > (3,):
---> 57             return torch._tensor_str._str(self)
     58         else:
     59             if hasattr(sys.stdout, 'encoding'):

~/.conda/envs/poseIN/lib/python3.6/site-packages/torch/_tensor_str.py in _str(self)
    254         suffix += ', dtype=' + str(self.dtype)
    255 
--> 256     formatter = _Formatter(get_summarized_data(self) if summarize else self)
    257     tensor_str = _tensor_str(self, indent, formatter, summarize)
    258 

~/.conda/envs/poseIN/lib/python3.6/site-packages/torch/_tensor_str.py in __init__(self, tensor)
     80 
     81         else:
---> 82             copy = torch.empty(tensor.size(), dtype=torch.float64).copy_(tensor).view(tensor.nelement())
     83             copy_list = copy.tolist()
     84         try:

RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1535491974311/work/aten/src/THC/generic/THCTensorCopy.cpp:70
```
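Error 59 (device-side assert) is reported asynchronously, so the frame shown (`print(target)`) is usually not where the failure actually happened; an out-of-range index reaching a CUDA kernel is a common trigger. A minimal, hypothetical sketch of how to surface the real error (names and sizes below are illustrative only):

```python
import os
# Make CUDA kernel launches synchronous *before* torch initializes CUDA,
# so the traceback points at the operation that actually failed.
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

import torch

# Suspected failure mode: indexing a tensor with one slot per object name
# using a COCO category id (e.g. 6) instead of a 0-based index (0..4).
num_objects = 5
per_object = torch.zeros(num_objects, 3)
object_index = torch.tensor([6])

try:
    per_object[object_index]  # on CPU this fails with a readable message
except (IndexError, RuntimeError) as e:
    print(e)  # e.g. "index 6 is out of bounds for dimension 0 with size 5"

# On the GPU the same out-of-range index only shows up later as
# "cuda runtime error (59): device-side assert triggered".
```

Running one iteration of the eval loop on CPU gives the same kind of readable error without changing any environment variables.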
dingshenglan commented 5 years ago

Sorry, maybe my change is wrong.