graspnet / graspnet-baseline

Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
https://graspnet.net/

About collision_label/scene_0110/collision_labels.npz in Dataset #17

Closed. yanjh97 closed this issue 3 years ago.

yanjh97 commented 3 years ago

Hi, I encountered an error as follows:

import os
import numpy as np

# dataset root and the collision label file for scene 0110
data_root = '/media/wind/Share/DataSet/GraspNet1Billion'
collision_label = np.load(os.path.join(data_root, 'collision_label/scene_0110/collision_labels.npz'))
print(collision_label.files)
print(collision_label['arr_8'].shape)

Output:

['arr_0', 'arr_4', 'arr_5', 'arr_2', 'arr_3', 'arr_7', 'arr_1', 'arr_8', 'arr_6']

OSError                                   Traceback (most recent call last)
in ()
      3 collision_label = np.load(os.path.join(data_root, 'collision_label/scene_0110/collision_labels.npz'))
      4 print(collision_label.files)
----> 5 print(collision_label['arr_8'].shape)

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/site-packages/numpy/lib/npyio.py in __getitem__(self, key)
    253             return format.read_array(bytes,
    254                                      allow_pickle=self.allow_pickle,
--> 255                                      pickle_kwargs=self.pickle_kwargs)
    256         else:
    257             return self.zip.read(key)

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/site-packages/numpy/lib/format.py in read_array(fp, allow_pickle, pickle_kwargs)
    761         read_count = min(max_read_count, count - i)
    762         read_size = int(read_count * dtype.itemsize)
--> 763         data = _read_bytes(fp, read_size, "array data")
    764         array[i:i+read_count] = numpy.frombuffer(data, dtype=dtype,
    765                                                  count=read_count)

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/site-packages/numpy/lib/format.py in _read_bytes(fp, size, error_template)
    890     # done about that. note that regular files can't be non-blocking
    891     try:
--> 892         r = fp.read(size - len(data))
    893         data += r
    894         if len(r) == 0 or len(data) == size:

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/zipfile.py in read(self, n)
    897             self._offset = 0
    898         while n > 0 and not self._eof:
--> 899             data = self._read1(n)
    900             if n < len(data):
    901                 self._readbuffer = data

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/zipfile.py in _read1(self, n)
    967             data += self._read2(n - len(data))
    968         else:
--> 969             data = self._read2(n)
    970
    971         if self._compress_type == ZIP_STORED:

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/zipfile.py in _read2(self, n)
    997         n = min(n, self._compress_left)
    998
--> 999         data = self._fileobj.read(n)
   1000         self._compress_left -= len(data)
   1001         if not data:

/home/wind/anaconda3/envs/pytorch160/lib/python3.7/zipfile.py in read(self, n)
    740                 "Close the writing handle before trying to read.")
    741             self._file.seek(self._pos)
--> 742             data = self._file.read(n)
    743             self._pos = self._file.tell()
    744             return data

OSError: [Errno 5] Input/output error
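
For reference, [Errno 5] Input/output error is typically raised by the operating system when the underlying read itself fails, which can point at the disk or filesystem rather than at numpy's zip handling. A minimal sketch to see whether the raw file can be read end-to-end, reusing the data_root path from the snippet above (assumed already defined):

import os

# Read the raw .npz bytes in chunks to check whether the OS-level read fails,
# independent of numpy's zip/decompression layer.
npz_path = os.path.join(data_root, 'collision_label/scene_0110/collision_labels.npz')
with open(npz_path, 'rb') as f:
    total = 0
    while True:
        chunk = f.read(1 << 20)  # 1 MiB at a time
        if not chunk:
            break
        total += len(chunk)
print('read %d bytes without an OS error' % total)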
chenxi-wang commented 3 years ago

Hi, does this error occur with any other files as well? Maybe you can check whether that file downloaded completely.
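
One way to run that check, assuming the path from the report above, is to let the standard-library zipfile module verify the archive's CRCs (an .npz file is a plain zip): testzip() returns the name of the first corrupted member, or None if everything checks out.

import os
import zipfile

# Integrity check on the .npz archive (it is a standard zip file).
npz_path = os.path.join(data_root, 'collision_label/scene_0110/collision_labels.npz')
with zipfile.ZipFile(npz_path) as zf:
    bad_member = zf.testzip()  # name of the first corrupt member, or None
print('first corrupted member:', bad_member)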

yanjh97 commented 3 years ago

No, this is the only one, and 'arr_0' through 'arr_7' in this file load normally.

chenxi-wang commented 3 years ago

I tried it myself and did not see this error. Maybe something went wrong during the download, so you could check that and re-download the file if needed.
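
If it is a download problem, a quick way to find every affected file is to sweep the whole collision_label directory and try to fully load each archive. The sketch below assumes the standard GraspNet-1Billion layout of 190 scenes (scene_0000 to scene_0189) and the data_root from the first snippet; any scene it reports can then be re-downloaded.

import os
import numpy as np

# Try to decompress every array in every collision label archive and report
# failures, so corrupted downloads can be identified and re-fetched.
for scene_id in range(190):
    path = os.path.join(data_root, 'collision_label',
                        'scene_%04d' % scene_id, 'collision_labels.npz')
    try:
        with np.load(path) as labels:
            for key in labels.files:
                _ = labels[key].shape  # forces each array to be decompressed
    except Exception as e:  # OSError, zipfile.BadZipFile, etc.
        print('scene_%04d failed: %r' % (scene_id, e))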

yanjh97 commented 3 years ago

OK. Thanks for your reply.