Closed wukaishuns closed 4 years ago
Hi @wukaishuns , Please follow this link to prepare the required data for instance segmentation
Thanks, I'll try.
In adet/data/dataset_mapper.py, I read this related code snippet:

```python
if self.basis_loss_on and self.is_train:
    # load basis supervisions
    if self.ann_set == "coco":
        basis_sem_path = dataset_dict["file_name"].replace('train2017', 'thing_train2017').replace('image/train', 'thing_train')
    else:
        basis_sem_path = dataset_dict["file_name"].replace('coco', 'lvis').replace('train2017', 'thing_train').replace('jpg', 'npz')
    basis_sem_path = basis_sem_path.replace('jpg', 'npz')
    basis_sem_gt = np.load(basis_sem_path)["mask"]
    basis_sem_gt = transforms.apply_segmentation(basis_sem_gt)
    basis_sem_gt = torch.as_tensor(basis_sem_gt.astype("long"))
    dataset_dict["basis_sem"] = basis_sem_gt
```
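For clarity, here is a minimal standalone sketch of what that replace chain produces for a COCO-style image path (the file name is hypothetical):

```python
# Standalone sketch of the basis-path derivation above (hypothetical path).
file_name = "datasets/coco/train2017/000000000009.jpg"

# "coco" branch: swap the image directory for the precomputed
# semantic-mask directory, then swap the jpg extension for npz.
basis_sem_path = (
    file_name
    .replace("train2017", "thing_train2017")
    .replace("image/train", "thing_train")
)
basis_sem_path = basis_sem_path.replace("jpg", "npz")
print(basis_sem_path)  # datasets/coco/thing_train2017/000000000009.npz
```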
This flag 'basis_loss_on' is checked during training. To avoid having to generate the '.npz' compressed files, I switched it off with cfg.MODEL.BASIS_MODULE.LOSS_ON = False, and it works. However, I have no clue whether it will lead to other issues; the config I am using is "R_101_dcni3_5x.yaml".
Another question: my dataset contains both jpg and png pictures, but it seems BlendMask only handles jpg pictures. What is the elegant way to solve this problem, converting png to jpg?
@wiekern The auxiliary segmentation loss is not a must, but it improves performance. It is easy to generate and is a free trick that does not slow down inference.
Our images are read with Pillow, which of course supports png files. If you have problems with your images, check the image format and handle the exceptions for those special images. Here is a useful code snippet:
https://github.com/aim-uofa/AdelaiDet/blob/master/adet/data/dataset_mapper.py#L64
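As a concrete (hedged) illustration of handling mixed formats, a Pillow-based loader can simply force every image to 3-channel RGB; the function name below is my own, not AdelaiDet's:

```python
# Sketch of a format-agnostic image reader (assumes Pillow and NumPy).
# PNGs may be palette ("P") or RGBA; convert("RGB") normalizes all of
# them to the 3-channel layout the model expects.
import numpy as np
from PIL import Image

def read_image_rgb(path):
    """Open an image of any common format and return an HxWx3 uint8 array."""
    with Image.open(path) as img:
        return np.asarray(img.convert("RGB"))
```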
Awesome! My custom dataset is also in COCO format and my task is instance segmentation. If I follow the steps in the README, I need to execute "prepare_thing_sem_from_instance.py" to generate npz files with BASIS_MODULE.LOSS_ON=True. Under this condition, if you look back at the code snippet I attached above, only the "jpg" suffix is replaced by "npz", which excludes the "png" files in the root folder of source images. I am wondering if I could add a few lines of code, e.g.
```python
if self.basis_loss_on and self.is_train:
    # load basis supervisions
    if self.ann_set == "coco":
        basis_sem_path = dataset_dict["file_name"].replace('train2017', 'thing_train2017').replace('image/train', 'thing_train')
    else:
        basis_sem_path = dataset_dict["file_name"].replace('coco', 'lvis').replace('train2017', 'thing_train').replace('jpg', 'npz')
    basis_sem_path = basis_sem_path.replace('jpg', 'npz')
    # handle png suffix
    if basis_sem_path.endswith('.png'):
        basis_sem_path += '.npz'
    # end
    basis_sem_gt = np.load(basis_sem_path)["mask"]
    basis_sem_gt = transforms.apply_segmentation(basis_sem_gt)
    basis_sem_gt = torch.as_tensor(basis_sem_gt.astype("long"))
    dataset_dict["basis_sem"] = basis_sem_gt
```
to support the png suffix by appending ".npz" (prepare_thing_sem_from_instance.py generates files such as xxx.png.npz for images without the jpg suffix). In addition, I gave it a try and it runs without errors, but since I am not familiar with the source code, I am not sure whether it is correct.
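To check the tweak in isolation, here is what the path derivation yields for a hypothetical png file (note that `.replace('jpg', 'npz')` is a no-op for it):

```python
# Standalone check of the png-handling tweak (hypothetical path).
file_name = "datasets/coco/train2017/000000000009.png"

basis_sem_path = (
    file_name
    .replace("train2017", "thing_train2017")
    .replace("image/train", "thing_train")
    .replace("jpg", "npz")  # no-op: there is no "jpg" in a .png path
)
if basis_sem_path.endswith(".png"):
    # matches the xxx.png.npz naming emitted by
    # prepare_thing_sem_from_instance.py for non-jpg images
    basis_sem_path += ".npz"
print(basis_sem_path)  # datasets/coco/thing_train2017/000000000009.png.npz
```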
I see what you mean. This part is dirty because we want to hide it from the common dataset path configuration.
I have made a PR to fix this problem (temporarily). You can check it here: https://github.com/aim-uofa/AdelaiDet/pull/67
@stan-haochen @wiekern @wukaishuns Hi guys, does switching the basis loss off affect your evaluation later on? I trained without the seg_head by setting cfg.MODEL.BASIS_MODULE.LOSS_ON = False, but the weights produced by that training do not give me any segmentation on my custom dataset. Can anyone help me debug this, please?
When I use BlendMask to train on my COCO-format dataset, the error says an npz file is required; zengimg is the picture folder. Why?
```
File "train_net.py", line 104, in train_loop
    self.run_step()
  File "d:\cnn\detect2\detectron2\engine\train_loop.py", line 209, in run_step
    data = next(self._data_loader_iter)
  File "d:\cnn\detect2\detectron2\data\common.py", line 140, in __iter__
    for d in self.dataset:
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\dataloader.py", line 881, in _process_data
    data.reraise()
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\_utils.py", line 394, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\_utils\worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "d:\cnn\detect2\detectron2\data\common.py", line 41, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "D:\CNNW\AdelaiDet\adet\data\dataset_mapper.py", line 137, in __call__
    basis_sem_gt = np.load(basis_sem_path)["mask"]
  File "C:\Users\wks\.conda\envs\AdelaiDet\lib\site-packages\numpy\lib\npyio.py", line 428, in load
    fid = open(os_fspath(file), "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'D:\labelme\zengimg\6122.npz'
```
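The traceback shows the mapper deriving a .npz path from an image path and failing to find it, i.e. the basis masks were never generated for that folder. A hedged pre-flight check can surface the missing files before training starts; the naming rule below is an assumption based on the discussion above, so adjust it to your layout:

```python
# Pre-flight sanity check: confirm every training image has the basis-mask
# .npz file the dataset mapper will try to load. The naming rule below
# (xxx.jpg -> xxx.npz, anything else -> xxx.<ext>.npz) is an assumption;
# adjust it to match your dataset layout.
import os

def missing_basis_masks(image_paths):
    """Return the image paths whose derived .npz companion does not exist."""
    missing = []
    for path in image_paths:
        base, ext = os.path.splitext(path)
        npz_path = base + ".npz" if ext == ".jpg" else path + ".npz"
        if not os.path.exists(npz_path):
            missing.append(path)
    return missing
```

Running something like this over the image folder before launching train_net.py turns the mid-training FileNotFoundError into an explicit list of unprepared images.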