Open · f954706414 opened this issue 3 years ago
Hi, I have received the Cityscapes dataset. However, when I click the link for Foggy Cityscapes, I don't know which one I really need. On the website I see some Foggy Cityscapes datasets for semantic segmentation, not for object detection. Can you give me the link for the foggy dataset specifically? Thanks!
I downloaded it from another website; I can send it to you if you need it. Leave your contact information.
email: 810375129@qq.com Thank you!
@f954706414 Please try using torchvision==0.2.1 to solve the problem.
I also have this bug. How should I deal with it?
Please try using torchvision==0.2.1 to solve the problem.
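If the error persists after downgrading, it may be worth confirming that the downgraded torchvision is really the one imported by the training environment. A minimal check (just a sketch; the exact torch version to expect depends on the repo's install instructions, which I'm not quoting here):

```python
# Sanity check: confirm which torch/torchvision the training environment imports.
# The suggested fix assumes torchvision 0.2.1 is what actually ends up on sys.path.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)  # expected to print 0.2.1 after the downgrade
```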
Did you meet this error? In the last step of the installation (python setup.py build develop), I encountered the problem: Couldn't find a setup script in /tmp/easy_install-2xsxsqeq/scikit_image-0.20.0.tar.gz. Thank you @f954706414 @SkeletonOne @mochaojie
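Regarding that easy_install error: this is only a guess, but "Couldn't find a setup script" usually means setuptools fell back to easy_install, downloaded the scikit-image 0.20.0 source tarball, and failed because that release no longer ships a setup.py. Installing a pre-built scikit-image into the environment before running python setup.py build develop should keep the fallback from triggering. A quick self-contained check:

```python
# Guesswork, not from the repo docs: if scikit-image is already importable,
# `python setup.py build develop` should not fall back to easy_install and
# try to build the scikit_image-0.20.0 source tarball (which has no setup.py).
try:
    import skimage
    print("scikit-image", skimage.__version__, "already installed; easy_install should not be needed")
except ImportError:
    print("scikit-image is missing; install a pre-built wheel first, then re-run the build")
```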
Traceback (most recent call last):
  File "C:/Users/84957/Desktop/qin/every/EveryPixelMatters-master/tools/train_net_da.py", line 480, in <module>
    main()
  File "C:/Users/84957/Desktop/qin/every/EveryPixelMatters-master/tools/train_net_da.py", line 469, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "C:/Users/84957/Desktop/qin/every/EveryPixelMatters-master/tools/train_net_da.py", line 361, in train
    arguments,
  File "C:\Users\84957\Desktop\qin\every\EveryPixelMatters-master\fcos_core\engine\trainer.py", line 128, in do_train
    in enumerate(zip(data_loader_source, data_loader_target), start_iter):
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\dataloader.py", line 363, in __next__
    data = self._next_data()
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\dataloader.py", line 1014, in _process_data
    data.reraise()
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\_utils.py", line 395, in reraise
    raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\_utils\worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\84957\Desktop\qin\every\EveryPixelMatters-master\fcos_core\data\datasets\coco.py", line 67, in __getitem__
    img, anno = super(COCODataset, self).__getitem__(idx)
  File "D:\Anaconda\envs\TF2.1\lib\site-packages\torchvision\datasets\coco.py", line 118, in __getitem__
    img, target = self.transforms(img, target)
  File "C:\Users\84957\Desktop\qin\every\EveryPixelMatters-master\fcos_core\data\transforms\transforms.py", line 15, in __call__
    image, target = t(image, target)
  File "C:\Users\84957\Desktop\qin\every\EveryPixelMatters-master\fcos_core\data\transforms\transforms.py", line 60, in __call__
    target = target.resize(image.size)
AttributeError: 'list' object has no attribute 'resize'
I trained with the dataset format you provided, but this error occurred. Does the annotation file need to be converted? Looking forward to your reply.
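For what it's worth, the traceback itself points at the likely cause rather than the annotation files: the newer torchvision CocoDetection.__getitem__ applies self.transforms(img, target) while target is still the raw list of COCO annotation dicts, i.e. before fcos_core has wrapped it in its BoxList (the only object in that pipeline with a .resize method). A minimal, self-contained sketch of the failure mode, using made-up stand-in classes rather than the repository's real ones:

```python
# Illustration only, with hypothetical stand-ins for fcos_core's classes.
# Newer torchvision calls self.transforms(img, target) inside
# CocoDetection.__getitem__ while `target` is still the raw list of COCO
# annotation dicts, so a BoxList-style .resize() call blows up.

class FakeImage:
    """Stands in for the PIL image; only .size is needed here."""
    size = (1024, 512)


class FakeResizeTransform:
    """Stands in for fcos_core.data.transforms.transforms.Resize."""
    def __call__(self, image, target):
        # Works when `target` is a BoxList, fails when it is a plain list.
        target = target.resize(image.size)
        return image, target


raw_coco_target = [{"bbox": [0, 0, 10, 10], "category_id": 1}]  # what torchvision hands over

try:
    FakeResizeTransform()(FakeImage(), raw_coco_target)
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'resize'
```

If I remember the old API correctly, torchvision 0.2.1 avoids this because its CocoDetection.__getitem__ only applies transform and target_transform separately and never calls self.transforms(img, target), so the fcos_core pipeline runs after the BoxList has been built in fcos_core/data/datasets/coco.py. In that case the annotation files should not need converting; matching the torchvision version (or keeping the fcos_core transforms off the parent class's transforms attribute) is the thing to check first.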