matterport / Mask_RCNN

Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

No instances_valminusminival2017.json file found #185

Closed priyanka-chaudhary closed 6 years ago

priyanka-chaudhary commented 6 years ago

Hi,

I am trying to train for coco dataset. The command I am using is: python3 coco.py train --dataset=/home/mask-rcnn/Mask_RCNN-master/dataset-coco --model=coco --download=false --year=2017

The dataset was already downloaded earlier. I am getting "FileNotFoundError: [Errno 2] No such file or directory: '/home/mask-rcnn/Mask_RCNN-master/dataset-coco/annotations/instances_valminusminival2017.json'"

Can anyone please help? TIA. Please see the logs below.

Using TensorFlow backend.
Command: train
Model: coco
Dataset: /home/mask-rcnn/Mask_RCNN-master/dataset-coco
Year: 2017
Logs: /home/mask-rcnn/Mask_RCNN-master/logs
Auto Download: True

Configurations:
BACKBONE_SHAPES [[256 256] [128 128] [ 64 64] [ 32 32] [ 16 16]]
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 2
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.7
DETECTION_NMS_THRESHOLD 0.3
GPU_COUNT 1
IMAGES_PER_GPU 2
IMAGE_MAX_DIM 1024
IMAGE_MIN_DIM 800
IMAGE_PADDING True
IMAGE_SHAPE [1024 1024 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME coco
NUM_CLASSES 81
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 1000
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001

Loading weights /home/mask-rcnn/Mask_RCNN-master/mask_rcnn_coco.h5
2018-01-11 17:09:33.070611: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-01-11 17:09:33.262017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate(GHz): 1.076 pciBusID: 0000:01:00.0 totalMemory: 11.91GiB freeMemory: 11.47GiB
2018-01-11 17:09:33.262044: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)
Will use images in /home/mask-rcnn/Mask_RCNN-master/dataset-coco/train2017
Will use annotations in /home/mask-rcnn/Mask_RCNN-master/dataset-coco/annotations/instances_train2017.json
loading annotations into memory...
Done (t=12.45s)
creating index...
index created!
Will use images in /home/mask-rcnn/Mask_RCNN-master/dataset-coco/val2017
Will use annotations in /home/mask-rcnn/Mask_RCNN-master/dataset-coco/annotations/instances_valminusminival2014.json
loading annotations into memory...
Traceback (most recent call last):
  File "coco.py", line 477, in <module>
    dataset_train.load_coco(args.dataset, "valminusminival", year=args.year, auto_download=args.download)
  File "coco.py", line 108, in load_coco
    coco = COCO("{}/annotations/instances_{}{}.json".format(dataset_dir, subset, year))
  File "/usr/local/lib/python3.5/dist-packages/pycocotools/coco.py", line 79, in __init__
    dataset = json.load(open(annotation_file, 'r'))
FileNotFoundError: [Errno 2] No such file or directory: '/home/mask-rcnn/Mask_RCNN-master/dataset-coco/annotations/instances_valminusminival2017.json'
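For context, the traceback shows how coco.py composes the annotation path, which is why passing --year=2017 makes it look for a valminusminival2017 file that only ever existed (unofficially) for 2014. A minimal illustration of that path logic, not the exact source:

```python
# Illustration of the path construction visible in the traceback above.
dataset_dir = "/home/mask-rcnn/Mask_RCNN-master/dataset-coco"
subset = "valminusminival"   # the subset coco.py loads on line 477
year = "2017"                # taken from --year

print("{}/annotations/instances_{}{}.json".format(dataset_dir, subset, year))
# -> .../annotations/instances_valminusminival2017.json, which was never published
```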

philferriere commented 6 years ago

@priyanka-chaudhary, weren't the 2014 versions of instances_valminusminival2014.json and instances_minival2014.json built by Ross Girshick and his team? That's the impression I get from reading this README.md. I don't believe there has ever been an official release of these files.

May I suggest that you build your own versions for the 2017 dataset (see the sketch at the end of this comment)? If you go by the official COCO page, there is no additional image data, only new stuff annotations and new data splits:

2017 Update: The main change in 2017 is that instead of an 80K/40K train/val split, based on community feedback the split is now 115K/5K for train/val. The same exact images are used, and no new annotations for detection/keypoints are provided. However, new in 2017 are stuff annotations on 40K train images (subset of the full 115K train images from 2017) and 5K val images. Also, for testing, in 2017 the test set only has two splits (dev / challenge), instead of the four splits (dev / standard / reserve / challenge) used in previous years. Finally, new in 2017 we are releasing 120K unlabeled images from COCO that follow the same class distribution as the labeled images; this may be useful for semi-supervised learning on COCO.

Note: Annotations last updated 09/05/2017 (stuff annotations added). If you find any issues with the data please let us know!

Does this mean that instances_valminusminival2014.json and instances_minival2014.json files can be used with the 2017 dataset?
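If you do decide to build your own 2017 equivalents, a minimal sketch along these lines might work. It assumes instances_val2017.json is present and simply carves the val2017 images into two parts; the 2500-image cut-off and the output file names are arbitrary choices for illustration, not an official split:

```python
import json

# Hedged sketch: split instances_val2017.json into a "minival" part and the remainder.
with open("annotations/instances_val2017.json") as f:
    val = json.load(f)

minival_ids = {img["id"] for img in val["images"][:2500]}
rest_ids = {img["id"] for img in val["images"][2500:]}

def restrict(data, keep_ids):
    """Return a COCO-style dict limited to the given image ids."""
    return {
        "info": data.get("info", {}),
        "licenses": data.get("licenses", []),
        "categories": data["categories"],
        "images": [i for i in data["images"] if i["id"] in keep_ids],
        "annotations": [a for a in data["annotations"] if a["image_id"] in keep_ids],
    }

with open("annotations/instances_minival2017.json", "w") as f:
    json.dump(restrict(val, minival_ids), f)
with open("annotations/instances_valminusminival2017.json", "w") as f:
    json.dump(restrict(val, rest_ids), f)
```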

priyanka-chaudhary commented 6 years ago

@philferriere, are there instructions somewhere on how to build these two files (instances_valminusminival2017.json and instances_minival2014.json), then?

Also, I assume people here have just used the 2014 versions of these files?

philferriere commented 6 years ago

@priyanka-chaudhary, since the data is the same, I'm not sure building a different version (instead of using the 2014 files) is actually meaningful.

LifeBeyondExpectations commented 6 years ago

For me, I changed 'instance_val2017.json' to 'instance_minival2017.json' and now it works.

priyanka-chaudhary commented 6 years ago

@philferriere: The data overall (train + validation combined) is the same, but the 2014 and 2017 validation sets are different, and so are the training sets. That's why I am asking.

priyanka-chaudhary commented 6 years ago

@LifeBeyondExpectations: Thank you. Did you change anything else too? I am now getting FileNotFoundError: [Errno 2] No such file or directory: '/home/pchaudha/mask-rcnn/Mask_RCNN-master/dataset-coco/val2017/COCO_val2014_000000130437.jpg'

So it is basically trying to find 2014 COCO dataset files in the 2017 folder. Is there some list of image names that also needs to be changed?

Also, what did you do about the instances_valminusminival2017.json file? Thanks again.
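Regarding the COCO_val2014_*.jpg lookups: judging from the error above, the 2014 minival annotations carry 2014-style file_name entries (COCO_val2014_000000130437.jpg), while the val2017 folder uses bare zero-padded ids (000000130437.jpg). One possible workaround, purely as a hedged sketch with placeholder paths, is to rewrite the file_name fields before training:

```python
import json
import re

# Hedged sketch: rewrite 2014-style file names in a minival annotation file to the
# 2017 naming scheme (COCO_val2014_000000130437.jpg -> 000000130437.jpg).
src = "annotations/instances_minival2014.json"
dst = "annotations/instances_minival2017.json"

with open(src) as f:
    data = json.load(f)

for img in data["images"]:
    # Strip the dataset/year prefix and keep the zero-padded id plus extension.
    img["file_name"] = re.sub(r"^COCO_[a-z]+2014_", "", img["file_name"])

with open(dst, "w") as f:
    json.dump(data, f)
```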

aemilcar commented 6 years ago

@priyanka-chaudhary did you find an answer to your problem? I keep running into "No file found" issues, one after another. Now I'm receiving FileNotFoundError: [Errno 2] No such file or directory: '/ml/coco/data/val2017/000000372819.jpg'. Any ideas on this?

priyanka-chaudhary commented 6 years ago

@aemilcar: I changed 'instance_val2017.json' to 'instance_minival2017.json' and commented out line 477 in coco.py: dataset_train.load_coco(args.dataset, "valminusminival", year=args.year, auto_download=args.download)

After that it works for me.
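For anyone following along, the rename/copy half of that workaround can be scripted. A minimal sketch, assuming the standard annotations/ layout and the instances_ naming seen in the traceback above (commenting out line 477 still has to be done by hand):

```python
import shutil

# Hedged sketch: give the 2017 val annotations the file name coco.py asks for when
# loading the "minival" subset. Adjust the base path to your own dataset folder.
base = "/home/mask-rcnn/Mask_RCNN-master/dataset-coco/annotations"
shutil.copyfile(base + "/instances_val2017.json", base + "/instances_minival2017.json")
```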

CMCDragonkai commented 6 years ago

I had to change it to instances_minival2014.json not instance_minival2017.json.

isalirezag commented 6 years ago

@CMCDragonkai but how did you get the instance_minival2017.json?

CMCDragonkai commented 6 years ago

I think I renamed the files, I didn't download anything new.

ayush1427 commented 4 years ago

I am trying to train balloon.py on the balloon dataset. Both the train and test datasets contain a via_region_data.json file. However, the error I am getting says the "via_region_data.json" file is not available. I am trying to run it in Google Colab with the command below: !python balloon.py train --dataset=Mask_RCNN-master/data/balloon --weights=coco

The error that I am getting is as below:

Using TensorFlow backend.
Weights: coco
Dataset: Mask_RCNN-master/dataset
Logs: /content/drive/My Drive/Mask-rcnn1/logs

Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 2
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE None
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.9
DETECTION_NMS_THRESHOLD 0.3
FPN_CLASSIF_FC_LAYERS_SIZE 1024
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 2
IMAGE_CHANNEL_COUNT 3
IMAGE_MAX_DIM 1024
IMAGE_META_SIZE 14
IMAGE_MIN_DIM 800
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [1024 1024 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME balloon
NUM_CLASSES 2
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
PRE_NMS_LIMIT 6000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 100
TOP_DOWN_PYRAMID_SIZE 256
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2139: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2239: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/array_ops.py:1475: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where WARNING:tensorflow:From /content/drive/My Drive/Mask-rcnn1/Mask_RCNN-master/balloon/mrcnn/model.py:553: The name tf.random_shuffle is deprecated. Please use tf.random.shuffle instead.

WARNING:tensorflow:From /content/drive/My Drive/Mask-rcnn1/Mask_RCNN-master/balloon/mrcnn/utils.py:202: The name tf.log is deprecated. Please use tf.math.log instead.

WARNING:tensorflow:From /content/drive/My Drive/Mask-rcnn1/Mask_RCNN-master/balloon/mrcnn/model.py:600: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version. Instructions for updating: box_ind is deprecated, use box_indices instead Downloading pretrained model to /content/drive/My Drive/Mask-rcnn1/mask_rcnn_coco.h5 ... ... done downloading pretrained model! Loading weights /content/drive/My Drive/Mask-rcnn1/mask_rcnn_coco.h5 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-01-10 05:15:33.850593: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300000000 Hz 2020-01-10 05:15:33.852332: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1a5b0bc0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-01-10 05:15:33.852378: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-01-10 05:15:33.880022: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1 2020-01-10 05:15:33.981714: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected 2020-01-10 05:15:33.981792: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (b45da54bc5e2): /proc/driver/nvidia/version does not exist WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.

Traceback (most recent call last):
  File "balloon.py", line 364, in <module>
    train(model)
  File "balloon.py", line 183, in train
    dataset_train.load_balloon(args.dataset, "train")
  File "balloon.py", line 112, in load_balloon
    annotations = json.load(open(os.path.join(dataset_dir, "via_region_data.json")))
FileNotFoundError: [Errno 2] No such file or directory: 'Mask_RCNN-master/dataset/train/via_region_data.json'

I tried every possibility of changing the path, but it still shows the same error that the "via_region_data.json" file is not found. Please help!
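From the traceback, balloon.py looks for via_region_data.json inside a train/ (and val/) subfolder of whatever --dataset points to, so a quick sanity check of the layout may help. A sketch; adjust dataset_dir to the path you actually pass:

```python
import os

# Hedged sketch: verify the directory layout balloon.py expects, i.e.
#   <dataset_dir>/train/via_region_data.json
#   <dataset_dir>/val/via_region_data.json
dataset_dir = "Mask_RCNN-master/data/balloon"  # whatever you pass to --dataset

for subset in ("train", "val"):
    path = os.path.join(dataset_dir, subset, "via_region_data.json")
    print(path, "->", "found" if os.path.exists(path) else "MISSING")
```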

eshwarkamal commented 4 years ago


I am also stuck in this process...

ERROR:root:Error processing image {'id': 177357, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000177357.jpg', 'width': 640, 'height': 396, 'annotations': [{'segmentation': [[116.74, 249.9, 104.07, 248.09, 101.81, 248.32, 97.29, 250.58, 90.06, 250.58, 88.93, 250.58, 85.76, 250.58, 83.27, 251.26, 80.34, 252.16, 75.36, 252.84, 74.23, 252.61, 74.91, 250.58, 78.07, 249.67, 85.76, 247.64, 84.63, 246.06, 77.85, 246.96, 73.33, 247.87, 71.52, 247.87, 71.07, 246.28, 75.36, 244.7, 80.56, 244.25, 83.73, 243.12, 81.92, 242.21, 77.17, 241.76, 72.42, 242.67, 71.07, 241.53, 71.07, 240.18, 73.55, 239.5, 76.04, 239.05, 84.18, 238.82, 76.94, 238.14, 76.04, 236.11, 78.3, 235.2, 83.73, 234.3, 85.31, 234.98, 92.77, 235.66, 87.8, 230.46, 86.44, 227.97, 87.12, 226.39, 88.47, 225.93, 89.83, 228.42, 94.13, 230.91, 97.97, 232.72, 102.72, 235.43, 107.92, 236.33, 123.97, 235.43, 142.51, 231.36, 155.62, 229.55, 163.99, 225.71, 173.71, 221.19, 180.04, 214.86, 189.54, 206.04, 194.74, 201.97, 199.26, 201.07, 200.39, 199.48, 197.9, 195.41, 194.51, 192.47, 192.02, 189.99, 189.76, 187.95, 185.24, 181.81, 183.53, 186.63, 181.51, 185.23, 181.2, 183.99, 178.87, 182.12, 179.49, 177.15, 179.33, 174.04, 177.62, 171.86, 176.53, 168.75, 177.31, 166.42, 177.78, 162.69, 180.89, 156.62, 186.17, 152.27, 190.37, 149.94, 196.59, 148.07, 200.95, 148.38, 206.08, 150.56, 209.65, 152.27, 215.25, 155.22, 217.43, 157.25, 219.76, 159.27, 221.32, 161.13, 223.96, 165.49, 224.74, 167.51, 232.05, 160.67, 237.95, 154.76, 245.11, 148.38, 249.15, 143.87, 253.35, 139.99, 258.95, 138.9, 261.9, 141.54, 262.99, 144.81, 263.3, 152.27, 258.95, 157.25, 259.88, 163.47, 259.41, 167.2, 256.92, 172.79, 268.57, 185.38, 280.74, 198.15, 292.12, 208.33, 301.3, 212.92, 316.26, 212.92, 330.63, 205.14, 349.19, 195.56, 368.35, 188.77, 373.94, 185.98, 382.32, 184.38, 389.11, 185.98, 309.08, 223.1, 273.96, 243.66, 257.79, 253.63, 246.81, 247.05, 236.04, 241.26, 227.66, 235.07, 219.27, 229.69, 208.5, 227.89, 195.53, 227.69, 185.75, 232.48, 180.36, 236.27, 162.4, 244.25, 156.41, 246.45, 154.02, 248.45, 150.62, 249.64, 139.85, 248.84]], 'num_keypoints': 15, 'area': 10599.2841, 'iscrowd': 0, 'keypoints': [200, 189, 2, 204, 181, 2, 193, 186, 2, 217, 172, 2, 0, 0, 0, 233, 173, 2, 200, 214, 2, 256, 146, 2, 156, 237, 2, 254, 159, 2, 101, 243, 2, 310, 235, 1, 284, 253, 1, 373, 195, 1, 340, 245, 1, 0, 0, 0, 371, 250, 1], 'image_id': 177357, 'bbox': [71.07, 138.9, 318.04, 114.73], 'category_id': 1, 'id': 449134}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 48, in imread img = call_plugin('imread', fname, plugin=plugin, plugin_args) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin return func(*args, *kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_plugins\imageio_plugin.py", line 10, in 
imread return np.asarray(imageio_imread(args, kwargs)) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread reader = read(uri, format, "i", kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader request = Request(uri, "r" + mode, kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in init self._parse_uri(uri) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri raise FileNotFoundError("No such file: '%s'" % fn) FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000177357.jpg' ERROR:root:Error processing image {'id': 451144, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000451144.jpg', 'width': 640, 'height': 480, 'annotations': [{'segmentation': [[403.98, 196.45, 400.68, 198.65, 399.22, 210.72, 411.29, 211.45, 395.56, 219.5, 399.59, 224.25, 393.37, 227.18, 388.25, 256.45, 387.88, 265.59, 382.03, 281.32, 386.05, 287.18, 390.07, 284.61, 383.86, 298.88, 379.1, 319, 387.88, 321.2, 390.07, 340.95, 391.17, 365.46, 385.32, 365.83, 384.95, 369.12, 379.47, 370.95, 384.22, 379, 409.83, 378.63, 399.95, 327.05, 407.63, 317.54, 429.58, 342.05, 433.24, 362.54, 419.71, 371.32, 421.9, 377.9, 439.1, 372.78, 441.29, 376.8, 451.9, 375.34, 448.97, 360.34, 443.49, 352.29, 441.29, 337.29, 438.73, 334, 433.61, 321.93, 429.95, 309.13, 428.12, 298.15, 429.95, 292.3, 429.95, 289, 435.8, 289, 435.8, 285.35, 442.75, 280.96, 444.58, 253.52, 436.9, 220.6, 426.29, 216.57, 420.07, 208.89, 418.24, 194.99, 410.93, 193.16, 403.61, 195.72]], 'num_keypoints': 13, 'area': 7565.2024, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 404, 209, 2, 0, 0, 0, 398, 229, 2, 436, 225, 2, 395, 258, 2, 442, 262, 2, 385, 280, 2, 430, 286, 2, 412, 283, 2, 427, 282, 2, 391, 320, 2, 424, 322, 2, 395, 366, 2, 437, 365, 2], 'image_id': 451144, 'bbox': [379.1, 193.16, 72.8, 185.84], 'category_id': 1, 'id': 449097}, {'segmentation': [[519.98, 343.58, 516.92, 309.9, 532.74, 275.72, 524.06, 256.84, 524.57, 242.55, 525.59, 238.47, 523.55, 222.14, 526.62, 209.39, 528.15, 206.33, 537.33, 204.29, 546.51, 204.8, 554.68, 215, 561.82, 225.21, 578.66, 241.53, 581.72, 277.25, 575.09, 290.52, 578.15, 319.09, 588.86, 357.36, 560.8, 362.46, 522.02, 361.44]], 'num_keypoints': 12, 'area': 8204.7085, 'iscrowd': 0, 'keypoints': [0, 0, 0, 534, 224, 2, 0, 0, 0, 539, 223, 2, 547, 219, 2, 539, 240, 2, 561, 233, 2, 533, 266, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 551, 284, 2, 561, 285, 2, 535, 317, 2, 559, 318, 2, 530, 346, 2, 572, 341, 2], 'image_id': 451144, 'bbox': [516.92, 204.29, 71.94, 158.17], 'category_id': 1, 'id': 449173}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File 
"c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 48, in imread img = call_plugin('imread', fname, plugin=plugin, plugin_args) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin return func(*args, *kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_plugins\imageio_plugin.py", line 10, in imread return np.asarray(imageio_imread(args, kwargs)) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread reader = read(uri, format, "i", kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader request = Request(uri, "r" + mode, kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in init self._parse_uri(uri) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri raise FileNotFoundError("No such file: '%s'" % fn) FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000451144.jpg' ERROR:root:Error processing image {'id': 296649, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000296649.jpg', 'width': 640, 'height': 427, 'annotations': [{'segmentation': [[323.37, 361.81, 323.37, 331.11, 335.84, 311.92, 334.88, 297.52, 339.68, 290.81, 355.03, 297.52, 358.87, 307.12, 355.99, 317.68, 348.32, 318.63, 355.99, 327.27, 372.31, 341.66, 387.66, 350.3, 387.66, 354.14, 370.39, 353.18, 357.91, 346.46, 346.4, 361.81, 373.27, 382.92, 359.83, 418.43, 342.56, 406.91, 353.02, 398.02, 322.57, 378.79, 322.57, 360.76]], 'num_keypoints': 10, 'area': 3672.6895, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 344, 324, 2, 331, 323, 2, 364, 337, 2, 346, 347, 2, 386, 350, 2, 370, 362, 1, 339, 367, 2, 329, 370, 2, 0, 0, 0, 361, 382, 2, 0, 0, 0, 347, 410, 1], 'image_id': 296649, 'bbox': [322.57, 290.81, 65.09, 127.62], 'category_id': 1, 'id': 1240673}, {'segmentation': [[324.59, 421.76, 301.39, 419.03, 295.48, 416.3, 303.67, 404.93, 308.67, 384.91, 304.12, 379.45, 280.01, 371.26, 274.1, 361.71, 274.55, 358.07, 273.64, 355.8, 274.55, 328.96, 281.37, 319.4, 288.65, 312.58, 291.38, 309.85, 288.65, 303.03, 288.65, 297.11, 292.29, 293.02, 302.76, 292.11, 307.3, 298.93, 308.67, 308.49, 306.85, 308.94, 304.12, 313.49, 300.48, 315.31, 302.76, 331.69, 305.94, 334.87, 320.95, 330.32, 322.77, 332.6, 319.13, 340.33, 311.85, 341.24, 305.03, 355.34, 303.67, 358.98, 315.04, 362.62, 320.04, 368.54, 323.23, 384, 311.4, 412.21, 313.67, 415.85, 321.41, 415.85, 324.14, 419.03]], 'num_keypoints': 10, 'area': 3092.15735, 'iscrowd': 0, 'keypoints': [307, 308, 2, 0, 0, 0, 305, 305, 2, 0, 0, 0, 296, 306, 2, 299, 320, 2, 285, 326, 2, 0, 0, 0, 294, 355, 2, 0, 0, 0, 311, 340, 2, 0, 0, 0, 284, 368, 2, 0, 0, 0, 317, 372, 2, 0, 0, 0, 312, 408, 2], 'image_id': 296649, 'bbox': [273.64, 292.11, 50.95, 129.65], 'category_id': 1, 'id': 1246913}, {'segmentation': [[1.92, 318.81, 8.66, 299.57, 18.27, 294.76, 18.27, 287.07, 21.16, 272.65, 26.93, 266.88, 44.24, 266.88, 50.97, 277.45, 59.63, 288.03, 53.86, 301.5, 50.97, 303.42, 75.01, 322.65, 83.67, 324.58, 92.32, 329.39, 116.37, 336.12, 115.41, 341.89, 108.67, 347.66, 101.94, 347.66, 83.67, 340.93, 67.32, 333.23, 60.59, 336.12, 64.43, 339, 79.82, 342.85, 79.82, 350.54, 79.82, 
354.39, 73.09, 357.28, 67.32, 357.28, 63.47, 364.01, 66.36, 373.63, 66.36, 389.01, 67.32, 411.13, 71.17, 419.79, 37.51, 422.67, 38.47, 405.36, 39.43, 390.94, 39.43, 390.94, 20.2, 388.05, 4.81, 380.36]], 'num_keypoints': 14, 'area': 8318.1434, 'iscrowd': 0, 'keypoints': [46, 291, 2, 48, 287, 2, 43, 288, 2, 0, 0, 0, 0, 0, 0, 44, 307, 2, 15, 306, 2, 62, 322, 2, 28, 336, 2, 89, 339, 2, 63, 347, 2, 44, 354, 2, 15, 355, 2, 73, 362, 1, 60, 376, 2, 0, 0, 0, 52, 421, 2], 'image_id': 296649, 'bbox': [1.92, 266.88, 114.45, 155.79], 'category_id': 1, 'id': 1252738}, {'segmentation': [[489.37, 400.13, 452.91, 397.25, 434.68, 381.9, 440.43, 327.21, 424.12, 318.57, 431.8, 304.18, 452.91, 298.42, 462.5, 275.39, 487.45, 270.59, 495.13, 290.74, 493.21, 306.1, 487.45, 309.93, 499.93, 342.56, 531.59, 386.7, 499.93, 397.25]], 'num_keypoints': 9, 'area': 7747.50895, 'iscrowd': 0, 'keypoints': [492, 299, 2, 0, 0, 0, 489, 295, 2, 0, 0, 0, 478, 297, 2, 458, 315, 2, 477, 325, 2, 0, 0, 0, 491, 362, 2, 486, 328, 2, 0, 0, 0, 448, 391, 2, 464, 391, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [424.12, 270.59, 107.47, 129.54], 'category_id': 1, 'id': 1285746}, {'segmentation': [[281.29, 316.1, 270.36, 312.41, 267.43, 300.4, 263.43, 300.4, 261.12, 299.94, 259.27, 296.71, 259.27, 291.47, 262.35, 290.55, 265.12, 295.47, 269.13, 291.78, 272.36, 290.09, 273.28, 289.01, 273.74, 281.77, 280.67, 281, 282.21, 283.93, 281.9, 289.47, 283.9, 291.93, 285.29, 295.47]], 'num_keypoints': 0, 'area': 494.0774, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [259.27, 281, 26.02, 35.1], 'category_id': 1, 'id': 1290268}, {'segmentation': [[283.93, 284.03, 285.75, 277.52, 291.49, 276.47, 295.27, 281.04, 294.74, 289.25, 296.57, 291.86, 290.57, 293.29, 288.23, 301.37, 291.35, 310.49, 283.14, 317.79, 281.06, 312.84, 284.97, 300.2, 282.88, 290.29, 284.06, 284.82]], 'num_keypoints': 0, 'area': 312.6299, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [281.06, 276.47, 15.51, 41.32], 'category_id': 1, 'id': 1300049}, {'segmentation': [[121.26, 295.85, 123.54, 280.66, 120.69, 277.05, 120.12, 275.72, 120.31, 271.35, 118.03, 267.55, 115.37, 267.74, 112.9, 273.25, 110.43, 275.15, 104.73, 278.38, 107.96, 285.22, 108.53, 293.57, 111.76, 296.61]], 'num_keypoints': 0, 'area': 350.17285, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [104.73, 267.55, 18.81, 29.06], 'category_id': 1, 'id': 1678058}, {'segmentation': [[123.9, 292.83, 120.86, 282.62, 122.38, 279.15, 125.85, 275.89, 127.8, 271.34, 132.36, 271.12, 133.66, 275.24, 136.05, 278.06, 136.05, 283.06, 137.14, 293.69, 134.32, 296.52, 130.19, 295, 121.94, 295.43]], 'num_keypoints': 0, 'area': 296.44395, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [120.86, 271.12, 16.28, 25.4], 'category_id': 1, 'id': 1758230}, {'segmentation': [[269.94, 284.62, 265.55, 281.32, 261.89, 284.07, 261.89, 289.74, 263.36, 290.65, 268.66, 291.2], [258.97, 295.77, 
257.14, 304.74, 259.33, 310.41, 260.8, 316.63, 260.43, 320.1, 264.09, 323.34, 267.2, 313.46, 269.39, 312.55, 267.56, 302.12, 265.19, 300.66, 261.53, 300.66]], 'num_keypoints': 0, 'area': 239.9507, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [257.14, 281.32, 12.8, 42.02], 'category_id': 1, 'id': 1759127}, {'segmentation': [[269, 291.5, 269.1, 289.13, 269, 287.68, 269.72, 285.82, 269.51, 284.68, 269.51, 283.24, 270.14, 282.2, 271.69, 280.34, 273.24, 279.2, 272.62, 277.34, 273.13, 275.48, 273.86, 274.55, 275.2, 274.45, 276.44, 275.07, 276.96, 277.03, 277.89, 278.17, 277.47, 280.03, 276.85, 280.65, 275.72, 281.79, 274.48, 284.79, 274.48, 286.96, 274.37, 288.92, 272.72, 290.37, 269.51, 292.23]], 'num_keypoints': 0, 'area': 79.67965, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [269, 274.45, 8.89, 17.78], 'category_id': 1, 'id': 2029859}, {'segmentation': [[556.28, 334.71, 564.1, 339.57, 569.76, 343.88, 580.28, 350.63, 588.1, 355.48, 588.37, 344.69, 584.6, 334.71, 577.85, 332.56, 577.04, 330.67, 578.93, 320.69, 579.47, 309.36, 563.83, 313.14, 562.21, 318.53, 566.26, 326.08, 563.56, 330.67]], 'num_keypoints': 0, 'area': 680.6061, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [556.28, 309.36, 32.09, 46.12], 'category_id': 1, 'id': 2150237}, {'segmentation': [[494.93, 334.63, 508.53, 308.68, 512.24, 304.35, 514.09, 287.66, 528.92, 276.54, 543.75, 279.01, 552.41, 286.43, 554.88, 303.73, 551.17, 321.03, 549.93, 322.27, 578.36, 348.84, 587.01, 356.26, 569.71, 365.53, 540.05, 346.37, 536.34, 372.95, 544.99, 384.07, 527.07, 395.81, 502.97, 402.61, 504.82, 395.19, 527.69, 386.54, 506.67, 347.61]], 'num_keypoints': 8, 'area': 4942.44385, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 541, 301, 2, 507, 317, 2, 534, 325, 2, 0, 0, 0, 562, 348, 2, 0, 0, 0, 593, 362, 1, 494, 383, 1, 521, 388, 1, 0, 0, 0, 562, 400, 1, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [494.93, 276.54, 92.08, 126.07], 'category_id': 1, 'id': 2155639}, {'segmentation': {'counts': [128391, 2, 423, 4, 23, 4, 395, 5, 22, 12, 387, 6, 22, 17, 382, 7, 20, 19, 381, 9, 17, 21, 379, 11, 14, 24, 377, 14, 10, 25, 378, 19, 5, 24, 379, 47, 379, 48, 378, 48, 378, 49, 377, 50, 377, 50, 376, 51, 376, 51, 376, 51, 376, 51, 376, 51, 376, 51, 376, 51, 377, 50, 378, 21, 3, 24, 381, 7, 3, 8, 15, 12, 394, 4, 20, 6, 134179], 'size': [427, 640]}, 'num_keypoints': 0, 'area': 1340, 'iscrowd': 1, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 296649, 'bbox': [300, 280, 25, 54], 'category_id': 1, 'id': 900100296649}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) 
File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 48, in imread img = call_plugin('imread', fname, plugin=plugin, plugin_args) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin return func(*args, *kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_plugins\imageio_plugin.py", line 10, in imread return np.asarray(imageio_imread(args, kwargs)) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread reader = read(uri, format, "i", kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader request = Request(uri, "r" + mode, kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in init self._parse_uri(uri) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri raise FileNotFoundError("No such file: '%s'" % fn) FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000296649.jpg' ERROR:root:Error processing image {'id': 473219, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000473219.jpg', 'width': 640, 'height': 428, 'annotations': [{'segmentation': [[465.63, 159.18, 465.33, 134.03, 466.52, 127.14, 464.73, 111.27, 460.83, 100.48, 453.35, 93.9, 445.86, 91.5, 440.17, 90.6, 429.99, 90.3, 420.7, 92.4, 413.22, 96.59, 403.33, 103.18, 396.44, 110.07, 401.54, 110.67, 398.54, 116.06, 397.64, 124.74, 396.74, 132.23, 397.34, 137.02, 389.86, 154.39, 399.14, 154.99, 399.44, 160.68, 399.14, 163.08, 401.24, 167.87, 401.84, 174.46, 410.82, 184.04, 421.6, 184.04, 423.1, 187.34, 417.41, 201.41, 417.41, 207.4, 395.25, 238.25, 390.16, 250.23, 378.18, 277.48, 372.19, 305.63, 371.29, 308.03, 367.09, 327.2, 353.32, 334.38, 346.43, 332.59, 339.54, 330.49, 332.95, 330.49, 326.06, 331.99, 326.96, 337.08, 340.14, 339.77, 327.86, 349.66, 326.36, 355.05, 333.55, 356.55, 337.45, 361.64, 341.04, 363.73, 348.23, 369.12, 356.01, 371.22, 361.1, 369.12, 370.39, 380.21, 365.9, 414.95, 365.6, 423.63, 365, 428, 497.67, 428, 498.27, 382.3, 495.28, 361.64, 495.87, 308.33, 496.47, 301.74, 500.37, 256.82, 499.47, 240.94, 495.87, 210.7, 490.78, 193.62, 486.29, 183.14, 475.81, 173.26, 470.42, 162.78, 466.82, 160.98]], 'num_keypoints': 7, 'area': 35590.50455, 'iscrowd': 0, 'keypoints': [395, 149, 2, 401, 136, 2, 0, 0, 0, 436, 143, 2, 0, 0, 0, 446, 205, 2, 0, 0, 0, 422, 313, 2, 0, 0, 0, 356, 349, 2, 0, 0, 0, 437, 398, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 473219, 'bbox': [326.06, 90.3, 174.31, 337.7], 'category_id': 1, 'id': 203411}, {'segmentation': [[138.84, 286.38, 155.11, 213.61, 171.39, 133.18, 181.92, 121.69, 203.95, 116.91, 212.56, 126.48, 225.01, 127.44, 241.29, 132.23, 249.91, 145.63, 252.78, 151.38, 257.57, 178.19, 258.52, 190.63, 251.82, 191.59, 253.74, 202.12, 247.03, 209.78, 232.67, 213.61, 225.01, 218.4, 224.05, 227.98, 228.84, 230.85, 232.67, 242.34, 232.67, 246.17, 239.37, 262.45, 248.95, 268.19, 253.74, 276.81, 255.65, 322.77, 310.23, 329.47, 326.51, 330.43, 338, 341.92, 321.72, 352.45, 254.69, 355.32, 264.27, 378.3, 
277.67, 421.39, 175.22, 422.35, 176.18, 362.03, 157.99, 315.11]], 'num_keypoints': 9, 'area': 29103.0018, 'iscrowd': 0, 'keypoints': [255, 188, 2, 0, 0, 0, 247, 177, 2, 0, 0, 0, 214, 182, 2, 222, 235, 2, 192, 240, 2, 0, 0, 0, 227, 332, 2, 0, 0, 0, 316, 338, 2, 241, 398, 2, 210, 410, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 473219, 'bbox': [138.84, 116.91, 199.16, 305.44], 'category_id': 1, 'id': 207730}, {'segmentation': [[343.95, 136.95, 342.01, 146.61, 342.98, 160.14, 348.78, 184.29, 350.71, 202.65, 345.88, 204.58, 305.3, 217.14, 298.54, 223.9, 293.71, 231.63, 290.81, 242.26, 284.05, 261.58, 273.42, 286.7, 265.69, 306.99, 268.59, 323.42, 316.89, 334.04, 348.78, 333.08, 363.27, 334.04, 368.1, 321.48, 371, 305.06, 370.03, 292.5, 365.2, 282.84, 355.54, 281.87, 347.81, 285.74, 332.35, 284.77, 322.69, 275.11, 310.13, 276.07, 312.06, 284.77, 314, 295.4, 305.3, 295.4, 297.57, 300.23, 288.88, 306.02, 285.01, 302.16, 283.08, 281.87, 288.88, 260.62, 326.56, 252.89, 329.45, 258.68, 362.3, 256.75, 377.76, 266.41, 409.64, 224.87, 409.64, 215.21, 396.12, 193.95, 397.08, 180.43, 399.98, 164, 395.15, 160.14, 391.29, 154.34, 396.12, 137.92, 396.12, 123.42, 382.59, 116.66, 365.2, 116.66, 342.01, 126.32], [297.57, 357.23, 297.57, 372.69, 293.71, 387.18, 292.74, 395.88, 287.91, 403.6, 287.91, 412.3, 287.91, 421.96, 363.27, 418.1, 370.03, 388.15, 366.17, 379.45, 356.51, 371.72, 351.67, 367.86, 338.15, 362.06, 324.62, 357.23, 309.16, 357.23]], 'num_keypoints': 13, 'area': 18532.13175, 'iscrowd': 0, 'keypoints': [365, 180, 2, 376, 169, 2, 359, 168, 2, 397, 172, 2, 346, 168, 2, 421, 228, 1, 314, 227, 2, 427, 335, 1, 279, 309, 2, 361, 311, 2, 327, 297, 2, 397, 390, 1, 307, 382, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 473219, 'bbox': [265.69, 116.66, 143.95, 305.3], 'category_id': 1, 'id': 210559}, {'segmentation': [[611.09, 144.07, 601.43, 165.45, 602.12, 177.86, 602.81, 184.07, 602.81, 190.28, 602.81, 195.1, 602.81, 199.93, 603.5, 202.69, 603.5, 208.21, 606.95, 230.27, 611.09, 237.17, 612.47, 240.62, 542.13, 265.44, 536.61, 266.13, 528.34, 284.75, 528.34, 290.96, 520.75, 296.47, 518.68, 300.61, 516.61, 303.37, 516.61, 307.51, 510.41, 335.09, 499.37, 353.71, 502.13, 404.05, 510.41, 426.81, 640, 426.81, 640, 134.17, 624.19, 137.61, 608.33, 140.37]], 'num_keypoints': 3, 'area': 25966.60385, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 609, 193, 2, 0, 0, 0, 551, 288, 2, 0, 0, 0, 519, 397, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 473219, 'bbox': [499.37, 134.17, 140.63, 292.64], 'category_id': 1, 'id': 1719091}, {'segmentation': [[2.11, 420.6, 175.43, 419.55, 174.37, 371.99, 159.58, 318.09, 138.44, 301.19, 121.53, 301.19, 117.3, 282.16, 112.02, 257.86, 108.85, 225.1, 98.28, 196.56, 89.83, 184.94, 70.8, 179.65, 52.84, 179.65, 32.76, 197.62, 24.31, 222.98, 16.91, 239.89, 16.91, 285.33, 0, 296.96, 3.17, 419.55]], 'num_keypoints': 1, 'area': 30174.6827, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 136, 316, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 473219, 'bbox': [0, 179.65, 175.43, 240.95], 'category_id': 1, 'id': 1751098}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File 
"C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 48, in imread img = call_plugin('imread', fname, plugin=plugin, plugin_args) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin return func(*args, *kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_plugins\imageio_plugin.py", line 10, in imread return np.asarray(imageio_imread(args, kwargs)) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread reader = read(uri, format, "i", kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader request = Request(uri, "r" + mode, kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in init self._parse_uri(uri) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri raise FileNotFoundError("No such file: '%s'" % fn) FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000473219.jpg' ERROR:root:Error processing image {'id': 290248, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000290248.jpg', 'width': 640, 'height': 480, 'annotations': [{'segmentation': [[393.57, 452.64, 398.02, 449.67, 397.28, 425.93, 400.99, 425.93, 403.21, 448.19, 406.92, 460.8, 414.34, 461.54, 415.82, 456.35, 410.63, 451.9, 412.86, 415.54, 418.05, 405.16, 421.76, 400.71, 423.98, 402.19, 425.47, 394.03, 421.76, 382.9, 417.31, 366.58, 415.82, 360.65, 409.15, 358.42, 407.66, 356.94, 414.34, 346.55, 414.34, 334.68, 406.92, 330.97, 395.79, 334.68, 392.82, 338.39, 391.34, 346.55, 390.6, 353.23, 384.66, 362.13, 379.47, 376.23, 378.73, 388.1, 378.73, 400.71, 383.92, 407.38, 386.15, 433.35, 386.15, 455.61, 388.37, 458.57, 390.6, 457.09]], 'num_keypoints': 15, 'area': 3667.1681, 'iscrowd': 0, 'keypoints': [409, 347, 2, 0, 0, 0, 406, 345, 2, 0, 0, 0, 400, 346, 2, 411, 361, 2, 388, 359, 2, 414, 376, 2, 380, 378, 2, 419, 392, 2, 378, 394, 1, 406, 396, 2, 393, 395, 2, 406, 428, 2, 390, 425, 2, 408, 451, 2, 389, 450, 2], 'image_id': 290248, 'bbox': [378.73, 330.97, 46.74, 130.57], 'category_id': 1, 'id': 1255909}, {'segmentation': [[327.57, 446.1, 346.75, 444.68, 347.46, 434.74, 350.3, 424.8, 350.3, 414.85, 350.3, 407.75, 352.43, 392.84, 352.43, 382.19, 354.56, 370.83, 355.98, 353.79, 350.3, 345.98, 344.62, 336.75, 338.22, 333.91, 333.25, 336.75, 327.57, 343.14, 333.25, 345.27, 333.96, 350.95, 333.96, 362.31, 329.7, 374.38, 329.7, 381.48, 332.54, 387.16, 330.41, 392.13, 328.28, 399.94, 333.25, 406.33, 333.96, 417.69, 337.51, 425.51, 336.09, 439, 328.99, 442.55]], 'num_keypoints': 12, 'area': 2044.1114, 'iscrowd': 0, 'keypoints': [0, 0, 0, 334, 342, 2, 0, 0, 0, 339, 342, 2, 0, 0, 0, 341, 352, 2, 351, 352, 2, 338, 371, 2, 0, 0, 0, 334, 388, 2, 0, 0, 0, 341, 388, 2, 351, 388, 2, 336, 411, 2, 346, 410, 2, 339, 439, 2, 343, 434, 2], 'image_id': 290248, 'bbox': [327.57, 333.91, 28.41, 112.19], 'category_id': 1, 
'id': 1296825}, {'segmentation': [[252.96, 423.6, 252.61, 409.93, 254.17, 399.9, 256.42, 396.78, 256.94, 384.32, 254.17, 381.73, 257.8, 373.25, 259.36, 367.71, 260.23, 351.62, 262.65, 343.66, 266.28, 338.98, 270.96, 339.5, 271.3, 332.06, 274.24, 327.91, 278.57, 327.39, 281.68, 330.33, 282.55, 335.52, 281.68, 341.06, 279.26, 343.48, 274.76, 343.31, 273.9, 347.29, 276.32, 351.27, 276.32, 363.73, 276.67, 368.06, 276.67, 370.82, 279.26, 374.28, 282.03, 382.76, 282.55, 387.26, 280.65, 389.34, 277.01, 390.9, 277.01, 397.65, 273.21, 404.39, 268.71, 419.62, 270.26, 423.78, 271.82, 426.2, 274.07, 427.93, 276.67, 426.72, 275.8, 429.31, 272.51, 430.35, 270.44, 430.7, 268.19, 428.45, 264.73, 427.93, 261.44, 428.1, 261.61, 424.64, 262.3, 422.91, 264.03, 419.62, 265.25, 412.01, 266.28, 407.16, 270.09, 398.17, 268.01, 394.36, 266.63, 397.82, 263.17, 398.68, 261.27, 404.57, 258.5, 412.18, 257.29, 417.55, 258.84, 423.95, 260.23, 426.72, 261.22, 428.87, 264.06, 429.4, 265.32, 429.3, 264.9, 431.08, 262.48, 431.61, 257.76, 431.19, 254.6, 429.51, 252.08, 428.14]], 'num_keypoints': 12, 'area': 1678.2791, 'iscrowd': 0, 'keypoints': [281, 340, 2, 0, 0, 0, 279, 337, 2, 0, 0, 0, 273, 336, 2, 0, 0, 0, 269, 349, 2, 0, 0, 0, 272, 366, 2, 0, 0, 0, 278, 378, 2, 271, 373, 2, 265, 374, 2, 274, 397, 2, 260, 396, 2, 266, 424, 2, 254, 424, 2], 'image_id': 290248, 'bbox': [252.08, 327.39, 30.47, 104.22], 'category_id': 1, 'id': 1306668}, {'segmentation': [[168.35, 338.74, 166.74, 334.65, 166.61, 329.07, 168.61, 324.95, 174.86, 324.57, 177.86, 328.33, 177.14, 332.6, 176.2, 334.33, 176.14, 338.6, 181.04, 340.29, 187.1, 347.94, 189.27, 360.78, 184.62, 365.94, 180.41, 369.9, 179.26, 370.54, 178.56, 374.19, 177.22, 377.31, 172.5, 375.14, 167.59, 373.42, 164.08, 374.25, 160.76, 375.27, 158.34, 377.06, 159.36, 371.44, 158.91, 369.9, 155.6, 368.43, 156.43, 365.5, 155.02, 363.33, 152.28, 360.01, 152.15, 355.16, 156.17, 346.81, 162.99, 339.01, 168.42, 338.76]], 'num_keypoints': 14, 'area': 1172.43255, 'iscrowd': 0, 'keypoints': [169, 332, 2, 172, 331, 2, 168, 331, 2, 177, 332, 2, 0, 0, 0, 182, 345, 2, 162, 343, 2, 186, 359, 2, 154, 355, 2, 181, 365, 2, 157, 364, 2, 175, 371, 2, 164, 371, 2, 171, 394, 1, 161, 394, 1, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [152.15, 324.57, 37.12, 52.74], 'category_id': 1, 'id': 1308640}, {'segmentation': [[197.55, 318.93, 201.12, 323.64, 201.87, 327.58, 199.43, 330.59, 205.07, 336.61, 205.44, 342.63, 202.62, 349.59, 202.25, 355.6, 201.31, 358.61, 197.17, 358.99, 195.1, 355.98, 196.42, 351.47, 199.05, 348.46, 199.43, 342.06, 199.24, 337.18, 195.1, 332.85, 190.59, 331.35, 190.4, 324.58, 190.02, 320.06, 193.78, 317.81, 197.55, 318.37], [197.46, 372.83, 200.01, 376.03, 200.87, 381.99, 201.51, 396.05, 199.38, 397.76, 193.2, 398.61, 195.54, 394.35, 194.69, 387.53, 192.56, 377.31, 195.54, 372.83]], 'num_keypoints': 0, 'area': 451.3625, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [190.02, 317.81, 15.42, 80.8], 'category_id': 1, 'id': 1311210}, {'segmentation': [[238.83, 347.15, 237.61, 342.88, 233.74, 339.01, 233.54, 337.38, 235.98, 334.74, 237.81, 329.25, 237, 324.77, 237.2, 319.89, 241.07, 320.29, 245.74, 322.13, 248.19, 325.18, 248.59, 329.65, 249.61, 332.71, 246.76, 335.96, 243.1, 340.84, 241.27, 345.32]], 'num_keypoints': 0, 'area': 244.4145, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [233.54, 319.89, 16.07, 27.26], 'category_id': 1, 'id': 1317082}, {'segmentation': [[261.78, 335.09, 262.95, 332.6, 265.11, 330.77, 266.44, 330.27, 265.61, 329.1, 265.28, 325.44, 264.45, 322.78, 264.45, 318.96, 265.94, 315.63, 270.94, 314.8, 271.1, 314.8, 271.6, 315.3, 272.1, 318.46, 273.43, 322.45, 275.59, 325.94, 276.92, 326.44, 272.1, 331.1, 270.77, 333.6, 270.6, 334.43, 270.6, 338.09, 270.6, 339.59, 268.77, 339.75, 264.95, 340.25, 264.95, 340.25, 263.61, 342.58, 261.95, 338.92, 261.45, 337.92]], 'num_keypoints': 0, 'area': 213.7225, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [261.45, 314.8, 15.47, 27.78], 'category_id': 1, 'id': 1319612}, {'segmentation': [[113.22, 358.45, 108.87, 358.06, 106.89, 355.29, 110.85, 342.62, 117.18, 338.67, 119.16, 336.29, 121.14, 325.21, 127.08, 322.44, 129.05, 328.38, 128.66, 334.31, 127.47, 337.08, 132.22, 340.25, 135.39, 343.02, 134.2, 360.04, 133.01, 368.74, 132.22, 378.24, 129.85, 381.01, 126.28, 401.2, 128.26, 412.68, 127.47, 416.63, 122.72, 417.03, 121.14, 413.47, 121.93, 407.93, 121.93, 393.28, 122.72, 380.62, 120.74, 380.22, 118.76, 389.33, 118.37, 396.05, 115.6, 407.93, 116.39, 411.49, 114.81, 412.68, 109.27, 413.07, 111.64, 407.14, 112.43, 391.3, 114.01, 379.83, 111.24, 368.35]], 'num_keypoints': 16, 'area': 1463.226, 'iscrowd': 0, 'keypoints': [127, 332, 2, 129, 329, 2, 125, 330, 2, 0, 0, 0, 121, 330, 2, 132, 343, 2, 113, 342, 2, 133, 356, 2, 110, 354, 2, 130, 349, 2, 113, 355, 2, 126, 369, 2, 115, 369, 2, 125, 388, 2, 114, 388, 2, 123, 408, 2, 113, 404, 2], 'image_id': 290248, 'bbox': [106.89, 322.44, 28.5, 94.59], 'category_id': 1, 'id': 1716511}, {'segmentation': [[458.71, 360.17, 455.33, 356.23, 454.77, 351.16, 455.33, 346.29, 458.14, 344.41, 467.71, 342.91, 470.15, 344.22, 471.28, 347.04, 470.9, 351.54, 471.09, 353.23, 472.03, 354.35, 472.4, 355.86, 470.71, 356.79, 470.34, 358.48, 469.96, 359.8, 469.4, 360.73, 465.65, 363.74, 465.09, 365.24, 467.71, 368.43, 469.03, 370.49, 470.53, 376.12, 471.47, 381.75, 471.09, 385.5, 476.72, 389.07, 480.1, 399.57, 479.16, 400.89, 477.66, 403.51, 474.09, 408.21, 470.53, 410.08, 470.15, 411.4, 474.65, 419.28, 476.16, 428.47, 474.09, 438.23, 473.53, 442.17, 474.28, 455.49, 477.47, 458.3, 483.29, 460.56, 481.78, 462.06, 475.41, 463.37, 467.34, 464.12, 462.83, 462.99, 463.21, 459.24, 463.77, 455.3, 464.33, 446.86, 464.9, 432.03, 464.9, 429.6, 463.4, 424.9, 461.15, 419.65, 458.14, 422.84, 456.45, 431.47, 453.64, 438.23, 452.33, 445.54, 451.76, 450.24, 450.64, 456.24, 451.95, 461.87, 454.02, 465.62, 451.39, 466.93, 445.01, 466, 441.07, 463.37, 439.57, 459.24, 439.75, 456.8, 439.75, 453.05, 442.38, 441.6, 442.94, 435.41, 447.45, 430.72, 449.7, 427.34, 448.01, 420.03, 447.26, 417.21, 447.26, 412.9, 445.38, 414.21, 441.44, 413.46, 441.26, 411.21, 442.19, 406.33, 444.45, 402.76, 444.45, 397.89, 446.89, 383.63, 448.01, 376.5, 449.32, 373.12, 452.7, 367.3, 455.33, 363.36, 455.7, 362.05]], 'num_keypoints': 13, 'area': 2780.20745, 'iscrowd': 0, 'keypoints': [470, 355, 2, 0, 0, 0, 469, 352, 2, 0, 0, 0, 463, 354, 2, 464, 368, 2, 454, 370, 2, 0, 0, 0, 448, 387, 2, 0, 0, 0, 446, 404, 2, 464, 406, 2, 453, 409, 2, 470, 426, 2, 449, 434, 2, 468, 456, 2, 444, 459, 2], 'image_id': 290248, 'bbox': [439.57, 342.91, 43.72, 124.02], 'category_id': 1, 
'id': 1723378}, {'segmentation': [[88.91, 377, 86.55, 354.45, 84.87, 350.07, 82.18, 345.03, 78.14, 340.31, 71.07, 336.61, 70.06, 327.18, 67.37, 323.48, 56.93, 328.19, 56.93, 334.93, 53.23, 342.33, 51.54, 346.37, 47.5, 359.84, 48.18, 370.61, 49.19, 373.3, 87.56, 377]], 'num_keypoints': 9, 'area': 1486.871, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 70, 332, 2, 0, 0, 0, 66, 334, 2, 54, 348, 2, 78, 344, 2, 52, 368, 2, 83, 357, 2, 0, 0, 0, 85, 375, 2, 60, 378, 1, 79, 377, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [47.5, 323.48, 41.41, 53.52], 'category_id': 1, 'id': 1736496}, {'segmentation': [[135.01, 350.12, 138.08, 343.27, 141.39, 337.13, 143.51, 334.76, 147.05, 333.82, 147.76, 330.98, 146.35, 327.44, 144.22, 323.66, 141.39, 315.87, 140.68, 314.69, 135.95, 314.69, 131.23, 316.81, 130.52, 322.72, 128.63, 327.91, 127.69, 334.53, 128.16, 338.07, 134.3, 341.85, 134.54, 342.56, 134.77, 343.5, 134.77, 344.92, 135.01, 349.41, 135.01, 349.88, 134.77, 351.3]], 'num_keypoints': 0, 'area': 393.3592, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [127.69, 314.69, 20.07, 36.61], 'category_id': 1, 'id': 2031502}, {'segmentation': [[140.37, 409.81, 140.91, 393.03, 140.91, 386.53, 139.83, 382.2, 139.29, 373.54, 140.37, 362.72, 142.54, 360.01, 142.54, 356.22, 143.62, 353.52, 143.08, 350.27, 137.12, 350.27, 136.58, 349.19, 140.91, 338.9, 143.62, 335.66, 148.24, 334.03, 149.31, 333.67, 148.24, 331.17, 147.88, 327.25, 148.24, 324.39, 149.67, 321.53, 150.38, 319.39, 155.38, 319.75, 158.95, 323.68, 159.31, 326.17, 158.59, 329.39, 156.46, 331.91, 156.69, 333.32, 160.93, 334.5, 164.23, 338.27, 164.47, 339.69, 162.11, 339.92, 152.69, 352.88, 151.98, 357.83, 154.1, 363.01, 156.46, 365.84, 155.75, 367.72, 158.34, 369.14, 156.81, 374.38, 155.38, 376.17, 154.67, 386.17, 152.17, 389.74, 150.65, 390.87, 145.78, 399.53, 140.91, 408.19], [138.21, 411.43, 134.96, 413.6, 138.21, 414.68, 139.29, 414.68, 138.75, 413.6, 139.29, 410.35]], 'num_keypoints': 12, 'area': 1115.80215, 'iscrowd': 0, 'keypoints': [150, 328, 2, 152, 327, 2, 149, 327, 2, 157, 327, 2, 0, 0, 0, 161, 339, 2, 144, 338, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 138, 347, 2, 157, 365, 1, 144, 364, 2, 153, 389, 1, 145, 386, 2, 0, 0, 0, 143, 407, 1], 'image_id': 290248, 'bbox': [134.96, 319.39, 29.51, 95.29], 'category_id': 1, 'id': 2032043}, {'segmentation': [[183.12, 409.95, 184.15, 414.06, 189.02, 408.41, 189.02, 400.97, 192.1, 378.12, 195.95, 370.16, 198.78, 358.87, 194.41, 349.63, 192.62, 343.21, 190.05, 339.58, 184.92, 334.96, 184.92, 329.06, 180.29, 325.98, 176.7, 326.49, 176.19, 334.45, 176.96, 339.32, 184.66, 344.2, 186.71, 349.08, 189.28, 360.63, 181.06, 369.62, 183.89, 388.87, 184.66, 404.01]], 'num_keypoints': 0, 'area': 730.9974, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [176.19, 325.98, 22.59, 88.08], 'category_id': 1, 'id': 2033971}, {'segmentation': {'counts': [339, 10, 449, 12, 4, 26, 5, 2, 431, 12, 1, 39, 428, 52, 428, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 53, 427, 52, 428, 52, 428, 52, 428, 51, 429, 51, 430, 49, 431, 49, 431, 50, 431, 49, 431, 49, 431, 49, 431, 49, 431, 49, 431, 49, 431, 48, 432, 48, 432, 47, 433, 47, 433, 46, 434, 45, 435, 45, 435, 
43, 437, 42, 438, 41, 439, 39, 442, 35, 445, 35, 446, 33, 447, 33, 448, 31, 450, 30, 450, 29, 453, 26, 454, 26, 456, 22, 459, 20, 461, 18, 464, 14, 470, 6, 6222, 1, 477, 2, 477, 4, 475, 6, 473, 9, 470, 18, 462, 20, 459, 21, 459, 22, 458, 22, 457, 24, 456, 24, 456, 25, 455, 26, 454, 27, 452, 29, 451, 30, 450, 32, 448, 34, 446, 36, 443, 40, 440, 45, 435, 46, 433, 47, 433, 47, 433, 47, 433, 47, 433, 47, 433, 46, 434, 46, 434, 45, 435, 45, 435, 44, 436, 43, 437, 43, 437, 41, 440, 39, 441, 38, 443, 35, 445, 31, 450, 28, 453, 26, 454, 26, 455, 23, 456, 24, 455, 26, 453, 27, 452, 28, 452, 27, 452, 28, 452, 27, 453, 26, 453, 26, 454, 25, 455, 20, 460, 14, 466, 13, 467, 13, 467, 12, 468, 12, 468, 11, 469, 11, 469, 14, 11, 1, 454, 15, 466, 12, 468, 6, 475, 3, 475, 5, 473, 6, 473, 7, 472, 7, 472, 8, 471, 9, 471, 9, 470, 10, 470, 11, 469, 13, 466, 17, 15, 1, 447, 20, 11, 1, 448, 22, 9, 1, 448, 23, 73, 4, 6, 12, 362, 26, 4, 1, 63, 8, 2, 16, 360, 23, 2, 6, 61, 29, 359, 19, 10, 1, 60, 32, 358, 17, 72, 34, 357, 16, 71, 36, 357, 16, 70, 37, 357, 16, 69, 38, 357, 16, 68, 39, 358, 15, 65, 42, 358, 16, 64, 42, 359, 16, 10, 2, 50, 43, 359, 17, 8, 3, 50, 42, 361, 18, 3, 6, 50, 42, 362, 26, 49, 42, 363, 26, 49, 41, 366, 22, 51, 38, 370, 20, 52, 35, 374, 18, 53, 34, 377, 14, 55, 34, 381, 6, 59, 34, 446, 34, 446, 34, 446, 34, 446, 34, 446, 33, 448, 32, 448, 31, 450, 30, 451, 28, 389, 3, 61, 26, 389, 5, 60, 26, 388, 6, 62, 22, 389, 7, 63, 20, 389, 8, 64, 18, 390, 8, 66, 14, 391, 10, 69, 6, 395, 10, 470, 11, 15, 1, 452, 19, 461, 19, 461, 20, 460, 21, 459, 22, 458, 23, 457, 3, 12, 9, 456, 2, 13, 11, 454, 2, 14, 12, 452, 1, 15, 16, 448, 1, 15, 18, 446, 2, 15, 17, 447, 1, 16, 15, 448, 2, 16, 14, 449, 2, 16, 12, 450, 4, 7, 2, 468, 4, 5, 4, 468, 13, 467, 14, 468, 13, 468, 13, 468, 18, 464, 14, 19, 10, 441, 41, 438, 44, 434, 48, 430, 51, 428, 53, 426, 55, 424, 57, 422, 58, 422, 59, 420, 60, 420, 60, 420, 61, 418, 62, 418, 67, 413, 69, 411, 73, 407, 75, 405, 77, 403, 79, 401, 80, 400, 81, 399, 82, 398, 83, 397, 83, 397, 84, 397, 83, 397, 16, 3, 64, 398, 14, 5, 64, 397, 13, 7, 63, 398, 9, 11, 62, 420, 60, 423, 57, 422, 58, 395, 1, 25, 59, 394, 2, 23, 61, 393, 4, 20, 63, 392, 5, 18, 65, 391, 6, 17, 66, 390, 8, 14, 68, 389, 10, 12, 69, 388, 12, 10, 69, 389, 18, 3, 70, 388, 92, 388, 92, 388, 92, 387, 93, 387, 90, 390, 69, 3, 16, 392, 67, 6, 14, 393, 65, 414, 63, 417, 59, 421, 46, 434, 40, 440, 25, 5, 6, 444, 23, 9, 2, 446, 22, 458, 8, 7, 6, 459, 6, 14, 1, 459, 6, 474, 6, 474, 6, 474, 5, 475, 5, 19, 5, 451, 8, 472, 12, 468, 14, 466, 15, 19, 2, 444, 17, 17, 2, 444, 18, 16, 2, 14, 12, 418, 18, 16, 3, 11, 15, 18, 4, 394, 19, 16, 4, 7, 19, 17, 6, 391, 21, 14, 5, 5, 24, 13, 9, 389, 22, 12, 38, 9, 12, 386, 26, 7, 43, 5, 14, 385, 97, 383, 99, 380, 102, 378, 107, 373, 109, 19, 6, 346, 112, 14, 10, 344, 114, 11, 12, 343, 116, 8, 14, 342, 120, 3, 16, 340, 123, 1, 17, 339, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 142, 338, 141, 339, 141, 339, 140, 340, 140, 340, 139, 341, 139, 341, 138, 342, 137, 343, 134, 346, 131, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 132, 348, 39, 1, 92, 348, 38, 2, 54, 3, 35, 348, 37, 4, 29, 9, 11, 9, 33, 348, 36, 5, 26, 14, 6, 13, 32, 348, 35, 6, 23, 19, 2, 16, 31, 348, 34, 8, 20, 41, 29, 349, 32, 81, 18, 349, 31, 84, 16, 349, 31, 87, 12, 350, 30, 90, 6, 355, 29, 451, 29, 449, 31, 447, 34, 440, 40, 436, 45, 432, 48, 427, 54, 422, 59, 418, 64, 411, 71, 408, 73, 
406, 76, 403, 78, 402, 40, 4, 36, 47, 5, 348, 34, 9, 38, 30, 23, 346, 32, 11, 40, 22, 31, 344, 27, 16, 41, 13, 40, 343, 22, 21, 95, 342, 20, 23, 96, 342, 14, 28, 97, 342, 8, 33, 97, 344, 4, 35, 98, 382, 98, 382, 98, 382, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 99, 381, 98, 382, 98, 382, 97, 383, 97, 383, 96, 384, 72, 18, 5, 385, 63, 28, 4, 385, 61, 31, 1, 387, 58, 422, 55, 425, 52, 428, 50, 431, 48, 432, 46, 435, 44, 436, 42, 439, 40, 441, 34, 446, 27, 455, 21, 460, 18, 463, 16, 466, 13, 471, 9, 1438, 1, 477, 2, 476, 4, 475, 5, 474, 5, 474, 6, 473, 7, 473, 7, 472, 8, 25, 1, 446, 9, 22, 4, 445, 9, 21, 5, 444, 11, 18, 8, 443, 11, 17, 9, 443, 12, 14, 11, 443, 12, 13, 13, 442, 38, 442, 39, 441, 43, 437, 47, 36, 3, 394, 50, 32, 5, 393, 54, 26, 8, 392, 58, 21, 10, 392, 60, 17, 12, 391, 63, 15, 11, 392, 65, 13, 11, 391, 68, 4, 17, 392, 88, 393, 88, 392, 88, 394, 86, 395, 85, 396, 84, 398, 82, 402, 78, 406, 74, 404, 76, 403, 77, 402, 78, 401, 79, 400, 79, 401, 79, 400, 79, 401, 74, 406, 72, 407, 71, 409, 62, 418, 56, 424, 50, 430, 44, 436, 41, 439, 39, 441, 38, 442, 36, 444, 34, 446, 33, 448, 16, 4, 10, 450, 12, 12, 4, 453, 11, 13, 3, 453, 10, 15, 1, 455, 9, 472, 8, 472, 7, 475, 5, 476, 4, 477, 3, 479, 1, 473, 7, 471, 9, 19, 5, 445, 11, 18, 7, 442, 13, 18, 9, 439, 15, 15, 15, 434, 18, 4, 2, 4, 22, 429, 27, 2, 30, 420, 60, 420, 61, 418, 62, 418, 63, 417, 64, 415, 69, 9, 2, 400, 72, 5, 3, 400, 81, 399, 81, 399, 81, 399, 81, 399, 81, 399, 81, 399, 81, 399, 81, 399, 81, 400, 80, 400, 80, 401, 78, 402, 78, 403, 76, 405, 75, 405, 74, 408, 71, 410, 70, 411, 67, 415, 64, 420, 12, 5, 42, 448, 30, 469, 7, 4815, 10, 468, 14, 464, 18, 460, 22, 457, 24, 455, 26, 402, 13, 36, 30, 399, 82, 396, 85, 393, 88, 391, 90, 389, 92, 387, 94, 385, 95, 385, 96, 383, 97, 383, 98, 382, 99, 380, 100, 380, 101, 379, 101, 379, 101, 379, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 102, 378, 101, 379, 101, 379, 100, 380, 100, 380, 99, 381, 98, 382, 98, 383, 96, 384, 95, 386, 94, 386, 92, 389, 90, 391, 88, 392, 86, 395, 81, 400, 78, 404, 72, 409, 34, 1, 16, 430, 19, 463, 15, 469, 7, 1951, 24, 453, 29, 449, 35, 443, 39, 439, 43, 436, 46, 432, 49, 429, 53, 424, 59, 411, 71, 407, 86, 392, 90, 388, 94, 385, 97, 382, 99, 380, 101, 378, 103, 377, 104, 375, 105, 375, 106, 374, 106, 373, 107, 373, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 108, 372, 107, 373, 107, 374, 105, 375, 105, 375, 104, 376, 103, 377, 103, 377, 101, 379, 101, 379, 101, 379, 101, 379, 101, 379, 101, 379, 101, 380, 100, 380, 99, 382, 98, 382, 97, 384, 96, 385, 94, 386, 93, 389, 91, 390, 88, 393, 86, 396, 83, 401, 77, 412, 34, 2, 6, 4, 18, 42], 'size': [480, 640]}, 'num_keypoints': 0, 'area': 44367, 'iscrowd': 1, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 290248, 'bbox': [0, 269, 639, 188], 'category_id': 1, 'id': 900100290248}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", 
line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 48, in imread img = call_plugin('imread', fname, plugin=plugin, plugin_args) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin return func(*args, *kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_plugins\imageio_plugin.py", line 10, in imread return np.asarray(imageio_imread(args, kwargs)) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread reader = read(uri, format, "i", kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader request = Request(uri, "r" + mode, kwargs) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in init self._parse_uri(uri) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri raise FileNotFoundError("No such file: '%s'" % fn) FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000290248.jpg' ERROR:root:Error processing image {'id': 208423, 'source': 'coco', 'path': 'C:/Users/91741/Desktop/Keypoints-of-humanpose-with-Mask-R-CNN-master/dataset/val2017\000000208423.jpg', 'width': 640, 'height': 480, 'annotations': [{'segmentation': [[415.8, 441.65, 416.02, 437.55, 417.79, 436, 417.57, 433.34, 418.02, 432.46, 419.45, 432.12, 421.34, 434.01, 421.45, 435.67, 423, 437.44, 423.88, 440.76, 424.33, 445.41, 423.99, 445.96, 424.55, 452.83, 423.66, 452.83, 421.78, 448.07, 420.56, 448.51, 420.78, 452.5, 417.9, 452.83, 418.13, 443.64, 418.02, 442.75, 416.8, 441.31, 416.35, 441.54]], 'num_keypoints': 0, 'area': 111.729, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 208423, 'bbox': [415.8, 432.12, 8.75, 20.71], 'category_id': 1, 'id': 1750045}, {'segmentation': [[499.99, 435.78, 493.57, 435.35, 493.15, 427.22, 494.86, 426.36, 494.43, 423.36, 494.22, 420.8, 497, 420.16, 497.85, 419.94, 499.78, 425.08, 501.71, 432.99]], 'num_keypoints': 0, 'area': 96.66615, 'iscrowd': 0, 'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'image_id': 208423, 'bbox': [493.15, 419.94, 8.56, 15.84], 'category_id': 1, 'id': 1762584}]} Traceback (most recent call last): File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 2194, in data_generator_keypoint load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py", line 1732, in load_image_gt_keypoints image = dataset.load_image(image_id) File "C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py", line 418, in load_image image = skimage.io.imread(self.image_info[image_id]['path']) File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io_io.py", line 
48, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py", line 210, in call_plugin
    return func(*args, **kwargs)
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\_plugins\imageio_plugin.py", line 10, in imread
    return np.asarray(imageio_imread(*args, **kwargs))
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 264, in imread
    reader = read(uri, format, "i", **kwargs)
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py", line 173, in get_reader
    request = Request(uri, "r" + mode, **kwargs)
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 126, in __init__
    self._parse_uri(uri)
  File "c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py", line 278, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000208423.jpg'
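Both per-image errors above fail at the same point: skimage.io.imread cannot open the JPEGs (000000290248.jpg and 000000208423.jpg) under dataset\val2017, so the annotation JSON is being read fine but the matching val2017 image files are not on disk; the full traceback from the training call follows below. Before restarting training it can be worth cross-checking the annotation file against the image folder. The snippet below is a minimal sketch of such a check, assuming a standard COCO-style annotation JSON with an 'images' list; DATASET_DIR and ANNOTATION_FILE are placeholders taken from the paths in the logs and may need adjusting:

import json
import os

# Placeholders - adjust to the paths shown in the logs above.
DATASET_DIR = r"C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset"
ANNOTATION_FILE = os.path.join(DATASET_DIR, "annotations", "person_keypoints_val2017.json")
IMAGE_DIR = os.path.join(DATASET_DIR, "val2017")

# Collect the image file names that the annotation file expects to find.
with open(ANNOTATION_FILE, "r") as f:
    coco = json.load(f)
expected = {img["file_name"] for img in coco["images"]}

# Compare against what is actually present on disk.
present = set(os.listdir(IMAGE_DIR)) if os.path.isdir(IMAGE_DIR) else set()
missing = sorted(expected - present)

print("images referenced by annotations:", len(expected))
print("images found in", IMAGE_DIR, ":", len(present))
print("missing:", len(missing))
for name in missing[:10]:
    print("  ", name)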

FileNotFoundError Traceback (most recent call last)

in
      4 learning_rate=config.LEARNING_RATE,
      5 epochs=15,
----> 6 layers='heads')
      7 # Training - Stage 2
      8 # Finetune layers from ResNet stage 4 and up

~\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py in train(self, train_dataset, val_dataset, learning_rate, epochs, layers)
   3043 steps_per_epoch=self.config.STEPS_PER_EPOCH,
   3044 callbacks=callbacks,
-> 3045 validation_data=next(val_generator),
   3046 validation_steps=self.config.VALIDATION_STEPS,
   3047 max_queue_size=100,

~\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py in data_generator_keypoint(dataset, config, shuffle, augment, random_rois, batch_size, detection_targets)
   2192 #image_meta:image_id,image_shape,windows.active_class_ids
   2193 image, image_meta, gt_class_ids, gt_boxes, gt_masks, gt_keypoints = \
-> 2194 load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask=config.USE_MINI_MASK)
   2195
   2196 Num_keypoint = np.shape(gt_keypoints)[1]

~\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\model.py in load_image_gt_keypoints(dataset, config, image_id, augment, use_mini_mask)
   1730 """
   1731 # Load image and mask
-> 1732 image = dataset.load_image(image_id)
   1733 # mask, class_ids = dataset.load_mask(image_id)
   1734 shape = image.shape

~\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\utils.py in load_image(self, image_id)
    416 """
    417 # Load image
--> 418 image = skimage.io.imread(self.image_info[image_id]['path'])
    419 # If grayscale. Convert to RGB for consistency.
    420 if image.ndim != 3:

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\_io.py in imread(fname, as_gray, plugin, **plugin_args)
     46
     47 with file_or_url_context(fname) as fname:
---> 48 img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
     49
     50 if not hasattr(img, 'ndim'):

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\manage_plugins.py in call_plugin(kind, *args, **kwargs)
    208 (plugin, kind))
    209
--> 210 return func(*args, **kwargs)
    211
    212

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\skimage\io\_plugins\imageio_plugin.py in imread(*args, **kwargs)
      8 @wraps(imageio_imread)
      9 def imread(*args, **kwargs):
---> 10 return np.asarray(imageio_imread(*args, **kwargs))

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py in imread(uri, format, **kwargs)
    262
    263 # Get reader and read first
--> 264 reader = read(uri, format, "i", **kwargs)
    265 with reader:
    266 return reader.get_data(0)

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\functions.py in get_reader(uri, format, mode, **kwargs)
    171
    172 # Create request object
--> 173 request = Request(uri, "r" + mode, **kwargs)
    174
    175 # Get format

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py in __init__(self, uri, mode, **kwargs)
    124
    125 # Parse what was given
--> 126 self._parse_uri(uri)
    127
    128 # Set extension

c:\users\91741\anaconda3\envs\tensorflow1\lib\site-packages\imageio\core\request.py in _parse_uri(self, uri)
    276 # Reading: check that the file exists (but is allowed a dir)
    277 if not os.path.exists(fn):
--> 278 raise FileNotFoundError("No such file: '%s'" % fn)
    279 else:
    280 # Writing: check that the directory to write to does exist

FileNotFoundError: No such file: 'C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset\val2017\000000208423.jpg'
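If the check above reports missing files, the val2017 images most likely were never downloaded or extracted into dataset\val2017. They can be fetched from the official COCO download page (2017 Val images, about 1 GB, http://images.cocodataset.org/zips/val2017.zip) and unzipped so that the JPEGs land directly in dataset\val2017. A rough sketch, reusing the same DATASET_DIR placeholder as above and assuming a working internet connection:

import os
import urllib.request
import zipfile

DATASET_DIR = r"C:\Users\91741\Desktop\Keypoints-of-humanpose-with-Mask-R-CNN-master\dataset"
URL = "http://images.cocodataset.org/zips/val2017.zip"
ZIP_PATH = os.path.join(DATASET_DIR, "val2017.zip")

# Download the archive (about 1 GB) and extract it; the zip contains a
# top-level val2017/ folder, so extracting into DATASET_DIR produces
# DATASET_DIR\val2017\000000208423.jpg and so on.
if not os.path.exists(ZIP_PATH):
    urllib.request.urlretrieve(URL, ZIP_PATH)
with zipfile.ZipFile(ZIP_PATH) as zf:
    zf.extractall(DATASET_DIR)

After extraction, dataset\val2017\000000208423.jpg should exist and the data generator should get past load_image.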