tensorflow / models

Models and examples built with TensorFlow

tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/Obj_det.pbtxt : The system cannot find the file specified. #6595

Closed Pices-26 closed 4 years ago

Pices-26 commented 5 years ago

A few things before the start: yes, protobuf was compiled; I ran the basic object_detection model test with no problem; input_reader_pb2 is there along with every other file; and I have made a live camera feed with this object_detection setup. When I try to run model_builder_test in object_detection\builders I get this issue:

D:\python_ver\python364\Lib\site-packages\object_detection\builders>python model_builder_test.py
Traceback (most recent call last):
  File "model_builder_test.py", line 23, in <module>
    from object_detection.builders import model_builder
  File "D:\python_ver\python364\lib\site-packages\object_detection\builders\model_builder.py", line 20, in <module>
    from object_detection.builders import anchor_generator_builder
  File "D:\python_ver\python364\lib\site-packages\object_detection\builders\anchor_generator_builder.py", line 21, in <module>
    from object_detection.protos import anchor_generator_pb2
ImportError: cannot import name 'anchor_generator_pb2'

This suggests that protobuf wasn't set up entirely correctly, which might be the cause of my main issue down below. But everything was done, including setting the path and compiling the protos. I've been on this for two whole days now. Maybe someone knows the answer. Thank you.

System information

TF version: 1.13.1
Python version: 3.6.4
protoc version: 3.4

Source code / logs

D:\python_ver\python364\Lib\site-packages\tensorflow\models-master\research\object_detection>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
Traceback (most recent call last):
  File "train.py", line 49, in <module>
    from object_detection.builders import dataset_builder
  File "D:\python_ver\python364\lib\site-packages\object_detection\builders\dataset_builder.py", line 27, in <module>
    from object_detection.data_decoders import tf_example_decoder
  File "D:\python_ver\python364\lib\site-packages\object_detection\data_decoders\tf_example_decoder.py", line 24, in <module>
    from object_detection.protos import input_reader_pb2
ImportError: cannot import name 'input_reader_pb2'

Pices-26 commented 5 years ago

I've used this command and got no errors. It was run from the research folder:

C:\Users\User\Downloads\protoc\bin\protoc object_detection/protos/*.proto --python_out=.
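
If it helps, a quick way to double-check that the compilation actually produced the Python modules is to list them from the research folder (a minimal sketch; it only assumes the standard object_detection/protos layout):

import glob

# every .proto under object_detection/protos should now have a matching *_pb2.py
for path in sorted(glob.glob('object_detection/protos/*_pb2.py')):
    print(path)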

merts1 commented 5 years ago

cd research folder and run this

protoc --python_out=. .\object_detection\protos\input_reader.proto
python setup.py build
python setup.py install
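
After the install step, it can also be worth checking which copy of the package Python actually picks up, since a stale egg in site-packages can shadow the freshly compiled protos (a minimal check, not specific to any particular setup):

import object_detection

# shows whether the source checkout or an installed egg in site-packages is being used
print(object_detection.__file__)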

Pices-26 commented 5 years ago

cd research folder and run this

protoc --python_out=. .\object_detection\protos\input_reader.proto
python setup.py build
python setup.py install

this is what I'm getting now

Traceback (most recent call last):
  File "model_builder.py", line 35, in <module>
    from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
  File "D:\python_ver\python364\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\models\faster_rcnn_inception_resnet_v2_feature_extractor.py", line 28, in <module>
    from nets import inception_resnet_v2
ModuleNotFoundError: No module named 'nets'

merts1 commented 5 years ago

cd research folder and run this

protoc --python_out=. .\object_detection\protos\input_reader.proto
python setup.py build
python setup.py install

this is what I'm getting now

Traceback (most recent call last):
  File "model_builder.py", line 35, in <module>
    from object_detection.models import faster_rcnn_inception_resnet_v2_feature_extractor as frcnn_inc_res
  File "D:\python_ver\python364\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\models\faster_rcnn_inception_resnet_v2_feature_extractor.py", line 28, in <module>
    from nets import inception_resnet_v2
ModuleNotFoundError: No module named 'nets'

You didn't set the paths correctly. Do the following:

set PYTHONPATH=%PYTHONPATH%;[PATH TO RESEARCH FOLDER]
set PYTHONPATH=%PYTHONPATH%;[PATH TO SLIM FOLDER INSIDE THE RESEARCH FOLDER]
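
A quick way to confirm that both entries were picked up in the same shell is a short Python check (a minimal sketch; it assumes the research and slim folders are now on PYTHONPATH):

import importlib

# both imports should succeed once the research and slim folders are on PYTHONPATH
for module_name in ('object_detection', 'nets'):
    importlib.import_module(module_name)
    print(module_name, 'is importable')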

merts1 commented 5 years ago

did it work?

Pices-26 commented 5 years ago

My model test has worked, and my live object detection works too, but that's a preset model. Tomorrow I will train my own model and see how it works; I'll update you. Other than that, thank you so much. Do I have to set this path every time I work in a different directory? Also, I'm using PyCharm combined with a plain console. Do you think that's a good idea, or should I move over to conda, or just to Linux altogether?

merts1 commented 5 years ago

Yes, you have to set it every time, or you can simply move the nets folder to the current directory, but setting PYTHONPATH is much more useful in many ways. I have never used PyCharm for object detection; I'm using conda and it's very useful IMO, but just stay with whichever one you're comfortable with.

Pices-26 commented 5 years ago

I've used this line:

python legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

and got this

Traceback (most recent call last):
  File "legacy/train.py", line 184, in <module>
    tf.app.run()
  File "D:\python_ver\python364\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "D:\python_ver\python364\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "legacy/train.py", line 180, in main
    graph_hook_fn=graph_rewriter_fn)
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\legacy\trainer.py", line 280, in train
    train_config.prefetch_queue_capacity, data_augmentation_options)
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\legacy\trainer.py", line 59, in create_input_queue
    tensor_dict = create_tensor_dict_fn()
  File "legacy/train.py", line 121, in get_next
    dataset_builder.build(config)).get_next()
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\builders\dataset_builder.py", line 124, in build
    num_additional_channels=input_reader_config.num_additional_channels)
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\data_decoders\tf_example_decoder.py", line 307, in __init__
    default_value=''),
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\data_decoders\tf_example_decoder.py", line 59, in __init__
    label_map_proto_file, use_display_name=False)
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\utils\label_map_util.py", line 164, in get_label_map_dict
    label_map = load_labelmap(label_map_path)
  File "D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\utils\label_map_util.py", line 133, in load_labelmap
    label_map_string = fid.read()
  File "D:\python_ver\python364\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 125, in read
    self._preread_check()
  File "D:\python_ver\python364\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 85, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "D:\python_ver\python364\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/Obj_det.pbtxt : The system cannot find the file specified. ; No such file or directory

I know for a fact that this file is inside data, and the data folder is in the directory I'm running the command from. It looks like my path is set incorrectly. Could it be something to do with where I pointed set PYTHONPATH=%PYTHONPATH%;[PATH TO RESEARCH FOLDER] and set PYTHONPATH=%PYTHONPATH%;[PATH TO SLIM FOLDER INSIDE THE RESEARCH FOLDER]?

Pices-26 commented 5 years ago

This is my path in the config file:

label_map_path: "data/Obj_det.pbtxt"
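
A relative label_map_path like this is resolved against whatever directory train.py is launched from, so a small diagnostic run from that same directory shows what the process actually sees (a minimal sketch, assuming the data folder sits in the launch directory):

import os

# the relative label_map_path above is resolved against this working directory
print('cwd:', os.getcwd())
print('exists:', os.path.exists('data/Obj_det.pbtxt'))
print('data contains:', os.listdir('data'))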

Pices-26 commented 5 years ago

How much disk space would I need for, say, 150 photos?

I've got 140 GB on D and 40 GB on C. Some people say it might be linked to memory.

mihuzz commented 5 years ago

Hi! Please help; I can't get model_main.py to run successfully in Google Colab. I checked with !ls and model_main.py is there.

!export PYTHONPATH=$PYTHONPATH:/content/gdrive/My Drive/colab_data/models/research/object_detection:/content/gdrive/My Drive/colab_data/models/research/object_detection/slim
python content/gdrive/My Drive/colab_data/models/research/object_detection/model_main.py \
  --pipeline_config_path=object_detection/my_data/pipeline.config \
  --model_dir=object_detection/my_data/output \
  --num_train_steps=1000 \
  --alsologtostderr

/bin/bash: /content/gdrive/My: No such file or directory
/bin/bash: /content/gdrive/My: No such file or directory
/bin/bash: line 0: export: `content/gdrive/My': not a valid identifier
/bin/bash: line 0: export: `Drive/colab_data/models/research/object_detection/model_main.py': not a valid identifier
/bin/bash: line 0: export: `--pipeline_config_path=object_detection/my_data/pipeline.config': not a valid identifier
/bin/bash: line 0: export: `--model_dir=object_detection/my_data/output': not a valid identifier
/bin/bash: line 0: export: `--num_train_steps=1000': not a valid identifier
/bin/bash: line 0: export: `--alsologtostderr': not a valid identifier
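
The spaces in "My Drive" are splitting the path, so bash treats each fragment as a separate word. One workaround that avoids shell quoting entirely is to set the variable from Python before calling the script (a minimal sketch; the Drive paths are copied from the command above and may need adjusting):

import os

research = '/content/gdrive/My Drive/colab_data/models/research'  # path taken from the command above
# environment changes made here are inherited by later !-commands in the same notebook
os.environ['PYTHONPATH'] = os.pathsep.join(
    p for p in (os.environ.get('PYTHONPATH', ''), research, research + '/slim') if p
)
print(os.environ['PYTHONPATH'])

The model_main.py path still needs to be wrapped in quotes when it is passed to !python, since it contains a space as well.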

merts1 commented 5 years ago

Setting PYTHONPATH does not change your current working directory; it just makes the libraries in those locations usable without moving them into the working directory. The reason you're getting this error is probably that you haven't edited the paths in the config file correctly relative to the current directory. I can't know how you configured the folders, but I can say it has nothing to do with a memory issue.

mihuzz commented 5 years ago

Setting PYTHONPATH does not change your current working directory; it just makes the libraries in those locations usable without moving them into the working directory. The reason you're getting this error is probably that you haven't edited the paths in the config file correctly relative to the current directory. I can't know how you configured the folders, but I can say it has nothing to do with a memory issue.

fine_tune_checkpoint: "/content/gdrive/My Drive/colab_data/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
from_detection_checkpoint: true
num_steps: 20000
}

train_input_reader {
  label_map_path: "/content/gdrive/My Drive/colab_data/models/research/object_detection/my_data/32classes_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/content/gdrive/My Drive/colab_data/models/research/object_detection/my_data/train_data.record"
  }
}

eval_config {
  num_examples: 4280
  max_evals: 10
  use_moving_averages: false
}

eval_input_reader {
  label_map_path: "/content/gdrive/My Drive/colab_data/models/research/object_detection/my_data/32classes_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "/content/gdrive/My Drive/colab_data/models/research/object_detection/my_data/test_data.record"
  }
}

Pices-26 commented 5 years ago

My paths are set correctly. The file is there and its contents are what they are meant to be.

train_config: {
  batch_size: 1
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate { initial_learning_rate: 0.004 decay_steps: 800720 decay_factor: 0.95 }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options { random_horizontal_flip { } }
  data_augmentation_options { ssd_random_crop { } }
}

train_input_reader: {
  tf_record_input_reader { input_path: "data/train.record" }
  label_map_path: "data/Obj_det.pbtxt"
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1100
}

eval_input_reader: {
  tf_record_input_reader { input_path: "data/test.record" }
  label_map_path: "data/Obj_det.pbtxt"
  shuffle: false
  num_readers: 1
}

merts1 commented 5 years ago

Can you change the name of the Obj_det file to maplabel, and the path to data/maplabel.pbtxt, for both rows in the config file?

Pices-26 commented 5 years ago

same error at the end tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/maplabel.pbtxt : The system cannot find the file specified. ; No such file or directory

Pices-26 commented 5 years ago

Would I have to update?

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

WARNING:tensorflow:From D:\python_ver\python364\lib\site-packages\tensorflow\python\platform\app.py:125: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
WARNING:tensorflow:From D:\python_ver\python364\Lib\site-packages\tensorflow\models\research\object_detection\legacy\trainer.py:266: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
WARNING:tensorflow:From D:\python_ver\python364\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:

Gerahal commented 5 years ago

I have the exact same problem, any solutions yet?

merts1 commented 5 years ago

Can you upload your config file?

Gerahal commented 5 years ago

Here is my config file:

# Faster R-CNN with Inception v2, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  faster_rcnn {
    num_classes: 2
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_v2'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}

train_config: {
  batch_size: 32
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 900000
            learning_rate: .00002
          }
          schedule {
            step: 1200000
            learning_rate: .000002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "data\train.record-?????-of-00010"
  }
  label_map_path: "data\objectdetection.pbtxt"
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1101
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "data\test.record-?????-of-00010"
  }
  label_map_path: "data\objectdetection.pbtxt"
  shuffle: false
  num_readers: 1
}

merts1 commented 5 years ago

Are your record files really called like that?

Gerahal commented 5 years ago

The problem was in fact the record file names, but even after renaming them "train.record" and "test.record" (which are the real names) it didn't work, so I tried renaming them " train. " and " test. " and it magically worked.

Pices-26 commented 5 years ago

Here is the config file:

# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
# should be configured.

model {
  ssd {
    num_classes: 1
    box_coder { faster_rcnn_box_coder { y_scale: 10.0 x_scale: 10.0 height_scale: 5.0 width_scale: 5.0 } }
    matcher { argmax_matcher { matched_threshold: 0.5 unmatched_threshold: 0.5 ignore_thresholds: false negatives_lower_than_unmatched: true force_match_for_each_row: true } }
    similarity_calculator { iou_similarity { } }
    anchor_generator { ssd_anchor_generator { num_layers: 6 min_scale: 0.2 max_scale: 0.95 aspect_ratios: 1.0 aspect_ratios: 2.0 aspect_ratios: 0.5 aspect_ratios: 3.0 aspect_ratios: 0.3333 } }
    image_resizer { fixed_shape_resizer { height: 300 width: 300 } }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0 max_depth: 0 num_layers_before_predictor: 0 use_dropout: false dropout_keep_probability: 0.8 kernel_size: 1 box_code_size: 4 apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer { l2_regularizer { weight: 0.00004 } }
          initializer { truncated_normal_initializer { stddev: 0.03 mean: 0.0 } }
          batch_norm { train: true, scale: true, center: true, decay: 0.9997, epsilon: 0.001, }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer { l2_regularizer { weight: 0.00004 } }
        initializer { truncated_normal_initializer { stddev: 0.03 mean: 0.0 } }
        batch_norm { train: true, scale: true, center: true, decay: 0.9997, epsilon: 0.001, }
      }
    }
    loss {
      classification_loss { weighted_sigmoid { } }
      localization_loss { weighted_smooth_l1 { } }
      hard_example_miner { num_hard_examples: 3000 iou_threshold: 0.99 loss_type: CLASSIFICATION max_negatives_per_positive: 3 min_negatives_per_image: 0 }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression { score_threshold: 1e-8 iou_threshold: 0.6 max_detections_per_class: 100 max_total_detections: 100 }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 1
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate { initial_learning_rate: 0.004 decay_steps: 800720 decay_factor: 0.95 }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_2017_11_17/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options { random_horizontal_flip { } }
  data_augmentation_options { ssd_random_crop { } }
}

train_input_reader: {
  tf_record_input_reader { input_path: "data/train.record" }
  label_map_path: "data/maplabel.pbtxt"
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 1100
}

eval_input_reader: {
  tf_record_input_reader { input_path: "data/test.record" }
  label_map_path: "data/maplabel.pbtxt"
  shuffle: false
  num_readers: 1
}

vishal2612200 commented 5 years ago

This problem occurs when you have not set the correct path; use set PYTHONPATH='path name'.

Chenfeid commented 5 years ago

This problem occurs when you have not set the correct path; use set PYTHONPATH='path name'.

What do you mean by PYTHONPATH='path name'? I am still having the same problem, which is "no such process".

Chenfeid commented 5 years ago

same error at the end tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/maplabel.pbtxt : The system cannot find the file specified. ; No such file or directory

Have you fixed the error? It is so annoying; I'm stuck at the last step.

muhammadaamir123 commented 4 years ago

Have you fixed this issue?


Mayank-savaliya commented 4 years ago

same error at the end tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/maplabel.pbtxt : The system cannot find the file specified. ; No such file or directory

Did you check that the extension of your maplabel.pbtxt is really .pbtxt? I ran into exactly the same error because I made the file in Notepad and saved it as labelmap.pbtxt while it was still a .txt file.
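
Windows Explorer hides known extensions by default, so a file shown as maplabel.pbtxt can really be maplabel.pbtxt.txt. One way to see the real names is to list the folder from Python (a minimal sketch, assuming the label map lives in the data folder):

import os

# repr() makes a hidden trailing ".txt" extension obvious, e.g. 'maplabel.pbtxt.txt'
for name in os.listdir('data'):
    print(repr(name))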

timothylimyl commented 4 years ago

My goodness, I have the exact same error and have been trying to deal with it for the past 10 hours. I'm 100% sure the directory in the config file is correct, and 100% sure labelmap.pbtxt is a PBTXT file, but the "No such file or directory" error is still there for the labelmap.

MlvPrasadOfficial commented 4 years ago

I am having the same error:

tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: training\label_map.pbtxt : The system cannot find the file specified. ; No such file or directory

I am using it for object detection.

python train.py --logtostderr --train_dir=training\ --pipeline_config_path=training\faster_rcnn_inception_v2_coco.config

Error log (TF version 1.14.0):

(tfod) C:\Users\passionHEART\Desktop\tfod\models\research>python train.py --logtostderr --train_dir=training\ --pipeline_config_path=training\faster_rcnn_inception_v2_coco.config
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\nets\inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.

WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\nets\mobilenet\mobilenet.py:389: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.

WARNING:tensorflow:From train.py:55: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From train.py:55: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.

WARNING:tensorflow:From train.py:184: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.

WARNING:tensorflow:From C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\absl\app.py:250: main (from main) is deprecated and will be removed in a future version. Instructions for updating: Use object_detection/model_main.py. W0422 16:24:07.242830 11904 deprecation.py:323] From C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\absl\app.py:250: main (from main) is deprecated and will be removed in a future version. Instructions for updating: Use object_detection/model_main.py. WARNING:tensorflow:From train.py:90: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

W0422 16:24:07.247827 11904 deprecation_wrapper.py:119] From train.py:90: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.

WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\utils\config_util.py:94: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0422 16:24:07.256824 11904 deprecation_wrapper.py:119] From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\utils\config_util.py:94: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

WARNING:tensorflow:From train.py:95: The name tf.gfile.Copy is deprecated. Please use tf.io.gfile.copy instead.

W0422 16:24:07.269821 11904 deprecation_wrapper.py:119] From train.py:95: The name tf.gfile.Copy is deprecated. Please use tf.io.gfile.copy instead.

WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\anchor_generators\grid_anchor_generator.py:59: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. W0422 16:24:07.340774 11904 deprecation.py:323] From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\anchor_generators\grid_anchor_generator.py:59: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\legacy\trainer.py:266: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Please switch to tf.train.create_global_step W0422 16:24:07.354768 11904 deprecation.py:323] From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\legacy\trainer.py:266: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version. Instructions for updating: Please switch to tf.train.create_global_step WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py:167: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.

W0422 16:24:07.368759 11904 deprecation_wrapper.py:119] From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py:167: The name tf.FixedLenFeature is deprecated. Please use tf.io.FixedLenFeature instead.

WARNING:tensorflow:From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py:182: The name tf.VarLenFeature is deprecated. Please use tf.io.VarLenFeature instead.

W0422 16:24:07.373755 11904 deprecation_wrapper.py:119] From C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py:182: The name tf.VarLenFeature is deprecated. Please use tf.io.VarLenFeature instead.

Traceback (most recent call last):
  File "train.py", line 184, in <module>
    tf.app.run()
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
    return func(*args, **kwargs)
  File "train.py", line 180, in main
    graph_hook_fn=graph_rewriter_fn)
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\legacy\trainer.py", line 280, in train
    train_config.prefetch_queue_capacity, data_augmentation_options)
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\legacy\trainer.py", line 59, in create_input_queue
    tensor_dict = create_tensor_dict_fn()
  File "train.py", line 121, in get_next
    dataset_builder.build(config)).get_next()
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\builders\dataset_builder.py", line 123, in build
    num_additional_channels=input_reader_config.num_additional_channels)
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py", line 297, in __init__
    default_value=''),
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\data_decoders\tf_example_decoder.py", line 59, in __init__
    label_map_proto_file, use_display_name=False)
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\utils\label_map_util.py", line 164, in get_label_map_dict
    label_map = load_labelmap(label_map_path)
  File "C:\Users\passionHEART\Desktop\tfod\models\research\object_detection\utils\label_map_util.py", line 133, in load_labelmap
    label_map_string = fid.read()
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 122, in read
    self._preread_check()
  File "C:\Users\passionHEART.conda\envs\tfod\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: training\label_map.pbtxt : The system cannot find the file specified. ; No such file or directory

pedrosouzax commented 4 years ago

@MlvPrasadOfficial, @Mayank-savaliya, @timothylimyl

I had the same error and just resolved it. My problem was that the actual file format of "label_map.pbtxt" was ".txt", while the format TensorFlow needs is ".pbtxt".

So what I did was open label_map.pbtxt with Notepad++ and save it with "filename: label_map.pbtxt" and "Save as type: All types".

That worked for me. Let me know if this helped.
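
If the file did end up with a hidden .txt extension, renaming it from Python avoids fighting with Explorer (a minimal sketch; the training/label_map.pbtxt.txt path is only an illustrative guess and should be adjusted to the actual location):

import os

wrong = 'training/label_map.pbtxt.txt'  # hypothetical path with the hidden .txt extension
right = 'training/label_map.pbtxt'
if os.path.exists(wrong):
    os.rename(wrong, right)
print(os.listdir('training'))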

jyosthna-alt commented 4 years ago

category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

NotFoundError                             Traceback (most recent call last)
in
----> 1 category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

C:\tensorflow\models\research\object_detection\utils\label_map_util.py in create_category_index_from_labelmap(label_map_path, use_display_name)
    241     {1: {'id': 1, 'name': 'dog'}, 2: {'id': 2, 'name': 'cat'}, ...}
    242   """
--> 243   categories = create_categories_from_labelmap(label_map_path, use_display_name)
    244   return create_category_index(categories)
    245

C:\tensorflow\models\research\object_detection\utils\label_map_util.py in create_categories_from_labelmap(label_map_path, use_display_name)
    221     categories: a list of dictionaries representing all possible categories.
    222   """
    223   label_map = load_labelmap(label_map_path)
    224   max_num_classes = max(item.id for item in label_map.item)
    225   return convert_label_map_to_categories(label_map, max_num_classes,

C:\tensorflow\models\research\object_detection\utils\label_map_util.py in load_labelmap(path)
    137   """
    138   with tf.io.gfile.GFile(path, 'r') as fid:
--> 139     label_map_string = fid.read()
    140   label_map = string_int_label_map_pb2.StringIntLabelMap()
    141   try:

c:\users\jyost\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\lib\io\file_io.py in read(self, n)
    114       string if in string (regular) mode.
    115     """
--> 116     self._preread_check()
    117     if n == -1:
    118       length = self.size() - self.tell()

c:\users\jyost\appdata\local\programs\python\python37\lib\site-packages\tensorflow\python\lib\io\file_io.py in _preread_check(self)
     77           "File isn't open for reading")
     78     self._read_buf = _pywrap_file_io.BufferedInputStream(
---> 79         self.__name, 1024 * 512)
     80
     81   def _prewrite_check(self):

NotFoundError: NewRandomAccessFile failed to Create/Open: data\mscoco_label_map.pbtxt : The system cannot find the path specified. ; No such process

I am getting this error; could you please help me? It is a live object detection TensorFlow model.

hsekki commented 4 years ago

I have the same problem, have you solved it?

compat.as_bytes(self.__name), 1024 * 512) tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: \training\labelmap.pbtxt : The system cannot find the file specified. ; No such file or directory

singhvbv commented 4 years ago

same error at the end tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: data/maplabel.pbtxt : The system cannot find the file specified. ; No such file or directory

Did you check that the extension of your maplabel.pbtxt is really .pbtxt? I ran into exactly the same error because I made the file in Notepad and saved it as labelmap.pbtxt while it was still a .txt file.

Man... what a silly issue I was having... Thanks for saving me.

Adexhabtamu commented 3 years ago

File "eval.py", line 43, in eval vocab_processor=tf.contrib.learn.preprocessing.VocabularyProcessor.restore(vocab_path) File "C:\Users\PC\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\preprocessing\text.py", line 246, in restore return pickle.loads(f.read()) File "C:\Users\PC\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 125, in read self._preread_check() File "C:\Users\PC\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 85, in _preread_check compat.as_bytes(self.__name), 1024 * 512, status) File "C:\Users\PC\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in exit c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.NotFoundError: NewRandomAccessFile failed to Create/Open: ..\vocab : The system cannot find the file specified. ; No such file or directory

Anyone, please help me.