nixingyang / AdaptiveL2Regularization

[ICPR 2020] Adaptive L2 Regularization in Person Re-Identification
https://ieeexplore.ieee.org/document/9412481
MIT License

Loading the Model on Google Colab #13

Closed. ElsaLuz closed this issue 3 years ago.

ElsaLuz commented 3 years ago

@nixingyang Hi! I am trying to load the model on Google Colab, but I am running into issues that I am unable to resolve.

As suggested in point 1 of #3, I have defined the model using init_model, but I am unable to call test_on_batch and load_weights.

When I tried this code snippet after init_model:

# Run one batch first so that the model is built before loading weights
_ = training_model.test_on_batch(train_generator[0])
# Load weights from the pretrained model
training_model.load_weights(pretrained_model_file_path)

I got this error:

NameError: name 'training_model' is not defined

This is expected, since I first have to call init_model in order to get training_model.

But when I tried this code:

python3 -u solution.py --dataset_name "Market1501" --backbone_model_name "ResNet50" --pretrained_model_file_path "?.h5" --output_folder_path "evaluation_only" --evaluation_only --freeze_backbone_for_N_epochs 0 --testing_size 1.0 --evaluate_testing_every_N_epochs 1

It gives me this error:

File "<ipython-input-7-44d28dd95e5d>", line 1
    python3 -u solution.py --dataset_name "Market1501" --backbone_model_name "ResNet50" --pretrained_model_file_path "/content/gdrive/Market1501_ResNet50_9502037.h5" --output_folder_path "evaluation_only" --evaluation_only --freeze_backbone_for_N_epochs 0 --testing_size 1.0 --evaluate_testing_every_N_epochs 1
                      ^
SyntaxError: invalid syntax

How do we load this model? I would be grateful for your help.

nixingyang commented 3 years ago

Hi, you should run the command python3 -u solution.py ... in a Bash shell rather than in an IPython environment; that is why it raises the syntax error. To solve this, please check this post. You could try %run ./solution.py .... All the best. Xingyang
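
For instance, in a Colab cell the shell-escape form would look like this; it starts a fresh Python process each time, so absl flags are registered only once. The .h5 path is the one from the traceback above and is only a placeholder for your own checkpoint:

!python3 -u solution.py --dataset_name "Market1501" --backbone_model_name "ResNet50" --pretrained_model_file_path "/content/gdrive/Market1501_ResNet50_9502037.h5" --output_folder_path "evaluation_only" --evaluation_only --freeze_backbone_for_N_epochs 0 --testing_size 1.0 --evaluate_testing_every_N_epochs 1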

ElsaLuz commented 3 years ago

@nixingyang Thank you so much for your response.

But how do we initialize training_model and inference_model? In order to call:

_ = training_model.test_on_batch(train_generator[0])
# Load weights from the pretrained model
training_model.load_weights(pretrained_model_file_path)

we first need training_model and inference_model, right? Something like:

training_model, inference_model, preprocess_input = init_model(
    backbone_model_name=backbone_model_name,
    freeze_backbone_for_N_epochs=freeze_backbone_for_N_epochs,
    input_shape=input_shape,
    region_num=region_num,
    attribute_name_to_label_encoder_dict=train_and_valid_attribute_name_to_label_encoder_dict,
    kernel_regularization_factor=kernel_regularization_factor,
    bias_regularization_factor=bias_regularization_factor,
    gamma_regularization_factor=gamma_regularization_factor,
    beta_regularization_factor=beta_regularization_factor,
    use_adaptive_l1_l2_regularizer=use_adaptive_l1_l2_regularizer,
    min_value_in_clipping=min_value_in_clipping,
    max_value_in_clipping=max_value_in_clipping,
    evaluate_concatenated_embedding=evaluate_concatenated_embedding)

Regards

nixingyang commented 3 years ago

Yes. You can initialize those models by calling init_model. Xingyang
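
For reference, the whole sequence would look roughly like the sketch below. The argument values mirror the FLAGS defaults printed later in this thread; input_shape, the evaluate_concatenated_embedding default, train_and_valid_attribute_name_to_label_encoder_dict, and train_generator are assumptions here and come from the dataset-loading part of solution.py's main():

# Sketch only: train_and_valid_attribute_name_to_label_encoder_dict and
# train_generator must be built beforehand, as in solution.py's main().
training_model, inference_model, preprocess_input = init_model(
    backbone_model_name="ResNet50",
    freeze_backbone_for_N_epochs=20,
    input_shape=(384, 128, 3),  # (image_height, image_width, channels); an assumption
    region_num=2,
    attribute_name_to_label_encoder_dict=train_and_valid_attribute_name_to_label_encoder_dict,
    kernel_regularization_factor=0.005,
    bias_regularization_factor=0.005,
    gamma_regularization_factor=0.005,
    beta_regularization_factor=0.005,
    use_adaptive_l1_l2_regularizer=True,
    min_value_in_clipping=0.0,
    max_value_in_clipping=1.0,
    evaluate_concatenated_embedding=True)  # default value is an assumption

# Running one batch creates the model's variables so that
# load_weights has something to restore into:
_ = training_model.test_on_batch(train_generator[0])
training_model.load_weights("/content/gdrive/Market1501_ResNet50_9502037.h5")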

ElsaLuz commented 3 years ago

I have tried, but calling init_model requires arguments. Where do the argument values come from, then?

nixingyang commented 3 years ago

You can find the default values in the FLAGS variable. Alternatively, you can override those values explicitly on the command line, e.g., python3 -u solution.py .... Xingyang
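
For instance, in a fresh runtime the registered flags and their defaults can be inspected like this (a sketch, assuming the repository root is on sys.path so that importing solution registers the flags):

from absl import flags

import solution  # importing registers all of solution.py's flags

FLAGS = flags.FLAGS
FLAGS(["solution.py"])  # parse with default values only

print(FLAGS.backbone_model_name)           # ResNet50
print(FLAGS.freeze_backbone_for_N_epochs)  # 20
print(FLAGS.kernel_regularization_factor)  # 0.005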

ElsaLuz commented 3 years ago

I have accessed all the parameters from the FLAGS variable, but what should I pass as attribute_name_to_label_encoder_dict?

nixingyang commented 3 years ago

It is better to run the script as it is and debug it line by line if you have doubts. Xingyang

ElsaLuz commented 3 years ago

When I execute %run solution.py, it gives me this error:

---------------------------------------------------------------------------
DuplicateFlagError                        Traceback (most recent call last)
/content/AdaptiveL2Regularization/solution.py in <module>()
     35 from utils.vis_utils import summarize_model, visualize_model
     36 
---> 37 flags.DEFINE_string("root_folder_path", "", "Folder path of the dataset.")
     38 flags.DEFINE_string("dataset_name", "Market1501", "Name of the dataset.")
     39 # ["Market1501", "DukeMTMC_reID", "MSMT17"]

3 frames
/usr/local/lib/python3.7/dist-packages/absl/flags/_flagvalues.py in __setitem__(self, name, flag)
    436         # module is simply being imported a subsequent time.
    437         return
--> 438       raise _exceptions.DuplicateFlagError.from_flag(name, self)
    439     short_name = flag.short_name
    440     # If a new flag overrides an old one, we need to cleanup the old flag's

DuplicateFlagError: The flag 'root_folder_path' is defined twice. First from solution.py, Second from solution.py.  Description from first occurrence: Folder path of the dataset.
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-22-43b0ff2d49d6> in <module>()
----> 1 get_ipython().magic('run solution.py')

5 frames
<decorator-gen-51> in run(self, parameter_s, runner, file_finder)

/usr/local/lib/python3.7/dist-packages/IPython/core/pylabtools.py in mpl_execfile(fname, *where, **kw)
    175         matplotlib.interactive(is_interactive)
    176         # make rendering call now, if the user tried to do it
--> 177         if plt.draw_if_interactive.called:
    178             plt.draw()
    179             plt.draw_if_interactive.called = False

AttributeError: 'function' object has no attribute 'called'

Alternatively, if I try to run it using !python solution.py, I get this error:

2021-03-24 20:10:40.167534: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Getting hyperparameters ...
Using command solution.py
? False
alsologtostderr False
augmentation_num 1
backbone_model_name ResNet50
beta_regularization_factor 0.005
bias_regularization_factor 0.005
dataset_name Market1501
epoch_num 200
evaluate_testing_every_N_epochs 10
evaluate_validation_every_N_epochs 1
evaluation_only False
freeze_backbone_for_N_epochs 20
gamma_regularization_factor 0.005
hbm_oom_exit True
help False
helpfull False
helpshort False
helpxml False
identity_num_per_batch 16
image_augmentor_name RandomErasingImageAugmentor
image_height 384
image_num_per_identity 4
image_width 128
kernel_regularization_factor 0.005
learning_rate_base 0.0002
learning_rate_drop_factor 10.0
learning_rate_end 0.0002
learning_rate_lower_bound 2e-06
learning_rate_mode default
learning_rate_start 0.0002
learning_rate_steady_epochs 30
learning_rate_warmup_epochs 10
log_dir 
logger_levels {}
logtostderr False
max_value_in_clipping 1.0
min_value_in_clipping 0.0
only_check_args False
op_conversion_fallback_to_while_loop True
output_folder_path /content/AdaptiveL2Regularization/output_2021_03_24
pdb False
pdb_post_mortem False
pretrained_model_file_path 
profile_file None
region_num 2
root_folder_path 
run_with_pdb False
run_with_profiling False
runtime_oom_exit True
save_data_to_disk False
showprefixforinfo True
stderrthreshold fatal
steps_per_epoch 200
test_random_seed 301
test_randomize_ordering_seed 
test_srcdir 
test_tmpdir /tmp/absl_testing
testing_size 1.0
use_adaptive_l1_l2_regularizer True
use_cprofile_for_profiling True
use_data_augmentation_in_evaluation False
use_data_augmentation_in_training True
use_horizontal_flipping_in_evaluation True
use_identity_balancing_in_training False
use_re_ranking False
v 0
validation_size 0.0
verbosity 0
workers 5
xml_output_file 
Recreating the output folder at /content/AdaptiveL2Regularization/output_2021_03_24/Market1501_384x128/ResNet50_16_4 ...
Loading the annotations of the Market1501 dataset ...
Traceback (most recent call last):
  File "solution.py", line 1070, in <module>
    app.run(main)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "solution.py", line 842, in main
    load_accumulated_info_of_dataset(root_folder_path=root_folder_path, dataset_name=dataset_name)
  File "/content/AdaptiveL2Regularization/datasets/__init__.py", line 36, in load_accumulated_info_of_dataset
    root_folder_path = _get_root_folder_path()
  File "/content/AdaptiveL2Regularization/datasets/__init__.py", line 19, in _get_root_folder_path
    root_folder_path = root_folder_path_list[root_folder_path_mask.index(True)]
ValueError: True is not in list

nixingyang commented 3 years ago

ElsaLuz commented 3 years ago

os.path.expanduser("~/Documents/Local Storage/Dataset"),
        "/sgn-data/MLG/nixingyang/Dataset"

In this list, both paths point to the root folder containing the datasets, right? If so, do all three datasets also need to be present in my root folder?

nixingyang commented 3 years ago

Yes. Just append your path to the list, and the script will pick it up. You only need to have the dataset that you want to evaluate on disk. Xingyang
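
Concretely, the candidate list in datasets/__init__.py would then look roughly like this (the Colab path is the one that appears in the log below; the exact surrounding code may differ slightly):

import os

root_folder_path_list = [
    os.path.expanduser("~/Documents/Local Storage/Dataset"),
    "/sgn-data/MLG/nixingyang/Dataset",
    "/content/gdrive/MyDrive/Colab Notebooks",  # appended: your own dataset root
]
# _get_root_folder_path() keeps the first entry that exists on disk;
# if none of them exists, root_folder_path_mask.index(True) raises
# the "ValueError: True is not in list" seen in the traceback above.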

ElsaLuz commented 3 years ago

Then why does it give me this assertion error?

2021-03-24 20:56:39.658331: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Getting hyperparameters ...
Using command solution.py
(... the same hyperparameter listing as in the previous log ...)
Recreating the output folder at /content/AdaptiveL2Regularization/output_2021_03_24/Market1501_384x128/ResNet50_16_4 ...
Loading the annotations of the Market1501 dataset ...
Use /content/gdrive/MyDrive/Colab Notebooks as root_folder_path ...
Traceback (most recent call last):
  File "solution.py", line 1070, in <module>
    app.run(main)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "solution.py", line 842, in main
    load_accumulated_info_of_dataset(root_folder_path=root_folder_path, dataset_name=dataset_name)
  File "/content/AdaptiveL2Regularization/datasets/__init__.py", line 47, in load_accumulated_info_of_dataset
    root_folder_path=root_folder_path)
  File "/content/AdaptiveL2Regularization/datasets/market1501.py", line 65, in load_Market1501
    image_folder_name="bounding_box_train")
  File "/content/AdaptiveL2Regularization/datasets/market1501.py", line 25, in _load_accumulated_info
    assert len(image_file_path_list) == 12936
AssertionError
nixingyang commented 3 years ago

Something is wrong with your dataset. Please debug the code line by line. Xingyang
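
A quick way to verify the dataset is to count the training images directly; market1501.py asserts that bounding_box_train contains exactly 12936 files. A sketch (the Market-1501-v15.09.15 folder name is the standard distribution layout and an assumption here):

import glob
import os

root_folder_path = "/content/gdrive/MyDrive/Colab Notebooks"  # as in the log above
image_file_path_list = glob.glob(
    os.path.join(root_folder_path, "Market-1501-v15.09.15",
                 "bounding_box_train", "*.jpg"))
print(len(image_file_path_list))  # a complete Market1501 has 12936 training images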

ElsaLuz commented 3 years ago

@nixingyang Grateful for your help! Thank you! :)