During training, log_dir stores all the log info, including checkpoints, TensorBoard summaries, etc.
During testing, the checkpoints (pre-trained model parameters) are restored from this directory. A separate TensorBoard visualization is also written under the sub-dir "model_test"; this includes image/segmentation visualizations for the convenience of debugging.
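To make the restore step concrete: a TF 1.x-style checkpoint directory contains a plain-text state file named `checkpoint` whose first line records the latest checkpoint prefix, and this is what `tf.train.latest_checkpoint(log_dir)` reads before the session restore. Below is a minimal sketch of that lookup in plain Python (the helper name `latest_checkpoint` is illustrative, not part of this repo) — if it returns None, scripts like result.py have nothing to restore and will fail, which is why log_dir must contain the pre-trained checkpoint files:

```python
import os

def latest_checkpoint(log_dir):
    """Illustrative re-implementation of the lookup tf.train.latest_checkpoint
    performs: read the 'checkpoint' state file and return the prefix of the
    most recent checkpoint, or None if no checkpoint exists."""
    state_file = os.path.join(log_dir, "checkpoint")
    if not os.path.isfile(state_file):
        return None  # nothing to restore -> a session restore would fail here
    with open(state_file) as f:
        first_line = f.readline().strip()
    # Expected format: model_checkpoint_path: "model.ckpt-12345"
    prefix = first_line.split(":", 1)[1].strip().strip('"')
    return os.path.join(log_dir, prefix)
```

So rather than deleting the restore, point log_dir at the directory holding the downloaded pre-trained checkpoint files (the `checkpoint` state file plus the `.data`/`.index` shards it references).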
I realize this reply comes after a huge delay... hope it still helps.
Hey,
I'm trying to understand whether log_dir is needed when using a pre-trained model. All of the scripts (including result.py, which seems to be just the "run the model" script) try to load a checkpoint from that directory and fail if none is present. Is that required? Should a log_dir be included with the pre-trained model? Can I just delete that session restore?
Or do I really need to run my own training on the dataset with the model?
Thanks, David