Closed: jiunyen-ching closed this issue 4 years ago
For fair comparison, the ground-truth masks used to identify the empty/non-empty/surface grids are copied from SSCNet.
To make it work, you should configure SSCNet and then run its download_data script. Then, modify the DEFAULT_GT path in statistics.py to point at the SSCNet repo root. Finally, run the evaluation with:
python miscellaneous.py --option criterion --logdir eval --benchmark nyu
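As a sketch, the steps above might look like the following. The SSCNET_ROOT path and the sed pattern are illustrative assumptions, not the repo's actual layout; adjust them to your checkout and verify how DEFAULT_GT is declared before editing.

```shell
# Illustrative path; point SSCNET_ROOT at your own SSCNet checkout.
SSCNET_ROOT="$HOME/repos/sscnet"

# 1. Configure SSCNet and fetch its ground-truth data (script name per the SSCNet repo):
# (cd "$SSCNET_ROOT" && ./download_data.sh)

# 2. Point DEFAULT_GT in statistics.py at the SSCNet repo root.
#    Assumes DEFAULT_GT is assigned once at the top level of the file.
if [ -f statistics.py ]; then
  sed -i "s|^DEFAULT_GT = .*|DEFAULT_GT = '$SSCNET_ROOT'|" statistics.py
fi

# 3. Run the evaluation:
# python miscellaneous.py --option criterion --logdir eval --benchmark nyu
```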
Some more tips:
Hi, I successfully ran run_training.sh and run_test.sh. After run_test.sh, I have a model_iter149999.hdf5 file sized at 4.1 GB in the eval folder. To evaluate the performance of the model, I looked into miscellaneous.py and wrote a short command:

python miscellaneous.py --option fusion --benchmark nyucad
A few questions about the other parameters:
- logdir: Should this optional param, if I were to set it, be the same as the log directory I set in run_test.sh?
- root_dir: What does it mean by benchmark targets?
- targets, target_model: What does it mean by targets to compare with each other? For target_model, should this be set to model_iter149999.hdf5?

In statistics.py, there is a def acquire_results() requiring a path to _fusionattributes.hdf5, which I seem to be missing. Are there steps I skipped before running miscellaneous.py
?

Edit: I ran analysis.sh and it required a main.py which is missing from the master copy. I edited train.py to only require --phase and output-model-path, but I was not successful.
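For reference, a minimal stand-in main.py along the lines described above might look like the sketch below. This is a hypothetical reconstruction from the two flags mentioned; the repo's real entry point likely takes more options and different phase values.

```python
import argparse

def build_parser():
    """Build a parser for the two flags analysis.sh appears to pass.

    Hypothetical sketch: flag names are taken from the issue text, not
    from the repo's actual code.
    """
    parser = argparse.ArgumentParser(description="Minimal stand-in for main.py")
    parser.add_argument("--phase", required=True,
                        help="pipeline stage to run (e.g. train or test)")
    parser.add_argument("--output-model-path", required=True,
                        help="path to the trained model, "
                             "e.g. eval/model_iter149999.hdf5")
    return parser

# Example invocation with explicit argv, so it can run without a shell:
args = build_parser().parse_args(
    ["--phase", "test", "--output-model-path", "eval/model_iter149999.hdf5"])
print(args.phase, args.output_model_path)
```

Note that argparse converts `--output-model-path` to the attribute name `output_model_path`, so downstream code should read `args.output_model_path`.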