NERSC / hep_cnn_benchmark

TensorFlow Benchmark for the HEP Deep Learning CNN Model

/scripts/networks/binary_classifier_tf.py is missing #4

Open jbalma opened 6 years ago

jbalma commented 6 years ago

Trying to run hep_classifier_tf_train.py with the --dummy_data option and hitting a snag with a missing import of networks.binary_classifier_tf.

I start off in the hep_cnn_benchmark directory, copy everything to the system's distributed filesystem, and then set the Python paths to ensure I have the environment set up properly. I notice that although hep_classifier_tf_train.py imports networks.binary_classifier_tf, there doesn't appear to be anything other than utils.py in the directory /scripts/networks.

Maybe I'm missing something. Any guidance is appreciated.

Here are the steps I'm following to set up the run and launch:

export SLURM_WORKING_DIR=/lus/scratch/jbalma/temp/junk_hepcnn_run
mkdir ${SLURM_WORKING_DIR}
cd ../
echo ${PWD}
cp -r ./* ${SLURM_WORKING_DIR}/
cd ${SLURM_WORKING_DIR}/

export PYTHONPATH="$PYTHONPATH:${PWD}:${PWD}/scripts:${PWD}/slurm_tf_helper"

srun -N ${NODES} -n ${NP} -c ${OMP_NUM_THREADS} -C P100 --gres=gpu -u \
    python scripts/hep_classifier_tf_train.py --config=configs/daint_gpu_224.json --num_tasks=${NP} --dummy_data

Traceback (most recent call last):
  File "scripts/hep_classifier_tf_train.py", line 76, in <module>
    import networks.binary_classifier_tf as bc
ImportError: No module named networks.binary_classifier_tf
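For reference, this is the quick diagnostic I use to confirm what the interpreter actually sees — a minimal, repo-agnostic sketch that just prints the search path and reports whether a module resolves:

```python
import importlib.util
import sys

def check_module(name):
    """Report whether a module can be found on the current sys.path."""
    try:
        spec = importlib.util.find_spec(name)
    except ImportError:
        # Raised when a parent package (e.g. "networks") is itself missing.
        spec = None
    if spec is None:
        print("NOT FOUND: %s" % name)
    else:
        print("found %s at %s" % (name, spec.origin))
    return spec is not None

# Show the search path the interpreter actually uses.
for p in sys.path:
    print(p)

check_module("json")  # stdlib sanity check
check_module("networks.binary_classifier_tf")  # the failing import
```

Running this under the same srun environment shows whether the PYTHONPATH entries made it through, independently of the benchmark script itself.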

azrael417 commented 6 years ago

Hello Jacob,

I think the dummy data option needs to be reworked; it is not compatible with the latest changes. Do you have access to the real data?
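In the meantime, something like the following could stand in for real inputs while --dummy_data is broken — a minimal sketch, assuming the network consumes NHWC image batches with binary labels; the shapes and the helper name here are illustrative, not the actual HEP-CNN dimensions or API:

```python
import numpy as np

def make_dummy_batch(batch_size=32, height=224, width=224, channels=3, seed=0):
    """Generate a random (images, labels) pair shaped like a binary-classification batch.

    Hypothetical helper for smoke-testing the training loop; shapes are placeholders.
    """
    rng = np.random.RandomState(seed)
    images = rng.rand(batch_size, height, width, channels).astype(np.float32)
    labels = rng.randint(0, 2, size=(batch_size,)).astype(np.int32)
    return images, labels

images, labels = make_dummy_batch()
print(images.shape, labels.shape)
```

Feeding batches like this through the usual feed_dict or input pipeline would at least exercise the graph without the real dataset.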

Best,
Thorsten

jbalma commented 6 years ago

No. Where would I find it? Is there a public version of the dataset somewhere?