enyac-group / NeuralPower

Code for the paper: NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks
Apache License 2.0

neuralpower_paleo/sample_run.sh broken #5

Open Mwuschnig opened 4 years ago

Mwuschnig commented 4 years ago

I have an issue with the provided sample_run.sh; it seems to be broken. First, you have to create a results folder, which is not initially there. The sample_run script then creates a tmp.txt, which I try to parse with the provided parser_raw_data.py. That doesn't work, because the Paleo profiler doesn't seem to write the power and time measurements into the tmp.txt file at all. Without this information the provided MATLAB scripts in model_training don't work either.
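To double-check both problems, here is a small throwaway snippet I used (my own sketch, not part of the repo): it creates the missing results folder and counts how many lines of tmp.txt look like power/time measurements. The assumption that measurements would show up as lines containing "power" or "time" next to a number is mine.

```python
import os
import re

# My own sanity check, not part of NeuralPower.
# Assumes sample_run.sh was started from this directory and wrote tmp.txt here.

# 1. sample_run.sh does not create the results folder itself, so create it first.
os.makedirs("results", exist_ok=True)

# 2. Count the lines in tmp.txt that look like power/time measurements.
#    Assumption: a measurement line mentions "power" or "time" next to a number.
measurement_re = re.compile(r"(power|time)\s*[:=]?\s*\d", re.IGNORECASE)

with open("tmp.txt") as f:
    lines = f.readlines()

measurements = [ln for ln in lines if measurement_re.search(ln)]
print(f"{len(lines)} lines total, {len(measurements)} look like power/time measurements")

if not measurements:
    print("No power/time measurements -> parser_raw_data.py has nothing to extract")
```

On my machine the second count is zero, which matches the behaviour described above.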

My intended workflow would be: run sample_run.sh -> parse the created tmp.txt with parser_raw_data.py to create the res.txt files -> use the model_training scripts to create the coeff*.txt files -> use the coeff*.txt files with predic_runtime_power.py to get the final prediction. Is this workflow correct?

Could it be that the modified Paleo profiler doesn't work, or is there a problem with the sample_run.sh script?

The first lines of tmp.txt:

Convolution1 [10, 32, 32, 16] Filters: [3, 3, 3, 16] Pad: SAME (1, 1) Stride: 1, 1 Params: 448 Input: [10, 32, 32, 3]
BatchNorm1 [10, 32, 32, 16] Generic layer: generic_BatchNorm Input: [10, 32, 32, 16]
Convolution2 [10, 32, 32, 12] Filters: [3, 3, 16, 12] Pad: SAME (1, 1) Stride: 1, 1 Params: 1,740 Input: [10, 32, 32, 16]
Dropout1 [10, 32, 32, 12] Keep prob: 0.200000 Input: [10, 32, 32, 12]
Concat1 [10, 32, 32, 28] Input: [[10, 32, 32, 16], [10, 32, 32, 12]]
BatchNorm2 [10, 32, 32, 28] Generic layer: generic_BatchNorm Input: [10, 32, 32, 28]
Convolution3 [10, 32, 32, 12] Filters: [3, 3, 28, 12] Pad: SAME (1, 1) Stride: 1, 1 Params: 3,036 Input: [10, 32, 32, 28]
Dropout2 [10, 32, 32, 12] Keep prob: 0.200000 Input: [10, 32, 32, 12]
Concat2 [10, 32, 32, 40] Input: [[10, 32, 32, 28], [10, 32, 32, 12]]
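To rule out a formatting problem in the layer lines themselves, I split the dump with another small throwaway parser (again my own sketch, not parser_raw_data.py), assuming every record starts with a layer name like Convolution1 or BatchNorm2:

```python
import re

# My own throwaway parser, not parser_raw_data.py.
# Assumption: every record in tmp.txt starts with a layer name such as
# Convolution1, BatchNorm2, Dropout1 or Concat2.
layer_start = re.compile(r"\b(Convolution|BatchNorm|Dropout|Concat)\d+\b")

with open("tmp.txt") as f:
    text = f.read()

# Cut the dump at the start of every layer name.
starts = [m.start() for m in layer_start.finditer(text)]
chunks = [text[s:e].strip() for s, e in zip(starts, starts[1:] + [len(text)])]

for chunk in chunks:
    name = layer_start.match(chunk).group(0)
    # The first bracketed list after the name is the layer's output shape.
    shape = re.search(r"\[[\d, ]+\]", chunk)
    print(name, shape.group(0) if shape else "<no shape>")

# Every chunk is a plain layer description; there are no power or time
# entries anywhere in the file, which is why the downstream scripts fail.
```

So the layer descriptions look fine, but the power/time measurements are simply missing.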

Thanks for your help!