neurosim / DNN_NeuroSim_V1.3

Benchmark framework of compute-in-memory based accelerators for deep neural network (inference engine focused)

Long inference time but short test time (in train.py) #51

Open · duanhx1037 opened this issue 7 months ago

duanhx1037 commented 7 months ago

I ran inference.py with VGG8 and CIFAR-10. It takes about 5 minutes to finish, which is rather slow, presumably because of the hardware evaluation. But when I run train.py with inference=1, the testing phase is very fast. I would expect the testing part of train.py to do the same thing as inference.py, so why is there such a large difference in runtime?

Or, since this repo is recommended for inference only, is the train.py in this repo perhaps incorrect in some way?

Looking forward to any reply. Thanks.

shieldforever commented 7 months ago

I think that's because when you run inference.py, some hook functions are executed to collect each layer's output for subsequent analysis by main.cpp. I suggest you read more of the code.
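
For reference, a minimal sketch of the kind of forward hook being described (the names `save_output_hook` and `layer_outputs.txt` are illustrative, not the repo's exact identifiers):

```python
import torch
import torch.nn as nn

def save_output_hook(module, inputs, output):
    # Record the layer's output; the real code dumps activation traces
    # to files that the NeuroSim main.cpp later parses.
    with open('layer_outputs.txt', 'a') as f:
        f.write(f'{module.__class__.__name__}: {tuple(output.shape)}\n')

# Toy model standing in for VGG8 on CIFAR-10-sized inputs.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 30 * 30, 10),
)
handles = [m.register_forward_hook(save_output_hook)
           for m in model.modules()
           if isinstance(m, (nn.Conv2d, nn.Linear))]

_ = model(torch.randn(1, 3, 32, 32))  # each hooked layer fires on every forward pass
```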

duanhx1037 commented 7 months ago

Thanks. But I think the hook functions only run on the first batch, and after I removed the hooks, the runtime of inference.py seemed unchanged. May I ask how long inference.py takes for you?
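
To illustrate the point, a sketch of detaching the hooks after the first batch, reusing `model` and `handles` (and the imports) from the sketch above; the loop and batch shapes are stand-ins, not the repo's code:

```python
with torch.no_grad():
    for batch_idx in range(10):
        x = torch.randn(64, 3, 32, 32)  # stand-in for a CIFAR-10 batch
        _ = model(x)
        if batch_idx == 0:
            for h in handles:
                h.remove()  # hooks stop firing from batch 1 onward
```

If the hooks were the bottleneck, the remaining batches should then run at the same speed as the test loop in train.py.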

shieldforever commented 7 months ago

What about the last line of inference.py, `call(["/bin/bash", './layer_record_'+str(args.model)+'/trace_command.sh'])`? That's the command that launches NeuroSim to run the hardware simulation. Did you account for the time that takes?
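
A quick way to check would be to time that call separately, for example (a sketch: `layer_record_VGG8` is the path the expression above expands to for the VGG8 model, and the timing scaffolding is not in the repo):

```python
import time
from subprocess import call

start = time.time()
# Launch the NeuroSim C++ simulation the same way inference.py does.
call(["/bin/bash", './layer_record_VGG8/trace_command.sh'])
print(f'NeuroSim simulation took {time.time() - start:.1f} s')
```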

duanhx1037 commented 7 months ago

Yes, I removed it too... it is still very slow.