This PR sets up a way to run inference on each model.
For each model in [vision, bert, transformer, gnmt], I extracted the logic common to training and inference into a standalone function called `setup`, and added an `eval_wrapper` for each model to handle inference.
Both `eval_wrapper` and `train_wrapper` now call `setup` to do their shared initialization.
The core inference logic is encapsulated in `utils.measure`, which takes the forward pass as a `func` argument, so the measurement logic does not have to be rewritten for each model.
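A minimal sketch of what such a `measure` helper might look like, assuming a warmup-then-time loop; the real `utils.measure` in this PR may take different parameters:

```python
import time

def measure(func, num_iters=5, warmup=2):
    """Time repeated calls to `func` (the model's forward pass).

    Hypothetical stand-in for utils.measure: the forward pass is passed in
    as an argument, so this timing logic is model-agnostic.
    """
    for _ in range(warmup):
        func()  # warmup iterations are not timed
    latencies = []
    for _ in range(num_iters):
        start = time.perf_counter()
        func()
        latencies.append(time.perf_counter() - start)
    return sum(latencies) / len(latencies)  # mean latency in seconds
```

Each model then only supplies its own forward pass, e.g. `measure(lambda: model(batch))`.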
This PR also includes a series of changes to output the experiment data in a structured manner, e.g. as a JSON file, to ease collecting the data into the Google Sheet later. This is done exclusively through a `DataManager` object, which is a member of many `SyncInfo` classes and writes the data to a file with thread/process-safe control.
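As a rough sketch of the idea, a `DataManager` could guard its JSON writes with a lock so concurrent writers do not interleave; the class name matches the PR, but the method name, lock type, and file format details here are assumptions:

```python
import json
import threading

class DataManager:
    """Hypothetical sketch: serialize experiment records to a JSON file
    under a lock. The real class may use a multiprocessing lock for
    cross-process safety rather than a threading.Lock."""

    def __init__(self, path):
        self.path = path
        self._lock = threading.Lock()

    def write_kv(self, key, value):
        # Read-modify-write the JSON file atomically w.r.t. other threads.
        with self._lock:
            try:
                with open(self.path) as f:
                    data = json.load(f)
            except (FileNotFoundError, json.JSONDecodeError):
                data = {}
            data[key] = value
            with open(self.path, "w") as f:
                json.dump(data, f, indent=2)
```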
The data after each experiment looks like the following: