At present, the server is initialized with a single energy model. But energy models have a relatively small footprint in RAM; for example, the Camry Coupe model we used in tests is about 420 KB on my machine. It seems like it would be simple to load a catalog of models and allow users to pick the model type at query time. This would let researchers look at how sets of queries vary across models, or let a RouteE/HIVE integration work with heterogeneous fleets.
If a query included a field requesting a model by name:

```json
{
  "model_name": "2016_TOYOTA_Camry_4cyl_2WD"
}
```
Under the hood, there could be something along the lines of a `HashMap<String, SpeedGradeModelRecord>` that holds the catalog of in-memory models, e.g. the entire current catalog (.zip) of 60+ trained models we have. We would then have a `SpeedGradeModelRecord` that carries the model and its metadata:
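A minimal Rust sketch of that record and catalog. Note this is just an illustration of the shape of the idea: `SpeedGradeModel`, the record's fields, and `ModelCatalog` are hypothetical names for this proposal, not existing types.

```rust
use std::collections::HashMap;

/// Placeholder for a trained speed/grade energy model
/// (hypothetical; stands in for whatever the server loads today).
struct SpeedGradeModel;

/// A trained model plus the metadata needed to describe it.
struct SpeedGradeModelRecord {
    model: SpeedGradeModel,
    name: String,        // e.g. "2016_TOYOTA_Camry_4cyl_2WD"
    description: String, // free-form vehicle/model metadata
}

/// Catalog of in-memory models, keyed by model name.
struct ModelCatalog {
    models: HashMap<String, SpeedGradeModelRecord>,
}

impl ModelCatalog {
    /// Resolve the "model_name" field of an incoming query;
    /// `None` lets the server report an unknown-model error.
    fn get(&self, model_name: &str) -> Option<&SpeedGradeModelRecord> {
        self.models.get(model_name)
    }
}
```

At query time, the handler would look up the requested name in the catalog and fall back to an error (or a configured default model) when the name is not present.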