Adds support for recording total elapsed time, CPU time, and memory allocations every time an inference param is run in `inferno-ml-server`.
Originally I thought we could use EKG for this, but then we would need to keep `inferno-ml-server` running in order to read the evaluation info. Given our constraints elsewhere, we should avoid this. So I've continued my quest to Use Postgres For Everything™ and stored it in the DB instead.
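
For reference, here's a rough sketch (not the code in this PR) of how these metrics could be captured in Haskell and written to Postgres. The `EvalInfo` type, the `withEvalInfo`/`saveEvalInfo` helpers, and the `evaluation_info` table and its columns are all made-up names for illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module EvalInfo where

import Data.Int (Int64)
import Data.Word (Word64)
import Database.PostgreSQL.Simple (Connection, execute)
import GHC.Clock (getMonotonicTimeNSec)
import GHC.Stats (RTSStats (allocated_bytes), getRTSStats)
import System.CPUTime (getCPUTime)

-- Timing and allocation info collected for one evaluation of a param
data EvalInfo = EvalInfo
  { elapsedNs  :: Word64  -- wall-clock time in nanoseconds
  , cpuPico    :: Integer -- CPU time in picoseconds
  , allocBytes :: Int64   -- bytes allocated by the RTS during the run
  }

-- Run an action and measure wall-clock time, CPU time, and allocations.
-- Allocation counts come from the RTS, so the server needs to be started
-- with stats collection enabled (e.g. `+RTS -T`)
withEvalInfo :: IO a -> IO (a, EvalInfo)
withEvalInfo act = do
  t0   <- getMonotonicTimeNSec
  cpu0 <- getCPUTime
  s0   <- getRTSStats
  x    <- act
  s1   <- getRTSStats
  cpu1 <- getCPUTime
  t1   <- getMonotonicTimeNSec
  pure
    ( x
    , EvalInfo
        { elapsedNs  = t1 - t0
        , cpuPico    = cpu1 - cpu0
        , allocBytes = fromIntegral $ allocated_bytes s1 - allocated_bytes s0
        }
    )

-- Persist the metrics for a given param; the table and columns here are
-- placeholders, not the actual schema
saveEvalInfo :: Connection -> Int64 -> EvalInfo -> IO ()
saveEvalInfo conn paramId info = do
  _ <- execute conn
    "INSERT INTO evaluation_info (param, elapsed_ns, cpu_pico, alloc_bytes) \
    \VALUES (?, ?, ?, ?)"
    (paramId, elapsedNs info, cpuPico info, allocBytes info)
  pure ()
```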