At the moment, the evaluation demo has to be updated manually: SSH into the server, pull the latest code, and restart the WebAPI. If we instead had a script that ran after every long-running experiment (from #31), the demo would always serve the latest code and checkpoint, without the downtime that manual updates risk through human error.
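A minimal sketch of what such a post-experiment hook could look like. All paths, the environment variable names, and the systemd unit name (`DEMO_DIR`, `CHECKPOINT`, `eval-demo-webapi`) are placeholders, not the actual setup; the real script would need to match the server's layout:

```shell
#!/usr/bin/env bash
# Hypothetical post-experiment hook (names and paths are assumptions).
set -euo pipefail

DEMO_DIR="${DEMO_DIR:-/opt/eval-demo}"          # assumed demo checkout location
SERVICE="${SERVICE:-eval-demo-webapi}"          # assumed systemd unit for the WebAPI
CHECKPOINT="${CHECKPOINT:-}"                    # path to the new checkpoint, if any

deploy_latest() {
    cd "$DEMO_DIR"
    git pull --ff-only                          # update to the latest code

    # Copy the freshly trained checkpoint into place, if one was produced.
    if [[ -n "$CHECKPOINT" && -f "$CHECKPOINT" ]]; then
        cp "$CHECKPOINT" "$DEMO_DIR/model/checkpoint.pt"
    fi

    sudo systemctl restart "$SERVICE"           # restart the WebAPI
}

# Only deploy when invoked explicitly, so sourcing the file has no side effects.
if [[ "${1:-}" == "--deploy" ]]; then
    deploy_latest
fi
```

The experiment runner from #31 would then call this with `--deploy` as its final step, so a failed experiment never triggers a deploy.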