Closed Johnson7878 closed 1 year ago
After discussing with Kyle, let's wait on this and maybe choose an EC2 instance instead of the more complex Lambda/SageMaker setup. We'll make this decision toward the second quarter of the year.
Decided to focus more on the ML model rather than DevOps for this one. We'll be going with Heroku.
Toward the end of this project, we should be able to characterize the timing of our ML section as a whole. Specifically, we want to measure the time to load data and the overall model run-time. The choice between Lambda and SageMaker is close: if we find that our timing is small, we can take advantage of Lambda. However, I have seen a pipeline using both of these frameworks concurrently behind a REST API (which would be our Flask app in this case). Let's come back to this idea once we have fleshed out our ML section locally on HiPerGator.
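A minimal sketch of how we could capture those two timings before deciding. The `load_data` and `run_model` functions here are hypothetical stand-ins for our real pipeline, and the 15-minute cutoff is just the Lambda invocation limit used as a rough feasibility check, not a tuned threshold:

```python
import time

def timed(label, fn, *args, **kwargs):
    """Run fn, report how long it took, and return (result, seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.4f}s")
    return result, elapsed

# Hypothetical stand-ins for the real data load and model run.
def load_data():
    return list(range(100_000))

def run_model(data):
    return sum(data) / len(data)

data, load_s = timed("data load", load_data)
pred, run_s = timed("model run", run_model, data)

# Assumed rule of thumb: Lambda caps a single invocation at 15 minutes,
# so a total comfortably under that keeps Lambda on the table.
total_s = load_s + run_s
print("Lambda feasible" if total_s < 15 * 60 else "Consider SageMaker/EC2")
```

Wrapping both stages in the same helper keeps the measurements comparable when we rerun this on HiPerGator.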