Closed: kthfre closed this 1 month ago
@kthfre there are some conflicts here. Always create a new branch from the default branch and create a PR using that.
@kthfre Your PR is also adding a feedback task. Please fix this so you only modify the executable tutorial README.
@algomaster99 @sofiabobadilla sorry, this is fixed. Does the feedback assignment require a submission PR as well, or is it enough to reply to the PR we are giving feedback on with the actual feedback?
> Does the feedback assignment require a submission PR as well or is it enough to reply to PR giving feedback to with the actual feedback?
Yes. Feedback requires a proposal PR (see https://github.com/KTH/devops-course/pull/2676). Then you can add another PR that edits the proposal to link to the feedback comment.
Sorry, this is fixed.
It is still not fixed. I can see 2 file changes above.
@kthfre one way to fix it would be to run `git pull <main repository remote name> 2024 --rebase` and then `git push <your remote name> 2024 --force`.
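For anyone unsure what those two commands do, here is a self-contained sandbox sketch that replays the suggested fix end to end. Everything is illustrative: a local bare repo stands in for the course repository, `origin` plays the role of both remote names, and the file and commit names are made up.

```shell
#!/bin/sh
# Sandbox demo of the suggested rebase-then-push fix; all names are placeholders.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare upstream.git          # stands in for the course repository
git clone -q upstream.git fork
cd fork
git config user.email you@example.com && git config user.name You
git checkout -q -b 2024
echo base > file.txt && git add file.txt && git commit -qm "base"
git push -q origin 2024
# simulate the course repo moving ahead while you work
git clone -q ../upstream.git other
( cd other \
  && git config user.email ta@example.com && git config user.name TA \
  && git checkout -q 2024 \
  && echo upstream > up.txt && git add up.txt && git commit -qm "upstream change" \
  && git push -q origin 2024 )
# your local commit, made on top of the now-outdated branch
echo mine > mine.txt && git add mine.txt && git commit -qm "my tutorial"
# the suggested fix: replay your commits on the updated branch, then push
git pull -q --rebase origin 2024
git push -q --force origin 2024
git log --oneline
```

After the rebase, the local `2024` branch contains the upstream change with "my tutorial" replayed on top, so the push brings the PR branch back in sync without a merge conflict.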
It seems that the GitHub link has already been submitted. https://github.com/KTH/devops-course/tree/2024/contributions/executable-tutorial/golman
Assignment Proposal
Title
End-to-end training of a neural network to deployment in a live application
Names and KTH ID
Deadline
Category
Description
I'd like to make an executable tutorial that goes through training a neural network in a Jupyter notebook on Colab, handling the intermediary steps, and deploying the model to some live application, i.e., the end-to-end process. I'd put limited focus on the ML aspects and greater focus on the DevOps aspects. I'd like to build my own functionality for the DevOps parts, if I may, as it's a fun learning experience and could yield scripts that are useful later. The deployment criterion for the model could be exceeding the previous model's test accuracy, but any other reasonable criterion would also work. I haven't fully decided on the functionality for the MLOps/DevOps part. The bare minimum is actually deploying the model live when it fulfills the criterion. Other things being considered are model storage/rollback, job scheduling/queuing for running notebooks, monitoring of multiple notebooks, etc.
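The "deploy only when the new model beats the previous test accuracy" criterion could be sketched roughly as below. This is only an illustration of the gate, not part of the proposal: the metrics file name and the hard-coded accuracies are hypothetical.

```shell
#!/bin/sh
# Hypothetical deploy gate: deploy only if the candidate's test accuracy
# beats the accuracy stored for the currently deployed model.
set -e
cd "$(mktemp -d)"                     # keep the demo self-contained
METRICS=prev_accuracy.txt             # hypothetical store for the live model's accuracy
echo 0.90 > "$METRICS"                # pretend a model with 0.90 is already deployed
new_acc=0.95                          # accuracy of the freshly trained candidate
prev_acc=$(cat "$METRICS" 2>/dev/null || echo 0)
if awk -v n="$new_acc" -v p="$prev_acc" 'BEGIN { exit !(n > p) }'; then
  echo "deploying candidate ($new_acc > $prev_acc)"
  echo "$new_acc" > "$METRICS"        # record the new baseline accuracy
else
  echo "keeping current model ($new_acc <= $prev_acc)"
fi
```

In a real pipeline the deploy branch would trigger the actual model rollout; `awk` is used here only because shell arithmetic cannot compare floats.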
Architecture-wise, there would be:
I asked a TA about this briefly in a lab session (not the previous one, but the one before that) and it sounded OK. I meant to register it earlier, but other coursework came in between. I think it's still OK to register an MLOps task since it's asynchronous and there is no "week" folder structure in the directory tree. So if it is, and the proposal sounds OK, is all I have to do to commit to a deadline and deliver?
Relevance
Jupyter Notebook/Lab is often used for processing, preparing, and visualizing data, as well as for subsequently training machine learning models. Deriving a model is often an iterative process of finding suitable architectures and optimal hyperparameters. Models may furthermore require continuous updating after deployment as more data becomes available or use cases change. This process is presumably often done manually, particularly as data scientists and conventional developers may sit on different teams, but there are clear benefits in automating it.
Submission
GitHub repository