johannespischinger/senti_anal
# requested issue tracking #47

**Open** · michaelfeil opened this issue 2 years ago

**michaelfeil** commented 2 years ago
## TODOs

### Week 1

- [x] Create a git repository
- [x] Make sure that all team members have write access to the GitHub repository
- [x] Create a dedicated environment for your project to keep track of your packages (using conda)
- [x] Create the initial file structure using cookiecutter
- [x] Fill out the `make_dataset.py` file such that it downloads whatever data you need
- [x] Add a model file and a training script and get that running
- [x] Remember to fill out the `requirements.txt` file with whatever dependencies you are using
- [x] Remember to comply with good coding practices (PEP 8) while doing the project
- [x] Do a bit of code typing and remember to document essential parts of your code
- [x] Set up version control for your data or part of your data
- [x] Construct one or multiple Dockerfiles for your code
- [x] Build the Dockerfiles locally and make sure they work as intended
- [x] Write one or multiple configuration files for your experiments
- [x] Use Hydra to load the configurations and manage your hyperparameters
- [ ] When you have something that works somewhat, remember at some point to do some profiling and see if you can optimize your code
- [x] Use wandb to log training progress and other important metrics/artifacts in your code
- [x] Use pytorch-lightning (if applicable) to reduce the amount of boilerplate in your code
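The open profiling item needs nothing beyond the standard library to get started. A minimal sketch using `cProfile` and `pstats`; `train_step` here is a hypothetical stand-in for any slow function in the project:

```python
import cProfile
import io
import pstats


def train_step(n: int = 10_000) -> int:
    """Stand-in for a training step; profile your real function instead."""
    return sum(i * i for i in range(n))


profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    train_step()
profiler.disable()

# Report the five most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Sorting by `"cumulative"` surfaces the functions whose call trees dominate the run, which is usually the right starting point before reaching for heavier tools like the PyTorch profiler.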
### Week 2

- [x] Write unit tests related to the data part of your code
- [x] Write unit tests related to model construction
- [x] Calculate the coverage
- [x] Get some continuous integration running on the GitHub repository
- [x] (optional) Create a new project on GCP and invite all group members to it
- [x] Create a data storage on GCP for your data
- [x] Create a trigger workflow for automatically building your docker images
- [x] Get your model training on GCP
- [x] Play around with distributed data loading
- [ ] (optional) Play around with distributed model training
- [ ] Play around with quantization and compilation for your trained models
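A sketch of what the data-side unit tests might look like with the standard `unittest` module; `load_dataset` is a hypothetical stand-in for whatever `make_dataset.py` actually returns in the project:

```python
import unittest


def load_dataset(n_samples: int = 8):
    """Hypothetical loader returning (features, labels); swap in the
    project's real dataset function."""
    features = [[0.0] * 4 for _ in range(n_samples)]
    labels = [0 for _ in range(n_samples)]
    return features, labels


class TestData(unittest.TestCase):
    def test_features_and_labels_align(self):
        features, labels = load_dataset(n_samples=8)
        self.assertEqual(len(features), len(labels))

    def test_feature_dimension(self):
        features, _ = load_dataset()
        self.assertTrue(all(len(row) == 4 for row in features))


if __name__ == "__main__":
    unittest.main()
```

Tests like these run unchanged under pytest, so the same file feeds both the CI step and the coverage calculation.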
### Week 3

- [x] Deploy your model locally using TorchServe
- [ ] Check how robust your model is towards data drifting
- [x] Deploy your model using GCP
- [x] Monitor the system of your deployed model
- [ ] Monitor the performance of your deployed model
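The open data-drift item can be approached with a simple heuristic before reaching for a dedicated library: compare a feature's mean in production against its mean at training time, scaled by the training spread. This mean-shift check is only a sketch (the numbers below are made up), not the project's actual method:

```python
import statistics


def mean_shift(reference: list, current: list) -> float:
    """Absolute difference in means, in units of the reference stdev."""
    spread = statistics.stdev(reference)
    return abs(statistics.mean(current) - statistics.mean(reference)) / spread


reference = [0.1, 0.2, 0.15, 0.25, 0.2]  # feature values seen at training time
current = [0.9, 1.1, 1.0, 0.95, 1.05]    # feature values seen in production

if mean_shift(reference, current) > 3.0:  # flag shifts beyond 3 stdevs
    print("possible data drift on this feature")
```

Running this per input feature on a schedule gives a cheap first alarm; tools like Evidently then help pinpoint which distributions actually moved.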
### Additional

- [x] Revisit your initial project description. Did the project turn out as you wanted?
- [ ] Make sure all group members have an understanding of all parts of the project
- [x] Create a presentation explaining your project
- [x] Upload all your code to GitHub
- [x] (extra) Implement pre-commit hooks for your project repository
- [not_targeted] (extra) Use Optuna to run hyperparameter optimization on your model
### Additional group defined

- [x] Get docs deployed with every production release
- [x] Log coverage with pipeline runs on codecov.io
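For the pre-commit hooks item, a minimal `.pre-commit-config.yaml` sketch using common community hooks (the hook ids are standard, but pin `rev` to whichever releases the project actually uses):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0  # pin to your chosen release
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
  - repo: https://github.com/psf/black
    rev: 23.1.0  # pin to your chosen release
    hooks:
      - id: black
```

After adding the file, `pre-commit install` registers the hooks so they run on every `git commit`.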
### Open backlog

- [ ] Finalize document tree in README.md at the end of the project