Right now, the data is generated by a script and then uploaded as part of the make process. That's less than ideal for several reasons:
- As we found out during initial exploration when taking over this project, the make process breaks if the data build fails (which it currently does, as detailed in #1).
- The data also ends up in the repo. This isn't strictly tied to the process (it could be gitignored), but it leaves open the possibility of the data being committed, which would significantly increase the repo size (see the sketch after this list).
- This data is also (afaik) largely static: it shouldn't really ever change barring corrections, so there's no need to build and deploy it every time we deploy the code.
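For reference, keeping the generated data out of version control could be as simple as adding its output path to `.gitignore`. A minimal sketch, assuming the script writes into a `data/` directory (the path is hypothetical, I don't know the actual output location):

```sh
# Hypothetical output path -- adjust to wherever the data build script actually writes.
# Ignoring it keeps the generated data from being committed and bloating the repo.
echo "data/" >> .gitignore
```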
In #4 we excluded the `make all` command from the Docker build and up processes. To rebuild the data, `make all` must be run separately. Does this satisfy closing this issue?
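For context, the split workflow after #4 would look roughly like the sketch below. This assumes docker-compose drives the build/up steps and that `make all` is the existing data target; adjust to the project's actual commands:

```sh
# Build and start the app; after #4 this no longer runs the data build.
docker-compose build
docker-compose up -d

# Rebuild the generated data only when it actually changes (e.g. corrections),
# by running the make target manually, outside the Docker build/up steps.
make all
```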