Closed: shus2018 closed this issue 6 years ago
I thought I documented it somewhere, but I guess I didn't: I've moved the requirements for data generation into scripts/requirements.txt. In theory the scripts should run with these requirements alone (i.e., a pip install --ignore-installed -r scripts/requirements.txt starting from a clean environment). It looks like I had the Datacube Data bamboo plan set up to install from environment.yml first and then from scripts/requirements.txt after that, which also works. The main thing is just to avoid polluting the production environment with packages that are only needed for data generation (allensdk is a good example of this, since it has caused deployment headaches in the past and is only needed by the scripts).
Okay, I modified the Datacube Data bamboo build plan to install from scripts/requirements.txt into a clean Python 3 conda environment, rather than starting with environment.yml. This worked fine, so I would recommend doing the same. There is not likely to be any harm in starting with an env based on environment.yml, but it is not necessary.
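The clean-environment setup described above might look like the following sketch (the environment name datacube-data is hypothetical, and the build plan's actual commands may differ):

```shell
# Sketch of the recommended build-plan setup (env name is hypothetical).
# 1. Create a clean Python 3 conda environment, skipping environment.yml.
conda create -n datacube-data python=3 --yes
conda activate datacube-data

# 2. Install only the data-generation requirements; --ignore-installed
#    ensures the env matches scripts/requirements.txt exactly.
pip install --ignore-installed -r scripts/requirements.txt
```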
I tried to deploy the latest datacube (including the new human MTG data) using the conda env today, and it failed to import wget. I was able to work around it, but please add the Python module "wget" to environment.yml to avoid the issue. Thanks.
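A small preflight check along these lines could catch a missing module like wget before a deploy; missing_modules is a hypothetical helper, not part of the repo:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be found in this env."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# "wget" was the module missing from the deploy described above; "json" is
# stdlib and should always be present, so it serves as a sanity check.
print(missing_modules(["json", "wget"]))
```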