Infrastructure for storing IceNet predictions and importing them into a database. This is part of the IceNet project.
You will need to install the following in order to use this package:

- A Microsoft Azure account with at least `Contributor` permissions on the IceNet subscription
- Python 3.8 or above
Install the Python requirements with the following:

```bash
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
```
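Optionally, you may prefer to do this inside a virtual environment so the dependencies stay isolated from your system Python; a minimal sketch using the standard library `venv` module:

```bash
# Optional: create and activate an isolated virtual environment first
python3 -m venv .venv
source .venv/bin/activate

pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
```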
Run the Terraform setup script `./setup_terraform.py` like so:

```bash
./setup_terraform.py -v \
    -i [[admin_subnets]] \
    -s [[subscription_name]] \
    -rg [[state_resourcegroupname]] \
    -sa [[state_accountname]] \
    -sc [[state_containername]] \
    [[docker_login]] \
    [[notification_email]]
```

You can specify the environment with `-e [[ENV]]`, which defaults to `dev`.
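For illustration, a fully expanded invocation might look like the following; every value here is an invented placeholder (subnet, subscription, storage names, Docker login and email) and should be replaced with your own:

```bash
# Illustrative values only -- substitute your own subnet, subscription,
# resource group, storage account, container, Docker login and email
./setup_terraform.py -v -e dev \
    -i 203.0.113.0/24 \
    -s "IceNet" \
    -rg rg-icenet-terraform \
    -sa sticenetterraform \
    -sc terraform-state \
    mydockeruser \
    alerts@example.com
```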
Enter the `terraform` directory with `cd terraform`, then initialise Terraform by running `terraform init` like so:

```bash
terraform init -backend-config=backend.[[ENV]].secrets \
    -backend-config='storage_account_name=[[state_accountname]]' \
    -backend-config='container_name=[[state_containername]]'
```
Check the actions that Terraform will carry out by running `terraform plan -var-file=azure.[[ENV]].secrets`, then apply the changes by running `terraform apply -var-file=azure.[[ENV]].secrets`.
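If you want the apply step to execute exactly what the plan showed, one option (standard Terraform usage, not specific to this repository) is to save the plan to a file and apply that file; the example below assumes the `dev` environment:

```bash
# Save the execution plan, review it, then apply exactly that plan
terraform plan -var-file=azure.dev.secrets -out=icenet.tfplan
terraform apply icenet.tfplan
```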
If needed, run `terraform init` again. Note that a full run from fresh will likely fail and the apply will need rerunning, because we have not yet sorted out all of the resource chaining.
This is a WIP issue: the processing application needs to be deployed before a final run, so that there is a function app that the Event Grid subscription can target. See this GitHub issue.
This is not achievable via the Terraform provider yet, so you will need to provision the email domain for sending manually, connect it to the comms provider, and then add the `notification_email` address to the `azure.[[ENV]].secrets` file.
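Since that file is passed to Terraform via `-var-file`, it presumably uses the `.tfvars` key/value syntax; assuming that, the address can be appended like so (the email shown is a placeholder):

```bash
# Append the notification address to the dev secrets file
# (assumes the file uses Terraform's `key = "value"` var-file syntax)
echo 'notification_email = "alerts@example.com"' >> azure.dev.secrets
```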
In order to process NetCDF files created by the IceNet pipeline, these need to be uploaded to the blob storage created by the Terraform commands above. Follow the instructions here to generate tokens for the blob storage at:

- resource group: `rg-icenet[[ENV]]-data`
- storage account: `sticenet[[ENV]]data`
- container: `input`

The SAS token will need `Create`, `Write`, `Add` and `List` permissions.
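A minimal sketch of what this can look like with the Azure CLI (the expiry date and file name are placeholders, and `acwl` corresponds to the Add, Create, Write and List permissions):

```bash
# Generate a user-delegation SAS token with Add/Create/Write/List permissions
SAS=$(az storage container generate-sas \
    --account-name 'sticenet[[ENV]]data' \
    --name input \
    --permissions acwl \
    --expiry 2030-01-01T00:00Z \
    --auth-mode login --as-user \
    --output tsv)

# Upload a NetCDF prediction file into the 'input' container
az storage blob upload \
    --account-name 'sticenet[[ENV]]data' \
    --container-name input \
    --name predictions.nc \
    --file predictions.nc \
    --sas-token "$SAS"
```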
Every time a file is uploaded to the blob storage container, it will trigger a run of the processing function. It is possible that the processing might fail, for example if the file is malformed or the process runs out of memory. To retry a failed run, the simplest option is to re-trigger the function by re-uploading the file (see the sketch below). Other methods are possible (for example, interfacing with blob receipts), but these are more complicated.
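Since the function is triggered by blob-created events, re-uploading the same file with overwrite enabled will emit a fresh event and re-run the processing. A minimal sketch (an assumption based on how blob-created triggers behave, not a procedure documented in this repo), reusing a SAS token with Write permission from `$SAS`:

```bash
# Re-upload (overwrite) the same blob to emit a fresh blob-created event
# and re-trigger the processing function
az storage blob upload \
    --account-name 'sticenet[[ENV]]data' \
    --container-name input \
    --name predictions.nc \
    --file predictions.nc \
    --sas-token "$SAS" \
    --overwrite
```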
In order to provide access to the NetCDF files stored in blob storage, another SAS token will be needed. Follow the instructions here to generate tokens for the blob storage at:

- resource group: `rg-icenet[[ENV]]-data`
- storage account: `sticenet[[ENV]]data`
- container: `input`

This SAS token will need `Read` and `List` permissions.
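A similar sketch for generating and consuming the read-only token (again with placeholder names; `rl` corresponds to the Read and List permissions):

```bash
# Generate a read/list SAS token for the 'input' container
SAS=$(az storage container generate-sas \
    --account-name 'sticenet[[ENV]]data' \
    --name input \
    --permissions rl \
    --expiry 2030-01-01T00:00Z \
    --auth-mode login --as-user \
    --output tsv)

# List the stored NetCDF files, then download one of them
az storage blob list \
    --account-name 'sticenet[[ENV]]data' \
    --container-name input \
    --sas-token "$SAS" \
    --output table

az storage blob download \
    --account-name 'sticenet[[ENV]]data' \
    --container-name input \
    --name predictions.nc \
    --file predictions.nc \
    --sas-token "$SAS"
```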
`make deploy-azure` deploys each of the applications from the repositories listed below. This needs to be done manually because, with a properly configured perimeter, deployment will be delegated to a secure internal host; managing it in Terraform would make the implementation considerably more involved.
The following steps are noted from a recent deployment. You have to manually provision the email domain on `icenet[[ENV]]-emails`, then connect the `icenet[[ENV]]-emails` service to `icenet[[ENV]]-comms`. Then you need to deploy the applications:
`icenet-geoapi-processing`:

```bash
export ICENET_ENV=uat
make deploy-azure
```

`icenet-application`:

```bash
export ICENET_ENV=uat
make deploy-azure
```

from https://github.com/icenet-ai/icenet-application, OR `make deploy-zip` from the local clone.

`icenet-event-processor`
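As a convenience, the three deployments can be scripted. This sketch assumes each repository lives under the icenet-ai GitHub organisation and exposes the same `make deploy-azure` target (only icenet-application's URL is confirmed above):

```bash
# Deploy all three applications for the UAT environment.
# Assumes each repo lives under github.com/icenet-ai and provides
# a `make deploy-azure` target.
export ICENET_ENV=uat
for repo in icenet-geoapi-processing icenet-application icenet-event-processor; do
    git clone "https://github.com/icenet-ai/${repo}.git"
    (cd "${repo}" && make deploy-azure)
done
```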
There's no incremental versioning at present.
v0.0.1 refers to the ongoing development until we move into demo usage, at which point this will be reviewed...
This is licensed under the MIT License.