This PR aims to make the deployments portable (so they can be run and updated from other machines):
Add S3 as the state backend, replacing the use of local `terraform.tfstate` files
Simplify the AWS permissions setup to the creation of one role and two copy/pasted inline policies
Flesh out the AWS prerequisites documentation
Separate the deployments into Terraform workspaces
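As a rough sketch of what the new backend setup looks like (bucket, table, and region names here are illustrative placeholders, not the actual values used in this PR):

```hcl
terraform {
  backend "s3" {
    # Placeholder names; the real bucket and table are created in the README steps below
    bucket         = "example-dandihub-tfstate"
    key            = "terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "example-dandihub-tf-lock"
    encrypt        = true
  }
}
```

With Terraform workspaces, each deployment's state is stored automatically under a separate `env:/<workspace>/` prefix in the same bucket, so one backend configuration serves all deployments.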
README:
[x] Create state S3 bucket
[x] Create DynamoDB table (for state locking)
[x] Assume role in AWS config
[x] Create deployment IAM role (currently `Dandihub-Staging-TFstate`, will be renamed)
Allow the DANDI group to assume the role (`AssumeDandihubStagingRolePolicy`)
Add policy for DynamoDB state locking (`dandihub-staging-dynamo-access`)
Add policy for S3 state access (`DandihubStagingTFStatePolicy`)
Add all necessary policies for provisioning and orchestration (currently `dandihub-staging-ec2`; should be renamed to `dandi-admin` or similar). Also, this policy is very permissive, so extra eyes on it would be appreciated.
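For reference, a minimal sketch of the two state-backend inline policies, written as Terraform resources (the resource ARNs, account ID, and the `aws_iam_role.deploy` reference are illustrative; the actions listed are the minimum the S3 backend needs for state and locking):

```hcl
# S3 state access -- list the bucket, read/write/delete the state objects
resource "aws_iam_role_policy" "tfstate_s3" {
  name = "DandihubStagingTFStatePolicy"
  role = aws_iam_role.deploy.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::example-dandihub-tfstate"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = "arn:aws:s3:::example-dandihub-tfstate/*"
      }
    ]
  })
}

# DynamoDB lock table access -- acquire and release the state lock
resource "aws_iam_role_policy" "tf_lock_dynamo" {
  name = "dandihub-staging-dynamo-access"
  role = aws_iam_role.deploy.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
      Resource = "arn:aws:dynamodb:us-east-2:123456789012:table/example-dandihub-tf-lock"
    }]
  })
}
```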
Next steps (not required before merging):
~BICAN:
Set up S3, locking, policies, and roles
Migrate `terraform.tfstate` to S3
Merge BICAN fork
Custom `jupyterhub.yaml` file
LINC:
Set up S3, locking, policies, and roles
Migrate `terraform.tfstate` to S3 (Aaron currently has `terraform.tfstate` locally)~
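Migrating an existing local `terraform.tfstate` into the S3 backend would typically look like the following (the workspace name is hypothetical; these commands require configured AWS credentials):

```
# After adding the backend "s3" block, copy the local state into S3
terraform init -migrate-state

# Then create (or select) the workspace for this deployment
terraform workspace new bican
```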