- edge proxy box == AWS Application Load Balancer; the connection point for all clients at data.commons.io, routing clients to the other boxes
- gen3 core box == the core gen3 services (indexd, sheepdog, fence, ...); the revproxy maps to a particular nodeport per namespace, and the k8s node autoscaling group registers as a target for the edge proxy ALB
- user workspace box == jupyterhub, shinyr, etc.; a separate VPC running user code, similarly registered as a target on the edge proxy
- workflow system box == CWL management and execution; a future service

This setup gives us a path for decoupling groups of services that run, iterate, and deploy in different ways, while keeping them all accessible via a single domain, which sidesteps CORS and auth-cookie issues.
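The "nodeport per namespace" wiring for the gen3 core box could look roughly like this - a sketch, assuming hypothetical names and ports (the real manifests live in cloud-automation):

```yaml
# Hypothetical sketch - service name, namespace, and ports are illustrative.
# Each commons namespace exposes its revproxy on a dedicated nodeport;
# the edge proxy ALB forwards to that port on the k8s node autoscaling group.
apiVersion: v1
kind: Service
metadata:
  name: revproxy-service
  namespace: commons-prod      # one nodeport per namespace
spec:
  type: NodePort
  selector:
    app: revproxy
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080          # the ALB target group points at this port on the nodes
```
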
Depends on: https://github.com/uc-cdis/cloud-automation/issues/285
Decouple our edge proxy from our edge services - set up an ALB for the production commons:
I like this architecture presented by a dev at Lyft, where the "edge proxy" is its own box decoupled from the other services and does L7 routing: https://www.youtube.com/watch?v=RVZX4CwKhGE&feature=youtu.be&t=25m22s
We could adapt this to a gen3 deployment along the lines described above.
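The ALB's L7 path routing could be sketched in Terraform along these lines - resource names, paths, and ports are all hypothetical, not the actual production config:

```hcl
# Hypothetical sketch of the edge proxy ALB doing L7 path-based routing.
# Target group for the gen3 core box - the k8s node autoscaling group
# registers its instances here, on the revproxy nodeport.
resource "aws_lb_target_group" "gen3_core" {
  name     = "gen3-core"
  port     = 30080               # illustrative revproxy nodeport
  protocol = "HTTP"
  vpc_id   = var.commons_vpc_id  # illustrative variable
}

# Route user-workspace traffic to the workspace VPC's own target group,
# so both boxes stay behind the single data.commons.io domain.
resource "aws_lb_listener_rule" "workspace" {
  listener_arn = aws_lb_listener.edge.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.workspace.arn
  }

  condition {
    path_pattern {
      values = ["/workspace/*"]  # illustrative path prefix
    }
  }
}
```

Because all boxes sit behind one listener on one domain, adding a new service group is just another target group plus a listener rule, without touching the existing boxes.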