Open source cyber-serenity platform that helps security teams design and manage companies' ISMS, and allows them to create resilient digital systems with efficient responses to cyber-threats.
Apache License 2.0 · 13 stars · 2 forks
As technology, I should support clusterizable independent units per technical or functional concern #75
One Vert.x Verticle is deployable per JVM and per CPU core for each technical concern (e.g. a bridge routing an HTTP/REST API or an event bus API exposed to the Internet), according to the exposed protocol and with secured isolation (e.g. dedicated settings and SLA).
Each technical service (e.g. a Vert.x bridge Verticle routing protocols as a backend gateway) or functional service (e.g. a UI capability entry point exposing a capability domain's API; one feature Verticle per JVM and per CPU core on computation units) is packaged and deployable as an independent applicative component on its own JVM and CPU core, allowing horizontal scalability. A minimal deployment sketch follows.
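A minimal Kubernetes Deployment sketch of this packaging model (the image name, labels, and port are illustrative assumptions, not actual CYBNITY artifacts): each replica runs one JVM requesting one CPU core, and horizontal scalability comes from raising `replicas` per concern.

```yaml
# Sketch only: one independent applicative component per JVM, one JVM per CPU core.
# Inside the JVM, Vert.x deploys one Verticle instance per available core.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: access-control-gateway        # hypothetical technical concern
  labels:
    area: domains-io
spec:
  replicas: 2                         # horizontal scalability per concern
  selector:
    matchLabels:
      app: access-control-gateway
  template:
    metadata:
      labels:
        app: access-control-gateway
    spec:
      containers:
        - name: gateway
          image: cybnity/access-control-gateway:latest   # hypothetical image name
          resources:
            requests:
              cpu: "1"                # one CPU core per JVM instance
            limits:
              cpu: "1"
          ports:
            - containerPort: 8080     # exposed HTTP/REST or event bus bridge port
```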
[x] Develop a CI workflow (based on the existing Helm chart for one environment, without specific cluster configuration) that deploys a version into a GitHub environment (not self-hosted, usable with the public repo; Minikube-based), allowing components to be tested during common development or during a test campaign execution of the Test stage (see the workflow sketch after this list)
[x] Infrastructure services area
[x] Defense platform cluster as Kubernetes Local Dev cluster (deployable as a single-instance cluster of modules into a Minikube stack)
[x] Provisioning management of networking, load balancer, database, users, permissions... and the Kubernetes cluster (orchestration management) via Helm (local cluster for dev on the workstation's Minikube; "Staging & QA" cluster for tests on OVHCloud)
[x] Docker image & resource configuration management via Helm source code (configuration of system resources, logs, Docker image port forwarding, and Kubernetes objects, e.g. baseline/live replica cluster allowing a pre-prod dress rehearsal, canary cluster for changed code and rollout with automated recovery, production/live cluster), supporting environment/infrastructure deployment AT APPLICATION LEVEL
[ ] Control Plane
[ ] API server (including HTTPS public root certificates) and Controller Manager > cloud provider API link > cluster external IP address
[ ] Volumes configuration as StatefulSet (and \deployment\local-deployment.yaml file definition; see the StatefulSet sketch after this list)
[x] User interfaces area as Kubernetes Node (including external IP address)
[x] Kubernetes Service (including Pod VPN address, Pod unique IP address)
[x] Domains area
[ ] Access control domain deployable modules
[x] Infrastructure services area
[ ] UI area
[ ] Keycloak SSO back office & database server as a PostgreSQL server, Keycloak back-office server supporting API delegation for team members' permissions, accounts, etc., managed by an end user (e.g. a CISO): Helm project
[x] Keycloak SSO service (defined in the infrastructure area over a Kubernetes Node) and in the UI layer (login view module only) as the delegated login screen of the CYBNITY frontend
[ ] Delegation adapter usable from the access control domain gateway to check SSO from several areas (UI for SSO login, domain for SSO token check)
[ ] Kubernetes Service (including Pod VPN address, Pod unique IP address)
[ ] Domains area
[ ] Migrate the PostgreSQL module from the UI layer into the infrastructure layer (including access to the Keycloak back office) + open the network connection between the SSO service and the node
[ ] Access control process module system as Flink server (RTS process module, Apache Flink, Apache Flink CEP, Kafka client, MongoDB client, ZooKeeper client)
[ ] Flink server (Docker image) including ports, as a dedicated Docker image template
[ ] Kubernetes control plane (\service\access-control-domain-process-service.yaml file definition; see the Service sketch after this list)
[ ] Kubernetes control plane (\service\access-control-domain-process-systems-deployment.yaml file definition)
[ ] Volumes configuration as StatefulSet linked to the systems-deployment file
[ ] The runtime dependency overview between deployable components and the macro-links between them (e.g. the CYBNITY system components level, across the fleet of CYBNITY Docker images executed in the activated Kubernetes services)
[ ] The map of logical stream ports (e.g. internal/external) helping to understand the path of event/interaction flows between the Kubernetes Nodes (assumed deployable as an application Space-Based Architecture; with visibility on the locations where security policies about permissions/opened flows control the runtime perimeters)
[x] Develop a CI/CD workflow that builds the Helm charts, validates them, packages them to a version, and pushes them into the Docker Hub repository according to the LOCAL DEV and QA environments
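A minimal GitHub Actions sketch of the two CI items above (the chart paths, the `cybnity` Docker Hub namespace, the secret names, and the environment name are assumptions for illustration):

```yaml
# Sketch only: lint, package, and push the Helm charts, then deploy a version
# into a Minikube-based GitHub environment for component testing.
name: helm-charts-ci
on:
  push:
    branches: [main]
jobs:
  package-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-helm@v4
      - name: Validate charts
        run: helm lint charts/*
      - name: Package charts to a version
        run: helm package charts/* --destination .packaged
      - name: Push into the Docker Hub (OCI) repository
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | helm registry login registry-1.docker.io \
            --username "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          for chart in .packaged/*.tgz; do
            helm push "$chart" oci://registry-1.docker.io/cybnity
          done
  deploy-to-dev:
    needs: package-and-push
    runs-on: ubuntu-latest           # GitHub-hosted runner (not self-hosted)
    environment: minikube-dev        # hypothetical GitHub environment name
    steps:
      - uses: actions/checkout@v4
      - uses: medyagh/setup-minikube@latest
      - name: Deploy a version for component tests
        run: helm upgrade --install cybnity charts/cybnity-platform --wait
```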
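And a sketch of the volume configuration as a StatefulSet, illustrating the \deployment\local-deployment.yaml item (the PostgreSQL image, names, and storage size are assumptions):

```yaml
# Sketch only: stateful volume configuration for a database module.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sso-postgresql
spec:
  serviceName: sso-postgresql          # headless Service providing stable pod DNS
  replicas: 1
  selector:
    matchLabels:
      app: sso-postgresql
  template:
    metadata:
      labels:
        app: sso-postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:15
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```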
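Finally, a sketch of a Kubernetes Service in the spirit of the \service\access-control-domain-process-service.yaml item, giving the Flink-based process module a stable virtual IP in front of its pods' unique IPs (the labels are assumptions; 6123 and 8081 are the default Flink JobManager RPC and web UI ports):

```yaml
# Sketch only: a ClusterIP Service exposing the access control process module.
apiVersion: v1
kind: Service
metadata:
  name: access-control-domain-process-service
spec:
  type: ClusterIP                        # stable in-cluster address; pods keep unique IPs
  selector:
    app: access-control-process-module   # hypothetical pod label
  ports:
    - name: flink-rpc
      port: 6123                         # Flink JobManager RPC port
      targetPort: 6123
    - name: flink-ui
      port: 8081                         # Flink web UI port
      targetPort: 8081
```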
https://www.notion.so/cybnity/447-d01de61153714443ae8fc294300b773a
REQ_MAIN4: https://www.notion.so/cybnity/REQ_MAIN_4-8513483dd519412087185e24134453bc?pvs=4 As a clusterizable independent unit per technical or functional concern:
Components: Vert.x server, ReactJS, Docker, K8s cluster
TODO
[x] Create a Helm GitHub repository supporting the Helm chart source code
[x] Develop multi-environment support from charts/sub-charts, with migration of the existing sh scripts (Node areas; see the values layout sketch after this list)
[x] Develop a CI workflow (based on the existing Helm chart for one environment, without specific cluster configuration) that deploys a version into a GitHub environment (not self-hosted, usable with the public repo; Minikube-based), allowing components to be tested during common development or during a test campaign execution of the Test stage
[x] Infrastructure services area
[x] Defense platform cluster as Kubernetes Local Dev cluster (deployable as a single-instance cluster of modules into a Minikube stack)
[x] Provisioning management of networking, load balancer, database, users, permissions... and the Kubernetes cluster (orchestration management) via Helm (local cluster for dev on the workstation's Minikube; "Staging & QA" cluster for tests on OVHCloud)
[x] Docker image & resource configuration management via Helm source code (configuration of system resources, logs, Docker image port forwarding, and Kubernetes objects, e.g. baseline/live replica cluster allowing a pre-prod dress rehearsal, canary cluster for changed code and rollout with automated recovery, production/live cluster), supporting environment/infrastructure deployment AT APPLICATION LEVEL
[ ] Control Plane
[ ] API server (including HTTPS public root certificates) and Controller Manager > cloud provider API link > cluster external IP address
[ ] Volumes configuration as StatefulSet (and \deployment\local-deployment.yaml file definition)
[x] User interfaces area as Kubernetes Node (including external IP address)
[ ] Kube-Proxy (including inside/outside-cluster network session rules; see the NetworkPolicy sketch after this list)
[x] Domains I/O area Node (including external IP address)
[ ] Kube-Proxy (including inside/outside-cluster network session rules)
[x] Domains area Node (including external IP address)
[ ] Kube-Proxy (including inside/outside-cluster network session rules)
[x] Infrastructure Services Area Node (including external IP address)
[ ] Kube-Proxy (including inside/outside-cluster network session rules)
[x] Foundation common modules
[ ] Access control domain deployable modules
[ ] Update the Implementation View documentation regarding:
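A sketch of the multi-environment support from charts/sub-charts, assuming a hypothetical layout charts/cybnity-platform/ with one base values.yaml and one override file per environment:

```yaml
# Sketch only: values-dev.yaml — local cluster on the workstation's Minikube.
replicaCount: 1
service:
  type: NodePort          # Minikube provides no cloud load balancer
ingress:
  enabled: false
---
# Sketch only: values-staging.yaml — "Staging & QA" cluster on OVHCloud.
replicaCount: 2
service:
  type: LoadBalancer      # provisioned through the cloud provider API
ingress:
  enabled: true
```

Each environment is then deployed with the matching override, e.g. `helm upgrade --install cybnity charts/cybnity-platform -f charts/cybnity-platform/values-dev.yaml`.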
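On the Kube-Proxy items: Kube-Proxy programs the Service routing rules on each Node, while the inside/outside-cluster flow restrictions themselves are usually expressed as NetworkPolicies enforced by the CNI plugin. A minimal sketch, with illustrative area labels:

```yaml
# Sketch only: restrict which areas may open sessions toward the domains area pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: domains-area-ingress
spec:
  podSelector:
    matchLabels:
      area: domains             # hypothetical label on the domains area pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              area: domains-io  # only the Domains I/O area may call in
      ports:
        - protocol: TCP
          port: 8080
```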
tech: kubernetes, docker, terraform, helm