openstack-charmers / charm-interface-vault-kv

Vault interface for simple KV secrets management

Switch to manage_flags #7

Closed johnsca closed 4 years ago

johnsca commented 5 years ago

Using the newer `manage_flags` instead of `@when` avoids race conditions where charm handlers run before the flags for the endpoint are properly updated.

Closes-Bug: #1844103
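
For context, here's a minimal sketch of the two patterns in a charms.reactive interface layer; the class names and the `_data_complete()` helper are illustrative only, not the actual vault-kv interface code:

```python
from charms.reactive import Endpoint, toggle_flag, when


class VaultKVRequiresOld(Endpoint):
    """Older pattern: the flag is maintained by an ordinary @when handler.

    Because this is just another reactive handler, a charm handler gated on
    '{endpoint_name}.available' can be dispatched before this one has run,
    i.e. before the flag reflects the current relation data.
    """

    @when('endpoint.{endpoint_name}.changed')
    def _update_flags(self):
        toggle_flag(self.expand_name('{endpoint_name}.available'),
                    self._data_complete())

    def _data_complete(self):
        # Hypothetical check that the data this layer needs has arrived.
        return any(unit.received.get('vault_url')
                   for relation in self.relations
                   for unit in relation.units)


class VaultKVRequiresNew(Endpoint):
    """Newer pattern: the flag is maintained in Endpoint.manage_flags().

    manage_flags() is invoked by the framework at the start of every hook
    dispatch, before any handlers are selected, so the flag is already
    correct by the time the charm's own handlers are considered.
    """

    def manage_flags(self):
        toggle_flag(self.expand_name('{endpoint_name}.available'),
                    self.is_joined and self._data_complete())

    def _data_complete(self):
        # Hypothetical check that the data this layer needs has arrived.
        return any(unit.received.get('vault_url')
                   for relation in self.relations
                   for unit in relation.units)
```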

johnsca commented 5 years ago

Ref: `Endpoint.manage_flags`

johnsca commented 5 years ago

Tested on AWS along with:

Confirmed that the cluster blocked until Vault was manually unsealed, then successfully came up. Then confirmed that the relation was able to be removed without error.

johnsca commented 4 years ago

@dosaboy

xtrusia commented 4 years ago

I'm trying to deploy with dosaboy's kubernetes-master and vault.

kubernetes-master is waiting on encryption info from Vault, which wasn't the case with the current charm.

Any advice on how to test this further?

```
Model  Controller  Cloud/Region  Version  SLA          Timestamp
test   maas        maas          2.6.8    unsupported  21:08:52+09:00

App                    Version       Status   Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1         active       1  easyrsa                jujucharms  278  ubuntu
etcd                   3.2.10        active       3  etcd                   jujucharms  460  ubuntu
flannel                0.11.0        active       2  flannel                jujucharms  450  ubuntu
ha-vault                             active       3  hacluster              jujucharms   60  ubuntu
kubeapi-load-balancer  1.14.0        active       1  kubeapi-load-balancer  jujucharms  682  ubuntu  exposed
kubernetes-master      1.15.4        waiting      1  kubernetes-master      jujucharms    0  ubuntu
kubernetes-worker      1.15.4        blocked      1  kubernetes-worker      jujucharms  590  ubuntu  exposed
percona-cluster        5.7.20-29.24  active       1  percona-cluster        jujucharms  279  ubuntu
vault                  1.1.1         active       3  vault                  jujucharms    0  ubuntu

Unit                      Workload  Agent      Machine  Public address  Ports     Message
easyrsa/0                 active    idle       0        10.0.0.22                 Certificate Authority connected.
etcd/0                    active    idle       1        10.0.0.23       2379/tcp  Healthy with 3 known peers
etcd/1                    active    idle       2        10.0.0.24       2379/tcp  Healthy with 3 known peers
etcd/2                    active    idle       3        10.0.0.26       2379/tcp  Healthy with 3 known peers
kubeapi-load-balancer/0   active    idle       4        10.0.0.28       443/tcp   Loadbalancer ready.
kubernetes-master/0       waiting   executing  5        10.0.0.31                 Waiting for encryption info from Vault to secure secrets
  flannel/1               active    idle                10.0.0.31                 Flannel subnet 10.1.87.1/24
kubernetes-worker/0       blocked   idle       6        10.0.0.27                 Connect a container runtime.
  flannel/0               active    idle                10.0.0.27                 Flannel subnet 10.1.60.1/24
percona-cluster/0         active    idle       7        10.0.0.32       3306/tcp  Unit is ready
vault/0                   active    idle       8        10.0.0.30       8200/tcp  Unit is ready (active: true, mlock: enabled)
  ha-vault/2              active    idle                10.0.0.30                 Unit is ready and clustered
vault/1                   active    idle       9        10.0.0.25       8200/tcp  Unit is ready (active: false, mlock: enabled)
  ha-vault/1              active    idle                10.0.0.25                 Unit is ready and clustered
vault/2                   active    idle       10       10.0.0.29       8200/tcp  Unit is ready (active: false, mlock: enabled)
  ha-vault/0*             active    idle                10.0.0.29                 Unit is ready and clustered

Machine  State    DNS        Inst id  Series  AZ       Message
0        started  10.0.0.22  node-14  bionic  default  Deployed
1        started  10.0.0.23  node-15  bionic  default  Deployed
2        started  10.0.0.24  node-16  bionic  default  Deployed
3        started  10.0.0.26  node-19  bionic  default  Deployed
4        started  10.0.0.28  node-21  bionic  default  Deployed
5        started  10.0.0.31  node-23  bionic  default  Deployed
6        started  10.0.0.27  node-20  bionic  default  Deployed
7        started  10.0.0.32  node-24  bionic  default  Deployed
8        started  10.0.0.30  node-18  bionic  default  Deployed
9        started  10.0.0.25  node-17  bionic  default  Deployed
10       started  10.0.0.29  node-22  bionic  default  Deployed
```

johnsca commented 4 years ago

Tested on serverstack and confirmed that the secret is encrypted (it includes the `k8s:enc:aescbc:` prefix, indicating it's encrypted with the AES-CBC algorithm): https://pastebin.canonical.com/p/9jxHxsVCPf/

Working on testing with a rebuilt Vault charm to verify that side is unaffected, and to confirm removing the relation works.

johnsca commented 4 years ago

Tested on serverstack with rebuilt k8s-master and vault charms, with the same result:

```
ubuntu@juju-018b80-default-1:~$ ETCDCTL_API=3 /snap/bin/etcdctl --endpoints=10.5.0.17:2379 get /registry/secrets/default/secret1 | head -c 49
/registry/secrets/default/secret1
k8s:enc:aescbc:
```

Also verified that I could remove the kubernetes-master:vault-kv <-> vault:secrets relation without error and that the secrets could still be accessed in K8s.

johnsca commented 4 years ago

Regarding @xtrusia's test, I'm not sure what build of k8s-master or vault was used, but that particular error sounds like the subnet mismatch error that we still haven't been able to find a valid fix for (see #6), so I think it's unrelated to this change. I specifically tested this on serverstack rather than AWS to avoid issues with subdomains and bindings.

javacruft commented 4 years ago

LGTM