hashicorp / vault

A tool for secrets management, encryption as a service, and privileged access management
https://www.vaultproject.io/

Vault UI #7886

Closed itsecforu closed 4 years ago

itsecforu commented 4 years ago

I deployed Vault into a Kubernetes cluster on one node.

It seems OK, but I can't open the UI:

```
# vault status
Key             Value
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.2.3
Cluster Name    vault-cluster-48775859
Cluster ID      06e091c8-2d1a-f61b-2609-cb6dc7cee052
HA Enabled      false
```

When I start the server I see this:

```
# vault server -dev -dev-listen-address=0.0.0.0:8200
Error initializing listener of type tcp: listen tcp4 0.0.0.0:8200: bind: address already in use
```

my env:

```
# env
KUBERNETES_SERVICE_PORT=443
VAULT_SERVICE_HOST=10.233.13.100
KUBERNETES_PORT=tcp://10.233.0.1:443
HOSTNAME=vault-6cbbc5474d-hkmzp
SHLVL=1
VAULT_ADDR=http://127.0.0.1:8200
HOME=/root
VAULT_PORT_8200_TCP=tcp://10.233.13.100:8200
VAULT_PORT=tcp://10.233.13.100:8200
VAULT_SERVICE_PORT=8200
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.233.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
VAULT_DEV_ROOT_TOKEN_ID=vault-root-token
VAULT_TOKEN=s.d1kVdV1k1RY7dFsDgzSDhipM
KUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443
VAULT_SERVICE_PORT_VAULT=8200
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.233.0.1
PWD=/
VAULT_PORT_8200_TCP_ADDR=10.233.13.100
VAULT_PORT_8200_TCP_PORT=8200
VAULT_PORT_8200_TCP_PROTO=tcp
```

Pod's logs:

```
==> Vault server started! Log data will stream in below:

2019-10-15T15:36:39.783Z [WARN] no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2019-10-15T15:36:39.785Z [INFO] core: security barrier not initialized
2019-10-15T15:36:39.785Z [INFO] core: security barrier initialized: shares=1 threshold=1
2019-10-15T15:36:39.786Z [INFO] core: post-unseal setup starting
2019-10-15T15:36:39.799Z [INFO] core: loaded wrapping token key
2019-10-15T15:36:39.799Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-10-15T15:36:39.799Z [INFO] core: no mounts; adding default mount table
2019-10-15T15:36:39.802Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-10-15T15:36:39.802Z [INFO] core: successfully mounted backend: type=system path=sys/
2019-10-15T15:36:39.803Z [INFO] core: successfully mounted backend: type=identity path=identity/
2019-10-15T15:36:39.808Z [INFO] core: successfully enabled credential backend: type=token path=token/
2019-10-15T15:36:39.808Z [INFO] core: restoring leases
2019-10-15T15:36:39.808Z [INFO] rollback: starting rollback manager
2019-10-15T15:36:39.810Z [INFO] identity: entities restored
2019-10-15T15:36:39.810Z [INFO] identity: groups restored
2019-10-15T15:36:39.810Z [INFO] core: post-unseal setup complete
2019-10-15T15:36:39.810Z [INFO] expiration: lease restore complete
2019-10-15T15:36:39.811Z [INFO] core: root token generated
2019-10-15T15:36:39.811Z [INFO] core: pre-seal teardown starting
2019-10-15T15:36:39.811Z [INFO] rollback: stopping rollback manager
2019-10-15T15:36:39.811Z [INFO] core: pre-seal teardown complete
2019-10-15T15:36:39.811Z [INFO] core.cluster-listener: starting listener: listener_address=0.0.0.0:8201
2019-10-15T15:36:39.811Z [INFO] core.cluster-listener: serving cluster requests: cluster_listen_address=[::]:8201
2019-10-15T15:36:39.811Z [INFO] core: post-unseal setup starting
2019-10-15T15:36:39.811Z [INFO] core: loaded wrapping token key
2019-10-15T15:36:39.811Z [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-10-15T15:36:39.812Z [INFO] core: successfully mounted backend: type=system path=sys/
2019-10-15T15:36:39.812Z [INFO] core: successfully mounted backend: type=identity path=identity/
2019-10-15T15:36:39.812Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-10-15T15:36:39.813Z [INFO] core: successfully enabled credential backend: type=token path=token/
2019-10-15T15:36:39.813Z [INFO] core: restoring leases
2019-10-15T15:36:39.813Z [INFO] rollback: starting rollback manager
2019-10-15T15:36:39.813Z [INFO] identity: entities restored
2019-10-15T15:36:39.813Z [INFO] identity: groups restored
2019-10-15T15:36:39.814Z [INFO] core: post-unseal setup complete
2019-10-15T15:36:39.814Z [INFO] core: vault is unsealed
2019-10-15T15:36:39.815Z [INFO] expiration: revoked lease: lease_id=auth/token/root/x
2019-10-15T15:36:39.815Z [INFO] expiration: lease restore complete
2019-10-15T15:36:39.815Z [INFO] expiration: revoked lease: lease_id=auth/token/root/x2
2019-10-15T15:36:39.819Z [INFO] core: successful mount: namespace= path=secret/ type=kv
2019-10-15T15:36:39.822Z [INFO] secrets.kv.kv_0f8d2455: collecting keys to upgrade
2019-10-15T15:36:39.822Z [INFO] secrets.kv.kv_0f8d2455: done collecting keys: num_keys=1
2019-10-15T15:36:39.822Z [INFO] secrets.kv.kv_0f8d2455: upgrading keys finished
2019-11-15T10:49:49.339Z [INFO] core: root generation initialized: nonce=x
2019-11-15T10:51:45.382Z [INFO] core: root generation finished: nonce=x
2019-11-15T10:55:53.004Z [INFO] core: root generation initialized: nonce=x
2019-11-15T10:56:40.800Z [INFO] core: root generation finished: nonce=x
```

Logs from 10/15/19 3:36 PM to 11/15/19 10:56 AM UTC

Please help, what did I do wrong?

michelvocks commented 4 years ago

Hi @itsecforu!

> When I start the server I see this:
> `vault server -dev -dev-listen-address=0.0.0.0:8200`
> `Error initializing listener of type tcp: listen tcp4 0.0.0.0:8200: bind: address already in use`

Apparently you are trying to start the Vault server inside a Docker container where the server is already running? Have you tried just accessing the Vault UI via the container IP?
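
For example, something like this should show whether the UI is reachable on the container IP. The Pod name is taken from the `HOSTNAME` in your environment output; the IP placeholder and paths are just illustrative:

```shell
# Look up the Pod IP
kubectl get pod vault-6cbbc5474d-hkmzp -o wide

# From a node or another Pod in the cluster, check whether Vault answers on 8200
curl -s  http://<pod-ip>:8200/v1/sys/health   # JSON health status
curl -sI http://<pod-ip>:8200/ui/             # the web UI itself
```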

Cheers, Michel

itsecforu commented 4 years ago

Hello! @michelvocks

Do you mean the node IP?

Sure, I tried, but I get "Unable to access this site".

michelvocks commented 4 years ago

How did you start your Pod (e.g. Kubernetes service setup, Kubernetes deployment setup, etc.)? Is the Pod running? Have you tried to proxy your container service to your localhost? What configuration did you use for your Vault server?

itsecforu commented 4 years ago

Pod is working. Service is working.

Pod config:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "vault-6cbbc5474d-hkmzp",
    "generateName": "vault-6cbbc5474d-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/vault-6cbbc5474d-hkmzp",
    "uid": "5f46a701-a069-4246-9135-d45eaa15f139",
    "resourceVersion": "4944494",
    "creationTimestamp": "2019-10-11T18:21:40Z",
    "labels": { "app": "vault", "pod-template-hash": "6cbbc5474d" },
    "ownerReferences": [
      {
        "apiVersion": "apps/v1",
        "kind": "ReplicaSet",
        "name": "vault-6cbbc5474d",
        "uid": "763c3bea-d969-47d0-b10d-60486bf52d17",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      { "name": "default-token-m84n7", "secret": { "secretName": "default-token-m84n7", "defaultMode": 420 } }
    ],
    "containers": [
      {
        "name": "vault",
        "image": "vault",
        "ports": [ { "name": "vaultport", "containerPort": 8200, "protocol": "TCP" } ],
        "env": [ { "name": "VAULT_DEV_ROOT_TOKEN_ID", "value": "vault-root-token" } ],
        "resources": {},
        "volumeMounts": [
          { "name": "default-token-m84n7", "readOnly": true, "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount" }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always",
        "securityContext": { "capabilities": { "add": [ "IPC_LOCK" ] } }
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "default",
    "serviceAccount": "default",
    "nodeName": "worker1",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      { "key": "node.kubernetes.io/not-ready", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 },
      { "key": "node.kubernetes.io/unreachable", "operator": "Exists", "effect": "NoExecute", "tolerationSeconds": 300 }
    ],
    "priority": 0,
    "enableServiceLinks": true
  },
  "status": {
    "phase": "Running",
    "conditions": [
      { "type": "Initialized", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-10-11T18:21:40Z" },
      { "type": "Ready", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-10-15T15:36:39Z" },
      { "type": "ContainersReady", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-10-15T15:36:39Z" },
      { "type": "PodScheduled", "status": "True", "lastProbeTime": null, "lastTransitionTime": "2019-10-11T18:21:40Z" }
    ],
    "hostIP": "10.2.67.203",
    "podIP": "10.233.110.41",
    "startTime": "2019-10-11T18:21:40Z",
    "containerStatuses": [
      {
        "name": "vault",
        "state": { "running": { "startedAt": "2019-10-15T15:36:39Z" } },
        "lastState": {
          "terminated": {
            "exitCode": 0,
            "reason": "Completed",
            "startedAt": "2019-10-11T18:21:51Z",
            "finishedAt": "2019-10-14T07:27:31Z",
            "containerID": "docker://337a3add35834b07db8c8f12a84783583dad2f7ec5deed9413ff3d62625bf6e2"
          }
        },
        "ready": true,
        "restartCount": 1,
        "image": "vault:latest",
        "imageID": "docker-pullable://vault@sha256:bf63e6c13afac87a439912f88e8e0b879b3233b0a0dfddb6976abde0f6c99068",
        "containerID": "docker://355eaed45fb6f1b23a2d1dc3cfd15b569a31d63cb42a58f33c337ed3bd0a1ae8"
      }
    ],
    "qosClass": "BestEffort"
  }
}
```

michelvocks commented 4 years ago

Hi @itsecforu!

It would be helpful if you could also answer my other questions. I assume that Vault is running fine but you are somehow unable to access the UI because of the Kubernetes traffic routing. Please make sure that you are able to proxy the Vault traffic to your localhost and also validate the Service configuration.
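
For example, something along these lines should work. The Service name `vault` is inferred from the `VAULT_SERVICE_*` environment variables in your output, so adjust the names if yours differ:

```shell
# Check that the Service exists and actually has your Pod as an endpoint
kubectl get svc vault -o wide
kubectl get endpoints vault

# Forward the Service (or the Pod directly) to your local machine
kubectl port-forward svc/vault 8200:8200

# Then open http://127.0.0.1:8200/ui in a local browser
```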

Cheers, Michel

itsecforu commented 4 years ago

Hello @michelvocks!

I just used this tutorial: https://www.vaultproject.io/docs/platform/k8s/run.html. I didn't make any special proxy settings, but other applications work correctly out of the box, that is, I can see them and work with them in the browser.

michelvocks commented 4 years ago

You need to set up a proxy, since by default the service is not configured to route traffic to Vault's UI. See https://www.vaultproject.io/docs/platform/k8s/run.html#viewing-the-vault-ui for more information.

itsecforu commented 4 years ago

You sent the same guide as I did :-) I don't see a word about a proxy there.

michelvocks commented 4 years ago

If you click on the link you will be taken directly to the relevant paragraph, called "Viewing the Vault UI". It describes how to set up a port-forward.

itsecforu commented 4 years ago

I did it:

```
kubectl port-forward vault-6cbbc5474d-hkmzp 8200:8200
Forwarding from 127.0.0.1:8200 -> 8200
Forwarding from [::1]:8200 -> 8200
```

But the Pod runs on another node, not the master. Do I need to change the address?

itsecforu commented 4 years ago


But I don't have a LISTEN port on my node :-(

michelvocks commented 4 years ago

Hi @itsecforu!

It seems to me that this is not an issue related to Vault but it is more a question regarding your Kubernetes setup. Since GitHub issues should be reserved for Vault bugs/feature requests, I recommend asking this question again via our Discussion Forum or our mailing list.
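
One thing worth checking, though: `kubectl port-forward` opens the forwarded port on the machine where kubectl itself runs, and it binds to 127.0.0.1 by default, so you will not see a LISTEN socket on the worker node that hosts the Pod. A rough sketch of two options (the `--address` flag assumes a reasonably recent kubectl):

```shell
# Option 1: run the port-forward on the machine where your browser is,
# then open http://127.0.0.1:8200/ui
kubectl port-forward vault-6cbbc5474d-hkmzp 8200:8200

# Option 2: bind the forward to all interfaces on the machine running kubectl,
# then open http://<that-machine-ip>:8200/ui from another machine
kubectl port-forward --address 0.0.0.0 vault-6cbbc5474d-hkmzp 8200:8200
```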

I hope you don't mind that I close this issue for now because of the reasons mentioned above.

Cheers, Michel