If you're curious or want to keep up with the development, you can see it here:
https://github.com/kminehart/charts/tree/master/incubator/hydra
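If you want to try it out before it lands anywhere official, something like the following should work with Helm 2 against an already-configured cluster; the release name and namespace here are just examples, not part of the chart:
# Clone the fork and install the chart straight from the working tree.
git clone https://github.com/kminehart/charts.git
helm install ./charts/incubator/hydra --name hydra-test --namespace hydra-test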
Nice :)
That didn't take long!
Have a look and let me know if there's anything alarming that should be changed.
time="2017-04-19T20:48:35Z" level=info msg="Connecting with postgres://*:*@authorization-postgresql:5432/hydra?sslmode=disable"
time="2017-04-19T20:48:35Z" level=info msg="Connected to SQL!"
time="2017-04-19T20:48:35Z" level=warning msg="Expected system secret to be at least 32 characters long, got 8 characters."
time="2017-04-19T20:48:35Z" level=info msg="Generating a random system secret..."
time="2017-04-19T20:48:35Z" level=info msg="Generated system secret: 3DP5IZWI&Z6PN?4BI<A?b?hPN,8F&Mqq"
time="2017-04-19T20:48:35Z" level=warning msg="WARNING: DO NOT generate system secrets in production. The secret will be leaked to the logs."
time="2017-04-19T20:48:35Z" level=info msg="Applied 0 migrations postgres!"
time="2017-04-19T20:48:35Z" level=info msg="Key pair for signing hydra.openid.id-token is missing. Creating new one."
time="2017-04-19T20:48:37Z" level=info msg="Key pair for signing hydra.consent.response is missing. Creating new one."
time="2017-04-19T20:48:46Z" level=info msg="Key pair for signing hydra.consent.challenge is missing. Creating new one."
time="2017-04-19T20:48:52Z" level=warning msg="No clients were found. Creating a temporary root client..."
helm install --dry-run --debug
NAME: hardy-clownfish
REVISION: 1
RELEASED: Wed Apr 19 15:50:42 2017
CHART: hydra-0.7.10
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
config:
  accessTokenLifespan: 1h
  authorizeCodeLifespan: 10m
  consentUrl: https://consent.example.com
  idTokenLifespan: 1h
  logLevel: info
  system:
    secret: changeme
image: oryd/hydra
imagePullPolicy: Always
imageTag: v0.7.10
mountPath: /root
persistence:
  accessMode: ReadWriteOnce
  enabled: true
  size: 1Gi
postgresql:
  cpu: 250m
  global: {}
  image: postgres
  imageTag: "9.6"
  memory: 256Mi
  metrics:
    enabled: false
    image: wrouesnel/postgres_exporter
    imagePullPolicy: IfNotPresent
    imageTag: v0.1.1
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 10Gi
    subPath: postgresql-db
  postgresDatabase: hydra
  postgresPassword: hydra
  postgresUser: hydra
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
replicas: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
HOOKS:
MANIFEST:
---
# Source: hydra/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hardy-clownfish-postgresql
  labels:
    app: hardy-clownfish-postgresql
    chart: "postgresql-0.6.0"
    release: "hardy-clownfish"
    heritage: "Tiller"
type: Opaque
data:
  postgres-password: "aHlkcmE="
---
# Source: hydra/templates/hydra_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: hardy-clownfish-hydra-secret
  labels:
    app: hardy-clownfish-hydra
    chart: "hydra-0.7.10"
    release: "hardy-clownfish"
    heritage: "Tiller"
type: Opaque
data:
  system.secret: Y2hhbmdlbWU=
---
# Source: hydra/charts/postgresql/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hardy-clownfish-postgresql
  labels:
    app: hardy-clownfish-postgresql
    chart: "postgresql-0.6.0"
    release: "hardy-clownfish"
    heritage: "Tiller"
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "10Gi"
---
# Source: hydra/templates/hydra_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hardy-clownfish-hydra
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1Gi"
---
# Source: hydra/charts/postgresql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: hardy-clownfish-postgresql
  labels:
    app: hardy-clownfish-postgresql
    chart: "postgresql-0.6.0"
    release: "hardy-clownfish"
    heritage: "Tiller"
spec:
  ports:
    - name: postgresql
      port: 5432
      targetPort: postgresql
  selector:
    app: hardy-clownfish-postgresql
---
# Source: hydra/templates/hydra_service.yaml
kind: Service
apiVersion: v1
metadata:
  name: hardy-clownfish-hydra-service
  labels:
    app: hardy-clownfish-hydra
    chart: "hydra-0.7.10"
    release: "hardy-clownfish"
    heritage: "Tiller"
spec:
  type: ClusterIP
  selector:
    app: hardy-clownfish-hydra
  ports:
    - name: service
      port: 4444
      targetPort: 4444
      protocol: TCP
---
# Source: hydra/charts/postgresql/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hardy-clownfish-postgresql
  labels:
    app: hardy-clownfish-postgresql
    chart: "postgresql-0.6.0"
    release: "hardy-clownfish"
    heritage: "Tiller"
spec:
  template:
    metadata:
      labels:
        app: hardy-clownfish-postgresql
    spec:
      containers:
        - name: hardy-clownfish-postgresql
          image: "postgres:9.6"
          imagePullPolicy: ""
          env:
            - name: POSTGRES_USER
              value: "hydra"
            # Required for pg_isready in the health probes.
            - name: PGUSER
              value: "hydra"
            - name: POSTGRES_DB
              value: "hydra"
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: hardy-clownfish-postgresql
                  key: postgres-password
            - name: POD_IP
              valueFrom: { fieldRef: { fieldPath: status.podIP } }
          ports:
            - name: postgresql
              containerPort: 5432
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            initialDelaySeconds: 5
            timeoutSeconds: 3
            periodSeconds: 5
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data/pgdata
              subPath: postgresql-db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: hardy-clownfish-postgresql
---
# Source: hydra/templates/hydra_deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hardy-clownfish-hydra
  labels:
    app: hardy-clownfish-hydra
    chart: "hydra-0.7.10"
    release: "hardy-clownfish"
    heritage: "Tiller"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hardy-clownfish-hydra
  template:
    metadata:
      name: hardy-clownfish-hydra
      labels:
        app: hardy-clownfish-hydra
        chart: "hydra-0.7.10"
        release: "hardy-clownfish"
        heritage: "Tiller"
    spec:
      volumes:
        - name: hydra-data
          persistentVolumeClaim:
            claimName: hardy-clownfish-hydra
        - name: hydra-secret
          secret:
            secretName: hardy-clownfish-hydra-secret
      containers:
        - name: hydra
          image: oryd/hydra:v0.7.10
          imagePullPolicy: Always
          command: ["hydra", "host", "--dangerous-auto-logon"]
          volumeMounts:
            - name: hydra-data
              mountPath: /root
          ports:
            - name: service
              containerPort: 4444
          env:
            - name: SYSTEM_SECRET
              valueFrom:
                secretKeyRef:
                  name: hardy-clownfish-hydra-secret
                  key: system.secret
            - name: DATABASE_URL
              value: postgres://hydra:hydra@hardy-clownfish-postgresql:5432/hydra?sslmode=disable
            - name: HTTPS_ALLOW_TERMINATION_FROM
              value: 0.0.0.0/0
            - name: LOG_LEVEL
              value: info
            - name: CONSENT_URL
              value: https://consent.example.com
            - name: ACCESS_TOKEN_LIFESPAN
              value: 1h
            - name: ID_TOKEN_LIFESPAN
              value: 1h
            - name: AUTHORIZE_CODE_LIFESPAN
              value: 10m
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
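Once it's actually installed (not just a dry run), a quick way to sanity-check the release is to look at the pods behind the app label from the manifests above and tail their logs; the release name here just reuses the dry-run's generated one, so substitute your own:
# Pods carry the app=<release>-hydra label set in the deployment above.
kubectl get pods -l app=hardy-clownfish-hydra
# Tail a pod's logs (use a real pod name from the previous command).
kubectl logs -f <hydra-pod-name>
# Forward the Hydra port locally for a quick poke at the API on :4444.
kubectl port-forward <hydra-pod-name> 4444:4444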
Here are the configuration options I provide to Helm users. Are there any extras I should include for environment variables and whatnot?
image: "oryd/hydra"
imageTag: "v0.7.10"
imagePullPolicy: "Always"
replicas: 2
mountPath: "/root"
# Persistent storage.
persistence:
  ## If this is false, then emptyDir: {} will be used.
  ## Setting this to true is highly recommended for production use.
  ## If this is false, you will lose your data when your pod is destroyed.
  enabled: true
  ## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
  ## Default: volume.alpha.kubernetes.io/storage-class: default
  #
  # storageClass: <storageClass>
  accessMode: ReadWriteOnce
  size: 1Gi
postgresql:
  imageTag: "9.6"
  memory: 256Mi
  cpu: 250m
  postgresUser: hydra
  postgresPassword: hydra
  postgresDatabase: hydra
  persistence:
    size: 10Gi
config:
  system:
    secret: "changeme"
  consentUrl: "https://consent.example.com"
  logLevel: "info"
  accessTokenLifespan: "1h"
  idTokenLifespan: "1h"
  authorizeCodeLifespan: "10m"
# http://kubernetes.io/docs/user-guide/compute-resources/
resources:
  requests:
    memory: 128Mi
    cpu: 100m
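For reference, overriding a few of these at install time would look roughly like this with Helm 2; the chart path, release name, and consent URL below are placeholders:
helm install ./incubator/hydra --name my-hydra \
  --set config.consentUrl=https://consent.mycompany.example \
  --set config.logLevel=debug \
  --set replicas=3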
Actually, I had a problem with one of the pods constantly terminating with an exit status 1.
Logs:
time="2017-04-19T21:29:58Z" level=info msg="Connecting with postgres://*:*@authorization-postgresql:5432/hydra?sslmode=disable"
time="2017-04-19T21:29:58Z" level=info msg="Connected to SQL!"
time="2017-04-19T21:29:58Z" level=warning msg="Expected system secret to be at least 32 characters long, got 8 characters."
time="2017-04-19T21:29:58Z" level=info msg="Generating a random system secret..."
time="2017-04-19T21:29:58Z" level=info msg="Generated system secret: F&SFjKEDGRO-sgy(K>ns,U8x9aB(6T2%"
time="2017-04-19T21:29:58Z" level=warning msg="WARNING: DO NOT generate system secrets in production. The secret will be leaked to the logs."
time="2017-04-19T21:29:58Z" level=info msg="Applied 0 migrations postgres!"
Could not fetch signing key for OpenID Connect
These are the logs for a working pod:
time="2017-04-19T20:48:35Z" level=info msg="Connecting with postgres://*:*@authorization-postgresql:5432/hydra?sslmode=disable"
time="2017-04-19T20:48:35Z" level=info msg="Connected to SQL!"
time="2017-04-19T20:48:35Z" level=warning msg="Expected system secret to be at least 32 characters long, got 8 characters."
time="2017-04-19T20:48:35Z" level=info msg="Generating a random system secret..."
time="2017-04-19T20:48:35Z" level=info msg="Generated system secret: 3DP5IZWI&Z6PN?4BI<A?b?hPN,8F&Mqq"
time="2017-04-19T20:48:35Z" level=warning msg="WARNING: DO NOT generate system secrets in production. The secret will be leaked to the logs."
time="2017-04-19T20:48:35Z" level=info msg="Applied 0 migrations postgres!"
time="2017-04-19T20:48:35Z" level=info msg="Key pair for signing hydra.openid.id-token is missing. Creating new one."
time="2017-04-19T20:48:37Z" level=info msg="Key pair for signing hydra.consent.response is missing. Creating new one."
time="2017-04-19T20:48:46Z" level=info msg="Key pair for signing hydra.consent.challenge is missing. Creating new one."
time="2017-04-19T20:48:52Z" level=warning msg="No clients were found. Creating a temporary root client..."
time="2017-04-19T20:48:52Z" level=info msg="Temporary root client created."
time="2017-04-19T20:48:52Z" level=info msg="client_id: eb11d028-fc89-459e-9288-4e9caec2ef7f"
time="2017-04-19T20:48:52Z" level=info msg="client_secret: 91BhL3ukzaY5EqPn"
time="2017-04-19T20:48:52Z" level=warning msg="WARNING: YOU MUST delete this client once in production, as credentials may have been leaked in your logfiles."
time="2017-04-19T20:48:52Z" level=warning msg="Do not use flag --dangerous-auto-logon in production."
time="2017-04-19T20:48:52Z" level=info msg="Persisting config in file /root/.hydra.yml"
time="2017-04-19T20:48:52Z" level=warning msg="No TLS Key / Certificate for HTTPS found. Generating self-signed certificate."
time="2017-04-19T20:48:52Z" level=info msg="Setting up http server on :4444"
time="2017-04-19T20:48:52Z" level=info msg="TLS termination enabled, disabling https."
Any ideas?
The system secret is wrong, probably because it isn't being set properly, so Hydra can't decode the store:
time="2017-04-19T21:29:58Z" level=warning msg="Expected system secret to be at least 32 characters long, got 8 characters."
time="2017-04-19T21:29:58Z" level=info msg="Generating a random system secret..."
time="2017-04-19T21:29:58Z" level=info msg="Generated system secret: F&SFjKEDGRO-sgy(K>ns,U8x9aB(6T2%"
Makes sense. Providing a long enough secret makes it run flawlessly.
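Concretely, something like this generates a secret that satisfies the 32-character check and hands the same value to every replica; the --set path assumes the values layout above and the chart path is a placeholder:
# 32 hex characters, so all replicas share one key for decrypting the stored signing keys.
SYSTEM_SECRET=$(openssl rand -hex 16)
helm install ./incubator/hydra --name hydra --set config.system.secret=$SYSTEM_SECRET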
I'll just need to test it a bit more and add the option to provide a TLS certificate, and then it should be finished.
Sorry about the delay!
I've created a PR to kubernetes/charts. If someone more familiar with the application wants to take a look at it, the PR is here:
https://github.com/kubernetes/charts/pull/1022
And the chart itself, documentation and everything is here:
https://github.com/kminehart/charts/tree/master/incubator/hydra
Awesome! Thank you for your work on this :)
Again, thank you so much for your work on this. I've added it to the README and am closing this issue.
I can't find the Helm chart now; where is it?
https://github.com/ory/hydra/issues/430#issuecomment-295044119
Keep in mind, this is 1 1/2 years old.
@aeneasr Right, but the README of this repo links to a PR that was merged, yet the chart no longer seems to exist in the chart repo. 😕
Actually, the README links to a PR that was closed as superseded by a different PR, https://github.com/helm/charts/pull/1241, which was then itself closed without being merged.
I've rebooted the effort here: https://github.com/helm/charts/pull/12845
Helm is the "apt-get" of Kubernetes. A stable Helm chart also provides a canonical way to deploy your application on Kubernetes.
Hydra has a great use case in our organization, and I will be working on this right now. :)
I'm not incredibly familiar with how Hydra is set up or how well it lends itself to being distributed/clustered, so I'll be doing lots of research and asking lots of questions along the way.