hasura / graphql-engine

Blazing fast, instant realtime GraphQL APIs on your DB with fine grained access control, also trigger webhooks on database events.
https://hasura.io
Apache License 2.0

Kerberos on Hasura Container, is it possible? #4588

Open ccalvarez opened 4 years ago

ccalvarez commented 4 years ago

Hi,

Is it possible to configure Kerberos on a Hasura Docker container? I need to connect Hasura to an existing database that is configured to use SSPI to delegate authentication to Windows Active Directory. I am now at the point where Hasura returns "No Kerberos credentials available".

I'm using the hasura/graphql-engine v1.1.1 image.

Also, I was looking for documentation on how to configure Kerberos on Linux, but I don't know which Linux distro the Hasura container image is based on. Besides, I know that to keep the image size lean, most of the usual Linux commands are not available.
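
For reference, my understanding so far (the realm, KDC, principal, and keytab path below are placeholders, and this assumes the libpq in the image was built with GSSAPI support) is that the client side needs the Kerberos libraries, a krb5.conf for the AD realm, and a valid ticket in the credential cache before graphql-engine connects:

# /etc/krb5.conf (realm and KDC are placeholders)
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = ad-controller.example.com
    }

# obtain a ticket from a keytab before starting the engine
kinit -kt /etc/hasura.keytab svc-hasura@EXAMPLE.COM

# no password in the URL; when the server requests SSPI/GSSAPI auth,
# libpq answers with the ticket from the credential cache
export HASURA_GRAPHQL_DATABASE_URL=postgres://svc-hasura@192.168.0.10:5432/myDatabase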

Thank you.

arjunyel commented 4 years ago

Here's how you may customize it: https://github.com/hasura/graphql-engine/issues/2729

tirumaraiselvan commented 4 years ago

@ccalvarez I'd be happy to get on a call and try to get this integration going for you. You can mail me at tiru@hasura.io and we can set up some time.

ccalvarez commented 4 years ago

Hi.

I've followed the examples in #2729.

I have a Dockerfile that may serve as a starting point. I'm using Alpine, but I don't mind using another Linux distro. (I'm just starting out with Docker and I don't have experience configuring Kerberos.)

Dockerfile:

FROM hasura/graphql-engine:latest as base

FROM alpine:latest

RUN apk update && apk add libpq krb5-pkinit krb5-dev krb5

# copy hasura binary from base container
COPY --from=base /bin/graphql-engine /bin/graphql-engine

#CMD ["graphql-engine" "serve"]

As you can see, I've commented out the CMD line. If I uncomment it, the image still builds successfully with:

docker build -t hasura-kerberos .

But when I try to start the container (I'm using the md5 user/password method to keep things simple):

docker run --name hasura-kerberos -d -p 8080:8080 -e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:myPassword@192.168.0.10:5432/myDatabase -e HASURA_GRAPHQL_ENABLE_CONSOLE=true hasura-kerberos

The container fails to start, with this error in the logs:

/bin/sh: [graphql-engine: not found

This is my current state.

[UPDATE] I ran a container in interactive mode and found out that, for some reason, graphql-engine serve doesn't start the Hasura engine, so I will use Debian instead of Alpine.
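
In case it helps anyone following along, this is the rough sketch I have in mind for the Debian variant (untested; the package list is my guess at what kinit and GSSAPI need, and the principal, keytab, and krb5.conf paths are placeholders):

FROM hasura/graphql-engine:latest as base

FROM debian:buster-slim

# libpq for Postgres, krb5-user for kinit, libgssapi-krb5-2 for GSSAPI
RUN apt-get update \
    && apt-get install -y --no-install-recommends libpq5 krb5-user libgssapi-krb5-2 \
    && rm -rf /var/lib/apt/lists/*

# copy hasura binary from base container
COPY --from=base /bin/graphql-engine /bin/graphql-engine

# krb5.conf and keytab for the AD realm (placeholder files; in practice the
# keytab would probably be mounted rather than baked into the image)
COPY krb5.conf /etc/krb5.conf
COPY hasura.keytab /etc/hasura.keytab

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

CMD ["/entrypoint.sh"]

with an entrypoint.sh along these lines:

#!/bin/sh
set -e
# get a ticket from the keytab, then hand the process over to graphql-engine
kinit -kt /etc/hasura.keytab svc-hasura@EXAMPLE.COM
exec /bin/graphql-engine serve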

luke-j commented 4 years ago

for some reason graphql-engine serve doesn't start the Hasura engine (in alpine)

This is my experience as well. It doesn't error - just doesn't start.

It would be great if graphql-engine serve worked on Alpine - is this something that might be possible?
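
A guess on my part (not confirmed anywhere in this thread): the released graphql-engine binary appears to be dynamically linked against glibc, while Alpine ships musl, so the dynamic loader can't resolve it and the process simply exits. One way to check from inside the Alpine container (ldd comes from the musl-utils package if it isn't already there):

apk add musl-utils
ldd /bin/graphql-engine   # "not found" entries or a missing ld-linux interpreter point at a libc mismatch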

0x000def42 commented 3 years ago

I had the same issue. Write CMD ["graphql-engine", "serve"] instead of CMD ["graphql-engine" "serve"] - I had missed the "," :(
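
For anyone else hitting the /bin/sh: [graphql-engine: not found error above: when the exec-form array isn't valid JSON (the missing comma), Docker falls back to treating the CMD as a shell-form string, so /bin/sh tries to run the literal token [graphql-engine:

CMD ["graphql-engine" "serve"]    # malformed array -> run via /bin/sh, hence the error
CMD ["graphql-engine", "serve"]   # valid exec form -> runs the binary directly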

blncoxauto commented 3 years ago

The solution in #2729 doesn't work in AWS ECS as of v1.3.0; I get a segmentation fault running on ECS...

I'm guessing something is missing, either from the installed packages or from what the COPY moves over along with /bin/graphql-engine?

In CloudFormation I can't figure out an easy way to assemble the HASURA_GRAPHQL_DATABASE_URL environment variable using values from SecretsManager, so I've written the following wrapper:

#!/bin/sh
set -e

# From https://stackoverflow.com/questions/57496552/customize-hasura-docker-image

if [ ! -z "${DB_SECRETS}" ]; then
    DB_NAME=$(echo "$DB_SECRETS" | jq -r .dbname)
    DB_USER=$(echo "$DB_SECRETS" | jq -r .username)
    DB_HOST=$(echo "$DB_SECRETS" | jq -r .host)
    DB_PORT=$(echo "$DB_SECRETS" | jq -r .port)
    DB_PASSWORD=$(echo "$DB_SECRETS" | jq -r .password)
fi

if [ -z "${DB_NAME}" ]; then
   echo "Must provide DB_NAME environment variable. Exiting...."
   exit 1
fi

if [ -z "${DB_USER}" ]; then
   echo "Must provide DB_USER environment variable. Exiting...."
   exit 1
fi

if [ -z "${DB_PASSWORD}" ]; then
   echo "Must provide DB_PASSWORD environment variable. Exiting...."
   exit 1
fi

if [ -z "${DB_HOST}" ]; then
   echo "Must provide DB_HOST environment variable. Exiting...."
   exit 1
fi

if [ -z "${DB_PORT}" ]; then
   echo "Must provide DB_PORT environment variable. Exiting...."
   exit 1
fi

if [ -z "${DB_PASSWORD}" ]; then
   echo "Must provide DB_PASSWORD environment variable. Exiting...."
   exit 1
fi

export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}

#echo "HASURA CONNECTION STRING: " $HASURA_GRAPHQL_DATABASE_URL
#echo "ADMIN: " $HASURA_GRAPHQL_ADMIN_SECRET

echo "Starting Hasura GraphQL server connecting to ${DB_HOST}:${DB_PORT}/${DB_NAME} as ${DB_USER}"
# exec so graphql-engine replaces the shell and receives container signals directly
exec /bin/graphql-engine serve
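
For reference, this assumes DB_SECRETS is injected as the raw JSON of the Secrets Manager secret, using the same keys the script reads with jq; roughly like this (all values are placeholders):

{
  "dbname": "myDatabase",
  "username": "hasura",
  "password": "change-me",
  "host": "mydb.example.us-east-1.rds.amazonaws.com",
  "port": "5432"
}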

With the Dockerfile:

# This is based on https://stackoverflow.com/questions/57496552/customize-hasura-docker-image
FROM hasura/graphql-engine:v1.3.2 as base

FROM debian:stretch-slim

# install jq (used by entrypoint.sh) and libpq (required by Hasura)
RUN apt-get -y update \
    && apt-get install -y jq \
    && apt-get install -y libpq-dev \
    && apt-get -y auto-remove \
    && apt-get -y clean \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /usr/share/doc/ \
    && rm -rf /usr/share/man/ \
    && rm -rf /usr/share/locale/

# copy hasura binary from base container
COPY --from=base /bin/graphql-engine /bin/graphql-engine
#RUN chmod +x /bin/graphql-engine

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

CMD ["/entrypoint.sh"]

Then, deploying with AWS CDK to generate the CloudFormation (which works up to v1.2.2), I am able to wire in the environment variables with the password as a secret.