ComputerScienceHouse / gitlab-ce-oidc

GitLab CE Docker image with OpenID Connect support
https://hub.docker.com/r/computersciencehouse/gitlab-ce-oidc
MIT License

Not redirecting to IDP #2

Open rasheedamir opened 7 years ago

rasheedamir commented 7 years ago

Not sure what's wrong, but I am not able to get GitLab to redirect to my IdP (Keycloak) for login; I keep getting this landing page:

[screenshot: GitLab sign-in landing page, 2017-08-18]

I am trying to set up GitLab with Keycloak using OpenID Connect.

Here is the environment variable:

          - name: GITLAB_OMNIBUS_CONFIG
            value: |
               gitlab_rails['omniauth_enabled'] = true
               gitlab_rails['omniauth_allow_single_sign_on'] = true
               gitlab_rails['omniauth_block_auto_created_users'] = false
               gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'keycloak'
               gitlab_rails['omniauth_providers'] = [{
                 'name' => 'openid_connect',
                 'args' => {
                   'name' => 'keycloak',
                   'scope' => ['openid', 'profile'],
                   'response_type' => 'code',
                   'discovery' => true,
                   'issuer' => 'https://keycloak.dd.theapp.com/auth/',
                   'client_options' => {
                     'port' => '443',
                     'scheme' => 'https',
                     'host' => 'keycloak.dd.theapp.com',
                     'identifier' => 'gitlab',
                     'secret' => 'b7875680-6ad7-44a5-97cb-bd210789eb41',
                     'redirect_uri' => 'http://gitlab.dd.theapp.com/users/auth/openid_connect/callback',
                     'authorization_endpoint' => '/auth/realms/tools/protocol/openid-connect/auth',
                     'token_endpoint' => '/auth/realms/tools/protocol/openid-connect/token',
                     'userinfo_endpoint' => '/auth/realms/tools/protocol/openid-connect/userinfo'
                   }
                 }
               }]
liam-middlebrook commented 7 years ago

@rasheedamir looks like you posted your app's secret in that blob. You should probably generate a new value!

rasheedamir commented 7 years ago

@liam-middlebrook thanks for pointing that out; I have already rotated the values above.

stevenmirabito commented 7 years ago

For your omniauth_providers, try this:

{
    'name'=>'openid_connect',
    'args'=>{
        'name'=>'keycloak',
        'scope'=>['openid', 'profile'],
        'response_type'=>'code',
        'discovery'=>true,
        'issuer'=>'https://keycloak.dd.theddapp.com/auth/realms/tools',
        'client_options'=>{
            'port'=>'443',
            'scheme'=>'https',
            'host'=>'keycloak.dd.theddapp.com',
            'identifier'=>'gitlab',
            'secret'=>'<your secret>',
            'redirect_uri'=>'http://gitlab.dd.theddapp.com/users/auth/keycloak/callback'
        }
    }
}

Differences:

- 'issuer' includes the full realm path (/auth/realms/tools), not just /auth/
- 'redirect_uri' uses the provider name in the callback path (/users/auth/keycloak/callback instead of /users/auth/openid_connect/callback)
- the explicit authorization_endpoint/token_endpoint/userinfo_endpoint entries are dropped, since 'discovery' => true fetches them automatically
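For reference, embedded back into the Kubernetes manifest, the corrected config would look something like this (a sketch assuming the same Deployment layout as above; hostnames and the secret are placeholders):

```yaml
- name: GITLAB_OMNIBUS_CONFIG
  value: |
    gitlab_rails['omniauth_enabled'] = true
    gitlab_rails['omniauth_allow_single_sign_on'] = true
    gitlab_rails['omniauth_block_auto_created_users'] = false
    gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'keycloak'
    gitlab_rails['omniauth_providers'] = [{
      'name' => 'openid_connect',
      'args' => {
        'name' => 'keycloak',
        'scope' => ['openid', 'profile'],
        'response_type' => 'code',
        'discovery' => true,
        # issuer must include the realm path
        'issuer' => 'https://keycloak.example.com/auth/realms/tools',
        'client_options' => {
          'port' => '443',
          'scheme' => 'https',
          'host' => 'keycloak.example.com',
          'identifier' => 'gitlab',
          'secret' => '<your secret>',
          # callback path must match the provider name ('keycloak')
          'redirect_uri' => 'https://gitlab.example.com/users/auth/keycloak/callback'
        }
      }
    }]
```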

rasheedamir commented 7 years ago

Great, thanks @stevenmirabito!

For some reason, the environment variable value is not taking effect.

I exec'ed into the container, ran printenv, and I see this:

GITLAB_OMNIBUS_CONFIG=gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = true
gitlab_rails['omniauth_block_auto_created_users'] = false
gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'keycloak'
gitlab_rails['omniauth_providers'] = [{'name'=>'openid_connect', 'args'=>{'name'=>'keycloak', 'scope'=>['openid', 'profile'], 'response_type'=>'code', 'discovery'=>true, 'issuer'=>'https://keycloak.dd.theapp.com/auth/realms/tools', 'client_options'=>{'port'=>'443', 'scheme'=>'https', 'host'=>'keycloak.dd.theapp.com', 'identifier'=>'gitlab', 'secret'=>'v888899-6ad7-44a5-88990-bd210789eb41', 'redirect_uri'=>'http://gitlab.dd.theapp.com/users/auth/keycloak/callback'}}}]

To verify that these settings are now correct, I exec'ed into the container and updated gitlab.rb manually, and it did redirect me to Keycloak.

What could the issue be?

stevenmirabito commented 7 years ago

Check the container logs; there's likely a permissions issue preventing Omnibus from creating the config files. This container needs to run as root, since it expects to manage file permissions and bind to privileged ports.
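In a Kubernetes Deployment that means not forcing a non-root user on the container. A minimal sketch (securityContext fields are standard Kubernetes; the container name is an assumption about your manifest):

```yaml
spec:
  containers:
    - name: gitlab
      image: computersciencehouse/gitlab-ce-oidc
      securityContext:
        # Omnibus needs root to write /etc/gitlab, chown data dirs,
        # and bind ports 22/80/443 inside the container
        runAsUser: 0
```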

rasheedamir commented 7 years ago

@stevenmirabito thanks

I see the following in the initial logs:

Preparing services...
Starting services...
Configuring GitLab package...
Configuring GitLab...
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system

  * Moving existing certificates found in /opt/gitlab/embedded/ssl/certs

  * Symlinking existing certificates found in /etc/gitlab/trusted-certs
gitlab Reconfigured!
Checking for an omnibus managed postgresql: OK
Checking if we already upgraded: OK
The latest version 9.6.3 is already running, nothing to do
==> /var/log/gitlab/postgres-exporter/current <==
2017-08-20_12:24:39.35303 time="2017-08-20T12:24:39Z" level=info msg="Semantic Version Changed: 0.0.0 -> 9.6.3" source="postgres_exporter.go:945"
2017-08-20_12:24:39.35345 time="2017-08-20T12:24:39Z" level=error msg="Failed to reload user queries: /var/opt/gitlab/postgres-exporter/queries.yaml open /var/opt/gitlab/postgres-exporter/queries.yaml: no such file or directory" source="postgres_exporter.go:955"
2017-08-20_12:24:39.37302 time="2017-08-20T12:24:39Z" level=info msg="Starting Server: localhost:9187" source="postgres_exporter.go:1042"
2017-08-20_12:24:57.15116 time="2017-08-20T12:24:57Z" level=info msg="Semantic Version Changed: 0.0.0 -> 9.6.3" source="postgres_exporter.go:945"
2017-08-20_12:24:57.17788 time="2017-08-20T12:24:57Z" level=info msg="Starting Server: localhost:9187" source="postgres_exporter.go:1042"

==> /var/log/gitlab/prometheus/current <==
2017-08-20_12:24:56.80431 time="2017-08-20T12:24:56Z" level=info msg="Build context (go=go1.8.1, user=, date=)" source="main.go:89"
2017-08-20_12:24:56.80798 time="2017-08-20T12:24:56Z" level=info msg="Loading configuration file /var/opt/gitlab/prometheus/prometheus.yml" source="main.go:251"
2017-08-20_12:24:56.83183 time="2017-08-20T12:24:56Z" level=info msg="Loading series map and head chunks..." source="storage.go:421"
2017-08-20_12:24:56.84898 time="2017-08-20T12:24:56Z" level=info msg="2721 series loaded." source="storage.go:432"
2017-08-20_12:24:56.84921 time="2017-08-20T12:24:56Z" level=info msg="Starting target manager..." source="targetmanager.go:61"
2017-08-20_12:24:56.84923 time="2017-08-20T12:24:56Z" level=info msg="Listening on localhost:9090" source="web.go:259"
2017-08-20_12:24:56.85876 time="2017-08-20T12:24:56Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:24:56.85880 time="2017-08-20T12:24:56Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:24:57.86647 time="2017-08-20T12:24:57Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:24:57.86666 time="2017-08-20T12:24:57Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"

==> /var/log/gitlab/nginx/gitlab_error.log <==

==> /var/log/gitlab/nginx/access.log <==

==> /var/log/gitlab/nginx/gitlab_access.log <==

==> /var/log/gitlab/nginx/error.log <==

==> /var/log/gitlab/nginx/current <==

==> /var/log/gitlab/gitlab-workhorse/current <==
2017-08-20_12:24:23.19900 2017/08/20 12:24:23 Starting gitlab-workhorse v2.3.0-20170814.122004
2017-08-20_12:24:23.19915 2017/08/20 12:24:23 Can not load config file "config.toml": open config.toml: no such file or directory
2017-08-20_12:24:24.23444 2017/08/20 12:24:24 Starting gitlab-workhorse v2.3.0-20170814.122004
2017-08-20_12:24:24.23490 2017/08/20 12:24:24 keywatcher: starting process loop
2017-08-20_12:24:24.23493 2017/08/20 12:24:24 redis: dialing "unix", "/var/opt/gitlab/redis/redis.socket"
2017-08-20_12:24:56.27149 2017/08/20 12:24:56 Starting gitlab-workhorse v2.3.0-20170814.122004
2017-08-20_12:24:56.27203 2017/08/20 12:24:56 keywatcher: starting process loop
2017-08-20_12:24:56.27212 2017/08/20 12:24:56 redis: dialing "unix", "/var/opt/gitlab/redis/redis.socket"

==> /var/log/gitlab/sshd/current <==
2017-08-20_12:23:07.71941 Server listening on 0.0.0.0 port 22.
2017-08-20_12:23:07.71946 Server listening on :: port 22.

==> /var/log/gitlab/gitlab-monitor/current <==
2017-08-20_12:24:55.66913 - -> /process
2017-08-20_12:24:57.49068 ::1 - - [20/Aug/2017:12:24:57 UTC] "GET /sidekiq HTTP/1.1" 200 3399
2017-08-20_12:24:57.49070 - -> /sidekiq
2017-08-20_12:24:57.58359 == Sinatra has ended his set (crowd applauds)
2017-08-20_12:24:57.58389 [2017-08-20 12:24:57] INFO  going to shutdown ...
2017-08-20_12:24:57.58395 [2017-08-20 12:24:57] INFO  WEBrick::HTTPServer#start done.
2017-08-20_12:24:57.86701 [2017-08-20 12:24:57] INFO  WEBrick 1.3.1
2017-08-20_12:24:57.86702 [2017-08-20 12:24:57] INFO  ruby 2.3.3 (2016-11-21) [x86_64-linux]
2017-08-20_12:24:57.86737 == Sinatra (v1.4.8) has taken the stage on 9168 for development with backup from WEBrick
2017-08-20_12:24:57.86755 [2017-08-20 12:24:57] INFO  WEBrick::HTTPServer#start: pid=1243 port=9168

==> /var/log/gitlab/node-exporter/current <==
2017-08-20_12:24:32.28158 time="2017-08-20T12:24:32Z" level=info msg=" - filesystem" source="node_exporter.go:162"
2017-08-20_12:24:32.28158 time="2017-08-20T12:24:32Z" level=info msg=" - zfs" source="node_exporter.go:162"
2017-08-20_12:24:32.28160 time="2017-08-20T12:24:32Z" level=info msg=" - filefd" source="node_exporter.go:162"
2017-08-20_12:24:32.28162 time="2017-08-20T12:24:32Z" level=info msg=" - textfile" source="node_exporter.go:162"
2017-08-20_12:24:32.28163 time="2017-08-20T12:24:32Z" level=info msg=" - time" source="node_exporter.go:162"
2017-08-20_12:24:32.28163 time="2017-08-20T12:24:32Z" level=info msg=" - uname" source="node_exporter.go:162"
2017-08-20_12:24:32.28165 time="2017-08-20T12:24:32Z" level=info msg=" - wifi" source="node_exporter.go:162"
2017-08-20_12:24:32.28168 time="2017-08-20T12:24:32Z" level=info msg=" - diskstats" source="node_exporter.go:162"
2017-08-20_12:24:32.28168 time="2017-08-20T12:24:32Z" level=info msg=" - edac" source="node_exporter.go:162"
2017-08-20_12:24:32.28185 time="2017-08-20T12:24:32Z" level=info msg="Listening on localhost:9100" source="node_exporter.go:186"

==> /var/log/gitlab/gitlab-shell/gitlab-shell.log <==
# Logfile created on 2017-08-20 12:23:12 +0000 by logger.rb/56438

==> /var/log/gitlab/unicorn/unicorn_stdout.log <==

==> /var/log/gitlab/unicorn/unicorn_stderr.log <==
I, [2017-08-20T12:24:30.341389 #755]  INFO -- : master process ready
I, [2017-08-20T12:24:30.345991 #914]  INFO -- : worker=11 ready
I, [2017-08-20T12:24:30.447805 #893]  INFO -- : worker=4 ready
I, [2017-08-20T12:24:30.469563 #887]  INFO -- : worker=2 ready
I, [2017-08-20T12:24:30.488407 #923]  INFO -- : worker=14 ready
I, [2017-08-20T12:24:30.494546 #929]  INFO -- : worker=16 ready
I, [2017-08-20T12:24:30.498933 #890]  INFO -- : worker=3 ready
I, [2017-08-20T12:24:30.527270 #926]  INFO -- : worker=15 ready
I, [2017-08-20T12:24:30.566670 #917]  INFO -- : worker=12 ready
I, [2017-08-20T12:24:30.573260 #920]  INFO -- : worker=13 ready

==> /var/log/gitlab/unicorn/current <==
2017-08-20_12:24:10.06062 starting new unicorn master
2017-08-20_12:24:31.36909 adopted new unicorn master 755

==> /var/log/gitlab/redis-exporter/current <==
2017-08-20_12:24:38.32718 time="2017-08-20T12:24:38Z" level=info msg="Redis Metrics Exporter <<< filled in by build >>>    build date: <<< filled in by build >>>    sha1: <<< filled in by build >>>\n"
2017-08-20_12:24:38.32748 time="2017-08-20T12:24:38Z" level=info msg="Providing metrics at localhost:9121/metrics"
2017-08-20_12:24:38.32749 time="2017-08-20T12:24:38Z" level=info msg="Connecting to redis hosts: []string{\"unix:///var/opt/gitlab/redis/redis.socket\"}"
2017-08-20_12:24:38.32749 time="2017-08-20T12:24:38Z" level=info msg="Using alias: []string{\"\"}"

==> /var/log/gitlab/redis/current <==
2017-08-20_12:23:27.81946
2017-08-20_12:23:27.81947 566:M 20 Aug 12:23:27.819 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2017-08-20_12:23:27.81947 566:M 20 Aug 12:23:27.819 # Server started, Redis version 3.2.5
2017-08-20_12:23:27.81956 566:M 20 Aug 12:23:27.819 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2017-08-20_12:23:27.81959 566:M 20 Aug 12:23:27.819 * The server is now ready to accept connections at /var/opt/gitlab/redis/redis.socket
2017-08-20_12:24:48.98061 566:M 20 Aug 12:24:48.980 * 10000 changes in 60 seconds. Saving...
2017-08-20_12:24:48.98133 566:M 20 Aug 12:24:48.981 * Background saving started by pid 1121
2017-08-20_12:24:48.99901 1121:C 20 Aug 12:24:48.998 * DB saved on disk
2017-08-20_12:24:48.99929 1121:C 20 Aug 12:24:48.999 * RDB: 8 MB of memory used by copy-on-write
2017-08-20_12:24:49.08135 566:M 20 Aug 12:24:49.081 * Background saving terminated with success

==> /var/log/gitlab/gitaly/current <==
2017-08-20_12:24:17.14427 time="2017-08-20T12:24:17Z" level=info msg="Starting Gitaly" version=v0.21.2-20170814.122048
2017-08-20_12:24:17.14633 time="2017-08-20T12:24:17Z" level=warning msg="git path not configured. Using default path resolution" resolvedPath="/opt/gitlab/embedded/bin/git"
2017-08-20_12:24:17.14643 time="2017-08-20T12:24:17Z" level=info msg="listening on unix socket" address="/var/opt/gitlab/gitaly/gitaly.socket"
2017-08-20_12:24:55.82728 time="2017-08-20T12:24:55Z" level=info msg="Starting Gitaly" version=v0.21.2-20170814.122048
2017-08-20_12:24:55.82751 time="2017-08-20T12:24:55Z" level=warning msg="git path not configured. Using default path resolution" resolvedPath="/opt/gitlab/embedded/bin/git"
2017-08-20_12:24:55.82775 time="2017-08-20T12:24:55Z" level=info msg="listening on unix socket" address="/var/opt/gitlab/gitaly/gitaly.socket"

==> /var/log/gitlab/sidekiq/current <==
2017-08-20_12:24:30.61072 2017-08-20T12:24:30.610Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: trending_projects_worker
2017-08-20_12:24:30.61368 2017-08-20T12:24:30.613Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: remove_unreferenced_lfs_objects_worker
2017-08-20_12:24:30.61628 2017-08-20T12:24:30.616Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: stuck_import_jobs_worker
2017-08-20_12:24:30.61954 2017-08-20T12:24:30.619Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: gitlab_usage_ping_worker
2017-08-20_12:24:30.62253 2017-08-20T12:24:30.622Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: schedule_update_user_activity_worker
2017-08-20_12:24:30.62552 2017-08-20T12:24:30.625Z 774 TID-orouw9maw INFO: Cron Jobs - add job with name: remove_old_web_hook_logs_worker
2017-08-20_12:24:36.46184 2017-08-20T12:24:36.461Z 774 TID-orouw9maw INFO: Running in ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-linux]
2017-08-20_12:24:36.46186 2017-08-20T12:24:36.461Z 774 TID-orouw9maw INFO: See LICENSE and the LGPL-3.0 for licensing details.
2017-08-20_12:24:36.46189 2017-08-20T12:24:36.461Z 774 TID-orouw9maw INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
2017-08-20_12:24:36.46258 2017-08-20T12:24:36.462Z 774 TID-orouw9maw INFO: Starting processing, hit Ctrl-C to stop

==> /var/log/gitlab/logrotate/current <==

==> /var/log/gitlab/gitlab-rails/production.log <==
Raven 2.5.3 configured not to capture errors: DSN not set
Raven 2.5.3 configured not to capture errors: DSN not set
Raven 2.5.3 configured not to capture errors: DSN not set
Raven 2.5.3 configured not to capture errors: DSN not set
Started GET "/-/metrics" for 127.0.0.1 at 2017-08-20 12:24:44 +0000
Processing by MetricsController#index as HTML
Filter chain halted as :validate_prometheus_metrics rendered or redirected
Completed 404 Not Found in 69ms (Views: 68.4ms | ActiveRecord: 0.0ms)
Raven 2.5.3 configured not to capture errors: DSN not set

==> /var/log/gitlab/gitlab-rails/application.log <==
# Logfile created on 2017-08-20 12:23:52 +0000 by logger.rb/56438
August 20, 2017 12:23: User "Administrator" (admin@example.com) was created

==> /var/log/gitlab/gitlab-rails/gitlab-rails-db-migrate-2017-08-20-12-23-35.log <==
   -> 0.0140s

== Seed from /opt/gitlab/embedded/service/gitlab-rails/db/fixtures/production/001_admin.rb
Administrator account created:

login:    root
password: You'll be prompted to create one on your first visit.

== Seed from /opt/gitlab/embedded/service/gitlab-rails/db/fixtures/production/010_settings.rb

==> /var/log/gitlab/postgresql/current <==
2017-08-20_12:23:33.88523 LOG:  database system was shut down at 2017-08-20 12:23:29 GMT
2017-08-20_12:23:33.90368 LOG:  MultiXact member wraparound protections are now enabled
2017-08-20_12:23:33.90784 LOG:  database system is ready to accept connections
2017-08-20_12:23:33.90858 LOG:  autovacuum launcher started

==> /var/log/gitlab/prometheus/current <==
2017-08-20_12:24:58.87471 time="2017-08-20T12:24:58Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:24:58.87473 time="2017-08-20T12:24:58Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"

==> /var/log/gitlab/gitlab-monitor/current <==
2017-08-20_12:24:59.44142 ::1 - - [20/Aug/2017:12:24:59 UTC] "GET /database HTTP/1.1" 200 42026
2017-08-20_12:24:59.44144 - -> /database

==> /var/log/gitlab/prometheus/current <==
2017-08-20_12:24:59.88278 time="2017-08-20T12:24:59Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:24:59.88299 time="2017-08-20T12:24:59Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:25:00.89316 time="2017-08-20T12:25:00Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:25:00.89342 time="2017-08-20T12:25:00Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"

==> /var/log/gitlab/gitlab-monitor/current <==
2017-08-20_12:25:01.68406 ::1 - - [20/Aug/2017:12:25:01 UTC] "GET /process HTTP/1.1" 200 6423
2017-08-20_12:25:01.68409 - -> /process

==> /var/log/gitlab/prometheus/current <==
2017-08-20_12:25:01.90138 time="2017-08-20T12:25:01Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:25:01.90161 time="2017-08-20T12:25:01Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:25:02.90995 time="2017-08-20T12:25:02Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:210: Failed to list *v1.Node: Get https://kubernetes.default.svc:443/api/v1/nodes?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"
2017-08-20_12:25:02.91028 time="2017-08-20T12:25:02Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:184: Failed to list *v1.Pod: Get https://kubernetes.default.svc:443/api/v1/pods?resourceVersion=0: x509: certificate is valid for kube-apiserver, *.cluster.internal, *.cluster.local, kube-10.240.3.37.cluster.local, kube-api.ddzandbox.com, kubernetes.default, not kubernetes.default.svc" component="kube_client_runtime" source="kubernetes.go:73"

==> /var/log/gitlab/gitlab-rails/production.log <==
Started GET "/help" for 127.0.0.1 at 2017-08-20 12:25:03 +0000

Is this line an error?

/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system

I am trying to run it on Kubernetes.
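(Note: that runsvdir-start line is the startup script trying to raise the kernel file-descriptor limit via /proc/sys/fs/file-max, which Kubernetes mounts read-only for unprivileged containers, so it is usually a harmless warning rather than the cause of the config problem. If you did want that write to succeed, one option is a privileged pod, with the usual security trade-offs; a sketch:)

```yaml
securityContext:
  # makes /proc/sys writable inside the container;
  # grants broad host access, so use with care
  privileged: true
```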

rasheedamir commented 7 years ago

@stevenmirabito did you get a chance to review the logs above?