Ericsson / codechecker

CodeChecker is an analyzer tooling, defect database and viewer extension for the Clang Static Analyzer and Clang Tidy
https://codechecker.readthedocs.io
Apache License 2.0

Get nothing but 502's when deployed on k8s cluster #2387

Open scphantm opened 4 years ago

scphantm commented 4 years ago

Describe the bug
I deployed the Docker container on a k8s cluster. When authentication is turned off, I get 502 errors when I navigate to the products.html or the login.html page.

When I turn authentication on (dictionary or LDAP, it doesn't matter), I get nothing but 502 errors on every page, including the login.html page.

CodeChecker version 6.10

Here is the end of my log file:

[DEBUG][2019-10-14 15:05:04] {server} [22] <139857615644416> - server.py:141 __check_session_cookie() - 10.129.0.1:59498 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:04] {server} [22] <139857615644416> - server.py:174 do_GET() - 10.129.0.1:59498 -- [Anonymous] GET /favicon.ico
[DEBUG][2019-10-14 15:05:12] {server} [22] <139857607251712> - server.py:141 __check_session_cookie() - 10.129.0.1:59620 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:12] {server} [22] <139857607251712> - server.py:174 do_GET() - 10.129.0.1:59620 -- [Anonymous] GET /login.html?returnto=Default/
[DEBUG][2019-10-14 15:05:12] {server} [22] <139857598859008> - server.py:141 __check_session_cookie() - 10.129.0.1:59622 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:12] {server} [22] <139857598859008> - server.py:174 do_GET() - 10.129.0.1:59622 -- [Anonymous] GET /favicon.ico
[DEBUG][2019-10-14 15:05:16] {server} [22] <139857590466304> - server.py:141 __check_session_cookie() - 10.129.0.1:59680 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:16] {server} [22] <139857590466304> - server.py:174 do_GET() - 10.129.0.1:59680 -- [Anonymous] GET /
[DEBUG][2019-10-14 15:05:16] {server} [22] <139857582073600> - server.py:141 __check_session_cookie() - 10.129.0.1:59682 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:16] {server} [22] <139857582073600> - server.py:174 do_GET() - 10.129.0.1:59682 -- [Anonymous] GET /login.html
[DEBUG][2019-10-14 15:05:17] {server} [22] <139857573680896> - server.py:141 __check_session_cookie() - 10.129.0.1:59694 Invalid access, credentials not found - session refused.
[DEBUG][2019-10-14 15:05:17] {server} [22] <139857573680896> - server.py:174 do_GET() - 10.129.0.1:59694 -- [Anonymous] GET /favicon.ico
csordasmarton commented 4 years ago

Hi @scphantm! Could you please share your Kubernetes deployment configuration file with us so we can reproduce your problem?

scphantm commented 4 years ago
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '12'
  creationTimestamp: '2019-10-11T17:59:15Z'
  generation: 12
  labels:
    app.kubernetes.io/instance: codechecker
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: codechecker
    component: codechecker
    helm.sh/chart: codechecker-6.10.0
  name: codechecker
  namespace: codechecker
  resourceVersion: '42654948'
  selfLink: /apis/apps/v1/namespaces/codechecker/deployments/codechecker
  uid: d6f96db0-ec50-11e9-8e01-0cc47a51e1de
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: codechecker
      app.kubernetes.io/name: codechecker
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: codechecker
        app.kubernetes.io/name: codechecker
    spec:
      containers:
        - args:
            - >-
              /usr/local/bin/entrypoint.sh; CodeChecker server --verbose debug
              --workspace /workspace --not-host-only --postgresql --db-host
              grafanapg.svc.lab.mycompany.com --db-port 5432 --db-username
              codechecker --db-name codechecker_config
          command:
            - /bin/sh
            - '-c'
          env:
            - name: PGPASSFILE
              value: /.pgpass
          image: 'codechecker/codechecker-web:6.10.0'
          imagePullPolicy: IfNotPresent
          name: web
          ports:
            - containerPort: 8001
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /workspace/server_config.json
              name: codechecker-map
              subPath: server_config.json
            - mountPath: /run/secrets/pgpass
              name: codechecker-secrets
              subPath: pgpass
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: server_config.json
                path: server_config.json
            name: codechecker-map
          name: codechecker-map
        - name: codechecker-secrets
          secret:
            defaultMode: 420
            items:
              - key: pgpass
                path: pgpass
            secretName: codechecker-secrets
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: '2019-10-11T19:59:33Z'
      lastUpdateTime: '2019-10-11T20:24:52Z'
      message: ReplicaSet "codechecker-c8d89897" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: 'True'
      type: Progressing
    - lastTransitionTime: '2019-10-14T15:07:05Z'
      lastUpdateTime: '2019-10-14T15:07:05Z'
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: 'True'
      type: Available
  observedGeneration: 12
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

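One thing to note about the deployment above: the `web` container defines no readiness or liveness probes, so the platform may route traffic to it before the server is actually listening, which an edge proxy surfaces as 502s. A hedged sketch of probes that could be added under the container spec (the `/` path, port 8001, and all timings are assumptions based on the container spec above, not CodeChecker documentation):

```yaml
# Hypothetical probe block for the `web` container. Path "/" and port
# 8001 mirror the containerPort above; the delays are guesses and may
# need tuning for how long CodeChecker takes to start.
readinessProbe:
  httpGet:
    path: /
    port: 8001
  initialDelaySeconds: 10
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 8001
  initialDelaySeconds: 30
  periodSeconds: 20
```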
Here is the service

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-10-11T17:59:15Z'
  labels:
    app: codechecker
    chart: codechecker-6.10.0
    heritage: Tiller
    release: codechecker
  name: codechecker
  namespace: codechecker
  resourceVersion: '41850397'
  selfLink: /api/v1/namespaces/codechecker/services/codechecker
  uid: d6f66fc3-ec50-11e9-8e01-0cc47a51e1de
spec:
  clusterIP: 172.30.53.183
  ports:
    - name: service
      port: 8001
      protocol: TCP
      targetPort: 8001
  selector:
    app.kubernetes.io/name: codechecker
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
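Since the Route forwards to this Service, it is also worth confirming the Service actually resolves to pod endpoints; an empty endpoint list (for example, a selector/label mismatch) would produce exactly these 502s. A sketch of the checks, with names and namespace taken from the manifests above (these require access to the live cluster):

```shell
# Does the Service have any endpoints behind it?
oc get endpoints codechecker -n codechecker

# Compare pod labels against the Service selector
# (app.kubernetes.io/name: codechecker).
oc get pods -n codechecker --show-labels
```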

and the route

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  creationTimestamp: '2019-10-11T20:34:50Z'
  labels:
    app: codechecker
    chart: codechecker-6.10.0
    heritage: Tiller
    release: codechecker
  name: codecheck
  namespace: codechecker
  resourceVersion: '41881330'
  selfLink: /apis/route.openshift.io/v1/namespaces/codechecker/routes/codecheck
  uid: 92aee67b-ec66-11e9-8bc8-0cc47a51ee18
spec:
  host: codecheck.apps.lab.mycompany.com
  port:
    targetPort: service
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: edge
  to:
    kind: Service
    name: codechecker
    weight: 100
  wildcardPolicy: None
status:
  ingress:
    - conditions:
        - lastTransitionTime: '2019-10-11T20:34:50Z'
          status: 'True'
          type: Admitted
      host: codecheck.apps.lab.mycompany.com
      routerName: router
      wildcardPolicy: None
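To narrow down whether the 502 comes from the HAProxy router or from the CodeChecker server itself, one option is to bypass the Route and Service entirely with a port-forward straight to the pod. A sketch, where `<codechecker-pod>` is a placeholder for the actual pod name (these commands need access to the live cluster):

```shell
# Forward local port 8001 directly to the pod, skipping the Route.
oc port-forward <codechecker-pod> 8001:8001 -n codechecker &

# If this returns the login/products page while the Route still 502s,
# the problem is in the proxy hop, not the server process.
curl -v http://localhost:8001/
```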
scphantm commented 4 years ago

I'm running this on OpenShift, so I have Routes instead of Ingress.

csordasmarton commented 4 years ago

I am a newbie in the Kubernetes world and I am not familiar with OpenShift, but I tried to debug your problem. First I started a local Kubernetes cluster using minikube, and I created the following deployment and service:

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: codechecker-deployment
  labels:
    name: codechecker
spec:
  replicas: 2
  selector:
    matchLabels:
      name: codechecker
  template:
    metadata:
      labels:
        name: codechecker
    spec:
      containers:
        - image: 'codechecker/codechecker-web:6.10.0'
          name: codechecker-web
          ports:
            - containerPort: 8001
              protocol: TCP
          volumeMounts:
            - mountPath: /workspace
              name: workspace
      volumes:
        - name: workspace
          hostPath:
            path: /home/username/workspace/k8s
            type: Directory

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: codechecker-service
spec:
  ports:
    - port: 8001
      targetPort: 8001
  selector:
    name: codechecker
  type: LoadBalancer

To make the service accessible through minikube I ran the following command: minikube service codechecker-service.

When I opened the link printed by that command in a browser, I didn't get any errors. I tried it with authentication turned on and off, and it worked in both cases.

Maybe the problem is that you are using Routes.

scphantm commented 4 years ago

Yeah, the problem is I can't use Ingress (we're not putting in an ingress server). I have found reports of http.server having problems behind proxy servers, which is exactly what Routes are.

PS, the difference between Routes and Ingress: Ingress is more or less an nginx server doing URL rewrites, while Routes are (in my case) an HAProxy service acting as a router based on hostname and port. OpenShift does have the ability to run Ingress, but I would have to deploy an nginx server, which isn't going to happen any time soon. Red Hat made this change to k8s because it's easier to do direct integration with enterprise load balancers (like F5s and such) when the routing is done by proxy.

Short of editing code, I have gone as far as I can. If I get some time I will attempt to set up a dev environment and see if I can get anywhere. You may want to test with https://github.com/minishift/minishift

OpenShift is the most popular enterprise k8s distribution by a massive margin, so you may want to make sure your stuff works on it.
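For reference, a minimal sketch of reproducing this setup on minishift, using the deployment.yaml and service.yaml posted earlier in the thread (the login credentials and project name are assumptions based on minishift defaults, and this needs a working minishift install):

```shell
# Start a local single-node OpenShift cluster.
minishift start

# Log in with the default developer account and create a project.
oc login -u developer -p developer
oc new-project codechecker

# Deploy the same manifests used in the minikube test.
oc apply -f deployment.yaml -f service.yaml

# Expose the Service; on OpenShift this generates a Route (the HAProxy
# hop that seems to be the difference from the working minikube setup).
oc expose service codechecker-service
oc get route codechecker-service
```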