eclipse-che / che

Kubernetes based Cloud Development Environments for Enterprise Teams
http://eclipse.org/che
Eclipse Public License 2.0

Workspace is failing due to failing containers #17561

Closed awsompankaj closed 3 years ago

awsompankaj commented 4 years ago

My custom stack runs fine on Eclipse Che hosted on OCP 3.11.

The same stack fails when I run it on Eclipse Che hosted on a Kubernetes cluster.


Below is the devfile:

```yaml
metadata:
  name: Demostack
components:
  - id: che-incubator/typescript/latest
    memoryLimit: 512Mi
    type: chePlugin
  - mountSources: true
    endpoints:
      - name: nodejs
        port: 3001
    memoryLimit: 512Mi
    type: dockerimage
    image: 'docker.io/awsompankaj/skytap:v5'
    alias: nodejs
  - mountSources: true
    endpoints:
      - name: postwoman
        port: 3000
    memoryLimit: 1024Mi
    type: dockerimage
    image: docker.io/liyasthomas/postwoman
    alias: postwomen
  - mountSources: true
    endpoints:
      - name: omnidb
        port: 8080
      - name: socket
        port: 25482
    memoryLimit: 1024Mi
    type: dockerimage
    image: docker.io/awsompankaj/omnidb:v1
    alias: omnidb
  - mountSources: true
    endpoints:
      - name: mongo
        port: 27017
    env:
      - name: MONGO_INITDB_ROOT_USERNAME
        value: root
      - name: MONGO_INITDB_ROOT_PASSWORD
        value: admin123
    memoryLimit: 1024Mi
    type: dockerimage
    image: docker.io/mongo
    alias: mongo
  - mountSources: true
    endpoints:
      - name: admin-mongo
        port: 1234
    memoryLimit: 1024Mi
    type: dockerimage
    image: docker.io/awsompankaj/admin-mongo:v4
    alias: admin-mongo
  - mountSources: true
    endpoints:
      - name: nginx
        port: 80
    memoryLimit: 1024Mi
    type: dockerimage
    image: docker.io/awsompankaj/nginx-skytap
    alias: nginx-skytap
apiVersion: 1.0.0
```
awsompankaj commented 4 years ago

Dockerfile for omnidb

```dockerfile
FROM alpine:3.11

MAINTAINER Taivo Käsper <taivo.kasper@gmail.com>

ENV OMNIDB_VERSION 2.17.0

RUN apk add --no-cache --virtual .build-deps curl unzip g++ python3-dev \
      && apk add --no-cache make wget llvm \
      && apk add --no-cache --update python3 \
      && pip3 install --upgrade pip \
      && apk add postgresql-dev libffi-dev \
      && pip3 install psycopg2 \
      && pip3 install cffi \
      && curl -Lo /tmp/OmniDB.zip https://github.com/OmniDB/OmniDB/archive/${OMNIDB_VERSION}.zip \
      && unzip /tmp/OmniDB.zip -d /opt/ \
      && rm -f /tmp/OmniDB.zip \
      && mkdir /etc/omnidb \
      && cd /opt/OmniDB-${OMNIDB_VERSION} \
      && pip3 install cherrypy \
      && pip3 install -r requirements.txt \
      && apk del .build-deps \
      && find /usr/local -name '*.a' -delete \
      && addgroup -S omnidb && adduser -S omnidb -G omnidb \
      && chown -R omnidb:omnidb /opt/OmniDB-${OMNIDB_VERSION} \
      && chown -R omnidb:omnidb /etc/omnidb

USER omnidb

EXPOSE 8080 25482 80

WORKDIR /opt/OmniDB-${OMNIDB_VERSION}/OmniDB

ENTRYPOINT ["python3", "omnidb-server.py", "--host=127.0.0.1", "--port=8080", "--wsport=25482", "--ewsport=80", "-d", "/etc/omnidb"]

Dockerfile for nginx


```dockerfile
FROM ubuntu:16.04

MAINTAINER Pankaj Sharma

RUN apt-get update \
    && apt-get install -y nginx \
    && apt-get clean \
    && apt-get install -y telnet vim \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && rm -rf /etc/nginx/sites-available/default

COPY my.conf /etc/nginx/conf.d/
EXPOSE 80
EXPOSE 2000
EXPOSE 25482

CMD ["nginx", "-g", "daemon off;"]
awsompankaj commented 4 years ago

If we run the same containers as a Deployment on the same Kubernetes cluster, they work fine.
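To narrow it down, a hypothetical repro outside Che (assuming Che applies a non-root security context on Kubernetes, which a plain Deployment does not): run one of the images with an arbitrary non-root UID and see whether it still starts.

```yaml
# Hypothetical manifest: force a non-root UID similar to what a Che
# workspace pod may use, to see if the image fails the same way.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-skytap-repro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-skytap-repro
  template:
    metadata:
      labels:
        app: nginx-skytap-repro
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 100000   # arbitrary non-root UID
      containers:
        - name: nginx
          image: docker.io/awsompankaj/nginx-skytap
```

If this Deployment reproduces the permission errors, the problem is the images' root assumptions rather than Che itself.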

sleshchenko commented 4 years ago

@awsompankaj have you tried to restart a workspace in debug mode to get more info?

I was not able to start a workspace with docker.io/mongo on che.openshift.io because of a mkdir failure in the pod, but after I replaced it with the mongo image we use in our devfiles, I got the following:

Workspace Logs:

```
[nginx-skytap] -> nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
[nginx-skytap] -> 2020/08/17 13:17:55 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
[nginx-skytap] -> 2020/08/17 13:17:55 [emerg] 1#1: mkdir() "/var/lib/nginx/body" failed (13: Permission denied)
[admin-mongo] -> adminMongo listening on host: http://0.0.0.0:1234
[admin-mongo] -> /app/mongo/node_modules/nedb/lib/datastore.js:77
[admin-mongo] ->     if (err) { throw err; }
[admin-mongo] ->     ^
[admin-mongo] ->
[admin-mongo] -> [Error: EACCES: permission denied, open '/app/mongo/data/dbStats.db'] {
[admin-mongo] ->   errno: -13,
[admin-mongo] ->   code: 'EACCES',
[admin-mongo] ->   syscall: 'open',
[admin-mongo] ->   path: '/app/mongo/data/dbStats.db'
[admin-mongo] -> }
[mongo] -> => sourcing /usr/share/container-scripts/mongodb/pre-init//10-check-env-vars.sh ...
[mongo] -> error: MONGODB_ADMIN_PASSWORD has to be set.
[mongo] ->
[mongo] -> You must specify the following environment variables:
[mongo] -> MONGODB_ADMIN_PASSWORD
[mongo] -> Optionally you can provide settings for a user with 'readWrite' role:
[mongo] -> (Note you MUST specify all three of these settings)
[mongo] -> MONGODB_USER
[mongo] -> MONGODB_PASSWORD
[mongo] -> MONGODB_DATABASE
[mongo] -> Optional settings:
[mongo] -> MONGODB_QUIET (default: true)
[mongo] ->
[mongo] -> For more information see /usr/share/container-scripts/mongodb/README.md
[mongo] -> within the container or visit https://github.com/sclorg/mongodb-container/.

Error: Failed to run the workspace: "The following containers have terminated:
mongo: reason = 'Error', exit code = 1, message = 'null'
admin-mongo: reason = 'Error', exit code = 1, message = 'null'
nginx-skytap: reason = 'Error', exit code = 1, message = 'null'
omnidb: reason = 'Error', exit code = 1, message = 'null'
postwomen: reason = 'Error', exit code = 1, message = 'null'"
```

I assume your logs may be different, but I hope debug mode will help you and us understand the issue better.
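For the mongo component specifically, the log shows that the sclorg-style image validates MONGODB_* variables, while the official docker.io/mongo image reads MONGO_INITDB_*. A sketch of the adjusted devfile component, assuming you switch to such an image (centos/mongodb-36-centos7 is an example, not necessarily the exact image we use):

```yaml
# Sketch: sclorg-style mongodb images require MONGODB_ADMIN_PASSWORD
# (and optionally MONGODB_USER/PASSWORD/DATABASE as a trio).
- alias: mongo
  type: dockerimage
  image: docker.io/centos/mongodb-36-centos7   # example image
  memoryLimit: 1024Mi
  mountSources: true
  endpoints:
    - name: mongo
      port: 27017
  env:
    - name: MONGODB_ADMIN_PASSWORD
      value: admin123
    - name: MONGODB_USER
      value: user
    - name: MONGODB_PASSWORD
      value: password
    - name: MONGODB_DATABASE
      value: sampledb
```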

che-bot commented 3 years ago

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.