cattr-app / server-application

Public clone of https://git.amazingcat.net/cattr/core/app
https://cattr.app
50 stars, 25 forks

Unable to run cattr on Kubernetes #119

Open khalid244 opened 11 months ago

khalid244 commented 11 months ago

I have been trying to run cattr on Kubernetes for several days, but I keep encountering the following error message:

/package/admin/s6-overlay/libexec/preinit: notice: /var/run is not a symlink to /run, fixing it
s6-rmrf: fatal: unable to remove /var/run: Read-only file system
s6-overlay-suexec: fatal: child failed with exit code 111

Has anyone else encountered this issue while running cattr on Kubernetes? If so, how did you resolve it? Any help would be greatly appreciated.

vermorag commented 11 months ago

It seems that you're trying to run your pod in read-only mode. That is not an option with Cattr out of the box: you would need to set up writable folders for storage, at least /app/storage and /app/bootstrap/cache. Also, if you insist on read-only mode, you should read the notes: https://github.com/just-containers/s6-overlay#read-only-root-filesystem. Basically, the Cattr container isn't ready for Kubernetes as it is.

khalid244 commented 11 months ago

Hi Vermorag

Thank you for your response. After digging more into the problem, I found that Kubernetes makes /run read-only, which is why it is not working here. Also, everything in /app/* is writable. The main problem is with s6-overlay. Is there any way to run Cattr without s6-overlay?

I have tried many things to make s6-overlay work, but nothing works, even when I enable read-only mode.

khalid244 commented 11 months ago

I also opened an issue in s6-overlay, but I haven't found a good solution: https://github.com/just-containers/s6-overlay/issues/551

vermorag commented 11 months ago

Read the note about s6 I've attached above. It can run with a read-only root when the ENV variable is set; additionally, add something like:

...
<pod description in deployment>
volumeMounts:
  - mountPath: /run
    name: app-run
...
<volumes description in deployment>
volumes:
  - name: app-run
    emptyDir: {}

Currently Cattr doesn't run without s6-overlay, but you can ask @DanilShapilov about it.

khalid244 commented 11 months ago

I already tried that with a simple deployment, but it does not work:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattr-app-deployment
  namespace: cattr
  labels:
    app: cattr
spec:
  selector:
    matchLabels:
      app: cattr
  template:
    metadata:
      labels:
        app: cattr
    spec:
      containers:
        - name: app
          image: registry.gyoom.sa/cattr:2023-10-15
          env:
            - name: "S6_READ_ONLY_ROOT"
              value: "1"
          ports:
            - containerPort: 8000
          volumeMounts:
            - mountPath: /run
              name: app-run
          resources:
            limits:
              cpu: 3000m
              hugepages-2Mi: 1Gi
              memory: "8Gi"
            requests:
              cpu: 100m
              memory: "200Mi"
        - name: db
          image: mysql:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: cattr
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            limits:
              cpu: 2000m
              memory: "2Gi"
            requests:
              cpu: 100m
              memory: "200Mi"
      volumes:
        - name: app-run
          emptyDir: {}

vermorag commented 11 months ago

For the sake of the experiment try:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattr-app-deployment
  namespace: cattr
  labels:
    app: cattr
spec:
  selector:
    matchLabels:
      app: cattr
  template:
    metadata:
      labels:
        app: cattr
    spec:
      containers:
        - name: app
          image: registry.gyoom.sa/cattr:2023-10-15
          securityContext:
            readOnlyRootFilesystem: true
          env:
            - name: "S6_READ_ONLY_ROOT"
              value: "1"
          ports:
            - containerPort: 8000
          volumeMounts:
            - mountPath: /run
              name: app-run
            - mountPath: /app/bootstrap/cache
              name: bootstrap-cache
            - mountPath: /tmp
              name: app-tmp
            - mountPath: /app/storage
              name: storage-framework
          resources:
            limits:
              cpu: 3000m
              hugepages-2Mi: 1Gi
              memory: "8Gi"
            requests:
              cpu: 100m
              memory: "200Mi"
        - name: db
          image: mysql:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: cattr
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            limits:
              cpu: 2000m
              memory: "2Gi"
            requests:
              cpu: 100m
              memory: "200Mi"
      volumes:
        - name: bootstrap-cache
          emptyDir: {}
        - name: app-tmp
          emptyDir: {}
        - name: storage-framework
          emptyDir: {}
        - name: app-run
          emptyDir: {}

I've mixed your config with my config from another project that also uses s6 and Laravel, but I can't test this config right now.

khalid244 commented 11 months ago

/package/admin/s6-overlay/libexec/preinit: info: read-only root
/package/admin/s6-overlay/libexec/preinit: info: writable /run. Checking for executability.
/package/admin/s6-overlay/libexec/preinit: notice: /var/run is not a symlink to /run, fixing it
s6-rmrf: fatal: unable to remove /var/run: Read-only file system
s6-overlay-suexec: fatal: child failed with exit code 111

vermorag commented 11 months ago

Okay, let's try another option. I see you use your own registry. Can you build and push an image with this Dockerfile and try to use it?

FROM registry.git.amazingcat.net/cattr/core/app:latest
RUN rm -rf /var/run

Just remove /var/run, maybe it will help. Another option:

FROM registry.git.amazingcat.net/cattr/core/app:latest
RUN rm -rf /var/run && ln -s /var/run /run

Note: use it with my deployment config because, as you can see, s6 found a writable /run. The only remaining problem is the attempt to remove /var/run.

khalid244 commented 11 months ago

I tested both of them, but I get the same error message. I think Kubernetes reconfigures /var/run every time.

vermorag commented 11 months ago

I misspelled the params for ln... Can we try:

FROM registry.git.amazingcat.net/cattr/core/app:latest

RUN rm -rf /var/run && ln -s /run /var/run

khalid244 commented 11 months ago

Things start working !!!

/package/admin/s6-overlay/libexec/preinit: info: read-only root
/package/admin/s6-overlay/libexec/preinit: info: writable /run. Checking for executability.
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare-filesystem: starting
s6-rc: info: service prepare-cache: starting
s6-rc: info: service supercronic: starting
s6-rc: info: service supercronic successfully started
mkdir: can't create directory '/var/lib/nginx/tmp': Read-only file system
s6-rc: warning: unable to start service prepare-filesystem: command exited 1
time="2023-10-17T15:40:24Z" level=info msg="read crontab: /crontab"
[2023-10-17 15:40:24] production.ERROR: Please provide a valid cache path. {"exception":"[object] (InvalidArgumentException(code: 0): Please provide a valid cache path. at /app/vendor/laravel/framework/src/Illuminate/View/Compilers/Compiler.php:66)
[stacktrace]
#0 /app/vendor/laravel/framework/src/Illuminate/View/ViewServiceProvider.php(96): Illuminate\\View\\Compilers\\Compiler->__construct(Object(Illuminate\\Filesystem\\Filesystem), false, '', true, 'php')
#1 /app/vendor/laravel/framework/src/Illuminate/Container/Container.php(910): Illuminate\\View\\ViewServiceProvider->Illuminate\\View\\{closure}(Object(Illuminate\\Foundation\\Application), Array)
#2 /app/vendor/laravel/framework/src/Illuminate/Container/Container.php(795): Illuminate\\Container\\Container->build(Object(Closure))
#3 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(933): Illuminate\\Container\\Container->resolve('blade.compiler', Array, true)
#4 /app/vendor/laravel/framework/src/Illuminate/Container/Container.php(731): Illuminate\\Foundation\\Application->resolve('blade.compiler', Array)
#5 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(918): Illuminate\\Container\\Container->make('blade.compiler', Array)
#6 /app/vendor/laravel/framework/src/Illuminate/Container/Container.php(1456): Illuminate\\Foundation\\Application->make('blade.compiler')
#7 /app/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(236): Illuminate\\Container\\Container->offsetGet('blade.compiler')
#8 /app/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(207): Illuminate\\Support\\Facades\\Facade::resolveFacadeInstance('blade.compiler')
#9 /app/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php(347): Illuminate\\Support\\Facades\\Facade::getFacadeRoot()
#10 /app/vendor/elhebert/laravel-sri/src/SriServiceProvider.php(32): Illuminate\\Support\\Facades\\Facade::__callStatic('component', Array)
#11 /app/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(36): Elhebert\\SubresourceIntegrity\\SriServiceProvider->boot()
#12 /app/vendor/laravel/framework/src/Illuminate/Container/Util.php(41): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}()
#13 /app/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(93): Illuminate\\Container\\Util::unwrapIfClosure(Object(Closure))
#14 /app/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(35): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure))
#15 /app/vendor/laravel/framework/src/Illuminate/Container/Container.php(662): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL)
#16 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(1006): Illuminate\\Container\\Container->call(Array)
#17 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(987): Illuminate\\Foundation\\Application->bootProvider(Object(Elhebert\\SubresourceIntegrity\\SriServiceProvider))
#18 [internal function]: Illuminate\\Foundation\\Application->Illuminate\\Foundation\\{closure}(Object(Elhebert\\SubresourceIntegrity\\SriServiceProvider), 16)
#19 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(986): array_walk(Array, Object(Closure))
#20 /app/vendor/laravel/framework/src/Illuminate/Foundation/Bootstrap/BootProviders.php(17): Illuminate\\Foundation\\Application->boot()
#21 /app/vendor/laravel/framework/src/Illuminate/Foundation/Application.php(261): Illuminate\\Foundation\\Bootstrap\\BootProviders->bootstrap(Object(Illuminate\\Foundation\\Application))
#22 /app/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(428): Illuminate\\Foundation\\Application->bootstrapWith(Array)
#23 /app/app/Console/Kernel.php(27): Illuminate\\Foundation\\Console\\Kernel->bootstrap()
#24 /app/vendor/laravel/framework/src/Illuminate/Foundation/Console/Kernel.php(198): App\\Console\\Kernel->bootstrap()
#25 /app/artisan(35): Illuminate\\Foundation\\Console\\Kernel->handle(Object(Symfony\\Component\\Console\\Input\\ArgvInput), Object(Symfony\\Component\\Console\\Output\\ConsoleOutput))
#26 {main}
"} 

In Compiler.php line 66:

  Please provide a valid cache path.  

s6-rc: warning: unable to start service prepare-cache: command exited 1
time="2023-10-17T15:41:00Z" level=info msg=starting iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info msg="In TelescopeServiceProvider.php line 11:" channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info msg="                                                                           " channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info msg="  Class \"Laravel\\Telescope\\TelescopeApplicationServiceProvider\" not found  " channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info msg="                                                                           " channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=info channel=stdout iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:41:00Z" level=error msg="error running command: exit status 1" iteration=0 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info msg=starting iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info msg="In TelescopeServiceProvider.php line 11:" channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info msg="                                                                           " channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info msg="  Class \"Laravel\\Telescope\\TelescopeApplicationServiceProvider\" not found  " channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info msg="                                                                           " channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=info channel=stdout iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"
time="2023-10-17T15:42:00Z" level=error msg="error running command: exit status 1" iteration=1 job.command="php /app/artisan schedule:run" job.position=0 job.schedule="* * * * *"

vermorag commented 11 months ago

Great! This error happens because /app/storage/framework/views does not exist, since we've mounted /app/storage as a volume.
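
One possible workaround (a sketch only, not tested against Cattr; the subdirectory list is an assumption based on the standard Laravel storage layout) is an initContainer that recreates the expected subdirectories inside the empty volume before the app starts:

```yaml
# Added to the app Deployment's pod spec, alongside the
# storage-framework emptyDir volume from the config above.
initContainers:
  - name: init-storage
    image: busybox:1.36
    command:
      - sh
      - -c
      - >-
        mkdir -p
        /app/storage/framework/views
        /app/storage/framework/cache
        /app/storage/framework/sessions
        /app/storage/logs
    volumeMounts:
      - mountPath: /app/storage
        name: storage-framework
```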

It seems my access to the repo hasn't been revoked. Can you try to build and run your image from the source, main branch?

khalid244 commented 11 months ago

I tested many things, but it seems not to work when there are mounted paths like /app/storage. I need to debug more to find out why that happens. For now I run it without them, and this error appears now:

#32 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Http\\Middleware\\TrustProxies->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#33 /app/app/Http/Middleware/SentryContext.php(50): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#34 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): App\\Http\\Middleware\\SentryContext->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#35 /app/vendor/laravel/framework/src/Illuminate/Http/Middleware/HandleCors.php(62): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#36 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Http\\Middleware\\HandleCors->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#37 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php(21): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#38 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ConvertEmptyStringsToNull.php(31): Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#39 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Foundation\\Http\\Middleware\\ConvertEmptyStringsToNull->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#40 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php(21): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#41 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TrimStrings.php(40): Illuminate\\Foundation\\Http\\Middleware\\TransformsRequest->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#42 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Foundation\\Http\\Middleware\\TrimStrings->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#43 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php(27): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#44 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Foundation\\Http\\Middleware\\ValidatePostSize->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#45 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/PreventRequestsDuringMaintenance.php(86): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#46 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(180): Illuminate\\Foundation\\Http\\Middleware\\PreventRequestsDuringMaintenance->handle(Object(Illuminate\\Http\\Request), Object(Closure))
#47 /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php(116): Illuminate\\Pipeline\\Pipeline->Illuminate\\Pipeline\\{closure}(Object(Illuminate\\Http\\Request))
#48 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php(175): Illuminate\\Pipeline\\Pipeline->then(Object(Closure))
#49 /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php(144): Illuminate\\Foundation\\Http\\Kernel->sendRequestThroughRouter(Object(Illuminate\\Http\\Request))
#50 /app/vendor/laravel/octane/src/ApplicationGateway.php(37): Illuminate\\Foundation\\Http\\Kernel->handle(Object(Illuminate\\Http\\Request))
#51 /app/vendor/laravel/octane/src/Worker.php(92): Laravel\\Octane\\ApplicationGateway->handle(Object(Illuminate\\Http\\Request))
#52 /app/vendor/laravel/octane/bin/swoole-server(120): Laravel\\Octane\\Worker->handle(Object(Illuminate\\Http\\Request), Object(Laravel\\Octane\\RequestContext))
#53 [internal function]: {closure}(Object(Swoole\\Http\\Request), Object(Swoole\\Http\\Response))
#54 /app/vendor/laravel/octane/bin/swoole-server(170): Swoole\\Server->start()
#55 {main}
"}

vermorag commented 11 months ago

Please provide the full error message.

khalid244 commented 10 months ago

This is full log file app.log

Also, it sometimes fails in the migration process. Is there any way to increase the timeout? See this app.log

vermorag commented 10 months ago

About the first log: do you persist your application encryption key? As I see it, the cron job can't decode info that is already stored in the db. Mostly that happens because the encryption key changed.

khalid244 commented 10 months ago

I have been testing with this simple deployment. I removed all volume mounts to simplify it. Also, I built and pushed the latest commit from GitHub without editing anything. Do I need to add an application encryption key?

What I did was run the database deployment. When it was ready, I ran the app deployment. Once it was ready, I logged into the app. Then, the error appeared.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattr-app-deployment
  namespace: cattr
  labels:
    app: cattr
spec:
  selector:
    matchLabels:
      app: cattr
  template:
    metadata:
      labels:
        app: cattr
    spec:
      containers:
        - name: app
          image: registry.gyoom.sa/cattr:main-1
          # env:
          #   - name: "S6_READ_ONLY_ROOT"
          #     value: "1"
          ports:
            - containerPort: 80
          # volumeMounts:
            # - mountPath: /app/storage
            #   name: app-storage
            # - name: cattr-app-env
            #   mountPath: /app/.env
            #   subPath: file-from-cfgmap
          resources:
            limits:
              cpu: 3000m
              hugepages-2Mi: 256Mi
              memory: "2Gi"
            requests:
              cpu: 100m
              hugepages-2Mi: 256Mi
              memory: "200Mi"
      securityContext:
        fsGroup: 2000
      # volumes:
        # - name: app-storage
        #   persistentVolumeClaim:
        #     claimName: cattr-app-storage-claim
        # - name: cattr-app-env
        #   configMap:
        #     name: cattr-app-env

Also, I use a MariaDB database; I found it works better with Kubernetes. I am not sure if that caused this problem.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattr-db-deployment
  namespace: cattr
  labels:
    app: cattr-db
spec:
  selector:
    matchLabels:
      app: cattr-db
  template:
    metadata:
      labels:
        app: cattr-db
    spec:
      containers:
        - name: db
          image: mariadb:latest
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_DATABASE
              value: cattr
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cattr-percona-secret
                  key: password
          # volumeMounts:
          #   - mountPath: /var/lib/mysql
          #     name: database-storage
          resources:
            limits:
              cpu: 2000m
              memory: "2Gi"
            requests:
              cpu: 100m
              memory: "200Mi"
      # volumes:
      #   - name: database-storage
      #     persistentVolumeClaim:
      #       claimName: cattr-database-claim

khalid244 commented 10 months ago

I tested many things and figured out that clearing the cache fixed the first problem. Here’s the command I used:

php artisan cache:clear

DanilShapilov commented 10 months ago

@vermorag Thank you!

khalid244 commented 10 months ago

I want to create a simple kubernetes.yaml file that contains all the components needed to run Cattr on Kubernetes. This will simplify the process for people who want to run it on Kubernetes. I need your help @DanilShapilov to fix some bugs that appear with the current Cattr version on Kubernetes.

For now, there are three problems:

  1. Already fixed by Vermorag

    s6-overlay-suexec: fatal: child failed with exit code 111 
  2. I managed to fix it by shelling into the pod and running php artisan cache:clear. I'm not sure how we can automate this process.

    Illuminate\\Encryption\\Encrypter->getJsonPayload(NULL)
  3. We need to increase the timeout for the "prepare-database" process.

    s6-sudoc: fatal: unable to get exit status from server: Operation timed out
    s6-rc: warning: unable to start service prepare-database: command exited 111
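
For problem 2, one possible way to automate the cache clear (a sketch only, untested; note that postStart runs concurrently with the container entrypoint, so it may race with s6 startup):

```yaml
containers:
  - name: app
    image: registry.gyoom.sa/cattr:2023-10-15
    lifecycle:
      postStart:
        exec:
          command: ["php", "/app/artisan", "cache:clear"]
```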

vermorag commented 10 months ago

Regarding the timeout, we can use the timeout-up and timeout-down files described at https://skarnet.org/software/s6-rc/s6-rc-compile.html
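
For example, a sketch (the service directory path inside the image is an assumption; per the s6-rc docs the file holds the start timeout in milliseconds, 0 meaning no timeout):

```dockerfile
FROM registry.git.amazingcat.net/cattr/core/app:latest
# Give the prepare-database oneshot up to 3 minutes to finish.
RUN echo 180000 > /etc/s6-overlay/s6-rc.d/prepare-database/timeout-up
```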

vermorag commented 10 months ago

Regarding cache:clear, the best option is to create a Helm bundle with all the necessary commands and configs: to separate the ENV into kube secrets, for example, or to generate the encryption key at runtime (but only once), etc.
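
For the key specifically, a minimal sketch (the secret and namespace names are hypothetical) that generates a Laravel-style APP_KEY once and stores it in a Kubernetes secret, so it survives redeployments:

```shell
# Generate a Laravel-compatible key: "base64:" followed by 32 random
# bytes, base64-encoded.
APP_KEY="base64:$(openssl rand -base64 32)"
echo "$APP_KEY"

# Persist it once as a secret and reference it from the Deployment's
# envFrom (commented out here; requires cluster access):
# kubectl -n cattr create secret generic cattr-app-key \
#   --from-literal=APP_KEY="$APP_KEY"
```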

khalid244 commented 10 months ago

I tried to change S6_CMD_WAIT_FOR_SERVICES_MAXTIME to 180000, and it works great!

khalid244 commented 10 months ago

I have been testing mounting /app/storage, but it has caused many problems. I am planning to mount only user-generated content, for example /app/storage/app/screenshots.

I need to know the use cases for these paths:

I think these are not important to mount:

I will mount those:

vermorag commented 10 months ago

Basically, I told you to use /app/storage as a mount because we've set a read-only fs for the container. Okay, let me explain the folders you've asked about:

- /app/storage/framework/views: cache for compiled Blade views. Should be writable; can be safely omitted from persistence.
- /app/storage/framework/logs: log files. You should really keep them for researching what happened, or set up logging to stdout/stderr and collect the logs via an external logstash/filebeat/etc.
- /app/storage/framework/testing: temp folder for self-testing. Should be writable, but can be safely omitted from persistence.
- /app/storage/clockwork: storage for Clockwork data (a metrics & tracing provider). Should be writable, but can be safely omitted from persistence.
- /app/storage/app/modules: I don't remember exactly; something related to the Cattr modules system.

khalid244 commented 10 months ago

Great. I’m not sure about keeping logs, because they sometimes grow and cause problems. I may make it optional for the user to activate. What do you think?

Also, does Cattr have an image on DockerHub?

vermorag commented 10 months ago

In my projects and at my current work we have separate storage for logs (OpenSearch/Loki). In the container we just output everything to stdout and don't keep a separate persistent logs dir inside the container.

khalid244 commented 10 months ago

Loki is a great tool, but I want to make things simple for anyone to work with.

vermorag commented 10 months ago

For a simple system we can just ignore the logging question and set LOG_CHANNEL=stderr.
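
In the app container this would just be one more env entry (a sketch):

```yaml
env:
  - name: LOG_CHANNEL
    value: "stderr"
```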

khalid244 commented 10 months ago

That will be great.

I have been testing the image registry.git.amazingcat.net/cattr/core/app:latest, but it still shows this error. Can you please update it?

/package/admin/s6-overlay/libexec/preinit: notice: /var/run is not a symlink to /run, fixing it
s6-rmrf: fatal: unable to remove /var/run: Read-only file system
s6-overlay-suexec: fatal: child failed with exit code 111
vermorag commented 10 months ago

I no longer have access to that repo. @DanilShapilov Can you create a new RC tag to build a new image?

khalid244 commented 10 months ago

This is the full kubernetes.yaml for Cattr. However, it needs more testing. Please review it and let me know what you think. kubernetes.yaml.txt

vermorag commented 10 months ago

Why do you use type: kubernetes.io/basic-auth as the secret type? Opaque suits this better. Also, let's save DB_USERNAME to the secret too. I'd prefer it to look like this:

apiVersion: v1
kind: Secret
metadata:
  name: cattr-env-secrets
type: Opaque
stringData:
  DB_USERNAME: "..."
  DB_PASSWORD: "..."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattr-app-deployment
  labels:
    app: cattr-app
spec:
  selector:
    matchLabels:
      app: cattr-app
  template:
    metadata:
      labels:
        app: cattr-app
    spec:
      initContainers:
        - name: waiting-for-database
          image: registry.git.amazingcat.net/cattr/core/app:latest
          envFrom:
            - secretRef:
                name: cattr-env-secrets

and so on. This will allow adding more secret data for users without additional modification (for example, a password for Redis or S3 if somebody wants to use them).

Also, it seems that I've created a new tag and the latest image has been updated. Can you check it? @khalid244

khalid244 commented 10 months ago

I pulled the latest image, and it is still showing the same error. Using Opaque is a great idea, but we also need other secrets for the database deployment. I will make it like this:

apiVersion: v1
kind: Secret
metadata:
  name: cattr-app-env-secrets
type: Opaque
stringData:
  DB_CONNECTION: mysql
  DB_HOST: "cattr-db-service"
  DB_USERNAME: "root"
  DB_PASSWORD: "__Set_Password__"
  S6_CMD_WAIT_FOR_SERVICES_MAXTIME: "180000"
---
apiVersion: v1
kind: Secret
metadata:
  name: cattr-db-env-secrets
type: Opaque
stringData:
  MYSQL_DATABASE: cattr
  MYSQL_ROOT_PASSWORD: "__Set_Password__"
---
....

kubernetes.yaml.txt

khalid244 commented 10 months ago

I’ve tested it many times and something seems weird. When I pull the latest commit, build it, and push it to my private repository, it works great. However, when I pull registry.git.amazingcat.net/cattr/core/app:v4.0.0-RC44, it does not work.

vermorag commented 10 months ago

What error do you get with RC44? Maybe you can compress your version into a tar and send it to me, so I can check what differences it has from RC44?

khalid244 commented 10 months ago

/package/admin/s6-overlay/libexec/preinit: notice: /var/run is not a symlink to /run, fixing it
s6-rmrf: fatal: unable to remove /var/run: Read-only file system
s6-overlay-suexec: fatal: child failed with exit code 111

vermorag commented 10 months ago

Aha. Got you. Will check

vermorag commented 10 months ago

Checked. It is a problem with kaniko, which is used to build the images: you use docker as the build engine, while images for registry.amazingcat are built via kaniko. Three options here:

  1. Migrate to GitHub builds
  2. Migrate to buildah (maybe it will help)
  3. Create an issue in kaniko

In short, kaniko doesn't execute the rm -rf /var/run part.

khalid244 commented 10 months ago
  1. I have some experience with GitHub builds, so I can help with that.
  2. I have not used Buildah before, but we can test it first to see if it will be helpful.
  3. I took a look at their repo issues and there are a lot. I don't think they will fix them soon.
  4. There is a fourth option. I have been testing some alternative code to solve the same problem. We may find some code that works with Kaniko, or I will find a way to fix it from kubernetes.yaml. (I will work on this for now)

You can try this code:

RUN rmdir /var/run
RUN ln -sf /run /var/run

khalid244 commented 10 months ago

You can use this workflow. It will trigger when you set a tag on a commit.

name: CI

# Must add secrets
# - DOCKER_USERNAME
# - DOCKER_PASSWORD

on:
  push:
    tags:
      - 'v*'

jobs:
  build:
    name: Build & Push
    runs-on: ubuntu-latest

    steps:
    - name: Check out code
      uses: actions/checkout@v3

    - name: Set output
      id: vars
      run: echo "tag=${GITHUB_REF#refs/*/}" >> $GITHUB_OUTPUT

    - name: Build and push Docker image
      uses: docker/build-push-action@v1.1.0
      with:
        username: ${{ secrets.DOCKER_USERNAME }}
        password: ${{ secrets.DOCKER_PASSWORD }}
        registry: registry.git.amazingcat.net
        repository: cattr/core/app
        tags: ${{ github.sha }}, ${{ steps.vars.outputs.tag }}, latest

vermorag commented 10 months ago

Can you check with the fresh version: registry.git.amazingcat.net/cattr/core/app:main?

khalid244 commented 10 months ago

I’ve done some testing and I think there’s something wrong with this image. It doesn’t connect to the database at all. I’ve tried changing DB_HOST, but it doesn’t work. Can you check if there’s something wrong?

vermorag commented 10 months ago

Can you check RC47 tag?

khalid244 commented 10 months ago

It is working now, but there are errors happening: app.log

khalid244 commented 10 months ago

I have been doing some testing. When I remove the app pod, the screenshots disappear. Are there any important storage paths other than these?

/app/storage/app/screenshots
/app/storage/framework/sessions

vermorag commented 9 months ago

Sorry for the long delay in answering. @khalid244 I've corrected the paths for docker auto-creation a bit here: https://github.com/cattr-app/server-application/commit/a3cef4f267c90d8f341c5d577cc3d4ed44fb0cfc#diff-5cb069ded6c39cb26dce80f3299d440063f528cff2e0f837d6ee8e1a1df248be

/app/storage/app/screenshots stores your screenshots, so it should be persisted. /app/storage/framework/sessions can be safely dropped during redeployments.
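
Based on this, a sketch of persisting only the screenshots path (the claim name is hypothetical; everything else under /app/storage can stay on emptyDir volumes):

```yaml
# In the app container:
volumeMounts:
  - mountPath: /app/storage/app/screenshots
    name: screenshots
# In the pod spec:
volumes:
  - name: screenshots
    persistentVolumeClaim:
      claimName: cattr-screenshots-claim
```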