getkuby / kuby-core

A convention over configuration approach for deploying Rails apps. https://getkuby.io
MIT License

Error: uninitialized constant Kuby::KubeDB with Sidekiq or Redis gems #98

Closed · scart88 closed this issue 2 years ago

scart88 commented 2 years ago

Adding the Sidekiq or Redis gems results in the following error:

RAILS_MASTER_KEY=some-key bundle exec kuby -e production deploy
Error: uninitialized constant Kuby::KubeDB

Gemfile

gem "kuby-redis", "~> 0.1.0"
gem "kuby-sidekiq", "~> 0.3.0"

kuby.rb file

kubernetes do
  ***

  add_plugin :redis do
    instance(:my_rails_cache)
  end

  add_plugin :sidekiq
end
scart88 commented 2 years ago

I thought it wasn't required to run bundle exec kuby -e production setup again, but I guess it is. I will try again tomorrow.

Thanks for everything!

scart88 commented 2 years ago

I tried bundle exec kuby -e production setup before deploying, and it didn't fix the issue. I'm getting the same error: Error: uninitialized constant Kuby::KubeDB

I checked out the kuby-redis gem locally, installed the kuby-kube-db gem, and added require 'kuby/kube-db' under require 'kube-dsl' in lib/kuby/redis/instance.rb (https://github.com/getkuby/kuby-redis/blob/master/lib/kuby/redis/instance.rb).
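
For reference, here is roughly what the top of the patched file looked like (a sketch based on the description above, not the upstream source):

# lib/kuby/redis/instance.rb (local patch)
require 'kube-dsl'
require 'kuby/kube-db' # manually added so the Kuby::KubeDB constant resolves

module Kuby
  module Redis
    class Instance
      # ... unchanged gem code ...
    end
  end
end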

That error disappeared; however, something new came up:

❯ RAILS_MASTER_KEY=some-key bundle exec kuby -e production deploy
Validating global resource, namespace 'hotwiresample-production'
namespace/hotwiresample-production configured (dry run)
Deploying namespace 'hotwiresample-production'
namespace/hotwiresample-production unchanged
[INFO][2022-03-15 14:44:36]
[INFO][2022-03-15 14:44:36]       ------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2022-03-15 14:44:39]       All required parameters and files are present
[INFO][2022-03-15 14:44:39]       Discovering resources:
[INFO][2022-03-15 14:44:42]         - Deployment/hotwiresample-web
[INFO][2022-03-15 14:44:42]         - Service/hotwiresample-svc
[INFO][2022-03-15 14:44:42]         - Ingress/hotwiresample-ingress
[INFO][2022-03-15 14:44:42]         - Secret/hotwiresample-registry-secret
[INFO][2022-03-15 14:44:42]         - ConfigMap/hotwiresample-config
[INFO][2022-03-15 14:44:42]         - Secret/hotwiresample-secrets
[INFO][2022-03-15 14:44:42]         - Deployment/hotwiresample-assets
[INFO][2022-03-15 14:44:42]         - ServiceAccount/hotwiresample-assets-sa
[INFO][2022-03-15 14:44:42]         - ServiceAccount/hotwiresample-sa
[INFO][2022-03-15 14:44:42]         - Service/hotwiresample-assets-svc
[INFO][2022-03-15 14:44:42]         - Redis/my_rails_cache-redis
[INFO][2022-03-15 14:44:42]         - ConfigMap/hotwiresample-assets-nginx-config
[INFO][2022-03-15 14:44:46]
[INFO][2022-03-15 14:44:46]       ------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2022-03-15 14:44:46]      Template validation failed
[FATAL][2022-03-15 14:44:46]
[FATAL][2022-03-15 14:44:46]      Invalid template: Redis-my_rails_cache-redis20220315-4172-vfswuj.yml
[FATAL][2022-03-15 14:44:46]      > Error message:
[FATAL][2022-03-15 14:44:46]          W0315 14:44:43.793040    4225 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client.
[FATAL][2022-03-15 14:44:46]          error: unable to recognize "/var/folders/hh/z_vjqk3j3dl5vw0whd7h7bp80000gn/T/Redis-my_rails_cache-redis20220315-4172-vfswuj.yml": no matches for kind "Redis" in version "kubedb.com/v1alpha1"
[FATAL][2022-03-15 14:44:46]      > Template content:
[FATAL][2022-03-15 14:44:46]          ---
[FATAL][2022-03-15 14:44:46]          kind: Redis
[FATAL][2022-03-15 14:44:46]          spec:
[FATAL][2022-03-15 14:44:46]            serviceTemplate:
[FATAL][2022-03-15 14:44:46]              spec:
[FATAL][2022-03-15 14:44:46]                type: NodePort
[FATAL][2022-03-15 14:44:46]                ports:
[FATAL][2022-03-15 14:44:46]                - port: 6379
[FATAL][2022-03-15 14:44:46]                  name: memcached
[FATAL][2022-03-15 14:44:46]            storage:
[FATAL][2022-03-15 14:44:46]              accessModes:
[FATAL][2022-03-15 14:44:46]              - ReadWriteOnce
[FATAL][2022-03-15 14:44:46]              resources:
[FATAL][2022-03-15 14:44:46]                requests:
[FATAL][2022-03-15 14:44:46]                  storage: 1Gi
[FATAL][2022-03-15 14:44:46]              storageClassName: do-block-storage
[FATAL][2022-03-15 14:44:46]            version: 5.0.3-v1
[FATAL][2022-03-15 14:44:46]            storageType: Durable
[FATAL][2022-03-15 14:44:46]          apiVersion: kubedb.com/v1alpha1
[FATAL][2022-03-15 14:44:46]          metadata:
[FATAL][2022-03-15 14:44:46]            name: my_rails_cache-redis
[FATAL][2022-03-15 14:44:46]            namespace: hotwiresample-production
[FATAL][2022-03-15 14:44:46]          
Template validation failed
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/krane-1.1.4/lib/krane/deploy_task.rb:290:in `validate_resources'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/krane-1.1.4/lib/krane/statsd.rb:41:in `block in measure_method'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/krane-1.1.4/lib/krane/deploy_task.rb:139:in `run!'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deploy_task.rb:18:in `block in run!'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deploy_task.rb:31:in `with_env'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deploy_task.rb:17:in `run!'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:110:in `deploy_namespaced_resources'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:36:in `block (2 levels) in deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:32:in `each_pair'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:32:in `block in deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:133:in `restart_rails_deployment_if_necessary'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/deployer.rb:21:in `deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/provider.rb:48:in `deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/kubernetes/spec.rb:117:in `deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/tasks.rb:75:in `deploy'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/commands.rb:134:in `block (2 levels) in <class:Commands>'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/gli-2.21.0/lib/gli/command_support.rb:131:in `execute'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/gli-2.21.0/lib/gli/app_support.rb:298:in `block in call_command'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/gli-2.21.0/lib/gli/app_support.rb:311:in `call_command'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/gli-2.21.0/lib/gli/app_support.rb:85:in `run'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/lib/kuby/commands.rb:32:in `run'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bundler/gems/kuby-core-9885015027e1/bin/kuby:6:in `<top (required)>'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bin/kuby:25:in `load'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/bin/kuby:25:in `<top (required)>'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli/exec.rb:58:in `load'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli/exec.rb:58:in `kernel_load'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli/exec.rb:23:in `run'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli.rb:483:in `exec'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli.rb:31:in `dispatch'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/cli.rb:25:in `start'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/bundler-2.3.9/exe/bundle:48:in `block in <top (required)>'
.../.rbenv/versions/3.0.2/lib/ruby/site_ruby/3.0.0/bundler/friendly_errors.rb:103:in `with_friendly_errors'
.../.rbenv/versions/3.0.2/lib/ruby/gems/3.0.0/gems/bundler-2.3.9/exe/bundle:36:in `<top (required)>'
.../.rbenv/versions/3.0.2/bin/bundle:25:in `load'
.../.rbenv/versions/3.0.2/bin/bundle:25:in `<main>'
[INFO][2022-03-15 14:44:46]
[INFO][2022-03-15 14:44:46]       ------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2022-03-15 14:44:49]       All required parameters and files are present
[INFO][2022-03-15 14:44:49]       Discovering resources:
[INFO][2022-03-15 14:44:52]         - ClusterIssuer/letsencrypt-production
[INFO][2022-03-15 14:44:53]
[INFO][2022-03-15 14:44:53]       ----------------------------Phase 2: Checking initial resource statuses-----------------------------
[INFO][2022-03-15 14:44:54]       ClusterIssuer/letsencrypt-production              Exists
[INFO][2022-03-15 14:44:54]
[INFO][2022-03-15 14:44:54]       ------------------------------Phase 3: Predeploying priority resources------------------------------
[INFO][2022-03-15 14:44:57]       Deploying ClusterIssuer/letsencrypt-production (timeout: 300s)
[WARN][2022-03-15 14:44:59]       Don't know how to monitor resources of type ClusterIssuer. Assuming ClusterIssuer/letsencrypt-production deployed successfully.
[INFO][2022-03-15 14:44:59]       Successfully deployed in 1.8s: ClusterIssuer/letsencrypt-production
[INFO][2022-03-15 14:44:59]
[INFO][2022-03-15 14:44:59]
[INFO][2022-03-15 14:44:59]       ----------------------------------Phase 4: Deploying all resources----------------------------------
[INFO][2022-03-15 14:44:59]       Deploying ClusterIssuer/letsencrypt-production (timeout: 300s)
[INFO][2022-03-15 14:45:00]       Successfully deployed in 1.7s: ClusterIssuer/letsencrypt-production
[INFO][2022-03-15 14:45:00]
[INFO][2022-03-15 14:45:00]       ------------------------------------------Result: SUCCESS-------------------------------------------
[INFO][2022-03-15 14:45:00]       Successfully deployed 1 resource
[INFO][2022-03-15 14:45:00]
[INFO][2022-03-15 14:45:00]       Successful resources
[INFO][2022-03-15 14:45:00]       ClusterIssuer/letsencrypt-production              Exists

I just found out that @palkan also ran into this issue and opened https://github.com/getkuby/kuby-redis/issues/1

I had expected some magic here, but unfortunately, that didn't exactly pan out. First, I found that we also need to install KubeDB. Of course, there is kuby-kubedb for that! The problem is that it only supports the v1alpha1 API spec and isn't compatible with recent versions of KubeDB, while older versions of KubeDB aren't compatible with the modern Kubernetes API. ⛔️ A dead end.

scart88 commented 2 years ago

Could we use something like https://keydb.dev/ instead? I was trying to find alternative solutions and came across KeyDB, which looks pretty interesting.

[Embedded image: KeyDB vs. Redis benchmark chart]

Full benchmark results and setup information here: https://docs.keydb.dev/blog/2020/09/29/blog-post/

Cluster docs: https://docs.keydb.dev/docs/cluster-tutorial/

scart88 commented 2 years ago

Or wouldn't it be possible to use YAML files like these?

redis.yml

apiVersion: v1
kind: Service
metadata:
  name: app-redis-svc
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: app-redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      containers:
        - name: redis
          image: redis:5.0-alpine
          ports:
            - containerPort: 6379
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
            limits:
              cpu: 300m
              memory: 500Mi
          volumeMounts:
            - mountPath: /data
              name: app-redis-data
      volumes:
        - name: app-redis-data
          emptyDir: {}

sidekiq.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sidekiq
spec:
  selector:
    matchLabels:
      app: app-sidekiq
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: app-sidekiq
    spec:
      imagePullSecrets:
        - name: digitalocean-access-token
      containers:
        - name: sidekiq
          image: $IMAGE_TAG
          imagePullPolicy: Always
          command: ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]
          env:
          - name: RAILS_ENV
            value: "production"
          - name: RAILS_LOG_TO_STDOUT
            value: "true"
          - name: REDIS_URL
            value: "redis://app-redis-svc:6379"
          - name: RAILS_MASTER_KEY
            valueFrom:
              secretKeyRef:
                name: app-secrets
                key: app-master-key
          resources:
            requests:
              cpu: 1000m
              memory: 1000Mi
            limits:
              cpu: 1100m
              memory: 1100Mi
          ports:
            - containerPort: 7433
          livenessProbe:
            httpGet:
              path: /
              port: 7433
            initialDelaySeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 7433
            initialDelaySeconds: 15
            periodSeconds: 5
            successThreshold: 2
            failureThreshold: 2
            timeoutSeconds: 5
          lifecycle:
            preStop:
              exec:
                command: ["k8s/sidekiq_quiet"]
      terminationGracePeriodSeconds: 300
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: app-sidekiq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-sidekiq
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 60

Even with these YAML files, I wouldn't know how to use them with Kuby.

camertron commented 2 years ago

Hey @scart88, thanks for letting me know about this. Unfortunately I haven't had a chance to upgrade the kuby-redis gem to the version of KubeDB that kuby-core uses, which is why you're seeing the issues with the Redis object type. Future versions of Kuby won't use KubeDB at all because it has moved to an incompatible licensing model.

Unfortunately I don't have a workaround for you on this one. Don't get me wrong, I would love to support all this in the hopefully not too distant future.

Even with these YAML files, I wouldn't know how to use them with Kuby.

Yeah, Kuby doesn't support custom YAML files, although that might be a cool feature to add someday.

scart88 commented 2 years ago

Thank you very much. Having a managed Redis, or a simple self-managed Redis on a VPS, seems to be a better option.

It looks like Kuby already supports YAML files: bundle exec kuby -e production kubectl -- apply -f k8s/ingress.yaml....

camertron commented 2 years ago

Oh that's true, you can use kubectl directly via the Kuby CLI. I just meant there's not a way to ask Kuby to deploy custom YAMLs at the same time it deploys everything else.

scart88 commented 2 years ago

Hey @camertron, I hope you are doing well.

I found these two options to use instead of KubeDB. Is there any reason why they wouldn't be good options for Kuby?

Redis: https://artifacthub.io/packages/helm/bitnami/redis
PostgreSQL: https://access.crunchydata.com/documentation/postgres-operator/v5/ and https://github.com/CrunchyData/postgres-operator

Thanks

camertron commented 2 years ago

Hey @scart88. I missed your comment about KeyDB; it looks really interesting :) If it's Redis-compatible, then it could be a replacement worth considering.

Ah yeah, I've seen both the Bitnami Redis helm chart and the Crunchy Postgres operator. I considered using the Postgres operator for a while, but settled on CockroachDB because it's designed to work in cloud-native environments whereas Postgres is not. CockroachDB can be upgraded in-place and provides all these nice guarantees. It's nearly 100% compatible with Postgres too, so it seemed like the right way to go.
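
To illustrate the compatibility point: because CockroachDB speaks the Postgres wire protocol, a Ruby app can usually connect with the standard pg gem. A minimal sketch (the host, database, and credentials here are hypothetical):

require 'pg'

# CockroachDB listens on port 26257 by default and speaks the Postgres
# wire protocol, so the plain pg gem works unchanged.
conn = PG.connect(host: 'cockroachdb-public', port: 26257,
                  dbname: 'myapp_production', user: 'root')
conn.exec('SELECT version()') { |result| puts result.getvalue(0, 0) }
conn.close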

It turns out to be much easier to run a Redis instance, so a Helm chart or just a regular ol' k8s StatefulSet should work. The one you linked to may indeed be what Kuby ends up using :)

denikus commented 2 years ago

@scart88 Did you manage to deploy and use Sidekiq with self-hosted Redis? I would appreciate it if you shared your solution for this case. I still have a problem running Sidekiq with Kuby :(

scart88 commented 2 years ago

Hey @denikus

Yes, I used bitnami/redis https://artifacthub.io/packages/helm/bitnami/redis.

You will need a PersistentVolume; in my case I used storageClass: "do-block-storage" since I'm using DigitalOcean. You will also need to configure your Redis helm values: add your password and your storageClass, set your replica count, add your resource limits, etc.

I also used a PersistentVolume size of 1GB for the master and 1GB for the replicas. If you have 1 master and 3 replicas, helm will generate 4 PersistentVolumeClaims on your storage. The default is 8GB, so you might want to change that.

Here is a tutorial on how to install the bitnami/redis helm chart: https://phoenixnap.com/kb/kubernetes-redis#ftoc-heading-2

I discovered https://k8slens.dev/, which is a great tool to see and manage your cluster, and it's open-source. You don't even need an account.

Here you can see all available bitnami/redis values you can change: https://artifacthub.io/packages/helm/bitnami/redis?modal=values

global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "do-block-storage"
  redis:
    password: "your-very-strong-password"
  resources:
    limits:
      cpu: 200m
      memory: 256Mi
    requests:
      cpu: 100m
      memory: 128Mi

Once you set up Redis, you can access it from other pods inside your cluster. The default username is default, and the password is what you set under global: redis://default:your-password@redis-master.default.svc.cluster.local:6379/0
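
A quick way to sanity-check connectivity from a Rails console pod inside the cluster (using the example URL above):

require 'redis'

# The in-cluster service DNS name from the example above.
redis = Redis.new(url: 'redis://default:your-password@redis-master.default.svc.cluster.local:6379/0')
puts redis.ping # => "PONG" when the service is reachable and the password is correct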

I added my Redis URL to my Rails credentials and created a sidekiq.rb initializer.

config/initializers/sidekiq.rb

sidekiq_url = if Rails.env.production?
  Rails.application.credentials.dig(:production, :REDIS_URL) || "redis://localhost:6379/1"
else
  "redis://localhost:6379/1"
end

Sidekiq.configure_server do |config|
  config.redis = { url: sidekiq_url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: sidekiq_url }
end

I also changed my cable.yml file

production:
  adapter: redis
  url: <%= Rails.application.credentials.dig(:production, :REDIS_URL) || "redis://localhost:6379/1" %>

If you are using hiredis and a Redis cache_store, you will need to use the same REDIS_URL in your:

development.rb or production.rb

config.cache_store = :redis_cache_store, { driver: :hiredis, url: Rails.application.credentials[:REDIS_URL] || "redis://localhost:6379/1" }
config.session_store :redis_session_store, key: "_session_app_production", serializer: :json,
  redis: {
    driver: :hiredis,
    expire_after: 1.year,
    ttl: 1.year,
    url: Rails.application.credentials[:REDIS_URL] || "redis://localhost:6379/6"
  }

At this step, you should have Redis configured and ready to use on your cluster. Now you will need to deploy the Sidekiq worker. Unfortunately, I wasn't able to deploy it with Kuby, so I used a custom YAML file.

sidekiq.yml

You would install it like this: kubectl apply -f sidekiq.yml. However, you will need to adapt it to your app, cluster, namespace, etc... This is just an example.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: your-app-namespace
  labels:
    role: worker
spec:
  revisionHistoryLimit: 0
  replicas: 1
  selector:
    matchLabels:
      # must be a subset of the pod template labels below (role: web removed)
      app: app-worker
  template:
    metadata:
      labels:
        app: app-worker
    spec:
      containers:
      - name: app-worker
        image: your-registry-url
        imagePullPolicy: Always
        command: ["launcher"]
        args: ["bundle", "exec", "sidekiq"]
        envFrom:
        - configMapRef:
            name: env
        - secretRef:
            name: your-app-secrets
        resources:
          requests:
            cpu: "500m"
            memory: "500Mi"
          limits:
            cpu: "1000m"
            memory: "1000Mi" 
      initContainers:
      - name: migration-check
        image: your-registry-url
        imagePullPolicy: Always
        command: ["launcher"]
        args: ["rake", "db:abort_if_pending_migrations"]
        envFrom:
        - configMapRef:
            name: env # for example app-example-config
        - secretRef:
            name: your-app-secrets # for example app-example-secrets
      imagePullSecrets:
      - name: your-reg-secrets

You can run bundle exec kuby -e production resources, look for the Deployment with the role web, duplicate it, and adapt it to look like my previous example.

Maybe @camertron can help a little bit more with the sidekiq.yml file and how it can be configured directly inside the kuby.rb file.

I hope it makes sense.

camertron commented 2 years ago

Wow, thanks for sharing @scart88!

For what it's worth, I've been working a lot on getting the next big release of Kuby out the door, which includes upgrades to both the kuby-redis and kuby-sidekiq gems. I decided to use the Spotahome Redis operator, which supports failover and some other nice features. My hope is it will be pretty turnkey for those wanting to use Sidekiq or stand up a Rails cache.

scart88 commented 2 years ago

Thank you very much for the update! Looking forward to seeing your RailsConf video :)

On my example: it turns out 1GB of storage wasn't enough, and it was very painful to increase the PVC size. So it's better to start with at least 10GB of storage than to have to increase the PVC later.

camertron commented 2 years ago

No problem! Yeah, I'm really excited about it :)

Hmm, that's really good to know. Sounds like Kuby should request 10GB by default.

camertron commented 2 years ago

Hey everyone, just wanted to jump back in here and let you know that new versions of kuby-redis, kuby-sidekiq, and kuby-core have been published. This is the "next big release" I was talking about, and it brings a whole bunch of features and fixes. Check out the full changelog entry for more information.

scart88 commented 2 years ago

Impressive work @camertron! Thanks for putting so much effort into this!