immense / Remotely

A remote control and remote scripting solution, built with .NET 8, Blazor, and SignalR.
GNU General Public License v3.0

Running Remotely on k8s #597

Open AHulshoff opened 1 year ago

AHulshoff commented 1 year ago

Is your feature request related to a problem? Please describe. I tried to run Remotely on k8s. The webapp works if you create the database before deploying the application and reconfigure the appsettings to 'PostgreSQL'. All the database objects are created and the webapp is active. The only problem is that the webapp is not stateful, and therefore you need to log in again when you connect to a different pod.

Describe the solution you'd like I would like the app to be stateful, so that I can run it in k8s in a scalable environment.

Describe alternatives you've considered N/A

Additional context

apiVersion: v1
kind: Namespace
metadata:
  name: remotely
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: appsettings-remotely
  namespace: remotely
  labels:
    k8s-app: remotely
data:
  appsettings.json: |-
    {
      "ConnectionStrings": {
        "PostgreSQL": "Host=database.postgresql.svc.cluster.local;Database=Remotely;Username=postgres;Password=abc123!",
        "SQLite": "DataSource=Remotely.db",
        "SQLServer": "Server=(localdb)\\mssqllocaldb;Database=Remotely-Server-53bc9b9d-9d6a-45d4-8429-2a2761773502;Trusted_Connection=True;MultipleActiveResultSets=true"
      },
      "Logging": {
        "LogLevel": {
          "Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware": "Information",
          "Default": "Information"
        }
      },
      "ApplicationOptions": {
        "AllowApiLogin": false,
        "BannedDevices": [],
        "DataRetentionInDays": 90,
        "DBProvider": "PostgreSQL",
        "EnableWindowsEventLog": false,
        "EnforceAttendedAccess": false,
        "ForceClientHttps": false,
        "KnownProxies": [],
        "MaxConcurrentUpdates": 10,
        "MaxOrganizationCount": 1,
        "MessageOfTheDay": "",
        "RedirectToHttps": true,
        "RemoteControlNotifyUser": true,
        "RemoteControlRequiresAuthentication": true,
        "RemoteControlSessionLimit": 3,
        "Require2FA": false,
        "SmtpDisplayName": "",
        "SmtpEmail": "",
        "SmtpHost": "",
        "SmtpLocalDomain": "",
        "SmtpCheckCertificateRevocation": true,
        "SmtpPassword": "",
        "SmtpPort": 587,
        "SmtpUserName": "",
        "Theme": "Dark",
        "TrustedCorsOrigins": [],
        "UseHsts": false,
        "UseHttpLogging": false
      }
    }
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: remotely
  namespace: remotely
  labels:
    k8s-app: remotely
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: remotely
  template:
    metadata:
      name: remotely
      labels:
        k8s-app: remotely
    spec:
      volumes:
        - name: remotely-volume
          configMap:
            name: appsettings-remotely
      containers:
        - name: remotely
          image: 'docker.io/immybot/remotely:latest'
          volumeMounts:
            - name: remotely-volume
              mountPath: /app/appsettings.json
              subPath: appsettings.json
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: remotely-svc
  namespace: remotely
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-k8s-pool
spec:
  type: LoadBalancer
  selector:
    k8s-app: remotely
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
  externalTrafficPolicy: Cluster
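
The forced re-login when a request lands on a different pod is usually a symptom of ASP.NET Core's Data Protection key ring being generated per container: each pod issues auth cookies the other pods cannot decrypt. A common mitigation is to persist the key ring to storage shared by all replicas. A minimal sketch, assuming a shared volume at a placeholder path (this is not Remotely's actual code):

```csharp
// Hypothetical Program.cs wiring: persist the Data Protection key ring
// (which protects the auth cookie) to a volume shared by all replicas,
// so a login on one pod remains valid on the others.
// "/app/shared-keys" is a placeholder path backed by a shared PV.
using Microsoft.AspNetCore.DataProtection;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDataProtection()
    .SetApplicationName("Remotely")
    .PersistKeysToFileSystem(new DirectoryInfo("/app/shared-keys"));
```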

darkpixel commented 1 year ago

I noticed this too.

It needs to stop relying on the /remotely-data folder and build its configuration through environment variables.

e.g. docker run -e DATABASE_URL=postgres://myuser:mypass@mypostgreshost:1234/mydatabase immybot/remotely:latest

...then grab everything else it needs from that database. And the database needs to be created if it's empty... and possibly managed by a schema migration tool, similar to what you get with Django-based projects.
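
For what it's worth, ASP.NET Core's default configuration stack already overlays environment variables on top of appsettings.json, using `__` as the section separator, so something close to this may already work without a custom DATABASE_URL scheme (whether the Remotely image picks these up depends on how it builds its configuration; the host and credentials below are placeholders):

```shell
# "__" in the variable name maps to ":" in the configuration path,
# so these override ConnectionStrings:PostgreSQL and
# ApplicationOptions:DBProvider from appsettings.json.
docker run \
  -e ApplicationOptions__DBProvider=PostgreSQL \
  -e ConnectionStrings__PostgreSQL="Host=myhost;Database=Remotely;Username=myuser;Password=mypass" \
  immybot/remotely:latest
```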

Otherwise, you can really only run it on one node at a time and you can't scale for failover.

I also noticed things will fail unless you run the container as root. Probably because it's writing configuration data to /remotely-data.

bitbound commented 1 year ago

It's already on the refactor list to move much of the configuration to the database. Originally, there was no Docker support or in-app configuration page, which is why it used appsettings.json (the default for ASP.NET Core apps).

The volume will still be required, though. The default DB provider is SQLite, so it needs to be able to persist files on the host file system.
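
For the StatefulSet manifest above, that persistence could be sketched as a per-pod PersistentVolumeClaim (claim name, size, and mount path are assumptions; the mount path follows the /remotely-data folder mentioned earlier in the thread):

```yaml
# Hypothetical addition to the StatefulSet spec above, so the default
# SQLite file survives pod restarts. Names and size are placeholders.
spec:
  volumeClaimTemplates:
    - metadata:
        name: remotely-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  # ...and in the container spec:
  #   volumeMounts:
  #     - name: remotely-data
  #       mountPath: /remotely-data
```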

And the database needs to be created if it's empty... and possibly managed by a schema migration tool similar to what you get with Django-based projects.

This is already being done, via Entity Framework Core.
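
As a sketch of the usual EF Core startup pattern being referred to (the context type is a placeholder, not Remotely's actual class):

```csharp
// Typical EF Core startup pattern: create the database if it does not
// exist and apply any pending migrations. "AppDbContext" is a
// placeholder for the application's DbContext.
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    db.Database.Migrate();
}
```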

Otherwise, you can really only run it on one node at a time and you can't scale for failover.

In its current form, horizontal scaling wouldn't be possible anyway. The front-end and agents all use SignalR for real-time connectivity, and their state is managed only in-memory. A backplane (e.g. Redis or Azure SignalR Service) would need to be used for scale-out.
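
For reference, wiring such a backplane is a small change in ASP.NET Core. A sketch, assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package and a placeholder Redis endpoint (note the non-SignalR in-memory state would still need its own distributed store):

```csharp
// Hypothetical scale-out wiring, not Remotely's actual code: route
// SignalR messages through Redis so any replica can reach any client.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis.redis.svc.cluster.local:6379");
```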