Closed mpflugfelder-stn closed 1 month ago
If it helps, here's the Kubernetes manifest I'm using:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maintainerr
  namespace: default
  labels:
    app: maintainerr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: maintainerr
  template:
    metadata:
      labels:
        app: maintainerr
    spec:
      securityContext:
        runAsUser: 1024
        runAsGroup: 100
      containers:
        - name: maintainerr
          image: ghcr.io/jorenn92/maintainerr:latest
          imagePullPolicy: Always
          env:
            - name: TZ
              value: 'America/New_York'
            - name: UID
              value: '1024'
            - name: GID
              value: '100'
            - name: UMASK
              value: '002'
          ports:
            - name: web
              containerPort: 6246
          volumeMounts:
            - mountPath: /opt/data
              name: config
      dnsConfig:
        options:
          - name: ndots
            value: "1"
      volumes:
        - name: config
          nfs:
            server: 192.168.50.20
            path: /volume1/docker/maintainerr
---
kind: Service
apiVersion: v1
metadata:
  name: maintainerr
  namespace: default
spec:
  ports:
    - name: web
      port: 6246
      targetPort: web
  selector:
    app: maintainerr
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: maintainerr
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`maintainerr.home.local`)
      services:
        - name: maintainerr
          port: 6246
```
OK, so I posted this a little too quickly. I did some digging in the discussions and solved my own problem: the settings have to be saved first before testing them. It would be nice if there were a notification that settings need to be saved before they can be tested.
**Describe the bug**
I have Maintainerr set up in my Kubernetes cluster. When I try to configure Maintainerr to connect to Sonarr or Radarr, I get the error "Connection failed! Please check and save your settings".
**Expected behavior**
I expected Maintainerr to connect to Radarr or Sonarr.
**Additional context**
Is there a way for me to enable additional debugging to try to determine what the problem is here?
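In the meantime, a couple of generic `kubectl` commands can surface what's going wrong from the cluster side. This is only a sketch: it assumes the Deployment name and `default` namespace from the manifest above, that the container image includes `wget`, and the Radarr URL/port shown is a placeholder for your actual address.

```shell
# Tail the Maintainerr container's logs for connection errors.
kubectl logs -n default deploy/maintainerr --tail=100 -f

# Check that Radarr is reachable from inside the pod.
# (Assumes the image ships wget; substitute your real Radarr
# service name and port for the placeholder below.)
kubectl exec -n default deploy/maintainerr -- \
  wget -qO- http://radarr.default.svc.cluster.local:7878/ping
```

If the `wget` call fails here too, the problem is cluster networking or DNS (note the `ndots: 1` option in the manifest affects how short hostnames resolve) rather than Maintainerr's settings.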