OneUptime / oneuptime

OneUptime is the complete open-source observability platform.
https://oneuptime.com
Apache License 2.0

Bug: Can't register at the fresh installation #1463

Closed. Brukkil closed this issue 5 months ago.

Brukkil commented 5 months ago

Describe the bug

After a successful installation of the app, I can't log in or register because of mixed HTTP/HTTPS requests. I should also mention that I have Nginx Proxy Manager in front of the k8s cluster, so it handles SSL termination. The route looks like: https://app -> Nginx Proxy Manager -> k8s node.
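
Since SSL terminates at Nginx Proxy Manager, everything inside the cluster only sees plain HTTP, so one useful check is what the in-cluster nginx actually receives. A minimal sketch, assuming the chart names its nginx deployment oneuptime-nginx (list the deployments first and substitute the real name):

    # List deployments to find the real nginx deployment name, then tail its
    # logs while loading the page through the proxy to see what the backend
    # actually receives. "oneuptime-nginx" is an assumed name.
    kubectl get deploy
    kubectl logs deploy/oneuptime-nginx --tail=20 --follow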

To Reproduce
Steps to reproduce the behavior:

  1. helm pull oneuptime/oneuptime and unarchive it
  2. helm upgrade -i oneuptime oneuptime -f oneuptime/values.yaml
  3. Go to the https://app/accounts/register page
  4. See the error (a quick diagnostic sketch follows this list):

     BasicForm.tsx:418 Mixed Content: The page at 'https://up.brukkil.tech/accounts/register' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://up.brukkil.tech/identity/signup'. This request has been blocked; the content must be served over HTTPS.
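
One way to tell whether the insecure endpoint URL comes from the server or from something cached in the browser is to fetch the served page directly and grep for hardcoded http:// URLs. A quick diagnostic sketch using the hostname from the error:

    # If http://up.brukkil.tech/... appears in this output, the server itself
    # is emitting insecure endpoints; if the output is empty, the insecure URL
    # is likely coming from cached client-side assets.
    curl -s https://up.brukkil.tech/accounts/register | grep -o 'http://[^"]*' | sort -u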

Screenshots: (screenshot attached)

Deployment Type: self-hosted, version 7.0.2487

My values file:

global:
  storageClass: nfs
  clusterDomain: &global-cluster-domain cluster.local

# Please change this to the domain name / IP where the OneUptime server is hosted.
host: up.brukkil.tech
httpProtocol: https
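# Presumably this is the protocol the frontend uses when building API URLs
# (e.g. the /identity/signup endpoint from the error above), so with SSL
# terminating at the external proxy it should stay "https" even though
# traffic inside the cluster is plain HTTP.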

# (Optional): You usually do not need to set this if you're self hosting. If you do set it, set it to a long random value.
oneuptimeSecret:
encryptionSecret:

# (Optional): You usually do not need to set this if you're self hosting.
openTelemetryCollectorHost:
fluentdHost:

deployment:
  replicaCount: 1

metalLb:
  enabled: false
  ipAdddressPool:
    enabled: false
    addresses:
      # - 51.158.55.153/32 # List of IP addresses of all the servers in the cluster.

nginx:
  service:
    loadBalancerIP:
    type: LoadBalancer
    externalIPs:
      # - 51.158.55.153 # Please make sure this is the same as the one in metalLb.ipAdddressPool.addresses

postgresql:
  enabled: true # Set this to false if you're using an external postgresql database.
  clusterDomain: *global-cluster-domain
  auth:
    username: oneuptime
    database: oneuptimedb
  architecture: standalone
  primary:
    service:
      ports:
        postgresql: "5432"
    terminationGracePeriodSeconds: 0 # We do this because we do not want to wait for the pod to terminate in case of node failure. https://medium.com/tailwinds-navigator/kubernetes-tip-how-statefulsets-behave-differently-than-deployments-when-node-fails-d29e36bca7d5
    persistence:
      size: 25Gi
  readReplicas:
    terminationGracePeriodSeconds: 0 # We do this because we do not want to wait for the pod to terminate in case of node failure. https://medium.com/tailwinds-navigator/kubernetes-tip-how-statefulsets-behave-differently-than-deployments-when-node-fails-d29e36bca7d5
    persistence:
      size: 25Gi

clickhouse:
  enabled: true
  clusterDomain: *global-cluster-domain
  service:
    ports:
      http: "8123"
  shards: 1
  replicaCount: 1
  terminationGracePeriodSeconds: 0 # We do this because we do not want to wait for the pod to terminate in case of node failure. https://medium.com/tailwinds-navigator/kubernetes-tip-how-statefulsets-behave-differently-than-deployments-when-node-fails-d29e36bca7d5
  zookeeper:
    enabled: true
  persistence:
    size: 25Gi
  auth:
    username: oneuptime
  initdbScripts:
    db-init.sql: |
      CREATE DATABASE oneuptime;

redis:
  enabled: true
  clusterDomain: *global-cluster-domain
  architecture: standalone
  auth:
    enabled: true
  master:
    service:
      ports:
        redis: "6379"
    persistence:
      enabled: false # We don't need Redis persistence because we don't do anything with it.
  replica:
    persistence:
      enabled: false # We don't need Redis persistence because we don't do anything with it.
  commonConfiguration: |-
    appendonly no
    save ""

image:
  registry: docker.io
  repository: oneuptime
  pullPolicy: Always
  tag: release
  restartPolicy: Always

autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
  targetMemoryUtilizationPercentage: 80

nodeEnvironment: production

billing:
  enabled: false
  publicKey:
  privateKey:
  smsDefaultValueInCents:
  callDefaultValueInCentsPerMinute:
  smsHighRiskValueInCents:
  callHighRiskValueInCentsPerMinute:

subscriptionPlan:
  basic:
  growth:
  scale:
  enterprise:

analytics:
  host:
  key:

internalSmtp:
  enabled: true
  incomingEmailDomain:
  sendingDomain:
  dkimPrivateKey:
  dkimPublicKey:
  email:
  name:
  service:
    loadBalancerIP:
    # Change this to LoadBalancer if you want to receive emails from the internet. This could be useful for Incoming Email monitors.
    type: ClusterIP
    externalIPs:
      # - 51.158.55.153 # Please make sure this is the same as the one in metalLb.ipAdddressPool.addresses

incidents:
  disableAutomaticCreation: false

statusPage:
  cnameRecord:

probes:
  one:
    name: "Probe"
    description: "Probe"
    monitoringWorkers: 3
    monitorFetchLimit: 10
    key:
    replicaCount: 1
    syntheticMonitorScriptTimeoutInMs: 60000
    customCodeMonitorScriptTimeoutInMs: 60000
    # Feel free to leave this blank if you're not integrating this with OpenTelemetry Backend.
    openTelemetryExporter:
      headers:
  # two:
  #   name: "Probe 2"
  #   description: "Probe 2"
  #   monitoringWorkers: 3
  #   monitorFetchLimit: 10
  #   key:
  #   replicaCount: 1
  #   syntheticMonitorScriptTimeoutInMs: 60000
  #   customCodeMonitorScriptTimeoutInMs: 60000
  #   openTelemetryExporter:
  #     headers:

port:
  app: 3002
  ingestor: 3400
  testServer: 3800
  accounts: 3003
  statusPage: 3105
  dashboard: 3009
  adminDashboard: 3158
  nginxHttp: 80
  nginxHttps: 443
  haraka: 2525
  probe: 3500
  otelCollectorGrpc: 4317
  otelCollectorHttp: 4318
  isolatedVM: 4572

testServer:
  enabled: false

openTelemetryExporter:
  endpoint:
    server:
    client:
  headers:
    app:
    dashboard:
    accounts:
    statusPage:
    adminDashboard:
    ingestor:
    nginx:

containerSecurityContext:
podSecurityContext:
affinity:
tolerations:
nodeSelector:

# This can be one of the following: DEBUG, INFO, WARN, ERROR
logLevel: INFO

# Enable cleanup cron jobs
cronJobs:
  cleanup:
    enabled: true
  e2e:
    enabled: true
    isUserRegistered: false
    registeredUserEmail:
    registeredUserPassword:
    # This is the URL of the status page you want to test. This is used to check if the status page is up and running.
    statusPageUrl:
    failedWebhookUrl:

letsEncrypt:
  # Generate a private key via openssl, encode it to base64
  accountKey:
  # Email address to register with letsencrypt for notifications
  email:

oneuptimeIngress:
  enabled: true
  annotations:
    # cert-manager.io/cluster-issuer: "letsencrypt-prod" # or "letsencrypt-staging" for testing
    # cert-manager.io/acme-challenge-type: "http01"
  # Please change this to the ingress class name for your cluster. If you use a cloud provider, this is usually the default ingress class name.
  # If you don't have the nginx ingress controller installed, you can install it from https://kubernetes.github.io/ingress-nginx/deploy/
  className: nginx # Required.
  hosts: # List of hosts for the ingress. Please change this to your hosts
    # - "oneuptime.com" # Host 1
    # - "www.oneuptime.com" # Host 2
    - up.brukkil.tech
  tls:
    enabled: false
    hosts:
      - host: "up.brukkil.tech"
        secretName: "up.brukkil.tech-tls"

script:
  workflowScriptTimeoutInMs: 5000

# extraTemplates -- Array of extra objects to deploy with the release. Strings
# are evaluated as a template and can use template expansions and functions. All
# other objects are used as yaml.
extraTemplates:
  #- |
  #    apiVersion: v1
  #    kind: ConfigMap
  #    metadata:
  #      name: my-configmap
  #    data:
  #      key: {{ .Values.myCustomValue | quote }}

# External Postgres Configuration
# You need to set postgresql.enabled to false if you're using an external postgres database.
externalPostgres:
  host:
  port:
  username:
  password:
  # If you're using an existing secret for the password, use this instead of password.
  existingSecret:
    name:
    # This is the key in the secret where the password is stored.
    passwordKey:
  database:
  ssl:
    enabled: false
    # If this is enabled, please set the "ca" certificate.
    ca:
    # (optional)
    cert:
    key:

## External Redis Configuration
# You need to set redis.enabled to false if you're using an external redis database.

externalRedis:
  host:
  port:
  username:
  password:
  # If you're using an existing secret for the password, use this instead of password.
  existingSecret:
    name:
    # This is the key in the secret where the password is stored.
    passwordKey:
  database:
  tls:
    enabled: false
    # If this is enabled, please set "ca" certificate.
    ca:
    # (optional)
    cert:
    key:

## External Clickhouse Configuration
# You need to set clickhouse.enabled to false if you're using an external clickhouse database.
externalClickhouse:
  host:
  ## If the host is https, set this to true. Otherwise, set it to false.
  isHostHttps: false
  port:
  username:
  password:
  # If you're using an existing secret for the password, use this instead of password.
  existingSecret:
    name:
    # This is the key in the secret where the password is stored.
    passwordKey:
  database:
  tls:
    enabled: false
    # If this is enabled, please set the "ca" certificate.
    ca:
    # (optional)
    cert:
    key:
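
httpProtocol: https is already set above, which is the value the frontend should be using. After editing the values file, a minimal sketch for re-applying the release and confirming what Helm actually recorded (release name and chart path follow the reproduction steps; helm get values prints only user-supplied values):

    # Re-apply the edited values, then confirm the stored override.
    helm upgrade -i oneuptime oneuptime -f oneuptime/values.yaml
    helm get values oneuptime | grep -i httpprotocol
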
Brukkil commented 5 months ago

When I open the app in an incognito window, it works perfectly. It's pretty strange behavior; it seems to be a local issue with my cache.
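
To rule the server out, both URLs can be checked from the command line, which bypasses the browser and any service-worker cache entirely:

    # -I sends a HEAD request; /identity/signup normally expects a POST, so a
    # 404/405 status is fine here. The point is that both answer over HTTPS.
    curl -sI https://up.brukkil.tech/accounts/register | head -n 1
    curl -sI https://up.brukkil.tech/identity/signup | head -n 1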