After installing InfluxDB v2 with its Helm chart as a single instance, I was able to access it without any issues. However, when I increased the replica count to two, I started getting an unauthorized error and can no longer access the database.
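To make the failure concrete, this is roughly what I see when I port-forward to the chart's service and query the API after scaling (host, port, and token are placeholders for my actual setup):

curl -i "http://localhost:8086/api/v2/buckets" \
  -H "Authorization: Token $INFLUX_TOKEN"
# HTTP/1.1 401 Unauthorized
# {"code":"unauthorized","message":"unauthorized access"}

The same request succeeds consistently while only one replica is running.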
My question is whether it is possible to scale InfluxDB v2 to multiple instances as needed. If so, what procedures or best practices should be followed to achieve this?
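For reference, the only change I made in my values file was the replica count (excerpt; other keys omitted):

# values.yaml (excerpt) -- everything else is unchanged from my working single-instance setup
statefulset:
  replicas: 2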
Here is how I customized the chart's StatefulSet template to replicate my InfluxDB instance:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "influxdb.fullname" . }}
  labels:
    {{- include "influxdb.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.statefulset.replicas }}
  selector:
    matchLabels:
      {{- include "influxdb.selectorLabels" . | nindent 6 }}
  serviceName: "{{ include "influxdb.fullname" . }}"
  template:
    metadata:
      labels:
        {{- include "influxdb.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- toYaml .Values.podLabels | nindent 8 }}
        {{- end }}
      {{- if .Values.podAnnotations }}
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
      {{- end }}
    spec:
      tolerations:
        {{- toYaml .Values.statefulset.pod.tolerations | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: {{ .Values.service.portName }}
              containerPort: 8086
              protocol: TCP
          env:
            # Automated setup will not run if an existing boltdb file is found at the configured path.
            # This allows the InfluxDB container to restart after setup without "DB is already set up" errors.
            - name: DOCKER_INFLUXDB_INIT_MODE
              value: setup
            # The username to set for the system's initial super-user (required).
            - name: DOCKER_INFLUXDB_INIT_USERNAME
              value: {{ .Values.adminUser.user }}
            # The password to set for the system's initial super-user (required).
            - name: DOCKER_INFLUXDB_INIT_PASSWORD
              valueFrom:
                secretKeyRef:
                  {{- if .Values.adminUser.existingSecret }}
                  name: {{ .Values.adminUser.existingSecret }}
                  {{- else }}
                  name: {{ template "influxdb.fullname" . }}-auth
                  {{- end }}
                  key: admin-password
            # The name to set for the system's initial organization (required).
            - name: DOCKER_INFLUXDB_INIT_ORG
              value: {{ .Values.adminUser.organization }}
            # The name to set for the system's initial bucket (required).
            - name: DOCKER_INFLUXDB_INIT_BUCKET
              value: {{ .Values.adminUser.bucket }}
            # The duration the initial bucket should retain data. If not set, the bucket retains data forever.
            - name: DOCKER_INFLUXDB_INIT_RETENTION
              value: {{ .Values.adminUser.retention_policy }}
            # The authentication token to associate with the initial super-user. If not set, a token is auto-generated.
            - name: DOCKER_INFLUXDB_INIT_ADMIN_TOKEN
              valueFrom:
                secretKeyRef:
                  {{- if .Values.adminUser.existingSecret }}
                  name: {{ .Values.adminUser.existingSecret }}
                  {{- else }}
                  name: {{ template "influxdb.fullname" . }}-auth
                  {{- end }}
                  key: admin-token
            # Path to the BoltDB database.
            - name: INFLUXD_BOLT_PATH
              value: {{ .Values.persistence.mountPath }}/influxd.bolt
            # Path to the storage engine directory where InfluxDB stores all Time-Structured Merge Tree (TSM) data on disk.
            - name: INFLUXD_ENGINE_PATH
              value: {{ .Values.persistence.mountPath }}
            {{- with .Values.env }}
            # Extra environment variables from .Values.env
            {{- toYaml . | nindent 12 }}
            {{- end }}
          {{- if .Values.securityContext }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          {{- end }}
          volumeMounts:
            - name: data
              mountPath: {{ .Values.persistence.mountPath }}
              subPath: {{ .Values.persistence.subPath }}
          livenessProbe:
            httpGet:
              path: {{ .Values.livenessProbe.path | default "/health" }}
              port: http
              scheme: {{ .Values.livenessProbe.scheme | default "HTTP" }}
            initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds | default 0 }}
            periodSeconds: {{ .Values.livenessProbe.periodSeconds | default 10 }}
            timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds | default 1 }}
            failureThreshold: {{ .Values.livenessProbe.failureThreshold | default 3 }}
          readinessProbe:
            httpGet:
              path: {{ .Values.readinessProbe.path | default "/health" }}
              port: http
              scheme: {{ .Values.readinessProbe.scheme | default "HTTP" }}
            initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds | default 0 }}
            periodSeconds: {{ .Values.readinessProbe.periodSeconds | default 10 }}
            timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds | default 1 }}
            successThreshold: {{ .Values.readinessProbe.successThreshold | default 1 }}
            failureThreshold: {{ .Values.readinessProbe.failureThreshold | default 3 }}
          {{- if .Values.startupProbe.enabled }}
          startupProbe:
            httpGet:
              path: {{ .Values.startupProbe.path | default "/health" }}
              port: http
              scheme: {{ .Values.startupProbe.scheme | default "HTTP" }}
            initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds | default 30 }}
            periodSeconds: {{ .Values.startupProbe.periodSeconds | default 5 }}
            timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds | default 1 }}
            failureThreshold: {{ .Values.startupProbe.failureThreshold | default 6 }}
          {{- end }}
      {{- if .Values.securityContext.runAsGroup }}
      securityContext:
        fsGroup: {{ .Values.securityContext.runAsGroup }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "{{ .Values.persistence.accessMode }}" ]
        storageClassName: "{{ .Values.persistence.storageClass }}"
        resources:
          requests:
            storage: {{ .Values.persistence.size }}
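For completeness, this is roughly how I roll the change out (release name, namespace, and chart path are placeholders for my actual ones):

helm upgrade --install influxdb ./influxdb2 \
  --namespace monitoring \
  -f values.yaml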