Closed: mvtab closed this issue 1 month ago
This was my fault: digging further, I found out I hadn't deleted the PVCs from the initial cluster. After deleting the PVCs it's working.
The masters now bootstrap, but the dashboard won't:
Dashboard pod logs:
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"dataSourceManagement\" has been disabled since the following direct or transitive dependencies are missing or disabled: [dataSource]"}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"applicationConfig\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"cspHandler\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"dataSource\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"visTypeXy\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"workspace\" is disabled."}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpu.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuPath\""}
{"type":"log","@timestamp":"2024-08-01T09:05:28Z","tags":["warning","config","deprecation"],"pid":1,"message":"\"cpuacct.cgroup.path.override\" is deprecated and has been replaced by \"ops.cGroupOverrides.cpuAcctPath\""}
[agentkeepalive:deprecated] options.freeSocketKeepAliveTimeout is deprecated, please use options.freeSocketTimeout instead
{"type":"log","@timestamp":"2024-08-01T09:05:29Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [52] plugins: [usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,legacyExport,embeddable,expressions,data,securityAnalyticsDashboards,savedObjects,home,apmOss,reportsDashboards,searchRelevanceDashboards,dashboard,mlCommonsDashboards,assistantDashboards,visualizations,visTypeVega,visTypeTimeline,visTypeTable,visTypeMarkdown,visBuilder,visAugmenter,anomalyDetectionDashboards,alertingDashboards,tileMap,regionMap,customImportMapDashboards,inputControlVis,ganttChartDashboards,visualize,indexManagementDashboards,notificationsDashboards,management,indexPatternManagement,advancedSettings,console,dataExplorer,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement,securityDashboards,observabilityDashboards,queryWorkbenchDashboards,bfetch]"}
[agentkeepalive:deprecated] options.freeSocketKeepAliveTimeout is deprecated, please use options.freeSocketTimeout instead
(the agentkeepalive deprecation line above repeats another 12 times)
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:31Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from OpenSearch nodes."}
{"type":"log","@timestamp":"2024-08-01T09:05:33Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:36Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:38Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:41Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:43Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:46Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:48Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:51Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:53Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:56Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:05:58Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:01Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:03Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:06Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:08Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
{"type":"log","@timestamp":"2024-08-01T09:06:11Z","tags":["error","opensearch","data"],"pid":1,"message":"[ResponseError]: Response Error"}
Node log:
[2024-08-01T09:27:26,272][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:39630
[2024-08-01T09:27:36,899][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:43884
[2024-08-01T09:27:37,464][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.246:43890
Namespace overview:
NAME                                              READY   STATUS      RESTARTS       AGE
pod/opensearch-controller-manager-76d984bff-bb5vc    2/2   Running     0              96m
pod/opensearch-fluentd-dashboards-788d986f54-dzrwm   0/1   Running     2 (103s ago)   8m23s
pod/opensearch-fluentd-masters-0                     1/1   Running     0              8m24s
pod/opensearch-fluentd-masters-1                     1/1   Running     0              5m50s
pod/opensearch-fluentd-masters-2                     1/1   Running     0              4m15s
pod/opensearch-fluentd-securityconfig-update-nj295   0/1   Completed   0              8m24s
I tried giving the same credentials as the admin user:
dashboards:
opensearchCredentialsSecret:
name: admin-credentials-secret
I tried creating new credentials and adding them instead; I also tried giving no special credentials to the dashboards. It's simply not working.
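For reference, per the operator docs the secret referenced by opensearchCredentialsSecret is expected to carry username and password keys. A minimal sketch of such a manifest (the name and the ltb-admin username mirror the ones used above; the password value is a placeholder):

```yaml
# Sketch of the credentials secret the dashboards section points at.
# Keys must be `username` and `password`.
apiVersion: v1
kind: Secret
metadata:
  name: admin-credentials-secret
  namespace: logging
type: Opaque
stringData:               # stringData avoids manual base64 encoding
  username: ltb-admin
  password: "<password>"  # placeholder
```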
EDIT:
Reading the documentation, I see it mentioned that "By default Dashboards is configured to use the demo admin user.". Where? How? Why is there a dashboarduser in the security config with password kibanaserver?
Could the documentation be clearer on the subject?
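For context, this is roughly what such an entry looks like in an internal_users.yml. This is illustrative only, based on the description above; the hash value is a placeholder, not a real value:

```yaml
# Illustrative internal_users.yml fragment (names as described above)
dashboarduser:
  hash: "<bcrypt hash of 'kibanaserver'>"  # placeholder
  reserved: true
  description: "Internal user used by OpenSearch Dashboards"
```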
UPDATE: I completely removed the dashboards. The nodes themselves work and I can query them, but the operator can't:
Operator logs:
{"level":"info","ts":"2024-08-01T10:21:05.777Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","interface":"transport"}
{"level":"info","ts":"2024-08-01T10:21:05.779Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","interface":"http"}
{"level":"error","ts":"2024-08-01T10:21:06.784Z","msg":"Failed to get OpenSearch health status","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"2aee7728-3a6e-49e2-be19-0a4cf3d1ae18","error":"get error cluster health failed: [401 Unauthorized] Unauthorized","stacktrace":"github.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers/util.GetClusterHealth\n\t/workspace/pkg/reconcilers/util/util.go:298\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).UpdateClusterStatus\n\t/workspace/pkg/reconcilers/cluster.go:479\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).Reconcile\n\t/workspace/pkg/reconcilers/cluster.go:128\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).reconcilePhaseRunning\n\t/workspace/controllers/opensearchController.go:328\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).Reconcile\n\t/workspace/controllers/opensearchController.go:143\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/con
troller.go:226"}
{"level":"info","ts":"2024-08-01T10:21:16.789Z","msg":"Reconciling OpenSearchCluster","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","cluster":{"name":"opensearch-fluentd","namespace":"logging"}}
{"level":"info","ts":"2024-08-01T10:21:16.802Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","interface":"transport"}
{"level":"info","ts":"2024-08-01T10:21:16.803Z","msg":"Generating certificates","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","interface":"http"}
{"level":"error","ts":"2024-08-01T10:21:17.766Z","msg":"Failed to get OpenSearch health status","controller":"opensearchcluster","controllerGroup":"opensearch.opster.io","controllerKind":"OpenSearchCluster","OpenSearchCluster":{"name":"opensearch-fluentd","namespace":"logging"},"namespace":"logging","name":"opensearch-fluentd","reconcileID":"e1ccce2d-1e90-477e-bc15-ca176dc19d7c","error":"get error cluster health failed: [401 Unauthorized] Unauthorized","stacktrace":"github.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers/util.GetClusterHealth\n\t/workspace/pkg/reconcilers/util/util.go:298\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).UpdateClusterStatus\n\t/workspace/pkg/reconcilers/cluster.go:479\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/pkg/reconcilers.(*ClusterReconciler).Reconcile\n\t/workspace/pkg/reconcilers/cluster.go:128\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).reconcilePhaseRunning\n\t/workspace/controllers/opensearchController.go:328\ngithub.com/Opster/opensearch-k8s-operator/opensearch-operator/controllers.(*OpenSearchClusterReconciler).Reconcile\n\t/workspace/controllers/opensearchController.go:143\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:118\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:314\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/con
troller.go:226"}
Node logs:
[2024-08-01T10:21:17,501][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:48316
[2024-08-01T10:21:28,140][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:33674
[2024-08-01T10:21:28,686][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:33688
[2024-08-01T10:21:39,366][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:57430
[2024-08-01T10:21:39,685][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:57440
[2024-08-01T10:21:47,713][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:47386
[2024-08-01T10:21:47,999][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin from 10.214.2.3:47392
[2024-08-01T10:21:48,366][WARN ][o.o.s.a.BackendRegistry ] [opensearch-fluentd-masters-0] Authentication finally failed for ltb-admin
curl test:
[opensearch@opensearch-fluentd-masters-0 ~]$ curl https://localhost:9200 -k -u ltb-admin:<pass>
{
  "name" : "opensearch-fluentd-masters-0",
  "cluster_name" : "opensearch-fluentd",
  "cluster_uuid" : "48TVUw9LSqOxlxUqCD2SAg",
  "version" : {
    "distribution" : "opensearch",
    "number" : "2.15.0",
    "build_type" : "tar",
    "build_hash" : "61dbcd0795c9bfe9b81e5762175414bc38bbcadf",
    "build_date" : "2024-06-20T03:26:49.193630411Z",
    "build_snapshot" : false,
    "lucene_version" : "9.10.0",
    "minimum_wire_compatibility_version" : "7.10.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
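Since curl with -u succeeds while the operator gets 401 Unauthorized, one quick sanity check is to build the same Basic auth header both clients send and compare it against what is actually stored in the secret. A minimal sketch in Python (the password here is a stand-in, not the real one):

```python
import base64

def basic_auth(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value for user:password."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# A stray newline or space in the stored secret changes the header entirely:
print(basic_auth("ltb-admin", "secret"))
print(basic_auth("ltb-admin", "secret\n"))  # not the same header
```

If the header computed from the secret's decoded values differs from the one curl sends, the secret contents (not the security config) are the problem.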
Hey @mvtab. I was able to change the default admin password fine. I didn't use Python libs to generate the hash, though. Here is the script I used to generate the bcrypt hash, run on Ubuntu 22.04:
opensearch_pass=$(openssl rand -base64 24)
echo "$opensearch_pass"
htpasswd -bnBC 8 "" "$opensearch_pass" | grep -oP '\$2[ayb]\$.{56}'
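If you go this route, a cheap way to catch extraction mistakes (e.g. the grep capturing too little or too much) is to check that the extracted string has the shape of a bcrypt hash. A small sketch, assuming the usual $2a/$2b/$2y modular-crypt format (the sample hash below is made up for illustration):

```python
import re

# bcrypt modular-crypt format: $2<a|b|y>$<2-digit cost>$<22-char salt + 31-char digest>
BCRYPT_RE = re.compile(r"^\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}$")

def looks_like_bcrypt(candidate: str) -> bool:
    """Return True if the string is shaped like a bcrypt hash."""
    return BCRYPT_RE.fullmatch(candidate) is not None

sample = "$2y$08$0yogBYmnxPBa9hbCLhAHJuVmXvbVDiSRzS1h6Y1Unvp8R1g/Ufotu"
print(looks_like_bcrypt(sample))        # True
print(looks_like_bcrypt("plaintext"))   # False
```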
Here is my OpenSearchCluster CRD:
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  annotations:
    meta.helm.sh/release-name: opensearch-cluster
    meta.helm.sh/release-namespace: logging
  creationTimestamp: "2024-08-16T21:31:30Z"
  finalizers:
  - Opster
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: opensearch-cluster
  namespace: logging
  resourceVersion: "144312069"
  uid: 5da5873b-9705-4ff0-8a48-bb05cb914edb
spec:
  bootstrap:
    resources: {}
  confMgmt: {}
  dashboards:
    enable: true
    opensearchCredentialsSecret:
      name: admin-credentials-secret
    replicas: 1
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi
    service:
      type: ClusterIP
    version: 2.3.0
  general:
    drainDataNodes: true
    httpPort: 9200
    monitoring: {}
    pluginsList:
    - repository-s3
    serviceName: opensearch-cluster
    setVMMaxMapCount: true
    vendor: opensearch
    version: 2.3.0
  initHelper:
    resources: {}
  nodePools:
  - component: masters
    diskSize: 30Gi
    replicas: 3
    resources:
      limits:
        cpu: 500m
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 2Gi
    roles:
    - master
    - data
  security:
    config:
      adminCredentialsSecret:
        name: admin-credentials-secret
      adminSecret: {}
      securityConfigSecret:
        name: securityconfig-secret
      updateJob:
        resources: {}
    tls:
      http:
        caSecret: {}
        generate: true
        secret: {}
      transport:
        caSecret: {}
        generate: true
        secret: {}
status:
  availableNodes: 3
  componentsStatus:
  - component: Restarter
    status: Finished
  health: green
  initialized: true
  phase: RUNNING
  version: 2.3.0
The two secrets that were added:
+ # Source: secret/templates/secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ labels:
+ app: securityconfig-secret
+ chart: secret
+ heritage: Helm
+ release: securityconfig-secret
+ name: securityconfig-secret
+ data:
+ action_groups.yml: '++++++++ # (49 bytes)'
+ config.yml: '++++++++ # (364 bytes)'
+ internal_users.yml: '++++++++ # (1689 bytes)'
+ nodes_dn.yml: '++++++++ # (44 bytes)'
+ roles.yml: '++++++++ # (6287 bytes)'
+ roles_mapping.yml: '++++++++ # (464 bytes)'
+ tenants.yml: '++++++++ # (44 bytes)'
+ whitelist.yml: '++++++++ # (46 bytes)'
+ type: Opaque
+ # Source: secret/templates/secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ labels:
+ app: admin-credentials-secret
+ chart: secret
+ heritage: Helm
+ release: admin-credentials-secret
+ name: admin-credentials-secret
+ data:
+ password: '++++++++ # (32 bytes)'
+ username: '++++++++ # (5 bytes)'
+ type: Opaque
One last thing: I am using opensearch-k8s-operator version 2.6.0.
Hope this helps.
I really don't understand why, but apparently I was using an extremely old version of the chart: 2.3.0. The current version is 2.23.1.
Closing this.
Bug description
The default admin password cannot be changed. Related to #409
Reproduction steps
I have an Ansible setup and would like to provision an OpenSearch cluster with custom credentials. The steps I followed are:
1. echo <> | base64 and create a secret with the values,
2. python -c 'import bcrypt; print(bcrypt.hashpw("<password>".encode("utf-8"), bcrypt.gensalt(12, prefix=b"2a")).decode("utf-8"))' and put it in the example securityconfig.
All together in a file:
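One pitfall worth noting about the echo <> | base64 step above: a plain echo appends a trailing newline, so the secret ends up encoding "<value>\n" rather than "<value>", which can make authentication fail even though everything looks correct. A small Python sketch of the difference, using the ltb-admin username from the logs above as the example value:

```python
import base64

def b64(value: str) -> str:
    """Base64-encode a string the way Kubernetes Secret `data` fields expect."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

print(b64("ltb-admin"))    # bHRiLWFkbWlu       (echo -n behaviour)
print(b64("ltb-admin\n"))  # bHRiLWFkbWluCg==   (plain echo behaviour)
```

In the shell, printf '%s' "$value" | base64 (or echo -n) avoids the stray newline.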
Expected behavior
I would expect a working cluster to be bootstrapped with the new admin credentials.
Actual behavior
Cluster does not bootstrap at all, showing this error on all opensearch nodes:
opensearch-fluentd-securityconfig-update logs:
last logs in bootstrap node:
Environment
Kubernetes operating system: opensuse-leap-15.6
Container environment:
Kubernetes version: 1.30.3
OpenSearch version: 2.15.0