Closed Bilvani closed 3 years ago
I currently use AWS Elasticsearch as a backend for Wazuh, and I can confirm it works. Since AWS Elasticsearch is built on Open Distro for Elasticsearch, the setup is fairly straightforward.
Deploy a Kibana instance that has the wazuh-kibana-app plugin installed
FROM amazon/opendistro-for-elasticsearch-kibana:z
ARG KIBANA_VERSION=x
ARG WAZUH_VERSION=y
# Install the Wazuh Kibana application
RUN NODE_OPTIONS="--max-old-space-size=3072" /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-${WAZUH_VERSION}_${KIBANA_VERSION}.zip
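The x, y, and z placeholders above must be pinned to mutually compatible versions. As a sketch (the version numbers are illustrative, not a recommendation — always check Wazuh's compatibility matrix), the plugin URL the Dockerfile downloads and a matching build command look like this:

```shell
# Illustrative versions; verify against the Wazuh/Kibana compatibility matrix first.
KIBANA_VERSION=7.9.1
WAZUH_VERSION=3.13.2

# URL the kibana-plugin install step in the Dockerfile resolves to:
PLUGIN_URL="https://packages.wazuh.com/wazuhapp/wazuhapp-${WAZUH_VERSION}_${KIBANA_VERSION}.zip"
echo "$PLUGIN_URL"

# Build the image from the directory containing the Dockerfile
# (registry/tag names are illustrative):
# docker build \
#   --build-arg KIBANA_VERSION="$KIBANA_VERSION" \
#   --build-arg WAZUH_VERSION="$WAZUH_VERSION" \
#   -t my-registry/wazuh-kibana:"$KIBANA_VERSION" .
```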
Those are "high level" steps. I guess the Wazuh team can come up with more detailed documentation!
Thank you for answering my question. Do you have any material on how to make Filebeat or Logstash push Wazuh events to an AWS Elasticsearch domain?
Hello!
The Logstash Pipeline output looks like this:
output {
amazon_es {
hosts => [
"vpc-your-endpoint-here.us-east-1.es.amazonaws.com"
]
region => "us-east-1"
index => "wazuh-alerts-3.x-%{+xxxx.ww}"
document_id => "%{id}"
}
}
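For completeness, the amazon_es output comes from the logstash-output-amazon_es plugin, which must be installed separately. A minimal input sketch to pair with that output, assuming Logstash reads Wazuh's alerts file directly (the path is the Wazuh default; adjust it to your deployment):

```
input {
  file {
    path  => "/var/ossec/logs/alerts/alerts.json"
    codec => "json"
  }
}
```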
Authentication happens via a mapping between the AWS IAM instance role of Logstash and a backend role in AWS Elasticsearch.
The Terraform code I used to build that mapping:
resource "elasticsearch_odfe_role" "logstash" {
role_name = "logstash"
cluster_permissions = [
"cluster_monitor",
"cluster_composite_ops",
"indices:admin/template/get",
"indices:admin/template/put",
"cluster:admin/ingest/pipeline/put",
"cluster:admin/ingest/pipeline/get",
]
index_permissions {
index_patterns = [
"wazuh-alerts-3.x-*",
]
allowed_actions = [
"create_index",
"crud",
]
}
}
resource "elasticsearch_odfe_roles_mapping" "logstash" {
role_name = elasticsearch_odfe_role.logstash.role_name
backend_roles = [
"Your Logstash instance role ARN"
]
}
I hope this helps!
Thanks a lot for responding.
Regarding the Elasticsearch service in AWS: I created a master user of type "Internal user database" (gave a master username and password), but then I'm unable to specify that user when connecting to ES from Kibana.
I have tried:
Using the curl command
Mentioning the credentials in the Kibana deployment file
Hello!
For Kibana, I used the amazon/opendistro-for-elasticsearch-kibana Docker image. Choose the image with the version of Kibana that fits both your Elasticsearch cluster version and the wazuh-kibana-app version (that's the tricky part, because most of the time we cannot find a version of the wazuh-kibana-app matching our AWS Elasticsearch domain version).
For the configuration, I created a Secret that contains the Kibana config YAML file, mounted at /usr/share/kibana/config/kibana.yml:
server.name: "kibana"
server.host: "0.0.0.0"
server.port: 5601
server.ssl.enabled: true
server.ssl.certificate: /usr/share/kibana/config/certs/tls.crt
server.ssl.key: /usr/share/kibana/config/certs/tls.key
elasticsearch.hosts:
- "https://vpc-your-cluster.us-east-1.es.amazonaws.com"
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "username"
elasticsearch.password: "password"
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
elasticsearch.requestTimeout: 500000
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
opendistro_security.cookie.password: "a_random_string"
opendistro_security.cookie.secure: true
opendistro_security.cookie.ttl: 86400000
opendistro_security.session.keepalive: false
opendistro_security.session.ttl: 86400000
# Based on the AWS documentation, we must use the ".kibana_1" index
kibana.index: ".kibana_1"
kibana.defaultAppId: "discover"
# Specifies the file where Kibana writes its log output.
logging.dest: stdout
# Set the value of this setting to true to suppress all logging output other than error messages.
logging.quiet: true
# Make sure the status API is available to Kubernetes probes
status.allowAnonymous: true
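A minimal sketch of the Secret wrapping that file, assuming a Secret named kibana-config (the name is illustrative; mount it into the container at /usr/share/kibana/config/kibana.yml with subPath: kibana.yml):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kibana-config
type: Opaque
stringData:
  kibana.yml: |
    # ... the kibana.yml content shown above ...
```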
I'm sure this will help!
Thanks a lot for the help @JPLachance, we should create a specific page on the documentation for this use case.
Hope you got it solved with these tips @Bilvani, any other input will be appreciated.
It was really helpful @xr09, Thanks a lot for providing the tips @JPLachance
I'll leave this issue open to track progress of the documentation for this use case. Thanks again both!
I got this in the Kibana pod logs (Kibana image: amazon/opendistro-for-elasticsearch-kibana:1.11.0).
When I searched for it, they stated that it would be resolved in ES 7.10 and closed the issue.
But the latest ES version available on AWS is 7.9.
Do you have any idea about this @JPLachance? Any suggestion would be helpful.
Hello!
Well, I'm sorry, I'm still on an old version of Elasticsearch and I did not experience that issue.
If you find a solution, please share!
Hey!
Will share for sure. May I know which versions of AWS ES and the amazon/opendistro-for-elasticsearch-kibana image you were using?
I got to know about the Wazuh / Kibana / Open Distro version compatibility matrix through https://github.com/wazuh/wazuh-kibana-app @JPLachance
When using curl from the Kibana pod and specifying the username and password, I was able to reach ES.
But when I try to access the Kibana UI, I'm unable to do so; the logs show a failed authentication error.
Do you have any idea about it @JPLachance? Any suggestion would be helpful.
Greetings,
Kibana needs a custom ODFE role to work properly. Here is my Terraform code:
resource "quantum_password" "kibana_server" {
special_chars = "!#$%&()*+,-./"
}
resource "elasticsearch_odfe_role" "kibana_server" {
role_name = "siem_kibana_server"
description = "Provide the minimum permissions for the Kibana server"
cluster_permissions = [
"cluster_monitor",
"cluster_composite_ops",
"cluster:admin/xpack/monitoring*",
"indices:admin/template*",
"indices:data/read/scroll*",
]
index_permissions {
index_patterns = [
".kibana",
".kibana-*",
".kibana_*",
".reporting*",
".monitoring*",
".tasks",
".management-beats*",
]
allowed_actions = [
"indices_all",
]
}
index_permissions {
index_patterns = [
"*",
]
allowed_actions = [
"indices:admin/aliases*",
]
}
index_permissions {
index_patterns = [
".wazuh",
]
allowed_actions = [
"indices_all",
]
}
index_permissions {
index_patterns = [
"wazuh-alerts-3.x-*",
]
allowed_actions = [
"indices:data/read/search",
]
}
}
resource "elasticsearch_odfe_user" "kibana_server" {
username = "siem_kibana_server"
password = quantum_password.kibana_server.password
}
resource "elasticsearch_odfe_roles_mapping" "kibana_server" {
role_name = elasticsearch_odfe_role.kibana_server.role_name
users = [elasticsearch_odfe_user.kibana_server.username]
}
This code creates a role, creates a user, and grants the role to the user.
I then took the user's username and password and placed them in the /usr/share/kibana/config/kibana.yml file I already shared in this thread.
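If you are not using Terraform, the same objects can be created through Open Distro's security REST API. A sketch, assuming admin credentials and an illustrative endpoint (the JSON bodies mirror the Terraform resources above):

```shell
ES="https://vpc-your-cluster.us-east-1.es.amazonaws.com"

# Create the internal user (mirrors elasticsearch_odfe_user.kibana_server)
curl -u admin:admin -X PUT "$ES/_opendistro/_security/api/internalusers/siem_kibana_server" \
  -H 'Content-Type: application/json' \
  -d '{"password": "a-strong-password"}'

# Map the user to the role (mirrors elasticsearch_odfe_roles_mapping.kibana_server)
curl -u admin:admin -X PUT "$ES/_opendistro/_security/api/rolesmapping/siem_kibana_server" \
  -H 'Content-Type: application/json' \
  -d '{"users": ["siem_kibana_server"]}'
```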
Make sure your Kibana config file is mounted in the Kibana container and used by Kibana. On my side, I use a very old version of Kibana, mount paths might have changed.
Please share if you find the solution :)
Thanks @JPLachance, I'm able to access the Kibana UI.
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.
# Description:
# Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.hosts: https://vpc-endpoint.us-east-1.es.amazonaws.com:443
elasticsearch.ssl.verificationMode: none
elasticsearch.username:
elasticsearch.password:
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: false
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
# Use this setting if you are running kibana without https
opendistro_security.cookie.secure: false
kibana.index: ".kibana_1"
newsfeed.enabled: false
telemetry.optIn: false
telemetry.enabled: false
Could you let me know which indices and aliases need to be created in Elasticsearch? @JPLachance
I actually tried adding indices following the Wazuh documentation; the request was acknowledged as true, but when I try to view the indices, I'm unable to see the added Wazuh indices.
And I'm getting this error while accessing the Kibana UI.
Hello!
I don't know how the Wazuh team manages the index template, but I use Terraform again:
data "template_file" "wazuh" {
template = file("${path.module}/templates/wazuh-alerts-3.x.json")
vars = {
index_state_management_policy_id = var.security_siem.elasticsearch_config.index_cleanup_90d_policy_id
}
}
resource "elasticsearch_index_template" "wazuh" {
name = "wazuh-alerts"
body = data.template_file.wazuh.rendered
}
The index template JSON file can be found here: https://github.com/wazuh/wazuh/blob/master/extensions/elasticsearch/7.x/wazuh-template.json
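An alternative without Terraform is to load that JSON straight into the cluster with curl. A sketch, with endpoint and credentials illustrative (on ES 7.x the legacy _template API shown here still works):

```shell
# Fetch Wazuh's index template and PUT it into the cluster
curl -s https://raw.githubusercontent.com/wazuh/wazuh/master/extensions/elasticsearch/7.x/wazuh-template.json \
  -o wazuh-template.json
curl -u username:password -X PUT "https://vpc-your-cluster.us-east-1.es.amazonaws.com/_template/wazuh" \
  -H 'Content-Type: application/json' \
  -d @wazuh-template.json
```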
About the index pattern, I thought this was created automatically... If it is not, simply follow the Kibana documentation.
Have a great day!
Thanks a lot @JPLachance, thanks for responding to my queries. Can you tell me about creating service accounts for Kibana and Logstash?
Have a nice day!
Can you give me your deployment file (the Open Distro Kibana deployment file)? I'm trying to change the permissions (so it can create the logs dir), but I'm unable to do so. It would be really great if you could provide the deployment file @JPLachance
Kibana deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wazuh-kibana
  namespace: {{k8s_env.namespace}}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wazuh-kibana
  template:
    metadata:
      labels:
        app: wazuh-kibana
      name: wazuh-kibana
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        #- '--config.file=/usr/share/kibana/config/kibana.yml'
        - resources:
            requests:
              cpu: 400m
              memory: 1536Mi
            limits:
              cpu: 600m
              memory: 2048Mi
          ports:
            - containerPort: 5601
              name: kibana
          volumeMounts:
            - name: kibana1-config-vol
              mountPath: /usr/share/kibana/optimize/wazuh/config/wazuh.yml
              #subPath: wazuh.yml
            - name: kibana-config-vol
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
      volumes:
Hello!
This is indeed an issue I faced. It is caused by the fact that we try to run the Kibana container as non-root. When running as non-root, the permissions on the optimize folder are not right.
A solution is to fix permissions using a Kubernetes init container:
initContainers:
- command:
- sh
- -c
- chown -R kibana:kibana /usr/share/kibana/optimize
image: bilvani/wazuh:latest
imagePullPolicy: IfNotPresent
name: init
volumeMounts:
- mountPath: /usr/share/kibana/optimize
mountPropagation: None
name: kibana-optimize
subPath: optimize
Since I use a Kubernetes StatefulSet with an EBS volume for the kibana-optimize volume, this init container fixes the permissions and Wazuh can work properly.
Wazuh maintainers are not facing this issue because the official YAML for Kibana runs it as root.
Knowing this should help! Kubernetes is a great tool with multiple corner-cases :D
That was really helpful @JPLachance.
What did you do regarding the Wazuh API connection? And where did you provide the Wazuh API credentials?
I was trying to add them under hosts: in /usr/share/kibana/optimize/wazuh/config/wazuh.yml
It would be really helpful if you could provide all the deployment YAML files @JPLachance
Hello!
Sorry for the late reply, I'm quite busy!
I have the following in the wazuh.yml file:
checks.pattern : true
checks.template: true
checks.api : true
checks.setup : true
checks.fields : true
timeout: 30000
api.selector: true
xpack.rbac.enabled: false
wazuh.monitoring.enabled: false
hosts:
- dev:
url: https://wazuh-api.your-domain.com
port: 55000
user: the-wazuh-api-username
password: the-wazuh-api-password
Basically, the Wazuh manager API is behind an AWS Elastic Load Balancer. That load balancer is behind an AWS Route 53 entry, so I have a clean URL to reach the Wazuh API.
What you need there will depend a lot on how you configured your Wazuh Kubernetes Service.
If Kibana runs in the same Kubernetes Namespace as the Wazuh ClusterIP Service, then you can reach the Wazuh API using https://wazuh:55000 as your endpoint.
From your Kibana Pod:
bash-4.2$ curl -k https://wazuh:55000
401 Unauthorized
It all depends on how you deployed Kibana and the Wazuh Service. Sharing all my YAMLs won't help much, sorry! 😄
Have a great weekend!
This was a great source of information, thanks again @JPLachance for your help here, I'll use this as a base for the future docs on this use case.
@Bilvani I'm marking this issue as closed. If you have any other questions, feel free to use our community channel on Slack or the mailing list; perhaps someone else has first-hand experience with your requirements.
Thanks a lot @JPLachance.
Please provide documentation for integrating AWS Elasticsearch with Wazuh. Please share the documentation if you have anything.