pchristos opened 4 years ago
I tried to use `/var/ossec/api/configuration/preloaded_vars.conf` and `configure_api.sh` to tackle (1) above. A few remarks:

- `configure_api.sh` is not very straightforward to use, especially due to the prompts for user input. I shouldn't have to define most of the settings in `preloaded_vars.conf` just to avoid the prompts. I'd expect the shell script to provide at least a `-y` flag to auto-answer "yes" to all prompts in a dynamic environment, where user intervention would be nearly impossible.
- The `change_auth` function of `configure_api.sh` expects `USER` and `PASS` to be defined. However, both variables are empty to begin with, and there's no indication of when/where they should be set, which causes `htpasswd` to fail. Shouldn't the script (a) point out that `preloaded_vars.conf` needs to be edited, (b) use sane defaults, e.g. `foo:bar` as in the Dockerfile, or (c) use `set -u` to catch undefined variables in the `change_auth` function?

Sorry for the hammering, but we are really excited to see Wazuh in production soon!
Cheers!
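Option (c) is cheap to demonstrate. A minimal sketch of what `set -u` buys, with hypothetical placeholder values standing in for whatever `preloaded_vars.conf` would provide (the `htpasswd` invocation is only echoed, not run):

```shell
#!/bin/sh
# With `set -u`, referencing an unset variable aborts the script instead of
# silently passing empty strings to htpasswd. USER/PASS mirror the variables
# change_auth expects; the values below are hypothetical placeholders.
set -u
USER="foo"   # would normally come from preloaded_vars.conf
PASS="bar"
echo "would run: htpasswd -b -c users.htpasswd $USER $PASS"
```

Comment out the two assignments and the script exits immediately with an "unbound variable" error, instead of handing `htpasswd` empty credentials.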
Hello @pchristos,
We have moved this issue to the k8s repo, since it's more relevant there.
You could build a predefined `config.js` file with HTTPS disabled and use ConfigMaps to mount it.
With (1) done, you could define the variable `API_GENERATE_CERTS` as `False`.
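A hedged sketch of that suggestion, assuming the legacy Node.js API's `config.js` format (verify the `config.https` key against the `config.js` shipped with your version; the ConfigMap name is illustrative):

```shell
# Write a minimal config.js override with HTTPS disabled.
cat > config.js <<'EOF'
// Fragment of /var/ossec/api/configuration/config.js -- only the relevant key.
config.https = "no";
EOF

# Then mount it over the container's copy, e.g.:
#   kubectl create configmap wazuh-api-config --from-file=config.js
# and reference that ConfigMap as a volume in the Wazuh master StatefulSet.
```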
Due to the way synchronization works in the Wazuh cluster, it is not possible to define the workers as replicas and let k8s manage them by itself. It's not like an app worker with shared state in a common DB; in this case, each worker holds both its own data and its share of the cluster state, and losing a worker can affect the health of the cluster as a whole.
This could be a new issue for the core repo (wazuh/wazuh); there are a lot of enhancements we could make to `ossec.conf`.
We'll take note of your remarks about `configure_api.sh`; there's room for improvement there. By the way, if you feel like it, PRs are welcome!
Thanks for joining us, and please let us know how it goes with your deployment.
Hey,
Thanks for the response. So, here's my two cents:
1 & 2 - So a combination of these two is required? Basically, (2) cannot work without (1), right? Don't you think that makes things a bit less intuitive? I'd expect setting `API_GENERATE_CERTS` to just do the trick.
Regarding (3) - So the problem here is shared state? Is each worker supposed to keep its own state? I believe that's something you can accomplish with a single `StatefulSet` definition that includes a `volumeClaimTemplates` block.
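For reference, a minimal sketch of what such a manifest could look like. The names, image tag, mount path, and storage size below are illustrative assumptions, not taken from this repo:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-worker          # illustrative name
spec:
  serviceName: wazuh-workers
  replicas: 2                 # scale workers by changing one field
  selector:
    matchLabels:
      app: wazuh-worker
  template:
    metadata:
      labels:
        app: wazuh-worker
    spec:
      containers:
        - name: wazuh-worker
          image: wazuh/wazuh:latest   # illustrative tag
          volumeMounts:
            - name: wazuh-data
              mountPath: /var/ossec/data
  volumeClaimTemplates:        # one PVC per pod, so each worker keeps its own state
    - metadata:
        name: wazuh-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

With `volumeClaimTemplates`, each replica (`wazuh-worker-0`, `wazuh-worker-1`, …) gets its own `PersistentVolumeClaim`, which addresses the "each worker has its own data" concern without separate manifests.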
Hey,
I'm very new to Wazuh but have been taking a look at some of this myself with a view to deploying to k8s. Regarding (3), I've got some local changes that switch to a single worker `StatefulSet` and `ConfigMap`, which, along with the `sed` in https://github.com/wazuh/wazuh-docker/pull/261, allows this to be cleaned up and lets replicas manage and scale the worker nodes. Happy to open a PR for this if that'd be useful for others? I've deployed it myself and Wazuh seems happy.
Hello @rjmoseley, that sounds interesting. Feel free to open a PR so we can evaluate those changes.
Hi @pchristos, I'm trying to do the same: deploying Wazuh on Kubernetes with a ClusterIP service + Ingress. Could you please share the details of how you configured it?
Hello,
I've been working on deploying Wazuh on EKS using Helm. At the moment, I have an end-to-end working HA setup with Wazuh (1 master and 2 worker nodes) + ELK running on top of Kubernetes.
However, I've come across a few issues regarding its configurability. For instance:

1. I see how you use a `LoadBalancer` service to expose the Wazuh API to the world and allow it to perform TLS termination. However, it seems like a setup with a `ClusterIP` service + `Ingress` is not easy to configure. How can I disable HTTPS for the Wazuh API, so that HTTPS termination can be handled by my cloud provider's external load balancer? I've played around with `API_GENERATED_CERTS`, but that doesn't seem to do the trick. It appears that HTTPS is enabled by default in `/var/ossec/api/configuration/config.js`, meaning that `API_GENERATED_CERTS` is effectively a no-op. Do I have to edit `/var/ossec/api/configuration/preloaded_vars.conf` and re-run either or both of `install_api.sh` and `configure_api.sh`?
2. Due to the above, this `if` statement seems to always evaluate to false, since HTTPS is enabled by default and `server.crt` is always present. I believe it's taken from the `configuration-template` dir even for fresh installations.
3. Regarding the workers' `StatefulSet` - is there any particular reason you've created two `StatefulSet` definitions, one per worker pod, instead of a single manifest with 2 replicas? Is there any sort of limitation that I'm missing here?
4. The configuration of the master and worker nodes looks very similar. The only actual difference I've noticed is in the `<cluster>` block, regarding `<node_name>` and `<node_type>`. I'm just wondering whether these two configuration files are actually meant to be so similar. For instance, does it actually make sense for the `<auth>` block to be part of the worker configuration? Isn't this what dictates the behavior of the `authd` registration service? Isn't this service supposed to be exposed solely by the master node?

What I'd expect:

- To be able to tweak various settings via `ConfigMap`s and, especially, environment variables. At the moment, it's not crystal clear how to do that without hacking around.
- To be able to switch from HTTP to HTTPS and vice versa. TBH this looks more like a bug, no?
- A single `StatefulSet` manifest for worker nodes with a configurable number of `replicas`.
- A different `ossec.conf` per node type, so that responsibilities per node type are clear, unless this is not the case.

Thanks in advance!
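To illustrate point (4): per the observation above, the master/worker difference boils down to the `<cluster>` block of `ossec.conf`. A sketch of the two fragments (node names and addresses are illustrative placeholders, not taken from the repo):

```xml
<!-- worker ossec.conf fragment -->
<cluster>
  <node_name>wazuh-worker-0</node_name>
  <node_type>worker</node_type>
</cluster>

<!-- master ossec.conf fragment -->
<cluster>
  <node_name>wazuh-master</node_name>
  <node_type>master</node_type>
</cluster>
```

Everything else, including the `<auth>` block questioned above, is currently shared between the two files, which is exactly why a per-node-type `ossec.conf` would make the responsibilities clearer.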