rhnewtron opened 6 years ago
How would you authenticate from the log collector (fluentd, logstash) to Elasticsearch? Using a service account implies using the service account token, which does not work, AFAIK.
For read access this already seems to work:
sh-4.2$ es_acl get --doc=rolesmapping
...
"sg_role_prometheus" : {
  "users" : [ "system:serviceaccount:openshift-metrics:prometheus" ]
},
...
"gen_user_HASH_OF_USERNAME" : {
  "users" : [ "system:serviceaccount:NAMESPACE:default" ]
}
So it would simply mean we have to replace the default:
sh-4.2$ es_acl get --doc=roles
"gen_user_HASH_OF_USERNAME" : {
  "cluster" : [ "USER_CLUSTER_OPERATIONS" ],
  "indices" : {
    "?kibana?HASH_OF_USERNAME" : {
      "*" : [ "INDEX_KIBANA" ]
    },
    "NAMESPACE?NAMESPACE_UUID?*" : {
      "*" : [ "INDEX_PROJECT" ]
    },
    "project?NAMESPACE?NAMESPACE-UUID?*" : {
      "*" : [ "INDEX_PROJECT" ]
    }
  }
}
With:
sh-4.2$ es_acl get --doc=roles
"gen_user_HASH_OF_USERNAME" : {
  "cluster" : [ "USER_CLUSTER_OPERATIONS" ],
  "indices" : {
    "?kibana?HASH_OF_USERNAME" : {
      "*" : [ "INDEX_KIBANA", "CREATE_INDEX", "WRITE" ]
    },
    "NAMESPACE?NAMESPACE_UUID?*" : {
      "*" : [ "INDEX_PROJECT", "CREATE_INDEX", "WRITE" ]
    },
    "project?NAMESPACE?NAMESPACE-UUID?*" : {
      "*" : [ "INDEX_PROJECT", "CREATE_INDEX", "WRITE" ]
    }
  }
}
Of course, this should be granted based on the user's role within the project (edit and above, but NOT view).
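For completeness, a minimal sketch of what the matching rolesmapping entry might look like if maintained statically in sg_roles_mapping.yml (the role name is the generated one from above; the dedicated "logwriter" service account is a placeholder):

# Hypothetical sg_roles_mapping.yml fragment: map a dedicated service
# account onto the generated role so it inherits the write-enabled
# index permissions sketched above. All names are placeholders.
gen_user_HASH_OF_USERNAME:
  users:
    - "system:serviceaccount:NAMESPACE:logwriter"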
Or am I missing something?
A service principal with a token would be great, because I could go through the "router" (which is already an exposure option). Using an SSL certificate would only allow going through the Kubernetes service, meaning I would have to connect the logging namespace in the multitenant SDN with a lot of namespaces. I cannot assess the security implications of this (maybe some unsecured port?)
> Or am I missing something?
No, I think that should do it.
> A service principal with a token would be great
How would you configure fluentd and/or logstash to do that?
High level: I would probably run logstash as a container in some namespace and assign it a serviceAccountName with the proper permissions to write the logs (e.g. logwriter). The pod would then read the token from /var/run/secrets/kubernetes.io/serviceaccount/token and use that for the connection to the backend through the route (rough sketch below).
However, lots of questions remain unanswered.
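Roughly, as a minimal, untested sketch (the namespace, SA name, and image are placeholder assumptions):

# Run logstash under a dedicated service account. Kubernetes mounts the
# SA token at /var/run/secrets/kubernetes.io/serviceaccount/token
# automatically; the shipper would read it from there and present it as
# a Bearer token to the route-exposed Elasticsearch.
apiVersion: v1
kind: Pod
metadata:
  name: logstash
  namespace: shipper-namespace      # placeholder
spec:
  serviceAccountName: logwriter     # SA that the ACLs would map to a write role
  containers:
  - name: logstash
    image: docker.elastic.co/logstash/logstash:5.6.16   # example image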
@rhnewtron @richm You could configure the SA as described, because we require you to provide a token, from which we determine your name, which is then looked up in the rolesmapping. You can do this now if you extract the configs to a configmap and mount them back into the ES pod.
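For illustration, a hypothetical oc workflow for that extraction (pod, DC, and configmap names are placeholders; see the read-only mount caveat discussed further down):

# Dump the current ACL docs from the running ES pod, build a configmap,
# and mount it back into the deployment config.
oc exec $ES_POD -c elasticsearch -- es_acl get --doc=roles > sg_roles.yml
oc exec $ES_POD -c elasticsearch -- es_acl get --doc=rolesmapping > sg_roles_mapping.yml
oc create configmap es-sgconfig --from-file=sg_roles.yml --from-file=sg_roles_mapping.yml
oc set volume dc/logging-es --add --name=sgconfig --type=configmap \
  --configmap-name=es-sgconfig --mount-path=/opt/app-root/src/sgconfig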
Right - but my question is: how do I configure the fluentd elasticsearch plugin or the logstash elasticsearch plugin to do token auth? E.g., to configure the fluentd elasticsearch plugin for client cert auth, I set the client_cert and client_key parameters.
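For reference, the client cert variant looks roughly like this (an illustrative fluent-plugin-elasticsearch match block; the host and key paths are placeholders, and there is no analogous parameter for a Bearer token):

<match **>
  @type elasticsearch
  host logging-es
  port 9200
  scheme https
  ca_file /etc/fluent/keys/ca          # CA used to verify the ES cert
  client_cert /etc/fluent/keys/cert    # client cert presented to SearchGuard
  client_key /etc/fluent/keys/key
</match>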
@rhnewtron @richm My preference here, I think, is to push this into the ACL configs that Elasticsearch loads on start. That modification would be easier and more broadly configurable. This would also allow you to modify the cert authentication, assuming ES knows about your certs.
This plugin is mostly about dynamic role and rolesmapping generation. The scenario described is a static configuration, in my opinion best handled as suggested in https://github.com/fabric8io/openshift-elasticsearch-plugin/issues/149#issuecomment-406355367, which maybe we could facilitate in the run script.
@richm Right; from looking at the fluentd and logstash elasticsearch output plugins, they do not seem to support Authorization headers, so this would probably be another issue to address in the other projects :/
@jcantrill I'm a little confused. Isn't the config created dynamically by this plugin "when I first access Elasticsearch"? So if I make a static config out of this dynamic one, don't I break the semantics of OpenShift Origin?
Or does it work like this:
LiveConfig = static (/opt/app-root/src/sgconfig/sg_roles_mapping.yml) + dynamically generated part?
Still, the question is: is this "supported" by OpenShift (assuming we have the licensed OCP version)?
@richm A workaround would be to use the logstash "http output plugin" (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-http.html). It supports custom headers; however, token lifetime and rotation would still be an issue. An alternative would be a second passthrough route with manually created client certs.
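Roughly like this (a sketch only; the URL and the ES_TOKEN environment variable are placeholders, and token rotation is not handled):

output {
  http {
    # POST each event to the ES index API through the exposed route,
    # authenticating with the service account token.
    url => "https://es.apps.example.com/project.myproject.uuid.2018.07.20/com.example.logs/"
    http_method => "post"
    format => "json"
    headers => { "Authorization" => "Bearer ${ES_TOKEN}" }
  }
}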
@jcantrill I played around. It seems that https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/run.sh#L84-L88, combined with the read-only nature of configmaps, kills the idea of simply mounting the configmap. I now have to use an init container :(
I worked around the issue by using an init container that copies the structure from the configmap into an emptyDir volume, which is then mounted at /opt/app-root/src/sgconfig.
@rhnewtron How is it you are able to use an init container, given it runs before Elasticsearch starts up? I'm not sure why you can't use a configmap, given you could just extract the configs and then mount the entire dir with the configs and ACLs back into the DC.
@jcantrill It seems that the lines https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/run.sh#L84-L88 try to append the Prometheus ID to sg_roles_mapping.yml. If I provide that file via a configmap volume mount, the startup script cannot modify it, as configmap mounts are read-only by nature.
So my solution was:
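In outline (a rough sketch of the relevant pod spec fragment; the configmap and image names are placeholders):

# An init container copies the SearchGuard configs from the read-only
# configmap into an emptyDir, which the ES container then mounts
# writable at /opt/app-root/src/sgconfig, so run.sh can still modify
# sg_roles_mapping.yml in place.
spec:
  volumes:
  - name: sgconfig-source
    configMap:
      name: es-sgconfig                        # placeholder configmap name
  - name: sgconfig
    emptyDir: {}
  initContainers:
  - name: seed-sgconfig
    image: registry.access.redhat.com/rhel7    # any image with cp
    command: ["sh", "-c", "cp /source/* /target/"]
    volumeMounts:
    - name: sgconfig-source
      mountPath: /source
    - name: sgconfig
      mountPath: /target
  containers:
  - name: elasticsearch
    # ... unchanged ES container, plus:
    volumeMounts:
    - name: sgconfig
      mountPath: /opt/app-root/src/sgconfig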
This seems to work. I could then log to the ES via SSL.
It seems the ES output plugin supports custom headers (e.g. a Bearer token): https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-custom_headers
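Along these lines (host, index, and the ES_TOKEN environment variable are placeholder assumptions):

output {
  elasticsearch {
    hosts => ["https://es.apps.example.com:443"]
    ssl => true
    index => "project.myproject.uuid.%{+YYYY.MM.dd}"
    # Pass the service account token on every request:
    custom_headers => { "Authorization" => "Bearer ${ES_TOKEN}" }
  }
}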
Actually, I got this working with logstash, with both SSL and token auth. In both cases I need to modify the SearchGuard configuration. Unfortunately it is not as simple as moving the configs to a configmap and then loading them with sgadmin.sh when they change.
As the start script of Elasticsearch in OpenShift modifies sg_roles_mapping.yml in place, and configmaps are not writable, I need to work with init containers to seed an initial volume that is then mounted as sgconfig ... all very ugly.
Having the plugin generate a config with write permissions depending on the role of the user in the tenant (index) would greatly simplify this. I could then just work with token-based write access.
Note: token-based auth is much slower than SSL client cert auth (130k vs. 400k messages in 30 seconds on my setup); however, for small log volumes (e.g. audit logs) this is not a blocker, and simplicity beats performance in this case. For large volumes I would work with SSL.
I am trying to use the OpenShift-integrated EFK stack as a central logging platform for both:
From looking at the SearchGuard config, it seems that there is no way to provide write access for anything except the fluentd client (which has write access to ALL indices). It would be great if I could somehow configure (e.g. with a role) serviceAccountNames that are allowed to write to the tenant's index. This would allow two new scenarios: