Open olljanat opened 3 years ago
Hello @olljanat !
We do indeed plan to offer a solution for this, it's in discussion right now.
We (Karbon) rely on the underlying abilities of the platform to provide us the required authentication and granular authorization mechanisms.
Those capabilities weren't satisfactory until very (very) recently.
If you have an urgent use, I would suggest you investigate your 2nd route, otherwise we plan to gradually fix those issues over the coming releases (use of tokens instead of admin credentials, then restriction of access rights, and finally scoping to one or more specific storage policies/datastores).
cc @subodh01 @SunilAgrawal
Best regards,
Sylvain "shu" Huguet
Product - Karbon Clusters
Hello @olljanat
In addition to the ongoing work on permissions reduction outlined above, it is very important to take into account that there will always be some critical data exposed in k8s clusters.
This is the case for many CSI providers, but more broadly for many applications that need to talk to external components. To overcome this, it is also important to use the RBAC mechanisms of k8s in order to protect system namespace access from the users.
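For readers following along, the RBAC approach mentioned above can be sketched roughly as below. This is a minimal, hypothetical example (namespace, role, and group names are invented, not taken from Karbon): since deny is the default in Kubernetes RBAC, the key point is to bind developers only to a Role in their own namespace and to bind them to nothing in the system namespace that holds the CSI credentials.

```yaml
# Hypothetical sketch: grant developers rights only in their own namespace.
# Because Kubernetes RBAC is deny-by-default, giving them no binding in the
# CSI/system namespace means they cannot read the Secrets stored there.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team-a        # developers' own namespace (hypothetical name)
  name: developer
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "deployments", "services", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-team-a
  name: developer-binding
subjects:
- kind: Group
  name: dev-team-a             # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```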
Best Regards
> If you have an urgent use, I would suggest you investigate your 2nd route, otherwise we plan to gradually fix those issues over the coming releases (use of tokens instead of admin credentials, then restriction of access rights, and finally scoping to one or more specific storage policies/datastores).
Sounds good. Any estimate of which AOS versions/timeframes we are talking about here?
> To overcome this, it is also important to use the RBAC mechanisms of k8s in order to protect system namespace access from the users.
Yes, that is a good point to have mentioned here, in case someone else is reading this discussion.
However, at least in our environment, hardening the production k8s cluster will not be an issue, but the development cluster(s) will also be open to people who do not have any access to the production cluster and are not allowed any access to production data, which is why I need to make sure that it does not provide any backdoor for them.
> Any estimate about that which AOS versions/timeframes we are talking about here?
Unfortunately no, we do not comment publicly on any timeframes. Please reach out to your Nutanix account team if you want to schedule a call to discuss this.
> Development cluster(s) will be open also for persons who do not have any access to production cluster and are no allowed to have any access to production data
What @tuxtof was mentioning still stands in this case. We will still need a token to access our API, and we will still need to store that token somewhere, even if it's as a Secret in Kubernetes, and even if the actions it can take are limited. A (crafty) developer accessing that token could still potentially either create additional storage for purposes other than Karbon, or access all PVCs. Even if we scope it down to a single Storage Container, the token would allow access to all Volume Groups created on that Storage Container. At some point, there is always a way to circumvent any security mechanism we put in place if the developer has full access to the k8s cluster and is not restricted in accessing Secrets and certain namespaces.
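As a side note on why storing the token "as a Secret in Kubernetes" is not a protection boundary by itself: Secret values are only base64-encoded, not encrypted (a general Kubernetes fact, not specific to Karbon), so anyone whose RBAC role allows reading the Secret object recovers the credential verbatim. A minimal sketch, using a placeholder token value:

```python
import base64

# A Kubernetes Secret merely base64-encodes its values; it does not encrypt
# them. Anyone allowed to read the Secret object (e.g. a developer whose RBAC
# role is not locked down) gets the credential back verbatim.
# "prism-api-token" is a placeholder, not a real credential.
stored = base64.b64encode(b"prism-api-token")   # what ends up in the Secret
recovered = base64.b64decode(stored).decode()   # what any reader gets back
print(recovered)  # prism-api-token
```

This is why the RBAC restrictions discussed above (keeping developers out of the system namespace) matter: the encoding itself protects nothing.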
> Even if we scope it down to a single Storage Container, the token would allow access to all Volume Groups created on that Storage Container. At some point, there is always a way to circumvent any security mechanism we put in place if the developer has full access to the k8s cluster and is not restricted in accessing Secrets and certain namespaces.
Sure, but those are just things which should be clarified in the documentation. Anyone with unrestricted k8s cluster access must be allowed access to all Volume Groups which are reachable with the service accounts/tokens stored in that cluster. Otherwise they cannot have those access rights.
That is exactly how we do it nowadays on Docker Swarm with my reverse proxy. Development/prototype swarms have dedicated storage containers which contain only non-sensitive data. That way our developers are able to test exactly the same configuration that is used in production, without access to actual production data.
Additionally, we separated production and development iSCSI traffic into different VLANs using this feature: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_17:wc-service-specific-traffic-isolate-t.html so it is not possible to capture production iSCSI traffic by sniffing network traffic on the development swarms.
Any news on this one? AFAIU AOS 6.0 added some new volume group APIs to Prism Central which might help here?
Hello @olljanat, Volume Group API RBAC is planned for 2022; based on that, we will modify the CSI drivers to support it.
@yannickstruyf3
Currently, using the CSI driver with Nutanix Volumes requires full cluster admin credentials :grimacing:
That creates a huge security issue in environments where the same Nutanix cluster is used to provide Kubernetes clusters for both development and production use cases, and an even more serious issue in service provider environments.
There are two ways to fix this issue:
I have already implemented a reverse proxy which can be used like that with the Nutanix DVP (Docker Volume Plug-in). Depending on how the CSI plugin makes its API calls, my proxy might already work with it, or it might need some changes (I have not investigated this yet).
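For context, the filtering idea behind such a proxy can be sketched as below. This is a minimal sketch under stated assumptions: the path prefix and function name are hypothetical illustrations, not taken from olljanat's actual proxy or from the real Prism API. The proxy forwards only an allow-list of API paths, so the stored credential cannot be used for arbitrary admin operations even if it leaks from the cluster.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of API path prefixes the proxy would forward;
# everything else is rejected before it ever reaches Prism.
ALLOWED_PREFIXES = (
    "/api/nutanix/v3/volume_groups",   # assumed path, for illustration only
)

def is_allowed(url: str) -> bool:
    """Return True if the request path matches an allow-listed prefix."""
    return urlparse(url).path.startswith(ALLOWED_PREFIXES)

print(is_allowed("https://prism.example/api/nutanix/v3/volume_groups/42"))  # True
print(is_allowed("https://prism.example/api/nutanix/v3/users"))             # False
```

A real proxy would of course also terminate TLS and forward the filtered requests upstream; the point here is only the allow-list check that limits what the stored credential can do.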
However before I start using this CSI plugin I would like to understand that what are Nutanix plans to fixing this issue? Are you planning to offer official solution to either those options or is just matter of customers to taking care of this?