Closed iamNoah1 closed 2 years ago
Hi @iamNoah1, thanks for reporting this. We're going to try to reproduce the issue. Keep you posted :-)
Hey @iamNoah1, in order to do this:

> - Log in as a user of the tenant

which commands did you run?
I followed these steps:

```shell
# Create the AKS cluster
az group create capsule-603
az aks create --resource-group capsule-603 --name capsule-603

kubectl version --short
...
Server Version: v1.22.11

# Capsule installation
helm upgrade --install capsule clastix/capsule

CAPSULE_REPO=$(mktemp -d)
git clone https://github.com/clastix/capsule $CAPSULE_REPO
cd $CAPSULE_REPO

# Create the Tenant owner user, certificates, and kubeconfig
./hack/create-user.sh noah jesters
...

kubectl apply -f https://github.com/iamNoah1/capsule-demo/raw/master/manifest.yaml # the jesters Tenant

# List Namespaces
KUBECONFIG=noah-jesters.kubeconfig kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "noah" cannot list resource "namespaces" in API group "" at the cluster scope

# Create Namespaces
KUBECONFIG=noah-jesters.kubeconfig kubectl create ns noah-ns
namespace/noah-ns created
```

but as you can see, I wasn't able to reproduce the issue.
Also, which version of Capsule did you install?
Thanks
hm, crazy :/ ... how do I see the version of Capsule?
```shell
helm list -n capsule-system
```

says

```
NAME     NAMESPACE       REVISION  UPDATED                                STATUS    CHART          APP VERSION
capsule  capsule-system  1         2022-07-11 17:56:22.353522 +0200 CEST  deployed  capsule-0.1.8  0.1.1
```
@iamNoah1 please run:

```shell
kubectl -n capsule-system get pods -l app.kubernetes.io/instance=capsule -o=jsonpath='{.items[].spec.containers[].image}'
```

You should see something like:

```
quay.io/clastix/capsule:v0.1.1
```
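As a side note (my addition, not from the thread): the Capsule version is just the image tag, so once you have the image reference you can split off the tag with plain shell parameter expansion. The image string below is the one quoted above.

```shell
# Split an image reference into repository and tag;
# the example value is the one from this thread.
image="quay.io/clastix/capsule:v0.1.1"
echo "repository: ${image%:*}"   # quay.io/clastix/capsule
echo "tag:        ${image##*:}"  # v0.1.1
```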
@maxgio92 output is the same as yours: quay.io/clastix/capsule:v0.1.1
Thanks @iamNoah1. Could you write down every single step/command you run to reproduce the issue, so that we can compare them? :-)
```shell
terraform apply --var resource_group="multitenancy" --var aks_name="multitenant-aks"  # using https://github.com/iamNoah1/terraform-aks
echo "$(terraform output -raw kube_config)" > ./azurek8s
export KUBECONFIG="$(pwd)/azurek8s"
helm repo add clastix https://clastix.github.io/charts
helm install capsule clastix/capsule -n capsule-system --create-namespace
kubectl apply -f manifest.yaml
./create-user.sh noah jesters
export KUBECONFIG=noah-jesters.kubeconfig
kubectl get namespaces
```
@iamNoah1 could you please tell us what your expected outcome is after the last command?
> kubectl get namespaces

What are you expecting here? Since the user `noah` is the tenant owner of the `jesters` tenant, he cannot get namespaces at the cluster level, as @maxgio92 explained in his example:
```shell
# List Namespaces
KUBECONFIG=noah-jesters.kubeconfig kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "noah" cannot list resource "namespaces" in API group "" at the cluster scope
```
Capsule is designed to restrict namespace access for tenant owners, so a tenant owner can only create namespaces in his own tenant:
```shell
KUBECONFIG=noah-jesters.kubeconfig kubectl create namespace development
KUBECONFIG=noah-jesters.kubeconfig kubectl create namespace production
KUBECONFIG=noah-jesters.kubeconfig kubectl get ns
Error from server (Forbidden): namespaces is forbidden: User "noah" cannot list resource "namespaces" in API group "" at the cluster scope
```
If you want user `noah` to get only his own namespaces, you can use capsule-proxy, which is basically a reverse proxy in front of the Kubernetes API server that deals with tenant owner permissions. After configuring the capsule-proxy:
```shell
KUBECONFIG=noah-jesters.kubeconfig kubectl get ns
NAME          STATUS   AGE
production    Active   36m
development   Active   36m
```
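For completeness, a rough sketch of what "configuring the capsule-proxy" means on the client side. Everything here is illustrative: the minimal kubeconfig and the proxy address `capsule-proxy.capsule-system.svc:9001` are assumptions, not taken from this thread. The idea is that the tenant kubeconfig's `server` field points at the proxy instead of the API server, so the proxy can filter the namespace list by tenant.

```shell
# Illustrative-only kubeconfig; the real one is generated by create-user.sh.
cat > tenant-sample.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: aks
  cluster:
    server: https://aks-api.example:443
EOF

# Rewrite the server field so kubectl talks to capsule-proxy
# (the address is a placeholder for wherever the proxy is exposed):
sed 's|server: .*|server: https://capsule-proxy.capsule-system.svc:9001|' \
  tenant-sample.kubeconfig > tenant-proxy.kubeconfig
grep 'server:' tenant-proxy.kubeconfig
```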
Hope this helps
My expected outcome is that I cannot see any namespaces, but I can see every namespace
> ... but I can see every namespace

Likely you're acting as cluster admin; check the `noah-jesters.kubeconfig` file and the `kubectl` context.
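A quick way to act on that suggestion (a sketch under assumptions: the kubeconfig below is a hypothetical minimal example, not the file `create-user.sh` actually writes): the `current-context` and `user` entries tell you which identity kubectl will use, and if they name the cluster admin rather than `noah`, listing every namespace is expected.

```shell
# Hypothetical minimal kubeconfig, for illustration only:
cat > sample.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: noah-jesters
contexts:
- name: noah-jesters
  context:
    cluster: aks
    user: noah
users:
- name: noah
clusters:
- name: aks
EOF

# Inspect the identity kubectl would act as, without contacting any cluster:
grep 'current-context:' sample.kubeconfig
awk '/user:/{print $2}' sample.kubeconfig
```

With a live cluster and `KUBECONFIG` exported, `kubectl config current-context` and `kubectl config view` give the same answer.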
yeah, something is messed up. The strange thing is that everything works fine using an existing cluster. Anyhow, I assume that everything works fine with Capsule and it's just an issue on my side. Though, it could have something to do with the k8s version.
I think, @iamNoah1, as @bsctl said, you likely used the wrong kubeconfig context, acting as cluster admin instead of the tenant owner.
Anyway, if you experience this issue again do not hesitate to let us know :-)
**Bug description**

I created a new tenant, logged in with the user, and tried to get all namespaces, which was successful but shouldn't be.

**How to reproduce**

```shell
kubectl get namespaces
```

Steps to reproduce the behavior:

1. Provide the Capsule Tenant YAML definitions: https://github.com/iamNoah1/capsule-demo/blob/master/manifest.yaml
2. Provide all managed Kubernetes resources
3. Not exactly sure what to provide here.

**Expected behavior**

Not to be able to see cluster-wide namespaces.

**Logs**

If applicable, please provide logs of capsule. In a standard stand-alone installation of Capsule, you'd get this by running `kubectl -n capsule-system logs deploy/capsule-controller-manager`.

**Additional context**

- Capsule version (`capsule --version`): 0.1.1
- Helm Chart version (`helm list -n capsule-system`): 0.1.8
- Kubernetes version (`kubectl version`):