Closed — @Jeffwan closed this issue 4 years ago.
/cc @PatrickXYS @cfregly
Hi @Jeffwan, I was wondering: do we really need permission to check the `kubeflow` namespace? This is only required because the namespace we set in the notebook is `kubeflow`. What if we instead set the namespace to the one we created in the Kubeflow dashboard? Then we wouldn't need the rolebinding.

For example, I can use the logic below to find the namespace I'm currently running in, rather than hard-coding `kubeflow`:
with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:
    namespace = f.readline().strip()  # strip a trailing newline, if any
Then the service account (SA) user should already have permission to check that namespace. We then use the namespace obtained above as the default in the pipeline definition:
def mnist_pipeline(
        name="mnist-{{workflow.uid}}",
        namespace=namespace,
        step="1000",
        s3bucketexportpath=""):
And it should be good to go; I've already tested it.
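The lookup above can be wrapped in a small helper that degrades gracefully when the code runs outside a cluster (a sketch; the function name and the `default` fallback are my own additions, not part of the notebook):

```python
# Kubernetes mounts the pod's own namespace at this path via the
# service-account volume.
SA_NAMESPACE_FILE = "/var/run/secrets/kubernetes.io/serviceaccount/namespace"

def current_namespace(path=SA_NAMESPACE_FILE, default="kubeflow"):
    """Return the namespace this pod runs in, falling back to
    `default` when the file is missing (e.g. local development)."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default
```

The returned value can then be passed as the `namespace` default of `mnist_pipeline`.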
@PatrickXYS
Once multi-user pipelines are supported, this will be easier. Currently, there are two different groups of containers (the notebook server's pods and the pipeline runner's pods), and I think there are a few principles here.
with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:
    namespace = f.readline()
The namespace here is still the pod's namespace. The runner's pod lives in the `kubeflow` namespace, so it still reads `kubeflow` here. How do you get a different namespace from this?
@Jeffwan The latest PR addresses No. 1, 3, and 4. We'll put more effort into solving the remaining issues.
Let's close this issue for now. If we have extra feedback, we can revisit it.
https://github.com/aws-samples/eks-kubeflow-workshop/blob/master/notebooks/05_Kubeflow_Pipeline/05_03_Pipeline_mnist.ipynb
Reference: commands to delete resources
Otherwise, we just give the user `cluster-admin` permission, which is not recommended but works.

- Provide a Dockerfile and update the Python code to write event logs; write an artifact to enable TensorBoard.
- Use a high-level KFP Op to construct the TFJob instead of a ResourceOp.
- Consider passing the S3 MNIST training data to the pipeline instead of downloading the dataset in the code, and add instructions for replacing the S3 export bucket with the user's own.
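Instead of `cluster-admin`, a namespace-scoped RoleBinding is usually enough. A minimal sketch, assuming the notebook runs as a `default-editor` service account and a suitably scoped Role named `pipeline-runner` already exists (both names are placeholders, not taken from this thread):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: notebook-pipeline-access    # placeholder name
  namespace: kubeflow
subjects:
- kind: ServiceAccount
  name: default-editor              # placeholder: the notebook's SA
  namespace: kubeflow
roleRef:
  kind: Role
  name: pipeline-runner             # placeholder Role granting the needed verbs
  apiGroup: rbac.authorization.k8s.io
```

This grants the notebook's service account only the permissions defined in that one Role, and only within the `kubeflow` namespace.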