Shrinjay opened this issue 1 year ago (status: Open)
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
This sounds really interesting. So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched - these changes simply enable the ability for that cluster to be remote. Is that correct?
You'll also need to take into account the possibility of the user having configured the `EG_SHARED_NAMESPACE` functionality. In that case, the remote cluster may not have the `enterprise-gateway` namespace, and I would argue that we should probably raise an error if remote-k8s-cluster AND eg-shared-namespace are both enabled. The BYO namespace (using `KERNEL_NAMESPACE`) should still work, although we should probably ensure that `KERNEL_NAMESPACE` does not reflect the namespace in which EG itself resides.
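A minimal sketch of such a guard (the `remote_kubeconfig` parameter and the `EG_NAMESPACE` fallback below are illustrative assumptions; only `EG_SHARED_NAMESPACE` and `KERNEL_NAMESPACE` come from the discussion above):

```python
import os


def validate_remote_cluster_config(remote_kubeconfig):
    """Reject namespace options that don't make sense against a remote
    cluster. Sketch only: 'remote_kubeconfig' is a hypothetical option
    indicating a remote-cluster target, not an existing EG setting."""
    shared_ns = os.getenv("EG_SHARED_NAMESPACE", "false").lower() == "true"
    kernel_ns = os.getenv("KERNEL_NAMESPACE")
    # Assumed fallback name for the namespace EG itself runs in.
    eg_ns = os.getenv("EG_NAMESPACE", "enterprise-gateway")

    if remote_kubeconfig and shared_ns:
        # The remote cluster may not have the EG namespace at all.
        raise RuntimeError(
            "EG_SHARED_NAMESPACE cannot be combined with a remote "
            "Kubernetes cluster: the remote cluster may not contain "
            "the enterprise-gateway namespace."
        )
    if kernel_ns and kernel_ns == eg_ns:
        # BYO namespace is fine, but it must not be EG's own namespace.
        raise RuntimeError(
            "KERNEL_NAMESPACE must not be the namespace in which EG resides."
        )
```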
At any rate, those are implementation details that we can work out. We look forward to your pull request.
(Just a note that changes within the Process Proxies functionality will be dropped in our EG 4.0 release in favor of the kernel provisioner framework that is now part of the general Jupyter stack, so it would be great if you could participate in the Gateway Provisioners project. I'm (frantically) trying to get things in place for a release over there as we speak. If you're unable to make the applicable changes, no worries; we'll port them over at some point closer to making the switch. I just wanted to make you aware of that future transition.)
> So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched - these changes simply enable the ability for that cluster to be remote. Is that correct?
Yup! That would be exactly correct.
Thanks for reminding me about the shared namespace option! That's something I forgot to consider; I'll also look through the other k8s options to ensure we don't introduce a conflict.
Regarding the new Gateway Provisioners, thanks for making me aware of that as well. I'll take a look there for the effort to port over the new k8s client pattern with remote cluster support and let you know.
> I'll take a look there for the effort to port over the new k8s client pattern with remote cluster support and let you know.
You should find things nearly identical with the exception of name changes and the fact that there isn't a hosting application in the repo.
I need to apologize for the existing documentation (still converting from EG), tests (that are nearly zero), and lack of an installable package (hopefully soon), so bear with us.
> I need to apologize for the existing documentation (still converting from EG), tests (that are nearly zero), and lack of an installable package (hopefully soon), so bear with us.
Is there an issue or PR that tracks the effort to let EG use the Provisioners?
Yes - you opened #1208. :smile:
> So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched
It would be useful if one could also point to a specific kube context for a given kernel.
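One way per-kernel context selection could look, as a sketch: resolve the requested context from a parsed kubeconfig before building the client. The `contexts`/`current-context` fields follow the standard kubeconfig schema, but the helper itself (and the idea of a per-kernel context option) is hypothetical:

```python
def resolve_context(kubeconfig: dict, requested: str = None) -> dict:
    """Pick a context entry (cluster + user) from a parsed kubeconfig,
    falling back to 'current-context' when no per-kernel context is
    requested. 'requested' would come from a hypothetical per-kernel
    setting."""
    name = requested or kubeconfig.get("current-context")
    for ctx in kubeconfig.get("contexts", []):
        if ctx.get("name") == name:
            return ctx["context"]
    raise ValueError(f"context {name!r} not found in kubeconfig")
```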
### Problem

### Proposed Solution
Therefore, the code changes are as follows:
This also has the added benefit of replacing the use of static clients with an atomic export, so configurations stay consistent across callers.
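The static-client replacement could be sketched as a single cached factory that every caller goes through, so the same (kubeconfig, context) pair always yields the same configured client. The stand-in return object keeps the sketch self-contained; the real body would presumably call `kubernetes.config.new_client_from_config`:

```python
from functools import lru_cache
from types import SimpleNamespace


@lru_cache(maxsize=None)
def get_api_client(kubeconfig_path=None, context=None):
    """Single construction point for Kubernetes API clients: callers
    asking for the same (kubeconfig, context) pair receive the same,
    consistently configured client object, instead of importing a
    module-level static client."""
    # Real implementation (sketch uses a stand-in so it stays runnable):
    #   api = kubernetes.config.new_client_from_config(
    #       config_file=kubeconfig_path, context=context)
    #   return kubernetes.client.CoreV1Api(api_client=api)
    return SimpleNamespace(kubeconfig=kubeconfig_path, context=context)
```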
Willing to start working on this if there's no opposition!