Closed georghildebrand closed 4 years ago
Hi @georghildebrand, thanks for taking a look at this project.
It wasn't meant to be a generic cluster switcher. The basic idea was to integrate Databricks clusters as kernels into a local JupyterLab. One can create a kernelspec for any remote Databricks cluster and then switch between it and any other available kernel. A typical use case is to work locally (e.g. in an environment built with the help of this project to mirror the remote cluster) and, whenever necessary, switch execution to the remote cluster (e.g. for hyperparameter tuning). If you have any other kernelspec that allows execution on a remote environment (e.g. k8s), then you can also switch between it and Databricks.
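For illustration only, here is a rough sketch of how such a kernelspec could be registered by hand — a `kernel.json` whose `argv` starts the kernel on a remote host over SSH. The host name, kernel name, and paths below are assumptions, not what this project actually generates (the real setup also handles SSH tunneling of the kernel's ZMQ ports):

```python
import json
import os

# Hypothetical sketch: expose a remote Databricks driver as a local Jupyter kernel.
# "my-databricks-driver" and "ssh_my_cluster" are made-up illustrative names.
spec = {
    # Launch ipykernel on the remote host via SSH. A real setup must also
    # forward the ports listed in {connection_file} back to the local machine.
    "argv": ["ssh", "my-databricks-driver", "python", "-m",
             "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "Databricks: my-cluster (SSH)",
    "language": "python",
}

# Writing the spec into a per-user Jupyter kernels directory makes it show up
# in JupyterLab's kernel switcher next to the local kernels.
kernel_dir = os.path.expanduser("~/.local/share/jupyter/kernels/ssh_my_cluster")
os.makedirs(kernel_dir, exist_ok=True)
with open(os.path.join(kernel_dir, "kernel.json"), "w") as f:
    json.dump(spec, f, indent=2)
```

Once such a spec exists per cluster, "switching to the remote cluster" is just picking a different kernel in JupyterLab.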
Maybe something similar could have been achieved with Jupyter Kernel Gateway (I don't have experience with it); however, from a network perspective, SSH was pretty much the only option to access a Databricks cluster from the outside. So I gave a remote kernel via SSH a try, and it worked quite well.
@bernhard-42, thanks for clarifying that. I am currently looking into data gateway.
I guess this is done, so I'm closing it.
Hi all, thanks for this nice effort and great work! However, I'm missing the ability to switch the connectivity around (e.g. connecting from my k8s cluster to the Databricks cluster).
So, how is this different from Jupyter Kernel Gateway or Jupyter Enterprise Gateway?