Closed aculich closed 6 years ago
Thanks @aculich. For those that wish to help by submitting a PR, please limit changes that are vendor/cloud provider specific to its own section within https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/doc/source/create-k8s-cluster.rst file. We would like to keep the remainder of the documentation vendor agnostic. Thanks. Please let us know if you have questions.
@choldgraf and I tested Heptio based on pointers from our AWS rep, and @yuvipanda mentioned kops as a direction the open source community is moving. However, kops relies on having a DNS name already registered for its discovery process, which can get in the way of quick testing on an IP address.
note that we also had to disable RBAC (which is not desirable in the long-term) with our Heptio install: https://kubernetes.io/docs/admin/authorization/rbac/#permissive-rbac-permissions
There is more to do.... and we'll ask for input from folks at the UCCSC AWS User Group meeting today.
Nice to see work happening with Heptio, @aculich and folks.
@rdodev, do you know who would be a good contact if we have additional questions? :sunny:
Hey @willingc happy to help and can be point person with any questions or issues relating to our AWS quickstart.
Thanks @rdodev. Good stuff happening at Heptio :smile:
FWIW I really need to get something like this working on AWS within a week or so...otherwise we'll need to switch to something else for the bootcamp in early September. @aculich do you have time to give it another go with me this week?
@rdodev would you have a chance to do a live-chat with @aculich and me as we try to get k8s running on AWS? I'm helping teach a bootcamp to a buncha neuroscientists in early September and was hoping to run a k8s-based JupyterHub on AWS!
@choldgraf I got the Heptio tutorial https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/ up and running the other day with no issues. I haven't had time to try with JupyterHub, but kubectl and helm were working. Heptio's Friday podcasts on YouTube are really good too. The first one basically walks you through the tutorial install.
Huh - that is the same one Aaron and I were using and we ran into a buncha problems in the end (that I of course don't remember now). I'll give it another shot soon though. Been wrestling with binder DNS records all morning :-)
FYI. I used the new VM option FWIW.
Great to see things are working as expected @willingc one thing worth highlighting is the fact that AWS QS clusters are not "production-grade" and are only meant for testing/staging. Would be glad to help productionize (sic) your environment if and when you folks are ready.
I've got things running up to the point of the helm install. I followed the heptio guide and got my kubernetes machines running. Helm + kubectl are also installed. Here's the error that I'm getting:
```
helm install jupyterhub/jupyterhub --version=v0.4 --name=kube --namespace=kube -f config.yaml
Error: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "kube". (get namespaces kube)
```
```
helm version
Client: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.5.1", GitCommit:"7cf31e8d9a026287041bae077b09165be247ae66", GitTreeState:"clean"}
```
Any ideas?
You should try a helm init again with the service account instructions in https://github.com/jupyterhub/zero-to-jupyterhub-k8s/pull/124
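For later readers: from memory, the gist of those instructions is to create a service account for Tiller, bind it to cluster-admin, and re-init Helm with it. The commands below are a sketch; the PR itself has the authoritative steps.

```shell
# Create a dedicated service account for Tiller (Helm's server side)
kubectl --namespace kube-system create serviceaccount tiller

# Grant it cluster-admin so it can install charts across namespaces
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Re-initialize Helm to run Tiller under that service account
helm init --service-account tiller --upgrade
```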
Oh you mean from that PR that I created and have already forgotten that I created? whoops ;-)
that fixes the namespace error...now helm is hanging on install:
```
helm install jupyterhub/jupyterhub --version=v0.4 --name=kube --namespace=kube -f config.yaml --debug
[debug] Created tunnel using local port: '61697'
[debug] SERVER: "localhost:61697"
[debug] Original chart version: "v0.4"
[debug] Fetched jupyterhub/jupyterhub to /home/choldgraf/.helm/cache/archive/jupyterhub-v0.4.0+fb6fc47.tgz
[debug] CHART PATH: /home/choldgraf/.helm/cache/archive/jupyterhub-v0.4.0+fb6fc47.tgz
```
been stuck on that last one for like 10 minutes, ended with:

```
Error: timed out waiting for the condition
```
UPDATE: I got it working by running this command: https://kubernetes.io/docs/admin/authorization/rbac/#permissive-rbac-permissions
which @yuvipanda mentions makes the cluster insecure. I think there's a better solution coming soon but just putting this here for reference
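For reference, the permissive-RBAC workaround on that page boils down to a single clusterrolebinding (copied from memory of the linked docs; check the page for the current form). As noted, this effectively turns off RBAC and is not safe for production:

```shell
# Grant cluster-admin to all service accounts (insecure; testing only)
kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
```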
OK I think I am close. Got jupyterhub deployed and everything with one snag:
It's not generating a public-facing IP address:
```
kubectl --namespace=kube get svc
NAME           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
hub            10.109.128.19   <none>          8081/TCP       3m
proxy-api      10.96.110.230   <none>          8001/TCP       3m
proxy-public   10.100.36.195   a72d589697ecd...   80:31656/TCP   3m
```
I'd assume that EXTERNAL-IP would have a proper IP address. I wonder if this is something about how my AWS instance is set up? Do I need to configure something special to allow public access?
The address under external IP is a valid dns name you can use. If it is cut off, try doing a describe svc proxy-public with kubectl to copy the full url.
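Concretely, something like this prints the full hostname (look for the `LoadBalancer Ingress` line in the output):

```shell
# Describe the public proxy service to see the un-truncated ELB hostname
kubectl --namespace=kube describe svc proxy-public
```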
boosh! a72d589697ecd11e7b8e202ffae2b2ec-945672095.us-west-2.elb.amazonaws.com
getting `PersistentVolumeClaim is not bound` errors... I think there's a fix for that in the guide IIRC
What is the output of `kubectl get storageclass -o yaml`?
```yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
So @yuvipanda and I chatted and it seems like this could be an issue for AWS. We need users to be able to have their own disks and it looks like this isn't something that comes by default.
@willingc when you got this up and running did you figure out a way to allow for people to have disks in their jupyterhub instance? @rdodev any thoughts on how one might enable this w/ the current setup?
@choldgraf I guess I'm not fully abreast what the use case architecture is for jupyterhub. Is it similar to tmpnb.org? If you have literature or diagrams would be greatly helpful.
hmmm, well there's lotsa docs describing JupyterHub and the tools it utilizes here:
https://zero-to-jupyterhub.readthedocs.io/en/latest/
As an example, a common use-case is a classroom setting. A teacher puts together a Docker image that contains all the requirements/dependencies/code/data etc needed for the class, and that image is served to students via JupyterHub. When students log in, kubernetes spins up a pod for them and attaches it to a persistent disk that contains the student's files (so that they can modify their notebooks and those changes will persist in time). It sounds like we're having trouble with the persistent-disk-attaching part.
@choldgraf great, thanks for the info. Let me look into it.
@choldgraf are the manifest files you've used in the master branch of the repo?
which repo? at this point I'm not actually working from any repo. just following the instructions post-kubernetes-install from here: https://zero-to-jupyterhub.readthedocs.io/en/latest/
(also just FYI I think that @yuvipanda will be of more help than I here, he's a lot better at debugging kube stuff)
@choldgraf yeah, would like to see the manifest files and how y'all are provisioning pods, volumes, etc. It's "blind" debugging right now :)
ah - I think this might be the best place to look actually:
https://github.com/jupyterhub/helm-chart/tree/master/jupyterhub
this is a repo with helm-charts to deploy a few tools in the jupyterhub ecosystem. That's probably what you wanted :-)
here's the singleuser bits for jupyterhub:
https://github.com/jupyterhub/helm-chart/blob/master/jupyterhub/values.yaml#L67
@choldgraf after a cursory look through the manifests seems this should be working. I'll spin up a cluster and try to replicate it -- might be a while though.
no problem - thanks for your help on a saturday morning :-)
@choldgraf sorry to bother you, but what version of kubernetes is the cluster running right now? 1.7+?
No worries at all!
```
kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3", GitCommit:"2c2fe6e8278a5db2d15a013987b53968c743f2a1", GitTreeState:"clean", BuildDate:"2017-08-03T07:00:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
```
It looks like the cluster he spun up didn't have a default StorageClass (or any storage classes). The JupyterHub setup from that helm chart assumes there is a default StorageClass, and that seems to be the current failure.
Also as Chris said - thank you so much for helping us out!
@yuvipanda I think you're spot on. Good catch. PVs will fail to provision because the PVC doesn't have a storageclass declared. https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
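To illustrate the failure mode: a claim shaped roughly like the ones the chart generates (the name here is made up) leaves `storageClassName` unset, so it can only bind if the cluster has a StorageClass marked as default. With no storage classes at all, the PVC stays Pending forever:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-someuser   # illustrative name, not the chart's actual naming
spec:
  # No storageClassName set: Kubernetes falls back to the cluster's
  # default StorageClass. If none is marked default, nothing provisions
  # a volume and the claim never binds.
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```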
Yup, exactly! That was an intentional decision I made when making the Helm Chart, since I figured most clusters should have a default provisioner. It let me keep cloud specific code off the chart, which was great!
@choldgraf consider this example for setting a storage class and give it a try: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#aws
@rdodev does the heptio installer not set up a storage class by default? But since it sets up the AWS Cloud Provider, us creating a storageclass will be good enough?
@choldgraf can you try:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

Put this into a file and do `kubectl apply -f <filename>`.
@yuvipanda that is correct AWS QS clusters do not come with a default StorageClass
@rdodev ah I see. is that an explicit decision that's unlikely to change in the future? Curious what the reasoning is!
@yuvipanda flexibility. We already have a few "opinions" codified there. I think the default storageclass was omitted to allow folks to expressly declare a storageclass; however, if you can make a good case why this should be the case, we're always happy to hear it: https://github.com/heptio/aws-quickstart
(lemme just jump in here and say both you guys are awesome, thanks for helping out....we are getting close!)
I think we got it working!!!
btw: @rdodev do you know if there's a non-PDF version of that guide somewhere? It would make it easier for me to link to specific sections
@choldgraf excellent. You mean like the README here? https://github.com/heptio/aws-quickstart
@rdodev awesome!
My expectation on default storage classes mostly comes from https://kubernetes.io/docs/concepts/storage/persistent-volumes/#writing-portable-configuration, especially:
In the future, we expect most clusters to have DefaultStorageClass enabled, and to have some form of storage available. However, there may not be any storage class names which work on all clusters, so continue to not set one by default. At some point, the alpha annotation will cease to have meaning, but the unset storageClass field on the PVC will have the desired effect.
All the charts in the github.com/kubernetes/charts repo expect you to have a default storage class, for example (overrideable if needed), and I think that's a great practice to provide an experience that does the right thing in most cases by default and allows tweaking.
I can write up a more thought out issue in the quickstart repo later if that'd be helpful!
@yuvipanda By all means, drop an issue and we'll consider it.
@rdodev done :) Thank you for your help here! And thanks @willingc for the connect!
See #129 for instructions I've added to z2jh...comments welcome!
If you're interested in support for this software on AWS, Jetstream, or other cloud providers, please let us know here... or even better, send us a Pull Request with your contributions to getting the code working on your desired cloud provider!
We so far have heard interest in supporting Jetstream using the OpenStack Magnum API, as well as using kubeadm.
We also have heard interest in supporting AWS. Here are some links provided to us by our AWS reps:
https://kubernetes.io/docs/getting-started-guides/aws/
https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/