metaparticle-io / package

Metaparticle/Package: Language Fluent Containerization and Deployment in Java, .NET and JavaScript (and more coming soon)
https://metaparticle.io
MIT License

Unable to access "Sharding" example when deployed with the runner #34

Closed DazWilkin closed 6 years ago

DazWilkin commented 6 years ago

Using JavaScript|Node.js

The "Replicated" example works for me both locally and deployed to a GKE cluster.

The "Sharding" example does not (appear to) work correctly when deployed to a GKE cluster.

I can access it locally, running as a single Docker container.

Each pod appears to npm start correctly:

sharding-0:sharding-0 
sharding-0:sharding-0 > sharding@0.0.1 start /sharding
sharding-0:sharding-0 > node ./index.js
sharding-0:sharding-0 
sharding-0:sharding-0 server up on 8080
sharding-1:sharding-0 
sharding-1:sharding-0 > sharding@0.0.1 start /sharding
sharding-1:sharding-0 > node ./index.js
sharding-1:sharding-0 
sharding-1:sharding-0 server up on 8080
sharding-2:sharding-0 
sharding-2:sharding-0 > sharding@0.0.1 start /sharding
sharding-2:sharding-0 > node ./index.js
sharding-2:sharding-0 
sharding-2:sharding-0 server up on 8080

From within the cluster, I can access the StatefulSet pods:

curl sharding-0.sharding:8080
Hello Henry: hostname: sharding-0

But no LoadBalancer is created, as there was with the "Replicated" example, and I'm unable to access the Service, ReplicaSet, or pods representing the router from within the cluster. I tried port-forwarding to one of the sharding-sharder (ReplicaSet) pods, but it does not respond on the port:

kubectl port-forward sharding-sharder-56dc4995db-hl89w 8080:8080
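
Connection refused on a port-forward usually means the container isn't listening on that port at all; the pod's own logs are the first thing to check:

kubectl logs sharding-sharder-56dc4995db-hl89w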

Flummoxed.

kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
sharding-sharder   3         3         3            3           2m

kubectl get services
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
sharding           ClusterIP   None            <none>        8080/TCP   2m
sharding-sharder   ClusterIP   10.43.255.169   <none>        8080/TCP   2m
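
Note: the sharding Service is headless (CLUSTER-IP None), which is what gives each StatefulSet pod its stable sharding-N.sharding DNS name. From any pod with nslookup available (busybox works), the per-pod record can be confirmed:

nslookup sharding-0.sharding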

kubectl get rs
NAME                          DESIRED   CURRENT   READY     AGE
sharding-sharder-56dc4995db   3         3         3         2m

kubectl get statefulsets
NAME       DESIRED   CURRENT   AGE
sharding   3         3         3m

kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
sharding-0                          1/1       Running   0          4m
sharding-1                          1/1       Running   0          4m
sharding-2                          1/1       Running   0          4m
sharding-sharder-56dc4995db-2hdzb   1/1       Running   0          4m
sharding-sharder-56dc4995db-6dkpw   1/1       Running   0          4m
sharding-sharder-56dc4995db-kbc8h   1/1       Running   0          4m
brendandburns commented 6 years ago

@DazWilkin I think that this is an instance of:

https://github.com/metaparticle-io/metaparticle-ast/issues/1

To test this, try:

kubectl run busybox-sharding --image=busybox --rm -it

That should give you a shell session inside your cluster; then you can:

wget -O- -q http://sharding-sharder:8080/some/path/here

And that should hit the sharding service.

Basically, the bug is that the metaparticle compiler never sets the service type to LoadBalancer, so no external load balancer gets created.

I (or someone) need to fix it in the compiler here: https://github.com/metaparticle-io/metaparticle-ast/blob/master/compiler/kubernetes-compiler.go#L338

to add:

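    // Only create a cloud load balancer when the user marked the
    // service public ('svc' being the Service object the compiler emits):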
    if public {
        svc.Spec.Type = "LoadBalancer"
    }

Just as is already done here:

https://github.com/metaparticle-io/metaparticle-ast/blob/master/compiler/kubernetes-compiler.go#L338

DazWilkin commented 6 years ago

Thanks @brendandburns ... I think the issues aren't exactly the same.

I am able to access the StatefulSet pods from the cluster:

curl sharding-0.sharding:8080
Hello Henry: hostname: sharding-0

And sharding-sharder exists, but it appears to error with:

ERROR: logging before flag.Parse: I1211 15:20:56.423433       6 main.go:106] Sharder starting, spreading load to [http://sharding-0.sharding:8080 http://sharding-1.sharding:8080 http://sharding-2.sharding:8080]

This is how I identified the StatefulSet ordinals and tried accessing them directly. It's curious that this configuration pointing to the StatefulSet pods is correct and works, but the brendanburns/sharder pods don't appear able to reach it. Perhaps I should try a non-GKE cluster?

kubectl get service/sharding-sharder
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
sharding-sharder   ClusterIP   10.43.254.182   <none>        8080/TCP   38m

and, using radial/busyboxplus:curl:

curl http://sharding-sharder:8080/some/path/here
curl: (7) Failed to connect to sharding-sharder port 8080: Connection refused

curl http://sharding-sharder:8080/user/dazwilkin
curl: (7) Failed to connect to sharding-sharder port 8080: Connection refused

curl sharding-0.sharding:8080
Hello Henry: hostname: sharding-0

curl sharding-1.sharding:8080
Hello Henry: hostname: sharding-1

curl sharding-2.sharding:8080
Hello Henry: hostname: sharding-2
brendandburns commented 6 years ago

That error isn't really an error; it's just a glog vs. golang log package issue (glog prints that "logging before flag.Parse" prefix whenever something logs before flag.Parse() has run)...

Can you dump the sharding-sharder Service and Deployment YAMLs and put them in this issue?

Thanks --brendan



DazWilkin commented 6 years ago

Sure. Thanks for the continued support!

kubectl get service/sharding-sharder --output=yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-12-11T15:03:44Z
  name: sharding-sharder
  namespace: default
  resourceVersion: "432794"
  selfLink: /api/v1/namespaces/default/services/sharding-sharder
  uid: 7b738e51-de84-11e7-9c82-42010a8a001e
spec:
  clusterIP: 10.43.254.182
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: sharding-sharder
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
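
The type: ClusterIP and the empty status.loadBalancer confirm that no external load balancer was provisioned. As a stopgap until the compiler fix lands, the Service type could be patched by hand (metaparticle would overwrite this on a redeploy):

kubectl patch service sharding-sharder \
  --patch '{"spec": {"type": "LoadBalancer"}}'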

and: kubectl get deployment/sharding-sharder --output=yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-12-11T15:03:44Z
  generation: 1
  labels:
    app: sharding-sharder
  name: sharding-sharder
  namespace: default
  resourceVersion: "453662"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/sharding-sharder
  uid: 7b611d15-de84-11e7-9c82-42010a8a001e
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sharding-sharder
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sharding-sharder
    spec:
      containers:
      - env:
        - name: SHARD_ADDRESSES
          value: http://sharding-0.sharding:8080,http://sharding-1.sharding:8080,http://sharding-2.sharding:8080
        image: brendanburns/sharder
        imagePullPolicy: Always
        name: sharder
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2017-12-11T15:21:18Z
    lastUpdateTime: 2017-12-11T15:21:18Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
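
The Service selector (app: sharding-sharder) does match the pod template labels, and SHARD_ADDRESSES matches the per-pod DNS names that work above, so the wiring looks correct. For completeness, the shard addresses handed to the sharder can be read back from the pod spec:

kubectl get pods --selector=app=sharding-sharder \
  --output=jsonpath='{.items[0].spec.containers[0].env[0].value}'
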
brendandburns commented 6 years ago

OK, I think I've run this down; the fix is here:

https://github.com/metaparticle-io/metaparticle-ast/pull/6

I'll get new binaries up soon.

DazWilkin commented 6 years ago

Outstanding. Thank you. Looking forward to trying it out.

brendandburns commented 6 years ago

Ok, should be fixed. You will need to download/install a new version of the mp-compiler binary from:

https://github.com/metaparticle-io/metaparticle-ast/releases

DazWilkin commented 6 years ago

Works! Thanks.

kubectl get services
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
kubernetes                     ClusterIP      10.43.240.1     <none>          443/TCP          11h
metaparticle-example           ClusterIP      None            <none>          8080/TCP         10m
metaparticle-example-sharder   LoadBalancer   10.43.242.134   35.203.132.68   8080:30581/TCP   10m

and:

NETWORKLB=$(kubectl get services/metaparticle-example-sharder \
--output=jsonpath='{.status.loadBalancer.ingress[0].ip}')

for t in {1..100}
do
  NAME=$(cat /dev/urandom | tr -dc 'a-zA-Z' | fold -w 10 | head -n 1)
  curl --silent http://${NETWORKLB}:8080/users/${NAME}/
done \
| sort \
| uniq -c
     23 Hello Henry: hostname: metaparticle-example-0
     15 Hello Henry: hostname: metaparticle-example-1
     11 Hello Henry: hostname: metaparticle-example-2
     16 Hello Henry: hostname: metaparticle-example-3
     15 Hello Henry: hostname: metaparticle-example-4
     20 Hello Henry: hostname: metaparticle-example-5
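
If the sharder routes by hashing the request path (an assumption, but consistent with the deterministic, uneven spread above), then repeating a single path (here a hypothetical user "alice") should always land on the same shard:

for t in {1..10}
do
  curl --silent http://${NETWORKLB}:8080/users/alice/
done | sort | uniq -c
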
brendandburns commented 6 years ago

Closing since this was fixed.