The code in this repository is experimental and has been provided for reference purposes only. It is not actively maintained and has been archived.
Many folks have already migrated to Federation 2. If you're starting something new, please see the latest Federation docs!
This is the GraphOps Repo for the apollographql/supergraph-demo Source Repo.
Contents:
Large-scale graph operators use Kubernetes to run their Graph Router and Subgraph Services, with continuous app and service delivery.
Kubernetes provides a mature control plane for deploying and operating your graph using container images like those produced by the supergraph-demo Source Repo.
This repo follows the Declarative GitOps CD for Kubernetes Best Practices:

- Source Repo
  - provides image tag versions via Bump image versions PRs
  - ConfigMap for declarative k8s config management
  - Bump image versions PRs to propagate image version bumps to this repo.
- Graph Registry
  - provides the supergraph schema via Bump supergraph schema PRs
  - Apollo Uplink - that the Gateway can poll for live updates (default).
  - Apollo Registry - for retrieval via rover supergraph fetch.
  - Apollo Build Webhook - for triggering custom CD with the composed supergraph schema.
  - Bump supergraph schema PRs are created by the supergraph-build-webhook.yml workflow.
- GraphOps Repo (this repo) - declarative graph config for Kubernetes for GitOps
  - dev, stage, and prod environments
  - promotion: dev -> stage -> prod
    - make promote-dev-stage
    - make promote-stage-prod
  - kustomize for k8s-native config management
Config data flows from the following sources:

- Source Repo:
  - a Bump image versions PR is opened on the GraphOps Repo
  - the PR bumps the docker image versions in the dev environment (auto-merge).
- Graph Registry:
  - a Bump supergraph schema PR is opened on the GraphOps Repo
  - rover supergraph fetch is used to retrieve the supergraph schema from the Apollo Registry.
- GraphOps Repo:
  - holds the definitive desired state for each environment.

Continuous deployment of config data flows from the GraphOps Repo into the target k8s cluster:

- kustomize is used to generate parameterized config resources for each environment (see the kustomization.yaml sketch below):
  - configMapGenerator for the supergraph.graphql schema
  - images with tag version bumps
- Progressive delivery controllers like Argo Rollouts or Flagger may also be used:
  - BlueGreen and Canary deployment strategies
  - Rollback via git commit & GitOps, or progressive delivery controller rollback.
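For example, an environment overlay's kustomization.yaml might look roughly like the following sketch (the generator name, resource path, and image tag are illustrative, not the exact contents of router/dev):

```yaml
# Illustrative sketch of an environment overlay (not the exact router/dev/kustomization.yaml)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../base

# configMapGenerator adds a content-hash suffix to the ConfigMap name,
# so a new supergraph.graphql rolls out a new ConfigMap and Deployment revision.
configMapGenerator:
  - name: supergraph
    files:
      - supergraph.graphql

# image tag bumps injected by the Bump image versions PRs
images:
  - name: prasek/supergraph-router
    newTag: 1.1.1
```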
New Gateway docker image versions are published as source changes are pushed to the main branch of the supergraph-demo repo.
This is done by the release.yml workflow, which does an incremental matrix build and pushes new docker images to DockerHub, and then opens a Bump image versions
PR in this repo that uses kustomize edit set image
to inject the new image version tags into the kustomization.yaml for each environment.
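Inside that workflow, the version bump step might look roughly like this (a sketch; the step name and version variable are illustrative, not the actual release.yml):

```yaml
# Illustrative workflow step (not the actual release.yml)
- name: Bump router image version in the dev overlay
  run: |
    cd router/dev
    kustomize edit set image prasek/supergraph-router:${{ env.ROUTER_VERSION }}   # assumed env var
```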
Note: this workflow can easily be adapted for single-repo-per-package scenarios, where each package repo separately publishes its own docker images and issues separate version bump PRs to this GraphOps Repo.
Detecting changes to the supergraph built via Managed Federation and rover subgraph publish:

- rover supergraph fetch - to poll the Registry
- a Bump supergraph schema PR with auto-merge enabled is opened when changes are detected
- this updates the GraphOps Repo with the new version from Apollo Studio
- a new Gateway Deployment and ConfigMap are generated using kustomize from the new supergraph.graphql when the Bump supergraph schema PR is merged

Using the Apollo Build Webhook for custom CD (see the workflow sketch below):

- register the webhook in Apollo Studio in your graph settings
- adapt the webhook to a GitHub repository_dispatch POST request
  - using a repo scoped personal access token (PAT)
- the repository_dispatch event triggers a GitHub workflow
  - both repository_dispatch and scheduled, to catch any lost webhooks
- the GitHub workflow automatically creates a PR with auto-merge enabled
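A minimal sketch of such a workflow's triggers and fetch step, assuming a repository_dispatch event type of apollo-schema-change (the event type, schedule, and steps are illustrative, not the actual supergraph-build-webhook.yml):

```yaml
# Illustrative sketch (not the actual supergraph-build-webhook.yml)
name: supergraph-build-webhook
on:
  repository_dispatch:
    types: [apollo-schema-change]   # assumed event type sent by the adapted webhook
  schedule:
    - cron: '*/15 * * * *'          # periodic poll to catch any lost webhooks
jobs:
  bump-supergraph-schema:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Fetch the supergraph schema from the Apollo Registry
        # assumes rover is installed on the runner and APOLLO_GRAPH_REF/APOLLO_KEY secrets exist
        run: rover supergraph fetch ${{ secrets.APOLLO_GRAPH_REF }} > supergraph.graphql
        env:
          APOLLO_KEY: ${{ secrets.APOLLO_KEY }}
      # ...then open a Bump supergraph schema PR with auto-merge enabled...
```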
You'll need:
then run:
make demo
which runs:
make k8s-up-dev
which creates:

- a Deployment configured to use a supergraph ConfigMap
- a Service and Ingress

and applies the following:
kubectl apply -k infra/dev
kubectl apply -k subgraphs/dev
kubectl apply -k router/dev
using router/base/router.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: router
  name: router-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: router
  template:
    metadata:
      labels:
        app: router
    spec:
      containers:
      - env:
        - name: APOLLO_SCHEMA_CONFIG_EMBEDDED
          value: "true"
        image: prasek/supergraph-router:1.1.1
        name: router
        ports:
        - containerPort: 4000
        volumeMounts:
        - mountPath: /etc/config
          name: supergraph-volume
      volumes:
      - configMap:
          name: supergraph-c4mh62bddt
        name: supergraph-volume
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: supergraph-c4mh62bddt
data:
  supergraph.graphql: |
    schema
      @core(feature: "https://specs.apollo.dev/core/v0.1"),
      @core(feature: "https://specs.apollo.dev/join/v0.1")
    {
      query: Query
    }
    directive @core(feature: String!) repeatable on SCHEMA
    directive @join__field(graph: join__Graph, requires: join__FieldSet, provides: join__FieldSet) on FIELD_DEFINITION
    directive @join__type(graph: join__Graph!, key: join__FieldSet) repeatable on OBJECT | INTERFACE
    directive @join__owner(graph: join__Graph!) on OBJECT | INTERFACE
    directive @join__graph(name: String!, url: String!) on ENUM_VALUE
    type DeliveryEstimates {
      estimatedDelivery: String
      fastestDelivery: String
    }
    scalar join__FieldSet
    enum join__Graph {
      INVENTORY @join__graph(name: "inventory" url: "http://inventory:4000/graphql")
      PRODUCTS @join__graph(name: "products" url: "http://products:4000/graphql")
      USERS @join__graph(name: "users" url: "https://users:4000/graphql")
    }
    type Product
      @join__owner(graph: PRODUCTS)
      @join__type(graph: PRODUCTS, key: "id")
      @join__type(graph: PRODUCTS, key: "sku package")
      @join__type(graph: PRODUCTS, key: "sku variation{id}")
      @join__type(graph: INVENTORY, key: "id")
    {
      id: ID! @join__field(graph: PRODUCTS)
      sku: String @join__field(graph: PRODUCTS)
      package: String @join__field(graph: PRODUCTS)
      variation: ProductVariation @join__field(graph: PRODUCTS)
      dimensions: ProductDimension @join__field(graph: PRODUCTS)
      createdBy: User @join__field(graph: PRODUCTS, provides: "totalProductsCreated")
      delivery(zip: String): DeliveryEstimates @join__field(graph: INVENTORY, requires: "dimensions{size weight}")
    }
    type ProductDimension {
      size: String
      weight: Float
    }
    type ProductVariation {
      id: ID!
    }
    type Query {
      allProducts: [Product] @join__field(graph: PRODUCTS)
      product(id: ID!): Product @join__field(graph: PRODUCTS)
    }
    type User
      @join__owner(graph: USERS)
      @join__type(graph: USERS, key: "email")
      @join__type(graph: PRODUCTS, key: "email")
    {
      email: ID! @join__field(graph: USERS)
      name: String @join__field(graph: USERS)
      totalProductsCreated: Int @join__field(graph: USERS)
    }
---
apiVersion: v1
kind: Service
metadata:
  name: router-service
spec:
  ports:
  - port: 4000
    protocol: TCP
    targetPort: 4000
  selector:
    app: router
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: router-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: router-service
            port:
              number: 4000
        path: /
        pathType: Prefix
and 3 subgraph services defined in subgraphs/base/subgraphs.yaml (see the sketch below).
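The subgraph manifests aren't reproduced here, but each subgraph follows the same Deployment + Service pattern as the router; a minimal sketch for one subgraph (the image name and tag are illustrative, not the exact contents of subgraphs/base/subgraphs.yaml):

```yaml
# Illustrative sketch of one subgraph from subgraphs/base/subgraphs.yaml (not the exact file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products
  labels:
    app: products
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products
  template:
    metadata:
      labels:
        app: products
    spec:
      containers:
      - name: products
        image: prasek/subgraph-products:1.1.1   # illustrative image/tag
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: products
spec:
  selector:
    app: products
  ports:
  - port: 4000
    targetPort: 4000
```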
make demo
then runs the following in a loop until the query succeeds or a 2 min timeout is reached:
kubectl get all
make k8s-query
which shows the following:
NAME READY STATUS RESTARTS AGE
pod/inventory-65494cbf8f-bhtft 1/1 Running 0 59s
pod/products-6d75ff449c-9sdnd 1/1 Running 0 59s
pod/router-deployment-84cbc9f689-8fcnf 1/1 Running 0 20s
pod/users-d85ccf5d9-cgn4k 1/1 Running 0 59s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/inventory ClusterIP 10.96.108.120 <none> 4000/TCP 59s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 96s
service/products ClusterIP 10.96.65.206 <none> 4000/TCP 59s
service/router-service ClusterIP 10.96.178.206 <none> 4000/TCP 20s
service/users ClusterIP 10.96.98.53 <none> 4000/TCP 59s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/inventory 1/1 1 1 59s
deployment.apps/products 1/1 1 1 59s
deployment.apps/router-deployment 1/1 1 1 20s
deployment.apps/users 1/1 1 1 59s
NAME DESIRED CURRENT READY AGE
replicaset.apps/inventory-65494cbf8f 1 1 1 59s
replicaset.apps/products-6d75ff449c 1 1 1 59s
replicaset.apps/router-deployment-84cbc9f689 1 1 1 20s
replicaset.apps/users-d85ccf5d9 1 1 1 59s
Smoke test
-------------------------------------------------------------------------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{ allProducts { id, sku, createdBy { email, totalProductsCreated } } }" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 352 100 267 100 85 3000 955 --:--:-- --:--:-- --:--:-- 3911
{"data":{"allProducts":[{"id":"apollo-federation","sku":"federation","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}},{"id":"apollo-studio","sku":"studio","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}}]}}
Success!
-------------------------------------------------------------------------------------------
make demo
then cleans up:
deployment.apps "graph-router" deleted
service "graphql-service" deleted
ingress.networking.k8s.io "graphql-ingress" deleted
Deleting cluster "kind" ...
Promoting configs from dev -> stage -> prod can be as simple as:
make promote-dev-stage
make promote-stage-prod
The GitOps operator in each Kubernetes cluster will pull the environment configuration from this GraphOps Repo, and any changes will be applied to that cluster, as sketched below.
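With flux v2, for example, that pull-based configuration can be declared roughly like this (a minimal sketch, equivalent to the flux create commands shown below; the interval and path are illustrative):

```yaml
# Illustrative flux v2 resources (equivalent to the flux create commands used in this demo)
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: k8s-graph-ops
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/apollographql/supergraph-demo-k8s-graph-ops.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: router
  namespace: default
spec:
  interval: 1m
  path: ./router/dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: k8s-graph-ops
    namespace: flux-system
  dependsOn:
    - name: infra
```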
CD via GitOps:
We'll use flux
v2 for this example, so you'll need:
then run:
make demo-flux
which runs:
make k8s-up-flux-dev
which shows something like:
.scripts/k8s-up-flux.sh dev
Using dev/kustomization.yaml
kind version 0.11.1
No kind clusters found.
+ kind create cluster --image kindest/node:v1.21.1 --config=clusters/kind-cluster.yaml --wait 5m
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Waiting ≤ 5m0s for control-plane = Ready ⏳
 • Ready after 28s 💚
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
+ flux install
✚ generating manifests
✔ manifests build completed
► installing components in flux-system namespace
◎ verifying installation
✔ source-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ helm-controller: deployment ready
✔ notification-controller: deployment ready
✔ install finished
+ flux create source git k8s-graph-ops --url=https://github.com/apollographql/supergraph-demo-k8s-graph-ops.git --branch=main
✚ generating GitRepository source
► applying GitRepository source
✔ GitRepository source created
◎ waiting for GitRepository source reconciliation
✔ GitRepository source reconciliation completed
✔ fetched revision: main/13fbe62857a713f396947a552d0d72ca760d3010
+ flux create kustomization infra --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./infra/dev --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
✔ Kustomization infra is ready
✔ applied revision main/13fbe62857a713f396947a552d0d72ca760d3010
+ flux create kustomization subgraphs --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./subgraphs/dev --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
✔ Kustomization subgraphs is ready
✔ applied revision main/13fbe62857a713f396947a552d0d72ca760d3010
+ flux create kustomization router --depends-on=infra --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./router/dev --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
The router ingress config needs the nginx ingress controller, so you'll see the following while the nginx ingress admission controller is starting. With GitOps and flux, the configuration will be re-applied and will self-heal once the controller has started:
✗ apply failed: Error from server (InternalError): error when creating "3a946b48-8ea1-4516-8dcf-7341332f4d88.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": dial tcp 10.96.26.139:443: i/o timeout
then smoke tests will run, with the initial tests failing while the nginx admission controller is still starting:
-------------------------
Test 1
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{delivery{estimatedDelivery,fastestDelivery},createdBy{name,email}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 241 100 146 100 95 29200 19000 --:--:-- --:--:-- --:--:-- 48200
-------------------------
❌ Test 1
-------------------------
[Expected]
{"data":{"allProducts":[{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}},{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}}]}}
-------------------------
[Actual]
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
-------------------------
❌ Test 1
-------------------------
and then, once the nginx ingress controller has started and the router ingress can be applied, the smoke tests will pass:
-------------------------
Test 1
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{delivery{estimatedDelivery,fastestDelivery},createdBy{name,email}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 438 100 343 100 95 1982 549 --:--:-- --:--:-- --:--:-- 2531
Result:
{"data":{"allProducts":[{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}},{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}}]}}
✅ Test 1
-------------------------
Test 2
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{id,sku,createdBy{email,totalProductsCreated}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 341 100 267 100 74 17800 4933 --:--:-- --:--:-- --:--:-- 22733
Result:
{"data":{"allProducts":[{"id":"apollo-federation","sku":"federation","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}},{"id":"apollo-studio","sku":"studio","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}}]}}
✅ Test 2
🎉 All tests pass! 🎉
then finally the kind cluster will be deleted:
.scripts/k8s-down.sh
Deleting cluster "kind" ...
See the BlueGreen
example below with more advanced examples coming soon!
We'll use Argo Rollouts to do a basic BlueGreen
deployment in this example.
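With Argo Rollouts, each subgraph Deployment is replaced by a Rollout resource using the BlueGreen strategy. A minimal sketch of such a Rollout (the image tag and service names are illustrative, not the exact manifest in subgraphs/dev-bluegreen):

```yaml
# Illustrative BlueGreen Rollout for the products subgraph (not the exact manifest in this repo)
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: products-bluegreen
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products
  template:
    metadata:
      labels:
        app: products
    spec:
      containers:
      - name: products
        image: prasek/subgraph-products:1.1.1   # illustrative image/tag
        ports:
        - containerPort: 4000
  strategy:
    blueGreen:
      activeService: products          # Service receiving live traffic (assumed name)
      previewService: products-preview # Service for the new (green) ReplicaSet (assumed name)
      # Require manual promotion via: kubectl argo rollouts promote products-bluegreen
      autoPromotionEnabled: false
```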
make k8s-up-flux-bluegreen
which does a BlueGreen deploy of the subgraphs using GitOps and subgraphs/dev-bluegreen/kustomization.yaml, and shows the following:
.scripts/k8s-up-flux.sh dev bluegreen
Using Kustomizations:
- infra/dev/kustomization.yaml
- subgraphs/dev-bluegreen/kustomization.yaml
- router/dev/kustomization.yaml
kind version 0.11.1
No kind clusters found.
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Waiting ≤ 5m0s for control-plane = Ready ⏳
 • Ready after 28s 💚
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Thanks for using kind! 😊
+ flux install
✚ generating manifests
✔ manifests build completed
► installing components in flux-system namespace
◎ verifying installation
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ helm-controller: deployment ready
✔ install finished
+ flux create source git k8s-graph-ops --url=https://github.com/apollographql/supergraph-demo-k8s-graph-ops.git --branch=main
✚ generating GitRepository source
► applying GitRepository source
✔ GitRepository source created
◎ waiting for GitRepository source reconciliation
✔ GitRepository source reconciliation completed
✔ fetched revision: main/9c6b88c18faecc76047a75e842f837c00d79f1f4
+ flux create kustomization infra --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./infra/dev --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
✔ Kustomization infra is ready
✔ applied revision main/9c6b88c18faecc76047a75e842f837c00d79f1f4
+ flux create kustomization subgraphs --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./subgraphs/dev-bluegreen --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
✔ Kustomization subgraphs is ready
✔ applied revision main/9c6b88c18faecc76047a75e842f837c00d79f1f4
+ flux create kustomization router --depends-on=infra --namespace=default --source=GitRepository/k8s-graph-ops.flux-system --path=./router/dev --prune=true --interval=1m --validation=client
✚ generating Kustomization
► applying Kustomization
✔ Kustomization created
◎ waiting for Kustomization reconciliation
✗ apply failed: Error from server (InternalError): error when creating "76b5f11b-1666-48d5-80ca-862d183f2248.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
+ kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s
pod/ingress-nginx-controller-6cd89dbf45-sjs49 condition met
you can then run:
make smoke
which shows the following:
.scripts/k8s-smoke.sh
-------------------------
Test 1
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{delivery{estimatedDelivery,fastestDelivery},createdBy{name,email}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 438 100 343 100 95 1366 378 --:--:-- --:--:-- --:--:-- 1745
Result:
{"data":{"allProducts":[{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}},{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}}]}}
✅ Test 1
-------------------------
Test 2
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{id,sku,createdBy{email,totalProductsCreated}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 341 100 267 100 74 14052 3894 --:--:-- --:--:-- --:--:-- 17947
Result:
{"data":{"allProducts":[{"id":"apollo-federation","sku":"federation","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}},{"id":"apollo-studio","sku":"studio","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}}]}}
✅ Test 2
🎉 All tests pass! 🎉
using these Rollouts:
kubectl get rollouts
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE
inventory-bluegreen 1 1 1 1
products-bluegreen 1 1 1 1
users-bluegreen 1 1 1 1
and these Kustomizations:
kubectl get kustomization
NAME READY STATUS AGE
infra True Applied revision: main/9c6b88c18faecc76047a75e842f837c00d79f1f4 2m9s
router True Applied revision: main/9c6b88c18faecc76047a75e842f837c00d79f1f4 107s
subgraphs True Applied revision: main/9c6b88c18faecc76047a75e842f837c00d79f1f4 109s
Pushing a subgraph change to the products subgraph in the supergraph-demo results in:
kubectl get kustomization
NAME READY STATUS AGE
infra True Applied revision: main/b4e5b385ac6ddb145cf2f95f77bda678997c75e4 78m
router True Applied revision: main/b4e5b385ac6ddb145cf2f95f77bda678997c75e4 78m
subgraphs True Applied revision: main/b4e5b385ac6ddb145cf2f95f77bda678997c75e4 78m
Which shows the following:
kubectl get rollouts
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE
inventory-bluegreen 1 1 1 1
products-bluegreen 1 2 1 1
users-bluegreen 1 1 1 1
kubectl get pods
NAME READY STATUS RESTARTS AGE
pod/inventory-bluegreen-59d479fc9f-57fqw 1/1 Running 0 78m
pod/products-bluegreen-599c9f6c88-k7dwx 1/1 Running 0 78m
pod/products-bluegreen-6fb56d84ff-k2jks 1/1 Running 0 67m
pod/router-deployment-588b77bc9b-k9gz5 1/1 Running 0 78m
pod/users-bluegreen-6b789d8cb7-wrxt7 1/1 Running 0 78m
Since we've set the product subgraph rollout to have:
# Rollouts can be resumed using: `kubectl argo rollouts promote ROLLOUT`
autoPromotionEnabled: false
we can install and use the Argo Rollouts Kubectl Plugin to manually promote the BlueGreen
deployment for the product service:
kubectl argo rollouts promote products-bluegreen
rollout 'products-bluegreen' promoted
which results in the preview products deployment becoming active and the previous active deployment being decommissioned, resulting in one active pod and replicaset for the products subgraph.
kubectl get rollouts
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE
inventory-bluegreen 1 1 1 1
products-bluegreen 1 1 1 1
users-bluegreen 1 1 1 1
and
kubectl get pods
NAME READY STATUS RESTARTS AGE
pod/inventory-bluegreen-59d479fc9f-57fqw 1/1 Running 0 80m
pod/products-bluegreen-6fb56d84ff-k2jks 1/1 Running 0 69m
pod/router-deployment-588b77bc9b-k9gz5 1/1 Running 0 80m
pod/users-bluegreen-6b789d8cb7-wrxt7 1/1 Running 0 80m
make smoke
which shows:
-------------------------
Test 1
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{delivery{estimatedDelivery,fastestDelivery},createdBy{name,email}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 438 100 343 100 95 641 177 --:--:-- --:--:-- --:--:-- 818
Result:
{"data":{"allProducts":[{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}},{"delivery":{"estimatedDelivery":"6/25/2021","fastestDelivery":"6/24/2021"},"createdBy":{"name":"Apollo Studio Support","email":"support@apollographql.com"}}]}}
✅ Test 1
-------------------------
Test 2
-------------------------
++ curl -X POST -H 'Content-Type: application/json' --data '{ "query": "{allProducts{id,sku,createdBy{email,totalProductsCreated}}}" }' http://localhost:80/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 341 100 267 100 74 22250 6166 --:--:-- --:--:-- --:--:-- 28416
Result:
{"data":{"allProducts":[{"id":"apollo-federation","sku":"federation","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}},{"id":"apollo-studio","sku":"studio","createdBy":{"email":"support@apollographql.com","totalProductsCreated":1337}}]}}
✅ Test 2
🎉 All tests pass! 🎉
make k8s-down
which shows
.scripts/k8s-down.sh
Deleting cluster "kind" ...
Check out the apollographql/supergraph-demo Source Repo.
Learn more about how Apollo can help your teams ship faster.