Closed logand22 closed 2 years ago
What's the purpose of the orbit.manifest.json file and do I need to update it?
We update `orbit.manifest.json` when we want to add additional configuration options through gravity.
Example: in planet we add a `flannel-backend` param to the manifest, and in gravity we allow the `flannel-backend` value to be specified via the CLI or the cluster configuration. The config value will be saved in the container's `/etc/container-environment` file.
```json
// planet/build.assets/docker/os-rootfs/etc/planet/orbit.manifest.json
...
{
  "type": "String",
  "name": "flannel-backend",
  "env": "FLANNEL_BACKEND",
  "cli": {
    "name": "flannel-backend"
  }
}
```
```go
// gravity/lib/ops/opsservice/configure.go
if globalConfig.FlannelBackend != "" {
	args = append(args,
		fmt.Sprintf("--flannel-backend=%v", globalConfig.FlannelBackend))
}
```
```shell
# /etc/container-environment
KUBE_MASTER_IP="10.138.0.65"
KUBE_CLOUD_PROVIDER="gce"
KUBE_SERVICE_SUBNET="100.100.0.0/16"
KUBE_POD_SUBNET="100.96.0.0/16"
KUBE_POD_SUBNET_SIZE="24"
KUBE_SERVICE_NODE_PORT_RANGE="30000-32767"
KUBE_PROXY_PORT_RANGE=""
PLANET_PUBLIC_IP="10.138.0.65"
PLANET_VXLAN_PORT="8472"
PLANET_AGENT_NAME="10_138_0_65.naughtyyalow5378"
PLANET_INITIAL_CLUSTER="10_138_0_65.naughtyyalow5378:10.138.0.65"
KUBE_APISERVER="leader.telekube.local"
KUBE_APISERVER_PORT="6443"
PLANET_ETCD_PROXY="off"
PLANET_ETCD_MEMBER_NAME="10_138_0_65.naughtyyalow5378"
ETCD_INITIAL_CLUSTER="10_138_0_65.naughtyyalow5378=https://10.138.0.65:2380"
PLANET_ETCD_GW_ENDPOINTS="10.138.0.65:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
PLANET_ROLE="master"
KUBE_CLUSTER_ID="naughtyyalow5378"
KUBE_NODE_NAME="robotest-f168927b-node-2"
PLANET_ELECTION_ENABLED="true"
PLANET_DNS_HOSTS=""
PLANET_DNS_ZONES=""
PLANET_ALLOW_PRIVILEGED="true"
PLANET_SERVICE_UID="980665"
PLANET_SERVICE_GID="980665"
KUBE_HIGH_AVAILABILITY="false"
DOCKER_OPTS="--storage-driver=overlay2 --exec-opt native.cgroupdriver=cgroupfs --log-opt max-size=50m --log-opt max-file=9 --storage-opt=overlay2.override_kernel_check=1"
KUBE_APISERVER_FLAGS="--service-node-port-range=30000-32767 --endpoint-reconciler-type=master-count --apiserver-count=1"
KUBE_COMPONENT_FLAGS="--feature-gates=AllAlpha=true,APIResponseCompression=false,BoundServiceAccountTokenVolume=false,CSIMigration=false,KubeletPodResources=false,EndpointSlice=false,IPv6DualStack=false,RemoveSelfLink=false"
KUBE_ENABLE_IPAM="false"
FLANNEL_BACKEND="gce"
KUBE_CLOUD_FLAGS="--cloud-provider=gce --cloud-config=/etc/kubernetes/cloud-config.conf"
PLANET_DNS_UPSTREAM_NAMESERVERS="127.0.0.53"
PLANET_DNS_LOCAL_NAMESERVERS="127.0.0.2:53"
PLANET_NODE_LABELS="kubernetes.io/hostname=robotest-f168927b-node-2,role=node,kubernetes.io/arch=amd64,kubernetes.io/os=linux,gravitational.io/advertise-ip=10.138.0.65,gravitational.io/k8s-role=master"
```
What's the recommended way to build planet and test it?
It is always a bit of work testing gravity/planet. I usually test on my local machine, using Vagrant to set up a gravity cluster. I'm not sure if that still works as-is; it always required a bit of tweaking for me. Testing on GCE might be easier, though.
`make production telekube` will build a custom tarball in `build/<version>/telekube.tar`. From the `.vagrant` directory, spin up nodes with `vagrant up`. `make ansible-install` will install the gravity cluster; if you want to install manually, use `make ansible-upload`. Use `vagrant ssh node-1` to SSH into node-1.

If you want to test changes to the planet binary without recreating the gravity tarball and installing gravity again, the planet binary on a node is located at `/var/lib/gravity/local/packages/unpacked/gravitational.io/planet/<version>/rootfs/usr/bin/planet`.

You can build a custom planet binary by running `make production` in the planet repo; the resulting binary is in `build/assets/planet`. Replace the planet binary on the node and restart the planet container with `systemctl restart gravity__gravitational.io__planet__<version>.service`.
This variable may be useful for local development: setting it to true builds gravity with locally built planet packages.
You can also use a custom planet tag when building gravity; use `make get-version` to get the planet build tag.
@ulysseskan @wadells might have easier ways of setting up a cluster and testing.
I don't necessarily understand this part since it's my understanding it's already done in the Dockerfile. How do these things work together?
Ah, we have two separate build pipelines, introduced in https://github.com/gravitational/planet/pull/838. The idea was to enable builds on Darwin and replace the old build pipeline...
> We update `orbit.manifest.json` when we want to add additional configuration options through gravity.
@bernardjkim, this means that if my changes don't enable additional configuration options for the user through gravity, then I don't need to update this file?
> this means that if my changes don't enable additional configuration options for the user through gravity, then I don't need to update this file?
Yup. It shouldn't be needed if we don't need any configuration through gravity.
Can this be closed?
Closing in favor of using the `cloud-provider` flag.
This PR adds the `ecr-credential-provider` plugin to Planet to enable pulling from private Elastic Container Registries. For more information on this plugin, see kubernetes/cloud-provider-aws. For more information on the kubelet image credential provider feature of Kubernetes, see here.
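For context, kubelet's image credential provider feature is wired up with a `CredentialProviderConfig` file. A sketch of what configuring `ecr-credential-provider` for the `*.dkr.ecr.*.amazonaws.com` pattern might look like (the field values and the exact `apiVersion` here are illustrative and depend on the Kubernetes release):

```yaml
# Illustrative sketch of a kubelet CredentialProviderConfig; the exact
# apiVersion varies across Kubernetes releases (v1alpha1/v1beta1/v1).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"
    defaultCacheDuration: "12h"
    apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```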
- `Makefile.buildx`
- `versions.mk` for later use
- the `*.dkr.ecr.*.amazonaws.com` pattern, which would be private Amazon ECR images. This isn't as flexible as it could be, but should get the job done in the interim; additional follow-up work could be done to make it configurable via CLI flags.
- `constants.go`, and hardcodes the kubelet options in `start.go`

A few extra questions I have:
- What's the purpose of the `orbit.manifest.json` file and do I need to update it?
- What's the recommended way to build `planet` and test it?