Open gizas opened 2 years ago
First version of PCF integration: https://github.com/elastic/integrations/tree/cloudfoundry/packages/cloudfoundry
Details on how to install KubeCF in GKE:
```sh
project=elastic-obs-integrations-dev
clustername=gizascf-test

gcloud beta container --project "$project" clusters create "$clustername" \
  --zone "us-central1-a" --no-enable-basic-auth --cluster-version "1.21.11-gke.1100" \
  --machine-type "n2-standard-2" --image-type "UBUNTU" --disk-type "pd-standard" \
  --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --preemptible --num-nodes "2" \
  --enable-ip-alias --network "projects/$project/global/networks/default" \
  --subnetwork "projects/$project/regions/europe-west4/subnetworks/default" \
  --default-max-pods-per-node "110" --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --enable-autoupgrade --enable-autorepair
```
We used GKE Kubernetes version 1.21.11, as it is the last version we found to support the PCF CRDs and KubeCF v2.7.12.
4. Download KubeCF from the releases page: https://github.com/cloudfoundry-incubator/kubecf/releases
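For example, to fetch and unpack the bundle for the release used here (the asset name is an assumption and may differ per release; the bundle should contain the `cf-operator.tgz` and `kubecf_release.tgz` used in step 6):

```sh
# Assumed asset name for v2.7.12; check the releases page for the exact file.
curl -LO https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.12/kubecf-bundle-v2.7.12.tgz
tar -xzf kubecf-bundle-v2.7.12.tgz   # should yield cf-operator.tgz and kubecf_release.tgz
```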
5. Create a `values.yaml` like the following sample:

```yaml
system_domain: gizas.cf-obs.elastic.dev

features:
  eirini:
    enabled: true

install_stacks: ["sle15"]
```
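To see the full set of values the chart accepts (assuming Helm 3 and the `kubecf_release.tgz` from the release bundle):

```sh
helm show values ./kubecf_release.tgz
```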
6. Connect to the GKE cluster and run the following commands in the folder where KubeCF was downloaded.
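To point `kubectl` at the new cluster, something like this should work (a sketch reusing the `$project` and `$clustername` variables from the cluster-creation step above):

```sh
gcloud container clusters get-credentials "$clustername" --zone "us-central1-a" --project "$project"
```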
```sh
kubectl create namespace cf-operator
helm install cf-operator --namespace cf-operator --set "global.singleNamespace.name=kubecf" ./cf-operator.tgz
helm install kubecf --namespace kubecf --values values.yaml ./kubecf_release.tgz
```
If `kubectl get crds` returns no CRDs, this is an indication that the KubeCF version is not supported on the given Kubernetes version.
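On a healthy install, cf-operator registers the Quarks CRDs, so a quick check like the following should show entries (the CRD names below are recalled from the Quarks project and may vary by version):

```sh
kubectl get crds | grep quarks
# Expect entries such as:
# boshdeployments.quarks.cloudfoundry.org
# quarksjobs.quarks.cloudfoundry.org
# quarkssecrets.quarks.cloudfoundry.org
# quarksstatefulsets.quarks.cloudfoundry.org
```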
7. Create the following DNS A records in your Google project:
```
api.gizas.cf-obs.elastic.dev
app.gizas.cf-obs.elastic.dev
app1.gizas.cf-obs.elastic.dev
app2.gizas.cf-obs.elastic.dev
app3.gizas.cf-obs.elastic.dev
doppler.gizas.cf-obs.elastic.dev
gizas.cf-obs.elastic.dev
log-cache.gizas.cf-obs.elastic.dev
log-stream.gizas.cf-obs.elastic.dev
login.gizas.cf-obs.elastic.dev
uaa.gizas.cf-obs.elastic.dev
```
All records should point to the external IP of `kubectl get service router-public -n kubecf`.
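A sketch of creating one such record with Cloud DNS, to be repeated per hostname (the managed zone name `cf-obs` is an assumption; substitute your own zone):

```sh
# Grab the router's external IP via a standard kubectl jsonpath query.
ROUTER_IP=$(kubectl get service router-public -n kubecf \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Hypothetical zone name; replace with your Cloud DNS managed zone.
gcloud dns record-sets create api.gizas.cf-obs.elastic.dev. \
  --zone="cf-obs" --type="A" --ttl="300" --rrdatas="$ROUTER_IP"
```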
Hi! We just realized that we haven't looked into this issue in a while. We're sorry! We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!
Integration for CloudFoundry
This issue aims to track the work for creating a PCF (Pivotal Cloud Foundry) integration for the Elastic Agent.
There are different ways to install Cloud Foundry to support development work:

- **Vcap**, as described in https://github.com/cloudfoundry-attic/vcap. This is the virtualised flavour of CF. Not supported for ARM macOS installations at the time of this document.
- **Terraform**, using the code of https://github.com/elastic/cf-obs-terraform/tree/pcf_updates_tf.
- **KubeCF**, the PCF implementation on top of k8s. KubeCF is also resource-intensive, and details for various installations can be found under "Troubleshooting various kubecf". We managed to install it only in GKE, and only in a 2-node cluster, as the alternative solutions were failing on resource consumption and k8s versioning. Details for this installation to be provided.
- **PCFDev plugin**: the project reached EOL and is no longer an option for developers.
- Full installation of PCF in the various cloud providers is not ideal for the developer scenario, as it is time- and resource-intensive and needs full configuration.