
Integration for CloudFoundry #3524

Open gizas opened 2 years ago

gizas commented 2 years ago

Integration for CloudFoundry

This issue aims to track the work for creating a PCF (Pivotal Cloud Foundry) integration for the Elastic Agent.

There are different ways to install Cloud Foundry to support developer work.

Details for this installation are provided below.

- PCFDev plugin: the project reached EOL and is no longer an option for developers.
- A full installation of PCF on the various cloud providers is not ideal for developer scenarios, as it is time- and resource-intensive and requires full configuration.


gizas commented 2 years ago

First version of the PCF integration: https://github.com/elastic/integrations/tree/cloudfoundry/packages/cloudfoundry

gizas commented 2 years ago

Details on how to install KubeCF on GKE:

  1. Use gcloud to log in to a project. We used elastic-obs-integrations-dev.
  2. A DNS zone should already exist, as PCF installations rely on active DNS (a sketch of steps 1 and 2 follows the version note below).
  3. Run the following commands to create the GKE cluster:

    project=elastic-obs-integrations-dev
    clustername=gizascf-test

    gcloud beta container --project "$project" clusters create "$clustername" \
      --zone "us-central1-a" --no-enable-basic-auth --cluster-version "1.21.11-gke.1100" \
      --machine-type "n2-standard-2" --image-type "UBUNTU" --disk-type "pd-standard" \
      --disk-size "100" \
      --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
      --preemptible --num-nodes "2" \
      --enable-ip-alias --network "projects/$project/global/networks/default" \
      --subnetwork "projects/$project/regions/europe-west4/subnetworks/default" \
      --default-max-pods-per-node "110" --addons HorizontalPodAutoscaling,HttpLoadBalancing \
      --enable-autoupgrade --enable-autorepair


We used GKE Kubernetes version 1.21.11, as it is the last version we found that supports the PCF CRDs and KubeCF v2.7.12.
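For steps 1 and 2, a minimal sketch of the gcloud setup; the managed zone name cf-obs-elastic-dev is hypothetical, and an existing delegated zone for cf-obs.elastic.dev can be reused instead:

    # Step 1: authenticate and select the project
    gcloud auth login
    gcloud config set project elastic-obs-integrations-dev

    # Step 2: make sure a Cloud DNS managed zone exists for the system domain
    # (zone name "cf-obs-elastic-dev" is hypothetical; skip if the zone is already there)
    gcloud dns managed-zones create cf-obs-elastic-dev \
      --dns-name "cf-obs.elastic.dev." \
      --description "DNS zone for KubeCF test installations"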
4. Download KubeCF from here: https://github.com/cloudfoundry-incubator/kubecf/releases (a download sketch follows the sample below)
5. Create a values.yaml like the following sample:

    system_domain: gizas.cf-obs.elastic.dev

    features:
      eirini:
        enabled: true

    install_stacks: ["sle15"]
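For step 4, a hedged sketch of fetching and unpacking the release; the bundle asset name below is an assumption based on the KubeCF release layout, so check the actual asset names on the releases page:

    # Download and unpack the KubeCF v2.7.12 release bundle
    # (the asset name kubecf-bundle-v2.7.12.tgz is an assumption; verify it on the releases page)
    curl -LO https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.12/kubecf-bundle-v2.7.12.tgz
    tar -xzf kubecf-bundle-v2.7.12.tgz

    # The bundle is expected to provide cf-operator.tgz and the KubeCF chart used in step 6
    ls *.tgz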

6. Connect to the GKE cluster and run the following commands from the folder where KubeCF was downloaded:

    kubectl create namespace cf-operator
    helm install cf-operator --namespace cf-operator --set "global.singleNamespace.name=kubecf" ./cf-operator.tgz
    helm install kubecf --namespace kubecf --values values.yaml ./kubecf_release.tgz


If `kubectl get crds` returns no CRDs, this is an indication that the KubeCF version is not supported on the given Kubernetes version.
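A minimal verification sketch, assuming the cf-operator registers its CRDs under the quarks.cloudfoundry.org API group and that KubeCF deploys into the kubecf namespace configured above:

    # The cf-operator CRDs (quarks.cloudfoundry.org group) should show up here;
    # an empty result means the operator install failed
    kubectl get crds | grep -i quarks

    # Watch the KubeCF pods come up; a full deployment can take a while
    kubectl get pods -n kubecf --watch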

7. Create the following DNS A records in your Google Cloud project:

    api.gizas.cf-obs.elastic.dev
    app.gizas.cf-obs.elastic.dev
    app1.gizas.cf-obs.elastic.dev
    app2.gizas.cf-obs.elastic.dev
    app3.gizas.cf-obs.elastic.dev
    doppler.gizas.cf-obs.elastic.dev
    gizas.cf-obs.elastic.dev
    log-cache.gizas.cf-obs.elastic.dev
    log-stream.gizas.cf-obs.elastic.dev
    login.gizas.cf-obs.elastic.dev
    uaa.gizas.cf-obs.elastic.dev


all pointing to the external IP of `kubectl get service router-public -n kubecf`.
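A minimal sketch of step 7 with gcloud, reusing the hypothetical zone name cf-obs-elastic-dev from the earlier sketch (older gcloud releases use the `record-sets transaction` workflow instead of `record-sets create`):

    # Grab the router's external IP once the LoadBalancer service is provisioned
    ROUTER_IP=$(kubectl get service router-public -n kubecf \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

    # Create one A record per hostname (zone name is an assumption)
    for host in api app app1 app2 app3 doppler log-cache log-stream login uaa; do
      gcloud dns record-sets create "$host.gizas.cf-obs.elastic.dev." \
        --zone "cf-obs-elastic-dev" --type A --ttl 300 --rrdatas "$ROUTER_IP"
    done

    # Plus the record for the system domain itself
    gcloud dns record-sets create "gizas.cf-obs.elastic.dev." \
      --zone "cf-obs-elastic-dev" --type A --ttl 300 --rrdatas "$ROUTER_IP"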
botelastic[bot] commented 7 months ago

Hi! We just realized that we haven't looked into this issue in a while. We're sorry! We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:. Thank you for your contribution!