oracle-terraform-modules / terraform-oci-oke

The Terraform OKE Module Installer for Oracle Cloud Infrastructure provides a Terraform module that provisions the necessary resources for Oracle Container Engine for Kubernetes (OKE).
https://oracle-terraform-modules.github.io/terraform-oci-oke/
Universal Permissive License v1.0

Exposing apps from kubernetes cluster #505

Closed: Dev-Nino closed this issue 2 years ago

Dev-Nino commented 2 years ago

Hello,

I successfully created all the resources with the help of this repo.

I also installed a Jenkins container using helm install from the operator host.

I want to access it from the internet. What configuration do I need to make that possible?

Thank you so much.

wildone commented 2 years ago

Here are the contents of terraform.tfvars:

# Copyright 2017, 2021 Oracle Corporation and/or affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl

# Identity and access parameters
#api_fingerprint      = "${OCI_API_FINGERPRINT}"
# api_private_key      = <<EOT
#-----BEGIN RSA PRIVATE KEY-----
#content+of+api+key
#-----END RSA PRIVATE KEY-----
#EOT

api_private_key_path = "./key.pem"

home_region = "us-sanjose-1"
region = "us-sanjose-1"

#tenancy_id           = "${OCI_TENANCY_ID}"
#user_id              = "${OCI_USER_ID}"

# general oci parameters
#compartment_id = "${OCI_COMPARTNMENT_PAT_TEST}"
#label_prefix   = "${OCI_COMPARTNMENT_PAT_TEST_PREFIX}"  # (test) sub-compartment of pat.ai

# ssh keys
#ssh_private_key      = "${OCI_SSH_PRIVATE_KEY}"
# ssh_private_key    = <<EOT
#-----BEGIN RSA PRIVATE KEY-----
#content+of+api+key
#-----END RSA PRIVATE KEY-----
#EOT
ssh_private_key_path = "./ssh.key"
# ssh_public_key       = "${OCI_SSH_PUBLIC_KEY}"
# ssh_public_key_path  = "none"
ssh_public_key_path = "./ssh.pub"

# networking
create_drg                   = false
drg_display_name             = "drg"

internet_gateway_route_rules = []

local_peering_gateways = {}

lockdown_default_seclist = true

nat_gateway_route_rules = []

nat_gateway_public_ip_id = "none"

subnets = {
  bastion  = { netnum = 0, newbits = 13 }
  operator = { netnum = 1, newbits = 13 }
  cp       = { netnum = 2, newbits = 13 }
  int_lb   = { netnum = 16, newbits = 11 }
  pub_lb   = { netnum = 17, newbits = 11 }
  workers  = { netnum = 1, newbits = 2 }
  fss      = { netnum = 18, newbits = 11 }
}

vcn_cidrs     = ["10.0.0.0/16"]
vcn_dns_label = "oke"
vcn_name      = "vcnoke"

# bastion host
create_bastion_host = true
bastion_access      = ["anywhere"]
bastion_image_id    = "Autonomous"
bastion_os_version  = "7.9"
bastion_shape = {
  shape            = "VM.Standard.E3.Flex",
  ocpus            = 1,
  memory           = 4,
  boot_volume_size = 50
}
bastion_state    = "RUNNING"
bastion_timezone = "Etc/UTC"
bastion_type     = "public"
upgrade_bastion  = false

## bastion notification
enable_bastion_notification   = false
bastion_notification_endpoint = ""
bastion_notification_protocol = "EMAIL"
bastion_notification_topic    = "bastion_server_notification"

# bastion service
create_bastion_service        = false
bastion_service_access        = ["0.0.0.0/0"]
bastion_service_name          = "bastion"
bastion_service_target_subnet = "operator"

# operator host
create_operator                    = true
operator_image_id                  = "Oracle"
enable_operator_instance_principal = true
operator_nsg_ids                   = []
operator_os_version                = "8"
operator_shape = {
  shape            = "VM.Standard.E4.Flex",
  ocpus            = 1,
  memory           = 4,
  boot_volume_size = 50
}
operator_state    = "RUNNING"
operator_timezone = "Etc/UTC"
upgrade_operator  = false

# Operator in-transit encryption for the data volume's paravirtualized attachment.
enable_operator_pv_encryption_in_transit = false

# operator volume kms integration
operator_volume_kms_id = ""

## operator notification
enable_operator_notification   = false
operator_notification_endpoint = ""
operator_notification_protocol = "EMAIL"
operator_notification_topic    = ""

# availability_domains
availability_domains = {
  bastion  = 1,
  operator = 1,
  fss      = 1
}

# oke cluster options
admission_controller_options = {
  PodSecurityPolicy = false
}
allow_node_port_access       = false
allow_worker_internet_access = true
allow_worker_ssh_access      = false
cluster_name                 = "oke"
control_plane_type           = "public"
control_plane_allowed_cidrs  = ["0.0.0.0/0"]
control_plane_nsgs           = []
dashboard_enabled            = true
kubernetes_version           = "v1.21.5"
pods_cidr                    = "10.244.0.0/16"
services_cidr                = "10.96.0.0/16"

## oke cluster kms integration
use_cluster_encryption = false
#cluster_kms_key_id     = "${OCI_CLUSTER_KMS_KEY_ID}" #OKE-SOFTWARE-AES

### oke node pool volume kms integration
use_node_pool_volume_encryption = false
node_pool_volume_kms_key_id     = ""

## oke cluster container image policy and keys
use_signed_images = false
image_signing_keys = []

# node pools
check_node_active = "all"
enable_pv_encryption_in_transit = false
node_pools = {
  np1 = { shape = "VM.Standard.E4.Flex", ocpus = 1, memory = 16, node_pool_size = 1, boot_volume_size = 150, label = { app = "frontend", pool = "np1" } }
  np2 = { shape = "VM.Standard.E4.Flex", ocpus = 1, memory = 16, node_pool_size = 1, boot_volume_size = 150, label = { app = "frontend", pool = "np2" } }
  # np3 = { shape = "VM.Standard.E4.Flex", ocpus = 1, memory = 16, node_pool_size = 1, boot_volume_size = 150, label = { app = "frontend", pool = "np1" } }
  # np4 = { shape = "VM.Standard.E4.Flex", ocpus = 1, memory = 16, node_pool_size = 1, boot_volume_size = 150, label = { app = "frontend", pool = "np1" } }
  # np2 = {shape="VM.Standard.E4.Flex",ocpus=4,memory=16,node_pool_size=1,boot_volume_size=150, label={app="backend",pool="np2"}}
  # np3 = {shape="VM.Standard.A1.Flex",ocpus=8,memory=16,node_pool_size=1,boot_volume_size=150, label={pool="np3"}}
  # np4 = {shape="BM.Standard2.52",node_pool_size=1,boot_volume_size=150}
  # np5 = {shape="VM.Optimized3.Flex",node_pool_size=6}
  # np5 = {shape="BM.Standard.A1.160",node_pool_size=6}
  # np6 = {shape="VM.Standard.E2.2", node_pool_size=5}
  # np7 = {shape="BM.DenseIO2.52", node_pool_size=5}
  # np8 = {shape="BM.GPU3.8", node_pool_size=1}
  # np9 = {shape="BM.GPU4.8", node_pool_size=5}
  # np10 = {shape="BM.HPC2.36   ", node_pool_size=5}
}
node_pool_image_id    = "none"
node_pool_name_prefix = "np"
node_pool_os          = "Oracle Linux"
node_pool_os_version  = "7.9"
node_pool_timezone    = "Etc/UTC"
worker_nsgs           = []
worker_type           = "private"

# upgrade of existing node pools
upgrade_nodepool        = false
node_pools_to_drain     = ["np1", "np2"]
nodepool_upgrade_method = "out_of_place"

# oke load balancers
enable_waf                   = false
load_balancers               = "both"
preferred_load_balancer      = "public"
# internal_lb_allowed_cidrs  = ["172.16.1.0/24", "172.16.2.0/24"] # By default, anywhere i.e. 0.0.0.0/0 is allowed
internal_lb_allowed_ports  = [80, 443, "7001-7005", 30000, 8080] # By default, only 80 and 443 are allowed
# public_lb_allowed_cidrs    = ["0.0.0.0/0"] # By default, anywhere i.e. 0.0.0.0/0 is allowed
public_lb_allowed_ports    = [443,"9001-9002", 30000, 8080] # By default, only 443 is allowed

#fss
create_fss = true
fss_mount_path = "/oke_fss"
max_fs_stat_bytes = 23843202333
max_fs_stat_files = 223442

# ocir
#email_address    = "${OCI_CONTAINER_REGISTRY_EMAIL}"
secret_id        = "none"  # ${OCI_CONTAINER_REGISTRY_SECRET_ID}
#secret_name      = "${OCI_CONTAINER_REGISTRY_SECRET_NAME}"
#secret_namespace = "${OCI_CONTAINER_REGISTRY_SECRET_NAMESPACE}"
#username         = "${OCI_CONTAINER_REGISTRY_USERNAME}"

# calico
enable_calico  = false
calico_version = "3.19"

# horizontal and vertical pod autoscaling
enable_metric_server = true
enable_vpa           = true
vpa_version          = 0.8

#OPA Gatekeeper
enable_gatekeeper = false
gatekeeeper_version = "3.7"

# service account
create_service_account               = true
service_account_name                 = "kubeconfigsa"
service_account_namespace            = "kube-system"
service_account_cluster_role_binding = "cicd"

# freeform_tags
freeform_tags = {
  # vcn, bastion and operator freeform_tags are required
  # add more freeform_tags in each as desired
  vcn = {
    environment = "dev"
  }
  bastion = {
    access      = "public",
    environment = "dev",
    role        = "bastion",
    security    = "high"
  }
  operator = {
    access      = "restricted",
    environment = "dev",
    role        = "operator",
    security    = "high"
  }
  oke = {
    service_lb  = {
      environment = "dev"
      role        = "load balancer"
    }
  }
}

# placeholder variable for debugging scripts. To be implemented in future
debug_mode = false

hyder commented 2 years ago

There are a couple of ways you can achieve this:

  1. You can tunnel to the operator host and run kubectl port-forward, but you'll have to do that every time (see the sketch below this list)
  2. You can deploy an Ingress Controller and create an Ingress resource to route your requests to the Jenkins service.
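
For the first option, here is a minimal sketch of the tunnel plus port-forward, assuming the default bastion/operator setup from this module and a Jenkins service named jenkins in the jenkins namespace (key paths, users, IPs, service name and ports are illustrative and must be adjusted to your deployment):

# From your workstation: tunnel through the bastion to the operator host,
# forwarding local port 8080 to port 8080 on the operator
ssh -i ./ssh.key -J opc@<bastion_public_ip> -L 8080:localhost:8080 opc@<operator_private_ip>

# On the operator host: forward its port 8080 to the Jenkins service
kubectl -n jenkins port-forward svc/jenkins 8080:8080

# Jenkins is then reachable at http://localhost:8080 on your workstation
# for as long as both the tunnel and the port-forward are running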

Since you are exposing your Jenkins publicly, please ensure you've secured it properly.

Hope that helps.

dralquinta commented 2 years ago

This is really a question of how the ingress controller or LBaaS service is set up, rather than of Kubernetes itself. As @hyder says, you can either deploy an ingress controller and route requests through it, or publish the cluster service and expose it through an LBaaS load balancer.
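
As an illustration of the second route, here is a minimal sketch of exposing an existing Jenkins deployment through an OCI LBaaS load balancer by giving it a Service of type LoadBalancer; the deployment name, namespace and ports below are assumptions, not something this module creates for you:

# Expose the Jenkins deployment behind a Service of type LoadBalancer;
# OKE's cloud controller manager then provisions a public LBaaS load balancer
# in the pub_lb subnet created by this module
kubectl -n jenkins expose deployment jenkins --type=LoadBalancer --name=jenkins-public --port=8080 --target-port=8080

# Wait for provisioning to finish and note the EXTERNAL-IP of the service
kubectl -n jenkins get svc jenkins-public --watch

Note that with the terraform.tfvars above, only ports 443, 9001-9002, 30000 and 8080 are allowed on the public load balancer subnet (public_lb_allowed_ports), so the service port must be one of those or the list must be extended.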

I'd suggest closing this issue as it is not related to OKE itself.

wildone commented 2 years ago

@hyder thank you. With @dralquinta's help we figured it out: we were trying to access Jenkins via the NAT gateway instead of through the public load balancer, as that was not clear to us from the diagram. Maybe the diagram could be updated to make that part clearer.
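
For anyone who hits the same confusion: once a LoadBalancer-type service (or an ingress controller's service) exists, the address to use from the internet is the load balancer's EXTERNAL-IP, not the NAT gateway's public IP; the NAT gateway only handles outbound traffic from the private worker subnet. A quick way to check, assuming the illustrative service names used above:

# The EXTERNAL-IP column shows the public LBaaS address reachable from the internet
kubectl get svc --all-namespaces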