confluentinc / terraform-provider-confluent

Terraform Provider for Confluent
Apache License 2.0

Setup of GitHub runners for running Terraform scripts for dedicated clusters #206

Open VipulZopSmart opened 1 year ago

VipulZopSmart commented 1 year ago

Our QA and STG environments use Standard clusters that are reachable from our local network, so GitHub Actions can be set up to automate running the Terraform scripts. For PROD, however, we have a Dedicated cluster with PrivateLink enabled from certain AWS accounts. Is it possible to set up GitHub runners for that cluster, so that we can provision resources in PROD from GitHub instead of from an EC2 instance provisioned in that AWS account (the PrivateLink setup)?

Also, we're planning to update the Confluent Terraform provider version in our Terraform scripts, and we'd like to follow GitHub versioning (tag-based releases) since an upgrade could contain breaking changes. We can do this for our QA and STG setups, but not for PROD, because there we provision resources from the EC2 instance. Please suggest a way to achieve this, and how we should handle it whenever a new provider version is released.
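For reference, this is the kind of pinning we have in mind (a minimal sketch; the version shown is only an example, not a tested recommendation):

```hcl
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      # Exact pin so an upgrade is always an explicit, reviewed change per environment.
      version = "1.51.0" # example version only
    }
  }
}
```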

VipulZopSmart commented 1 year ago

@linouk23 please suggest something.

linouk23 commented 1 year ago

thanks for opening this issue @VipulZopSmart!

Do you want to split this issue into 2 separate ones?

VipulZopSmart commented 1 year ago

I think a single issue is fine.

linouk23 commented 1 year ago

We can do this for our QA and STG setups, but not for PROD, because there we provision resources from the EC2 instance. Please suggest a way to achieve this, and how we should handle it whenever a new provider version is released.

Could you add more details here?

Is it possible to set up GitHub runners for that cluster, so that we can provision resources in PROD from GitHub instead of from an EC2 instance provisioned in that AWS account (the PrivateLink setup)?

Well, there are multiple ways, but we don't have an example for any of them. The core idea is that you need to execute Terraform from a VPC that has connectivity to the PrivateLink cluster. For example, looking at this doc might be helpful. That said, I'm not sure whether that's the best way for production usage.
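In case it helps, a minimal sketch of the provider configuration such a runner would use against the private endpoint (all values below are placeholders). The important part is that the host running Terraform must be able to resolve and reach that endpoint, i.e. it has to sit in (or be routed through) the VPC with the PrivateLink connection:

```hcl
variable "confluent_cloud_api_key" {}
variable "confluent_cloud_api_secret" { sensitive = true }
variable "kafka_cluster_id" {}
variable "kafka_api_key" {}
variable "kafka_api_secret" { sensitive = true }

provider "confluent" {
  cloud_api_key    = var.confluent_cloud_api_key
  cloud_api_secret = var.confluent_cloud_api_secret

  # Private REST endpoint of the dedicated cluster (placeholder value).
  # Only resolvable/reachable from the VPC with the PrivateLink connection,
  # which is why Terraform has to execute from there.
  kafka_rest_endpoint = "https://pkc-00000.region.provider.confluent.cloud:443"
  kafka_id            = var.kafka_cluster_id
  kafka_api_key       = var.kafka_api_key
  kafka_api_secret    = var.kafka_api_secret
}
```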

cc @VipulZopSmart

pixie79 commented 1 year ago

I have the same issue here; we don't want to expose the whole internal API to the internet just so GitHub can do the CI/CD, but equally, running a dedicated runner to control Confluent's internal API is overkill.

I considered putting an API Gateway in front of the Kafka REST endpoint with restrictions on the allowed paths. The hope is that we can then restrict it to the administrative bits while blocking reading and writing data to topics.
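Roughly what I have in mind, as a sketch only: an HTTP API with a VPC link in front of the Kafka REST endpoint. It assumes an internal NLB already forwards traffic to the PrivateLink endpoint for the dedicated cluster; all names and variables here are placeholders, not something I've validated.

```hcl
variable "private_subnet_ids"          { type = list(string) }
variable "vpc_link_security_group_ids" { type = list(string) }
# Listener on an internal NLB that forwards to the cluster's PrivateLink
# endpoint (assumed to exist already).
variable "kafka_rest_nlb_listener_arn" { type = string }

resource "aws_apigatewayv2_vpc_link" "kafka_rest" {
  name               = "confluent-kafka-rest"
  security_group_ids = var.vpc_link_security_group_ids
  subnet_ids         = var.private_subnet_ids
}

resource "aws_apigatewayv2_api" "kafka_rest" {
  name          = "confluent-kafka-rest-proxy"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_integration" "kafka_rest" {
  api_id             = aws_apigatewayv2_api.kafka_rest.id
  integration_type   = "HTTP_PROXY"
  integration_method = "ANY"
  connection_type    = "VPC_LINK"
  connection_id      = aws_apigatewayv2_vpc_link.kafka_rest.id
  integration_uri    = var.kafka_rest_nlb_listener_arn
}
```

Only explicitly allowed routes would then be attached to this integration (stage/deployment omitted here), so anything outside the allow-list, such as the produce/consume record APIs, simply has no route.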

pixie79 commented 1 year ago

Can anyone confirm whether this is the full list of endpoints that would need to be allowed through API Gateway for GitHub to be able to use the Kafka REST endpoint for provisioning? Are any of them not needed? (A rough Terraform sketch of the corresponding route allow-list follows the list.)

Also, is there any way to add an additional security token wrapper that could be validated by API Gateway before passing the call on to the endpoint?

GET - Get List of Topics https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/topics

POST - Create Topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/topics

GET - Topic Settings https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/topics/{topic_name}

DELETE - Topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/topics/{topic_name}

GET - Cluster Links https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links

POST - Create Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links

GET - Cluster Link Settings https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}

DELETE - Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}

GET - Cluster Link Configs https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/configs

GET - Describe Config under a Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/configs/{config_name}

PUT - Config under a Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/configs/{config_name}

DELETE - Config under a Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/configs/{config_name}

PUT - Batch Alter a Config under a Cluster Link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/configs:alter

POST - Mirror a topic in a Source cluster https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors

GET - List Mirror topics under a link https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors

GET - List all Mirror topics https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/-/mirrors

GET - Mirror topic details https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors/{mirror_topic_name}

POST - Promote Mirror topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors:promote

POST - Failover mirror topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors:failover

POST - Pause mirror topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors:pause

POST - Resume mirror topic https://pkc-00000.region.provider.confluent.cloud/kafka/v3/clusters/{cluster_id}/links/{link_name}/mirrors:resume
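Continuing the sketch from my previous comment, those endpoints could be expressed as API Gateway routes along these lines (illustrative only; the remaining link-config and mirror routes from the list would be added the same way, and the `:`-suffixed paths such as `mirrors:promote` would need checking against API Gateway's allowed route-key characters):

```hcl
locals {
  # Allow-list derived from the endpoints above; everything else gets no route.
  allowed_routes = [
    "GET /kafka/v3/clusters/{cluster_id}/topics",
    "POST /kafka/v3/clusters/{cluster_id}/topics",
    "GET /kafka/v3/clusters/{cluster_id}/topics/{topic_name}",
    "DELETE /kafka/v3/clusters/{cluster_id}/topics/{topic_name}",
    "GET /kafka/v3/clusters/{cluster_id}/links",
    "POST /kafka/v3/clusters/{cluster_id}/links",
    "GET /kafka/v3/clusters/{cluster_id}/links/{link_name}",
    "DELETE /kafka/v3/clusters/{cluster_id}/links/{link_name}",
    # ... remaining cluster-link config and mirror routes from the list above
  ]
}

resource "aws_apigatewayv2_route" "allowed" {
  for_each  = toset(local.allowed_routes)
  api_id    = aws_apigatewayv2_api.kafka_rest.id
  route_key = each.value
  target    = "integrations/${aws_apigatewayv2_integration.kafka_rest.id}"
}
```

On the token wrapper question: an API Gateway authorizer (aws_apigatewayv2_authorizer, Lambda or JWT) could presumably validate an extra token on these routes before the call is passed on to the endpoint, though I haven't tried that.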

linouk23 commented 1 year ago

Can anyone confirm whether this is the full list of endpoints that would need to be allowed through API Gateway for GitHub to be able to use the Kafka REST endpoint for provisioning? Are any of them not needed?

@pixie79 your list looks accurate to me.