Open vishghelani opened 1 year ago
@vishghelani thanks for creating this issue!
We're looking into this opportunity and trying to gauge how many customers would be interested in it.
+1 I think there are some scenarios where having Confluent Cloud alongside a Confluent Platform sidecar would be useful. We're planning on doing that some time in the future. It would also be useful for Confluent Connectors that work on Platform but not Cloud, and/or behave slightly differently due to guardrails for managed connectors. There are probably other useful features that could be managed as part of it (that I'm not currently aware of).
@linouk23 We are in a scenario where we need support for both Confluent Cloud and Confluent Platform. Do you know if and when this support might be added?
@PSanetra could you describe your use case? That might help with prioritizing this feature.
@linouk23 we have a case where we need to use the Alibaba Cloud Confluent Platform to provide a solution in China, and Confluent Cloud for the rest of the world.
Another use case is a consistent management approach for ephemeral and local development using this provider. It would be ideal to point the provider at a local Confluent Platform container deployment in development, and at a real Confluent Platform/Confluent Cloud endpoint in higher environments.
I support this case for local development. I want to have a uniform experience managing Kafka resources, from my local development machine to the cloud.
Simple fix for local development with nginx (using docker-compose)
nginx-compose.yml
version: '3'
services:
  nginx:
    image: nginx:latest
    hostname: kafka-nginx
    container_name: kafka-nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "18082:80"
nginx.conf
events {}
http {
  server {
    listen 80;
    location /kafka {
      proxy_pass http://rest-proxy:8082;
      rewrite ^/kafka(.*)$ $1 break;
    }
  }
}
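In case it helps anyone reading along: the provider issues Cloud-style requests under /kafka/v3/*, while the Platform REST Proxy serves /v3/*, so the rewrite strips the prefix before proxying. A tiny Python illustration (not part of the setup, helper name is made up) of what the nginx rewrite rule does:

```python
import re

def nginx_rewrite(path: str) -> str:
    """Apply nginx's `rewrite ^/kafka(.*)$ $1 break;` (illustrative helper)."""
    return re.sub(r"^/kafka(.*)$", r"\1", path)

# Cloud-style path from the provider -> path the REST Proxy expects
print(nginx_rewrite("/kafka/v3/clusters"))  # -> /v3/clusters
# Paths without the prefix pass through unchanged
print(nginx_rewrite("/v3/clusters"))        # -> /v3/clusters
```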
provider-confluent.tf
provider "confluent" {
  kafka_id                      = "MkU3OEVBNTcwNTJENDM2Qg"
  kafka_rest_endpoint           = "http://localhost:18082"
  kafka_api_key                 = "fake_local"
  kafka_api_secret              = "fake_local"
  schema_registry_id            = "local"
  schema_registry_rest_endpoint = "http://localhost:8081"
  schema_registry_api_key       = "fake_local"
  schema_registry_api_secret    = "fake_local"
}
podman-compose -f nginx-compose.yml up -d
@ekozynin, thank you for sharing your workaround; it looks very impressive!
If you find some time to write a more detailed write-up about it, I would be excited to add it to our official TF docs "Guides" section. Thank you!
@linouk23 the guide for local development is attached. Please let me know if any changes, formatting, or anything else is required.
Thanks to @ekozynin we can also use our K8s CP clusters with TF :) Regardless of whether local or in production - see the code below.
But please consider including this switchable rest-proxy-call logic in the provider. It doesn't seem like a big deal code-wise, but it would be a huge deal for all CP users!
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /v3/$2
  name: kafka-rest-proxy-ingress
  namespace: confluent
spec:
  ingressClassName: nginx
  rules:
    - host: kafka-rest-proxy.testcluster
      http:
        paths:
          - path: /kafka/v3(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: kafkarestproxy
                port:
                  number: 8082
          - path: /v3(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: kafkarestproxy
                port:
                  number: 8082
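For comparison with the nginx workaround above, here is a small Python illustration (purely illustrative, helper name is made up) of what the rewrite-target: /v3/$2 annotation does for requests matching the /kafka/v3(/|$)(.*) path, while the second /v3(/|$)(.*) rule lets already-correct paths pass through:

```python
import re

def ingress_rewrite(path: str):
    """Mimic rewrite-target /v3/$2 for path /kafka/v3(/|$)(.*) (illustrative)."""
    m = re.match(r"^/kafka/v3(/|$)(.*)", path)
    if m is None:
        return None  # rule does not match; the /v3(/|$)(.*) rule handles it
    return "/v3/" + m.group(2)

print(ingress_rewrite("/kafka/v3/clusters"))  # -> /v3/clusters
print(ingress_rewrite("/other"))              # -> None
```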
That looks very promising @Fobhep!
> But please! consider including this switchable rest-proxy-call logic into the provider. It seems not like a big-deal code-wise but it would be a huge deal! for all CP users!
Do you have an example of TF UX? Like how we'd update:
provider "confluent" {
  kafka_id            = var.kafka_id            # optionally use KAFKA_ID env var
  kafka_rest_endpoint = var.kafka_rest_endpoint # optionally use KAFKA_REST_ENDPOINT env var
  kafka_api_key       = var.kafka_api_key       # optionally use KAFKA_API_KEY env var
  kafka_api_secret    = var.kafka_api_secret    # optionally use KAFKA_API_SECRET env var
}
to accept this switchable rest-proxy-call logic?
mmh - excellent question - how about something like
provider "confluent" {
  kafka_rest_endpoint_embedded = true
  # this would set the API call path to /kafka/v3/*, whereas false would go to /v3/*; true could be the default
}
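A rough sketch (function names are hypothetical, not provider code) of how such a flag could pick the base path when the provider builds request URLs:

```python
def kafka_base_path(embedded: bool) -> str:
    # Hypothetical: mirrors the proposed kafka_rest_endpoint_embedded flag.
    # True keeps the Confluent Cloud style /kafka/v3 prefix; False targets a
    # Confluent Platform REST Proxy, which serves /v3 directly.
    return "/kafka/v3" if embedded else "/v3"

def topics_url(endpoint: str, cluster_id: str, embedded: bool) -> str:
    return f"{endpoint}{kafka_base_path(embedded)}/clusters/{cluster_id}/topics"

print(topics_url("http://localhost:18082", "local", False))
# -> http://localhost:18082/v3/clusters/local/topics
```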
Hello,
I understand this provider supports management of Confluent Cloud Kafka clusters. Is it likely to support Confluent Platform in the future?
Vishal