@ros-calumgilchrist Thanks for the detailed use case. I'm curious: do you need user tags only on the Service, or do you apply those (or other) tags to other resources throughout the cluster (either manually, or via UserTags in the OpenShift install config)?
@ros-calumgilchrist another clarifying question: is it a static set of user tags you'd like uniformly applied to resources created by OpenShift (of which the ELB is one), or do you need distinct tag sets for each ELB individually?
@ironcladlou You are right, we set UserTags in the config, and we also apply tags through the CloudFormation-created parts of the infrastructure.
Tagging all resources created by OpenShift would work perfectly for us, since ideally we would enforce strict tag guidelines on all our resources. The ELB is the first case of something created by OpenShift that has violated our tagging policy.
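For reference, here is a minimal sketch of how we set UserTags in the install config (the tag keys and values below are made-up examples, not our real policy):

```yaml
# install-config.yaml (excerpt) - illustrative values only
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: eu-west-1
    userTags:              # applied to installer-created AWS resources
      owner: platform-team
      cost-center: "1234"
```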
@abhinavdahiya, I'd appreciate any thoughts you have on this subject
So the installer team wanted to extend the infrastructure.config.openshift.io API to include a map of AWS tags based on the UserTags specified in the InstallConfig; see https://github.com/openshift/api/pull/266/commits/c9b4e5b2d7850deea6719b30aaf38159871ed654. There was pushback from @derekwaynecarr because there was no way to fully realize the API: we cannot enforce the tags on all objects created by the AWS cloud provider, so we dropped that from the API; see https://github.com/openshift/api/pull/266/commits/9826df0ec16bbb577ad8eac5cf0bc9daa4f7a6b8
The goal was that infrastructure.config.openshift.io would carry tags that all OpenShift operators would use to tag any cloud resource they create, but as of today only resources created by the installer support custom tags. That said, I would like to move us in the direction where all resources in OpenShift support custom tags.
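To illustrate the idea, a rough sketch of what such a cluster-wide tags map might have looked like (this is the dropped proposal, not an API that exists today; the field name and values are illustrative only):

```yaml
# Hypothetical Infrastructure config carrying cluster-wide resource tags
# (the resourceTags field is illustrative; the proposal was dropped)
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
status:
  platform: AWS
  resourceTags:            # illustrative name for the proposed tags map
    owner: platform-team
    cost-center: "1234"
```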
@abhinavdahiya thanks for the links. @derekwaynecarr, here's a potential use case for you (ref https://github.com/openshift/api/pull/231#discussion_r265128960).
Seems as though we're unlikely to try supporting this at the operator level?
The issue I had with the global API was that it would not propagate to resources created by the Kubernetes core project. I am not averse to individual operators offering the ability to propagate tags for resources they manage, but it should not cascade or imply that the same tags will be applied to resources not managed by their operator.
I also miss the ability to add custom annotations to the ingress service. Any progress here?
I want to add an additional router (for router sharding) to my cluster via an internal LoadBalancer service:
```yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: internal
    namespace: openshift-ingress
  spec:
    domain: int.foo.bar
    endpointPublishingStrategy:
      type: LoadBalancerService
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker: ""
    namespaceSelector:
      matchLabels:
        type: internal
  status: {}
kind: List
```
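For context on the sharding part, namespaces that should be served by this IngressController would carry the matching label; a minimal sketch (the namespace name here is made up):

```yaml
# Example namespace selected by the "internal" IngressController above
# (namespace name is illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: my-internal-app
  labels:
    type: internal
```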
I need to add a custom annotation to my service to obtain a private IP address instead of a public IP address (just like cloud.google.com/load-balancer-type: "Internal" in GCP). It seems this is not possible, as the service is currently not customizable.
> I need to add a custom annotation to my service to obtain a private IP address instead of a public IP address (just like cloud.google.com/load-balancer-type: "Internal" in GCP). It seems this is not possible, as the service is currently not customizable.
You can achieve this by specifying the "Internal" scope:
```yaml
# ...
spec:
  domain: int.foo.bar
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
# ...
```
The operator will add the `cloud.google.com/load-balancer-type=Internal` annotation to the LoadBalancer service.
Or are you saying you need the equivalent annotation for a different cloud platform? The operator currently handles the annotation for internally scoped load-balancers on AWS, Azure, GCP, OpenStack, and IBM Cloud.
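For example, on GCP the resulting operator-managed service would look roughly like this (a sketch; the service name is derived from the IngressController name, e.g. router-internal for the "internal" controller above):

```yaml
# Excerpt of the operator-created LoadBalancer Service (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: router-internal
  namespace: openshift-ingress
  annotations:
    cloud.google.com/load-balancer-type: Internal
spec:
  type: LoadBalancer
```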
Hi @Miciah,
I want an option in the operator to configure custom service annotations. The `cloud.google.com/load-balancer-type` annotation was just an example. I know about the `scope` option, but I'm running OCP on vSphere. I want to use this in combination with an additional IngressController and router sharding.
I see different use cases for custom service annotations, for example assigning a static IP to the load balancer (usually via `spec.loadBalancerIP`, but sometimes with a custom service annotation like `kubernetes.io/ingress.global-static-ip-name`). This also applies to GCP.
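As a stopgap (not an operator feature, and the operator may overwrite or drop such changes when it reconciles or recreates the service), one could annotate the operator-created service directly, e.g. `oc -n openshift-ingress annotate service router-internal example.com/my-annotation=some-value` (the service name follows the IngressController name, and the annotation key and value are made up), leaving the metadata looking roughly like:

```yaml
# Service metadata after manually adding a custom annotation (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: router-internal
  namespace: openshift-ingress
  annotations:
    example.com/my-annotation: some-value
```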
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten /remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
Is there currently a method to add additional annotations to the Service for the Ingress Routers?
I see that the annotations are set up here, but it isn't clear to me whether I can add a tag annotation:
I imagine I could set this after OpenShift cluster creation, but I would be in trouble if the service/LB needed to be recreated.
Am I missing a path to add these annotations, or is this logic that would need to be added to the operator's configuration?
Background
The environment I'm working in has strict AWS tagging requirements and, in the best case, has a service that deletes load balancers that are missing tags. There are some workarounds I can think of to satisfy this if this level of customisation is to be avoided for the ingress operator.
Environment
Using OpenShift 4.1 in UPI mode with CloudFormation templates on AWS.
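If it helps, one of the workarounds I had in mind is a sketch along these lines: I believe the in-tree AWS cloud provider recognizes an additional-resource-tags annotation on the Service, so if the operator allowed custom annotations, the cloud provider would tag the ELB it creates (the tag keys and values below are made up):

```yaml
# Excerpt of the router's LoadBalancer Service with the AWS cloud
# provider's tagging annotation (illustrative tag keys/values)
apiVersion: v1
kind: Service
metadata:
  name: router-default
  namespace: openshift-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "owner=platform-team,cost-center=1234"
spec:
  type: LoadBalancer
```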