nginxinc / kubernetes-ingress

NGINX and NGINX Plus Ingress Controllers for Kubernetes
https://docs.nginx.com/nginx-ingress-controller
Apache License 2.0

Add the possibility for the controller to be in a shared namespace #4558

Closed · panzouh closed 10 months ago

panzouh commented 11 months ago

Is your feature request related to a problem? Please describe. I would like to run NGINX with a sidecar whose purpose is to pull a configuration from an API and push it to NGINX via a shared volume. My problem is that when the configuration changes, I would like to send a SIGHUP signal to the nginx master process so that it gracefully reloads the configuration.
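
For concreteness, here is a minimal sketch of the reload step such a sidecar could perform once the pod's process namespace is shared. This is hypothetical illustration code (the file name and function names are made up), not part of the chart or the controller:

// sidecar-reload.go (hypothetical sketch)
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strconv"
    "strings"
    "syscall"
)

// findNginxMasterPID scans /proc for the nginx master process. This only
// works when the pod has shareProcessNamespace enabled, so the sidecar
// can see processes belonging to the nginx container.
func findNginxMasterPID() (int, error) {
    entries, err := os.ReadDir("/proc")
    if err != nil {
        return 0, err
    }
    for _, entry := range entries {
        pid, convErr := strconv.Atoi(entry.Name())
        if convErr != nil {
            continue // not a numeric PID directory
        }
        cmdline, readErr := os.ReadFile(filepath.Join("/proc", entry.Name(), "cmdline"))
        if readErr != nil {
            continue // the process may have exited in the meantime
        }
        // Arguments in /proc/<pid>/cmdline are NUL-separated; the nginx
        // master rewrites its process title to "nginx: master process ...".
        args := strings.ReplaceAll(string(cmdline), "\x00", " ")
        if strings.HasPrefix(args, "nginx: master process") {
            return pid, nil
        }
    }
    return 0, fmt.Errorf("nginx master process not found in /proc")
}

func main() {
    pid, err := findNginxMasterPID()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // SIGHUP makes nginx re-read its configuration and gracefully replace
    // its worker processes (https://nginx.org/en/docs/control.html).
    // Note the sidecar must run as the same UID as nginx or have CAP_KILL.
    if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}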

Describe the solution you'd like The solution would be simple: add a key like this to the chart's values.yaml:

controller:
  ## The name of the Ingress Controller daemonset or deployment.
  name: controller

  ## The kind of the Ingress Controller installation - deployment or daemonset.
  kind: deployment

  ## Shared process namespace between containers in the Ingress Controller pod.
  sharedProcessNamespace: false

And this in ./templates/deployment.yaml and ./templates/daemonset.yaml:

apiVersion: apps/v1
kind: Deployment # Same on daemonset
metadata:
  name: {{ include "nginx-ingress.controller.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "nginx-ingress.labels" . | nindent 4 }}
{{- if .Values.controller.annotations }}
  annotations: {{ toYaml .Values.controller.annotations | nindent 4 }}
{{- end }}
spec:
  {{- if not .Values.controller.autoscaling.enabled }}
  replicas: {{ .Values.controller.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nginx-ingress.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nginx-ingress.selectorLabels" . | nindent 8 }}
{{- if .Values.nginxServiceMesh.enable }}
        nsm.nginx.com/enable-ingress: "true"
        nsm.nginx.com/enable-egress: "{{ .Values.nginxServiceMesh.enableEgress }}"
        nsm.nginx.com/deployment: {{ include "nginx-ingress.controller.fullname" . }}
{{- end }}
{{- if .Values.controller.pod.extraLabels }}
{{ toYaml .Values.controller.pod.extraLabels | indent 8 }}
{{- end }}
{{- if or .Values.prometheus.create .Values.controller.pod.annotations }}
      annotations:
{{- if .Values.prometheus.create }}
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.prometheus.port }}"
        prometheus.io/scheme: "{{ .Values.prometheus.scheme }}"
{{- end }}
{{- if .Values.controller.pod.annotations }}
{{ toYaml .Values.controller.pod.annotations | indent 8 }}
{{- end }}
{{- end }}
    spec:
      {{- if .Values.controller.sharedProcessNamespace }}
      shareProcessNamespace: true
      {{- end }}
      # [...]
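
For reference, shareProcessNamespace is a standard Kubernetes Pod spec field: when it is true, all containers in the pod run in a single PID namespace, so a sidecar can see the nginx master process and signal it directly. With the template change above, enabling the behavior would simply be a matter of installing the chart with --set controller.sharedProcessNamespace=true.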

Describe alternatives you've considered I considered for a moment adding a CronJob to my cluster that fetches the configuration and runs kubectl exec into NGINX to reload it, but to me that is not the proper way to do it, because it does not take into account whether the configuration has changed or not.

Edit: I also considered supporting multiple ConfigMaps, since the nginxConfigMaps flag is named with a plural but supports only one configuration.

github-actions[bot] commented 11 months ago

Hi @panzouh thanks for reporting!

Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂

Cheers!

vepatel commented 11 months ago

Hey @panzouh, reading the description it seems you're interested in directly communicating with nginx from outside the IC, so is the Ingress Controller the right project for this purpose?

panzouh commented 11 months ago

Hello, my problem does indeed concern communication with the nginx process, but this is currently made impossible by the values in the chart and in the templates. Although nginx could expose a route or a socket to force a reload, I think it already has everything built in to do what I need: https://nginx.org/en/docs/control.html. I also think it is simpler to patch a few lines in a values file than to change nginx's behavior.

brianehlert commented 11 months ago

I have been watching this and keep going back to this comment:

I also considered supporting multiple ConfigMaps, since the nginxConfigMaps flag is named with a plural but supports only one configuration.

In the end I think everyone would be best served by supporting multiple ConfigMaps. The NGINX Ingress Controller has a number of reload optimizations built into the controller process itself, and the system really is not designed to have some external actor also attempting to configure and reload NGINX. The risk of race conditions or source-of-truth issues becomes real when there is more than one actor trying to manage the configuration and reload nginx.

In the end we want to deliver something that stays true to a native K8s integration and keeps the K8s API the source of truth, along with any CI/CD tooling that drives configuration through the K8s API, rather than providing strong conduits to work around the way the system is implemented.

Curious what others think.

panzouh commented 11 months ago

In the end we want to deliver something that stays true to a native K8s integration and keeps the K8s API the source of truth, along with any CI/CD tooling that drives configuration through the K8s API, rather than providing strong conduits to work around the way the system is implemented.

I do agree, it is indeed a better direction considering the K8s integration, although I have not looked at the code enough to evaluate the cost of this feature compared to adding a couple of keys in the values file and templates.

panzouh commented 11 months ago

I looked at the code a little bit; could these snippets do the trick?

// main.go
func processConfigMaps(kubeClient *kubernetes.Clientset, cfgParams *configs.ConfigParams, nginxManager nginx.Manager, templateExecutor *version1.TemplateExecutor) *configs.ConfigParams {
    var aggregatedParams *configs.ConfigParams
    if *nginxConfigMaps != "" {
        nginxConfigMapArr := strings.Split(*nginxConfigMaps, ",")
        for _, nginxConfigMap := range nginxConfigMapArr {
            ns, name, err := k8s.ParseNamespaceName(nginxConfigMap)
            if err != nil {
                glog.Fatalf("Error parsing the nginx-configmaps argument: %v", err)
            }
            cfm, err := kubeClient.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, meta_v1.GetOptions{})
            if err != nil {
                glog.Fatalf("Error when getting %v: %v", *cfm, err)
            }
            cfgParams = configs.ParseConfigMap(cfm, *nginxPlus, *appProtect, *appProtectDos, *enableTLSPassthrough)
            if cfgParams.MainServerSSLDHParamFileContent != nil {
                fileName, err := nginxManager.CreateDHParam(*cfgParams.MainServerSSLDHParamFileContent)
                if err != nil {
                    glog.Fatalf("Configmap %s/%s: Could not update dhparams: %v", ns, name, err)
                } else {
                    cfgParams.MainServerSSLDHParam = fileName
                }
            }
            if cfgParams.MainTemplate != nil {
                err = templateExecutor.UpdateMainTemplate(cfgParams.MainTemplate)
                if err != nil {
                    glog.Fatalf("Error updating NGINX main template: %v", err)
                }
            }
            if cfgParams.IngressTemplate != nil {
                err = templateExecutor.UpdateIngressTemplate(cfgParams.IngressTemplate)
                if err != nil {
                    glog.Fatalf("Error updating ingress template: %v", err)
                }
            }
            // Merge the config params.
            if aggregatedParams == nil {
                aggregatedParams = cfgParams
            } else {
                aggregatedParams = configs.MergeConfigParams(aggregatedParams, cfgParams)
            }
        }
    }
    return aggregatedParams
}
// configs/configmaps.go
func MergeConfigParams(target, source *ConfigParams) *ConfigParams {
    if target == nil {
        // If the target is nil, create a new instance to avoid modifying the input
        target = &ConfigParams{}
    }

    // Merge individual fields
    if source.HTTP2 {
        target.HTTP2 = source.HTTP2
    }

    // [...]
    return target
}
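
One caveat with the boolean merge above: because false is the zero value, a later ConfigMap could set a flag like HTTP2 but never unset it, since false and "not specified" look the same. A hypothetical test sketch (assuming the ConfigParams field and MergeConfigParams as written above) makes that visible:

// configs/configmaps_merge_test.go (hypothetical sketch)
package configs

import "testing"

func TestMergeKeepsEarlierTrue(t *testing.T) {
    first := &ConfigParams{HTTP2: true}
    second := &ConfigParams{HTTP2: false} // also what "not specified" looks like

    merged := MergeConfigParams(first, second)
    if !merged.HTTP2 {
        t.Error("HTTP2 set by an earlier ConfigMap should survive a later one that does not set it")
    }
}

func TestMergeIntoNilTarget(t *testing.T) {
    merged := MergeConfigParams(nil, &ConfigParams{HTTP2: true})
    if !merged.HTTP2 {
        t.Error("merging into a nil target should start from an empty ConfigParams")
    }
}
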
vepatel commented 11 months ago

@panzouh you can push custom config via a ConfigMap as well, see https://github.com/nginxinc/kubernetes-ingress/tree/v3.3.1/examples/shared-examples/custom-templates#example. You can just redeploy the ConfigMap using kubectl and it'll reload nginx. I'm not really sure what the requirement for having multiple ConfigMaps is here.

panzouh commented 11 months ago

I don't think that creating a fully customized configuration for a basic http block is the correct way of dealing with the problem; I would rather push another ConfigMap via the configmap flag.

vepatel commented 11 months ago

Can you clarify a bit more here: is the goal to modify nginx.conf using another configmap?

panzouh commented 11 months ago

Sure. I have a configuration that is dynamically pulled from an API and built into an http block saved in another NGINX configuration file, and in the NGINX default ConfigMap I include the folder where that file lives.
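
(For context: the controller's ConfigMap supports an http-snippets key whose contents are inserted into the http context of the generated nginx.conf, so that is one place such an include directive can live.)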

panzouh commented 11 months ago

👋 @shaun-nx When will my change be released through the Helm repository?

brianehlert commented 11 months ago

What is in main will go out with the next release. The project releases quarterly.

shaun-nx commented 10 months ago

Hi @panzouh let us know if the change worked for you. If so we can close this issue 😄

panzouh commented 10 months ago

Hi, I had to clone the repo to try it out, but yes, adding sharedProcessNamespace solved my issue. I can't wait to see the next release! 😉