open-feature / open-feature-operator

A Kubernetes feature flag operator
https://openfeature.dev

Cloud Native Flag Configuration Strategy #261

Closed · beeme1mr closed this issue 1 year ago

beeme1mr commented 1 year ago

Problem

The OpenFeature Operator (OFO) supports a Custom Resource Definition (CRD) that represents a flag configuration available to all workloads carrying a specific annotation. While this works well in simple scenarios, it proves challenging at scale. Organizations typically have many teams and projects sharing a single Kubernetes environment. It's also common to have multiple environments (e.g. dev, staging, production) that need to share common elements of a flag configuration (flag key, variants) while defining environment-specific states and targeting rules.
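For context, a minimal sketch of the model described above: a single FeatureFlagConfiguration custom resource consumed by every workload that carries the operator's annotations. The apiVersion, field names, and annotation keys below follow the v1alpha1-era operator and are illustrative; they may differ in newer releases.

```yaml
# Illustrative sketch: one shared flag configuration for all annotated workloads.
apiVersion: core.openfeature.dev/v1alpha1
kind: FeatureFlagConfiguration
metadata:
  name: shared-flags
spec:
  featureFlagSpec: |
    {
      "flags": {
        "new-welcome-banner": {
          "state": "ENABLED",
          "variants": { "on": true, "off": false },
          "defaultVariant": "off"
        }
      }
    }
---
# Any workload that opts in references the same configuration via annotations
# on its pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
      annotations:
        openfeature.dev/enabled: "true"
        openfeature.dev/featureflagconfiguration: "shared-flags"
    spec:
      containers:
        - name: my-app
          image: ghcr.io/example/my-app:latest
```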

Questions

- How can a developer create a new flag?
- How can a DevOps engineer control a flag per environment?
- How can configurations be shared?
- How are configurations structured to allow access control to the proper teams?
- How can dev tools be leveraged to simplify configuration (e.g. IntelliSense, autocomplete, linting)?
- How can automation validate configurations before applying the change?

Resources

Kavindu-Dodan commented 1 year ago

Here I am documenting my initial observations and thoughts on this matter.

How can a developer create a new flag?

With a few exceptions, existing vendors require feature flags to be defined first through their feature flag management tool. This tool is usually a UI where the flag can be connected to an application, and it typically provides configuration and monitoring capabilities as well. The exception is the "code first" approach, where some vendors allow a feature flag to be defined through their SDK and connected to the feature flag management system later.

In my view, feature flags defined through a feature flag management system offer more flexibility than the "code first" approach. To enhance the developer experience, an interactive CLI could be introduced that links to the feature flag management system and allows local evaluations without requiring pre-existing flags.

How can a DevOps engineer control a flag per environment?

If the feature flag is evaluation context-aware, developers can use the evaluation context to provide environment-specific data that controls the flag's behavior (a sketch follows below).

From a DevOps engineer's perspective, environment-specific configurations could be fed into the feature flag management system. This could be part of a GitOps workflow, with a pipeline that publishes such configurations.
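To make the evaluation-context approach concrete, here is a sketch of a flagd-style flag definition whose targeting rule keys on an environment attribute. The flag key, variants, and the assumption that the caller (or per-environment configuration) supplies an `environment` attribute in the evaluation context are all illustrative.

```yaml
# Sketch of a flagd-style flag definition (flag key and attribute name assumed).
flags:
  "new-welcome-banner":
    state: ENABLED
    variants:
      "on": true
      "off": false
    defaultVariant: "off"
    targeting:
      # JsonLogic rule: returns a variant key based on the "environment"
      # attribute from the evaluation context.
      if:
        - "==":
            - var: environment
            - production
        - "off"   # keep the banner off in production for now
        - "on"    # enable it everywhere else
```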

How can configurations be shared?

Usually, vendors keep these configurations inside their feature flag management system, where they can be shared through its web UI.

For a generic use case, however, this could be a Git repository where feature flag definitions and configurations live side by side. Whenever configurations or flag definitions change, a triggered pipeline could output the combined feature flag definitions (flag definitions + configurations).
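As a sketch of that generic use case (all paths, file names, and the CI system are assumptions): shared flag definitions live in a base file, each environment keeps an overrides file, and a pipeline merges them whenever either side changes.

```yaml
# .github/workflows/flags.yaml (illustrative)
#
# Assumed repository layout:
#   flags/base/flags.json      - shared flag keys and variants
#   flags/env/production.json  - production-specific state and targeting
#   flags/env/staging.json     - staging-specific state and targeting
name: publish-flag-config
on:
  push:
    branches: [main]
    paths: ["flags/**"]
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Combine base definitions with environment overrides
        run: |
          mkdir -p rendered
          for env in production staging; do
            # jq's '*' operator merges objects recursively; environment values win.
            jq -s '.[0] * .[1]' flags/base/flags.json "flags/env/${env}.json" \
              > "rendered/${env}.json"
          done
      - uses: actions/upload-artifact@v4
        with:
          name: rendered-flag-definitions
          path: rendered/
```

The rendered output could then be applied by the same pipeline or handed to a GitOps controller such as Argo CD or Flux to sync into the matching cluster.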

How are configurations structured to allow access control to the proper teams?

Feature flag management systems have built-in user management capabilities (e.g. RBAC).

For a generic GitOps scenario, write access can be restricted with code owner definitions. If read access also needs to be restricted, the repository can be made private.
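For example, with GitHub's CODEOWNERS mechanism plus a branch protection rule that requires code owner review, changes to each team's flag directory can only be merged with that team's approval (paths and team names below are hypothetical):

```
# .github/CODEOWNERS (illustrative)
/flags/team-a/  @my-org/team-a
/flags/team-b/  @my-org/team-b
/flags/base/    @my-org/platform-team
```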

How can dev tools be leveraged to simplify configuration (e.g. IntelliSense, autocomplete, linting)?

Feature flag SDKs usually come with developer-friendly APIs which are easy to configure.
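For the configuration files themselves, one option is a published JSON schema for the flag format; schema-aware editors then provide autocomplete and inline validation. A sketch, assuming a schema is available at the URL shown (flagd publishes one, though the exact URL may differ):

```yaml
# yaml-language-server: $schema=https://flagd.dev/schema/v0/flags.json
# With a schema-aware editor extension, the fields below are autocompleted
# and validated while typing.
flags:
  "new-welcome-banner":
    state: ENABLED
    variants:
      "on": true
      "off": false
    defaultVariant: "off"
```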

How can automation validate configurations before applying the change?

If feature flags are defined through a feature flag management system, such validations happen automatically within the system.

For a generic GitOps scenario, these validations could run as a build step. For example, flag definitions could be verified against the feature flag schema, or exercised through unit or end-to-end tests.
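A minimal sketch of such a build step, assuming the flag definitions are JSON files under flags/ and a JSON schema for the flag format is checked in (ajv-cli is used purely as an example validator):

```yaml
# .github/workflows/validate-flags.yaml (illustrative)
name: validate-flag-config
on:
  pull_request:
    paths: ["flags/**"]
jobs:
  schema-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate flag definitions against the schema
        run: |
          npm install -g ajv-cli
          ajv validate -s schemas/flags.schema.json -d "flags/**/*.json"
```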