Closed hamzaoza closed 11 months ago
From the histogram below, Data Catalog entries peak around 10.
I find this very interesting. A lot of the teams I've seen love/have gotten in the habit of creating a physical dataset for everything. I've gotten feedback when I've implicitly left something as a `MemoryDataSet`, including that it's complicated to debug (it's not; don't rely on literally everything getting written to disk to debug). What would also be interesting to see is the number of datasets (are some projects bigger? are the nodes more granular? are other projects leveraging `MemoryDataSet` more?).
Is it possible to also see how often people read back datasets (after a pipeline run), especially intermediate datasets? We don't care at all about storage/cost (and apparently performance), so we turn on versioning and write every dataset to disk. How often do we look at them? I'm guessing at the end of a bigger client project you've got 100K-1M+ datasets on disk, and people have only ever looked at <1K. This need to write every dataset to disk manifests itself when you're reusing modular pipelines and want to generate `namespace.catalog_entry_X` for each intermediate catalog entry in the modular pipeline. That's when I started using Jinja templating (see quantumblacklabs/kedro#583 for the syntax I needed), and that's the one place where I still feel templating is most useful. Was it really even necessary to physically manifest each internal catalog entry?

I guess what I'm getting at is, maybe the configuration is fine, and a lot of the teams I've seen are way overdoing their catalogs. 😂
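To make the scale of that duplication concrete, here is a sketch of what "physically manifesting" every intermediate entry amounts to. The generator function, dataset type, paths and the `versioned` flag are illustrative, not taken from the thread:

```python
from typing import Dict, List


def materialise_entries(namespaces: List[str], datasets: List[str]) -> Dict[str, dict]:
    """Build one physical catalog entry per namespace/dataset pair."""
    catalog = {}
    for ns in namespaces:
        for ds in datasets:
            # every intermediate dataset gets its own versioned entry on disk
            catalog[f"{ns}.{ds}"] = {
                "type": "pandas.ParquetDataSet",
                "filepath": f"data/02_intermediate/{ns}/{ds}.parquet",
                "versioned": True,
            }
    return catalog
```

Three namespaces times twenty intermediate datasets already yields sixty catalog entries, which is exactly the kind of growth that pushes teams towards templating.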
Not a fan of pattern matching; will copy my comments to @hamzaoza here: it effectively acts as an `else` statement, which is not really how I think about the default state of my catalog entry.

Sidebar: just give me autoformatting with `prettier` on YAML templated using Jinja, and I'm happy. That's my main gripe with templating with Jinja (that `prettier` no longer works).
Interesting analysis! I personally like the Jinja2 template system. The only problem I saw with it concerns readability. I generally have to run the same pipeline with different inputs and store results in different locations, so data catalogs and config files became an important tool for me to keep track of which data was analysed and where it was stored. This is a problem if the only thing I have is the template, because that information is lost. Furthermore, an explicit catalog like the vanilla one (or the one using YAML anchors) becomes really important for reproducibility, for example when I have to regenerate the same results again. In my case I use the template system, on one side, to make the process of using different inputs and outputs easier; on the other side, I store the generated catalog and the config files together with the data to ensure readability and reproducibility.

It would also be nice if there were a way to programmatically run the pipeline by passing the dictionaries of variables to the `TemplatedConfigLoader` through the CLI, instead of having to manually set them in the hooks or having to override the run command and subclass the `KedroSession` to achieve it.
Thanks for this amazing piece of work @hamzaoza - I'm also quite impressed with how dbt works with Jinja, where they have concise SQL models at rest, but the compiled, fully materialized SQL is available for debugging. Perhaps the same approach could be used to allow people to write concise, complex catalogs - but allow users to materialise them in the format that Kedro sees at runtime?
I've tried to consolidate my thinking and would like to present 4 prototypes for ways we can take this research forward and turn it into features. Please comment, interact and react to the below:
We have multiple examples of people making a trivial change to `ConfigLoader` and `TemplatedConfigLoader` to automatically include certain environment variables in their configuration at runtime. This can be critical in a deployment setting where an orchestrator such as Argo or Airflow may inject certain information via environment variables.
The following order of precedence would apply here:

- Credentials - easy way to sync these with environment variables
- Environments - several internal examples of projects making this tweak to their config loader
Today one can introduce environment variables into their `globals_dict` using this trivial change to the `TemplatedConfigLoader`:
```python
import os
from typing import Iterable

from kedro.config import ConfigLoader, TemplatedConfigLoader
from kedro.framework.hooks import hook_impl


@hook_impl
def register_config_loader(self, conf_paths: Iterable[str]) -> ConfigLoader:
    return TemplatedConfigLoader(
        conf_paths,
        globals_pattern="*globals.yml",
        # os.environ must be iterated with .items() to unpack key/value pairs
        globals_dict={k: v for k, v in os.environ.items() if k.startswith("KEDRO_")},
    )
```
In this proposal we introduce a new keyword argument that allows the user to specify a regular expression that matches environment variables and includes them in scope. Many of our users do similar things; this simply provides a convenience for doing so.
```python
@hook_impl
def register_config_loader(self, conf_paths: Iterable[str]) -> ConfigLoader:
    return TemplatedConfigLoader(
        conf_paths,
        globals_pattern="*globals.yml",
        env_var_key_pattern=['^KEDRO.+'],  # regex pattern
    )
```
A related area that people bring up is the way Kedro handles credential management, particularly since the enterprise world has become more sophisticated in the last couple of years with vendor-led solutions such as HashiCorp Vault and Kerberos.

Environment variables have a part to play, and perhaps the following change could help in the same sort of way:
```python
@hook_impl
def register_config_loader(self, conf_paths: Iterable[str]) -> ConfigLoader:
    return TemplatedConfigLoader(
        conf_paths,
        globals_pattern="*globals.yml",
        env_var_credential_pattern=['^AWS.+ACCESS_KEY.*'],  # regex pattern
        env_var_mapping={
            # optional mechanism to rename certain keys
            'AWS_ACCESS_KEY_ID': 's3_token',
            'AWS_SECRET_ACCESS_KEY': 'secret_mapping',
        },
    )
```
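A minimal sketch of the filtering and renaming logic behind these two proposed keyword arguments. The argument names and mapping come from the proposal above; the helper function itself is hypothetical:

```python
import os
import re
from typing import Dict, Iterable, Optional


def env_vars_in_scope(
    patterns: Iterable[str],
    mapping: Optional[Dict[str, str]] = None,
    environ: Optional[Dict[str, str]] = None,
) -> Dict[str, str]:
    """Select environment variables matching any regex pattern, optionally renaming keys."""
    environ = dict(os.environ) if environ is None else environ
    mapping = mapping or {}
    # keep only variables whose name matches at least one pattern
    selected = {
        k: v for k, v in environ.items() if any(re.match(p, k) for p in patterns)
    }
    # rename keys according to the optional mapping, leaving unmapped keys as-is
    return {mapping.get(k, k): v for k, v in selected.items()}
```

With the mapping from the proposal, `AWS_ACCESS_KEY_ID` would surface in credentials as `s3_token` while unrelated variables are filtered out.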
It is clear from supporting the product that users want to define their pipelines dynamically. This is partly because we as programmers want to follow the DRY ("Don't Repeat Yourself") principle, and because large Kedro projects end up with the user duplicating lots of configuration (configuration environments are one example where this is unavoidable).
Our thinking on this has been heavily influenced by the Google SRE cookbook, which outlines the exact journey we as a Kedro team have gone on (from static configuration with `ConfigLoader`, to templated configuration with `TemplatedConfigLoader`). The solution that we land at will ultimately fall into category 3, 4 or 6. Traditionally, the Kedro team has been resistant to going in this direction because it inevitably makes configuration less readable and more difficult for newbies to understand. Perhaps this proposal, when combined with [Idea 3], adding a compilation step, mitigates this readability point.
Since introducing Jinja2 support to `TemplatedConfigLoader` in `0.17.0`, it has become a common pattern discussed in the open source community: 1, 2, 3, 4, 5, 6, 7.

- `0.17.0` was released and we have a great deal of user evidence to show that it is being used.
- Jinja gives users basic programming structures like loops, conditionals and variables.
- It sits alongside `anyconfig`, which Kedro uses behind the scenes.
- Currently we do not support one key Jinja2 feature which would improve the developer experience: importing and reusing macros via the `include` command.
- Confusion between `globals.yml` and the DSL unique to `TemplatedConfigLoader`.

In general, the main arguments against going down this route are:
Negative feedback from this proposal focused on:
Jsonnet can be thought of as a Jinja-like language that is better suited to a whitespaced, semi-structured language like YAML or JSON.

A core principle in Kedro is that there should be only one obvious way of doing things - this means that all of the proposals above are mutually exclusive.
This is a high priority for Kedro - but it's a big decision choosing which horse to back.
Introduce a `kedro compile` command so that users can materialise what Kedro sees at run time: a human-readable version of what Kedro sees in terms of configuration. The compiled YAML would live in a `gitignore`-d directory structure that fully resolves namespaces, hierarchical overrides, templating and other optimisations made in the name of conciseness at the expense of comprehensibility.
- Pattern matching - masks the true number of datasets; concern about the order of operations; concern about unintended consequences
- Jinja2 - multiple points of failure, which also makes it difficult to debug
- Config environments - the inheritance pattern of local / custom / base can be hard for new users to pick up
```
❯ kedro compile
2021-09-28 15:12 - test.cli - INFO - Compiled _conf_compiled/conf/local/catalog.yml
2021-09-28 15:12 - test.cli - INFO - Compiled _conf_compiled/conf/base/parameters.yml
2021-09-28 15:12 - test.cli - INFO - Compiled _conf_compiled/conf/base/logging.yml
2021-09-28 15:12 - test.cli - INFO - Compiled _conf_compiled/conf/base/catalog.yml
```
- The original `conf/base/catalog.yml` record will not be seen by Kedro since it has a `local` override; this record will not exist in the `_conf_compiled/base/catalog.yml` file.
- The compiled `local/catalog.yml` wins, but there is a comment explaining the lineage to the user.
- YAML anchors reduce the amount of repetition present in the file; however, readability suffers as a result. The fully resolved equivalent can be reviewed in the compiled directory.
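For illustration, a minimal catalog fragment using YAML anchors (dataset names and paths are hypothetical) whose fully expanded form is what a compile step would emit:

```yaml
# concise form: the anchor holds the shared settings
_csv: &csv
  type: pandas.CSVDataSet
  load_args:
    sep: ","

cars:
  <<: *csv
  filepath: data/01_raw/cars.csv

boats:
  <<: *csv
  filepath: data/01_raw/boats.csv
```

The compiled output would repeat the `type` and `load_args` keys inline under each dataset, trading conciseness for readability.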
- The compiled files are `gitignore`-d, so your IDE should help - but I've made this mistake when using dbt before.
- The compilation step works regardless of whether the concise form uses `jinja2`, pattern matching or even something like `jsonnet` [Proposal 2].

The run arguments available via the Kedro CLI have evolved organically to date. Today there are 3 mechanisms for injecting some configuration into vanilla Kedro via the CLI:
| # | Command | Comment |
|---|---------|---------|
| 1 | `kedro run --env=production` | Tweak the configuration order of precedence so that the configuration within the `conf/production` directory takes precedence. This technique is influenced by the thinking laid out in the 12 factor app. |
| 2 | `kedro run --params param_key1:1,param_key2:4` | This is the only way Kedro currently supports explicit and specific CLI configuration overrides, but only for parameters, NOT credentials or catalog entries. To achieve this we have introduced a DSL of sorts, and we know from telemetry and from supporting users day to day that this is a very popular feature. |
| 3 | `kedro run --config config.yml` | This feature is a superset of all runtime configuration as it allows the user to lay out complex CLI commands as a file which can be maintained in a text editor and version controlled. There is an argument that this should be called `--kwargs`, as it would be more specific and less overloaded than 'config'. |
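The `--params` DSL in row 2 can be sketched as a tiny parser. This is an illustration, not Kedro's actual implementation, and the type-coercion behaviour shown is an assumption:

```python
from typing import Any, Dict


def parse_params(dsl: str) -> Dict[str, Any]:
    """Parse a 'key1:value1,key2:value2' string, coercing numeric values."""

    def coerce(raw: str) -> Any:
        for cast in (int, float):
            try:
                return cast(raw)
            except ValueError:
                continue
        return raw  # fall back to the raw string

    params = {}
    for pair in dsl.split(","):
        key, _, value = pair.partition(":")
        params[key.strip()] = coerce(value.strip())
    return params
```

So `parse_params("param_key1:1,param_key2:4")` yields `{"param_key1": 1, "param_key2": 4}`.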
The request to provide CLI overrides comes up relatively frequently (examples 1, 2, 3) as well as in multiple references on internally facing channels.
Much of this stems from a desire to separate the business logic (nodes, pipelines, models) from the inputs and outputs (catalog + credentials, parameters). Kedro provides a separation of these concerns, but they are still situated within the codebase on the user's file system.
This separation becomes a higher priority in production deployments for several key reasons:

- Teams want the packaged code on one side and generated configuration (e.g. via `kedro catalog generate`) on the other side.
- Config environments were designed for `staging` / `qa` / `prod` pipelines. However, over time we have observed users start to use this pattern for deploying slightly different 'flavours' of a similar use-case. This is a great use of the functionality - but it speaks to the fact that as projects grow, the configuration overhead grows, especially when the various hierarchical overrides start coming into play.

The proposals in this post will follow the following order of precedence, with CLI overrides taking the highest priority before dropping down to the other levels.
4.1 Injecting an entire `conf` structure via the CLI: the idea here is that the user could package up a version of catalog, parameters (and credentials if so inclined) so that they have a mechanism for injecting lots of configuration at once, independently of the codebase or packaged pipeline.
```
kedro run override --kind=zip "path/to/catalogs.zip"
```

The `--kind` argument could allow us to point to folder directories or glob paths as well.
In this example we allow the user to inject specific overrides as JSON. YAML isn't appropriate here since whitespacing in the terminal is a pain, so it makes sense to work with JSON equivalents.
The `--kind=json` flag should be self-explanatory, but the explicit `--catalog` and `--params` flags allow the user to be specific about what they are trying to override.
```
kedro run override --kind=json --catalog='{"car":{"type":"pandas.CSVDataSet","filepath":"...'
kedro run override --kind=json --params='{"a":{"x":{"value":1},"y":{"value":2}}}'
kedro run override --kind=json --params="$(cat my_params.json)"
```
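Under the hood, applying a `--params` JSON payload could be a simple recursive merge over the loaded parameters. This is a sketch of the idea, not Kedro's actual implementation, and the helper name is hypothetical:

```python
import json
from typing import Any, Dict


def apply_json_overrides(params: Dict[str, Any], payload: str) -> Dict[str, Any]:
    """Recursively merge a JSON override payload on top of existing parameters."""

    def merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
        merged = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge(merged[key], value)  # descend into nested dicts
            else:
                merged[key] = value  # the override wins at the leaf
        return merged

    return merge(params, json.loads(payload))
```

Keys absent from the payload survive unchanged, so the second `--params` example above would update `a.x` and add `a.y` without disturbing any sibling keys.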
In future iterations this could extend to `--credentials` and even ways to inject `--globals` for use in `TemplatedConfigLoader`. These overrides could be reflected in the `KedroSession` store as well.

> Thanks for this amazing piece of work @hamzaoza - I'm also quite impressed with how dbt works with Jinja, where they have concise SQL models at rest, but the compiled fully materialized SQL is available for debugging. Perhaps the same approach could be used to allow people to write concise, complex catalogs - but allow users to materialise them in the format that Kedro sees at runtime?
Actually @datajoely we have a `kedro pmpx render-conf` command that compiles and renders everything after templating in full explicit form (including folder structures) into a separate location (set to the `log/rendered/` folder IIRC). It was a requested feature, but I think once people know what they are doing, they don't really use it.
> Jinja is not a whitespaced language so that working against a YAML target is dangerous
Not Jinja, but Helm charts use templated configuration on YAML targets and is considered an industry standard: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/, you often see things like:
```yaml
{{- with .Values.ingress.annotations }}
annotations:
  {{- toYaml . | nindent 4 }}
{{- end }}
```
@mzjp2 - Yeah, spotted that. In general the whitespacing issue with Jinja isn't a dealbreaker - it just highlights that you need a series of hacks (or yet another DSL, in this case) to make it work well for YAML.
@datajoely I am not sure I am a big fan of this syntax, it looks quite strange and not very user-friendly... When I was saying that we can specify where to get the config from, I meant something as simple as providing the folder manually. E.g. currently you can run your project in three ways:

- `kedro run ...`
- `<project_name> ...` (if you have packaged and installed your project)
- `Session.create(...)` (if you want to import your package programmatically)

In all of those, there's a hard assumption that your current directory contains the config. What I would like is to make that assumption rather a soft one, i.e. the user should be able to specify where the config is located by pointing to a folder or a .tar.gz file. Something like this:
- `kedro run ...` will assume your config is in `conf/` in your current directory (or the name you have provided in `settings.py`)
- `kedro run --conf=/home/ivan/my_new_conf/` will load the config from an entirely different place
- `kedro run --conf=/home/server/configuration/conf.tar.gz` will load the config from the tar archive

`--conf` is obviously not the final name.
Why is that functionality useful? This can help a lot with deploying configuration and packaging. E.g. when you do `kedro package` we can not only have the `.whl` file there, but also the `conf.tar.gz`, and people can deploy them separately. Moreover, we can very easily get the `conf.tar.gz` packaged alongside the rest of the code (although we do not recommend that), and when people run their code, they can simply point to their `site-packages` folder and the tar archive in there (there was such a need in a recent call where the Kedro user had to deploy via a very strict deployment pipeline over which they had no control).
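Loading configuration from a tar archive is straightforward to sketch with the standard library. This is a minimal illustration of the idea, not Kedro's API, and the function name is hypothetical:

```python
import tarfile
import tempfile
from pathlib import Path


def extract_conf(archive_path: str) -> Path:
    """Unpack a conf.tar.gz archive into a temporary directory and return its path."""
    target = Path(tempfile.mkdtemp(prefix="kedro_conf_"))
    with tarfile.open(archive_path, "r:gz") as tar:
        # the extracted folder could then be handed to the config loader
        tar.extractall(target)
    return target
```

The returned directory could then be passed wherever a `conf/` path is expected, which is essentially what the proposed `--conf=/path/to/conf.tar.gz` flag would do internally.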
> @datajoely I am not sure I am a big fan of this syntax, it looks quite strange and not very user-friendly... When I was saying that we can specify where to get the config from, I meant something as simple as providing the folder manually. E.g. currently you can run your project in three ways: `kedro run ...`, `<project_name> ...` (if you have packaged and installed your project)
@idanov what would you imagine the syntax to look like instead? For reference the proposal 4.1 would make it look like this:
```
[kedro|$package_name] run override --kind=zip "path/to/catalogs.zip"
```
Community PR #927 further suggests that more complex CLI override facilities (Proposal 4) are desired
> In all of those, there's a hard assumption that your current directory contains the config. What I would like is to make that assumption rather a soft one, i.e. the user should be able to specify where the config is located by pointing to a folder or a .tar.gz file. Something like this:
>
> `kedro run ...` will assume your config is in `conf/` in your current directory (or the name you have provided in `settings.py`); `kedro run --conf=/home/ivan/my_new_conf/` will load the config from an entirely different place; `kedro run --conf=/home/server/configuration/conf.tar.gz` will load the config from the tar archive. `--conf` is obviously not the final name.
For the record, my team has overridden the CLI to add this option, and this is exactly how we deploy our applications. The option is called `--conf-root` (which refers to the `CONF_ROOT` global variable where the `ConfigLoader` looks for the configuration; I think it has been renamed `CONF_SOURCE` recently). We also append the `src/<your-package>/conf` folder path to the `conf_paths` list, to be able to have default values in the project itself (as discussed in #770).
Was led here after trying to solve my own problem via the current Jinja2 implementation in https://github.com/kedro-org/kedro/issues/1532.
My thought is: declarative is good, but a fairly sophisticated solution is needed to cover all the use cases. I'm a big fan of Terraform - even if it has some idiosyncrasies. But it does have variables, loops, etc. - it goes well beyond simple pattern matching. Pattern matching can be stretched, but when you stretch it, it also gets complex and hard to read.
(See devspace for a YAML solution that is extremely flexible - variables, profiles with pattern replacement, and use of Helm templates if need be. No loops though. Or Helm itself.)

As a practical matter, be aware that a declarative solution that naturally meets all use cases and is really DRY will be quite sophisticated. I suggest providing a separate Jinja2 loader, with documentation stating "use the other solution if possible". Then incrementally add features to your declarative solution to incorporate reasonable uses of Jinja2 found in the wild. (From my POV using Helm charts is OK as well, as suggested above, but... eh... Jinja2 is easier to read.)
(More parochial concern) I am trying to maintain a mapping to DVC. In regards to this, if/when you implement looping in your declarative solution, assign (force assignment of) real identities to unrolled nodes (not just sequence numbers) to help with lifecycle management. This same issue comes up in Terraform.
IMO generating intermediate files is ok. Perhaps put in hidden .kedro/ directory? Explicit compilation step only for checking & optimization.
Afterthought: You could probably write a terraform extension that allows declaration in terraform syntax, and maintains correspondence with local yaml files. This requires a complex outside dependency, but might be simplest path to a sophisticated declarative way to do configuration.
Configuration has had a complete overhaul with the new `OmegaConfigLoader`. The research gathered in this issue has been greatly utilised for that, but if configuration needs to be developed further, new insights will need to be collected.
For reference, in `0.19` we have `kedro catalog resolve`, which is very close to what was proposed as `kedro compile`. It allows users to see the materialised version of the catalog.
I think we could go further here @noklam but there is some overlap, yes
Summary
Configuration overhead is an issue that has arisen time and time again in user feedback – particularly as Kedro projects scale in complexity. From user interviews, it was observed that the three main configuration touchpoints were `kedro run`, the Data Catalog and parameters. The remaining options were seen as "set up once and forgotten" for the remainder of the project. Overall, configuration in Kedro is well received and liked by users, who appreciate the approach Kedro has taken so far.

During this research, it became clear that configuration scaling impacts a small set of use cases where you have multiple environments (e.g. `dev`, `staging` and `prod`) and multiple use cases – maybe you're using the same or a similar pipeline across different products for different countries. To gather deeper insights, participants were presented with two existing options for the Data Catalog and two possible solutions: pattern matching and Jinja templating (favouring the former of the two). Users were also asked about their feelings on moving the Data Catalog entirely to Python. Participants were universally against the idea, as it would fundamentally go against the principles of Kedro.
1. Introduction
Configuration overhead is an issue that has arisen time and time again in user feedback – particularly as Kedro projects scale in complexity. It's also an issue for new users who have never been exposed to this concept, i.e. data scientists using software engineering principles for the first time. This research aims to understand the key pain points users face when using configuration and to test possible solutions for the Data Catalog, in order to develop specification criteria for any solution.
2. Background
Kedro is influenced by the 12 Factor App, but this results in a lot of duplication of configuration. From users, we have heard that YAML files can become unwieldy, with each entry written manually, making them error-prone. Users also want to apply runtime parameters and to parameterise runs in complex ways which Kedro doesn't currently support.
As a result, some teams have tried to solve this independently – most notably by using Jinja2 templating through the `TemplatedConfigLoader` – though this has not become widespread across other teams. However, as we continue to grow, it is likely that more users will encounter similar issues and will need a Kedro-native solution to support growth.
Finally, this is not a problem unique to Kedro. Google SREs have faced a similar issue in the past and have outlined their thoughts and experiences here.
3. Research Approach
To develop a holistic overview of configuration in Kedro, a journalistic approach was used. Therefore, we were looking to answer the following questions:
Note: There is some overlap in the last two questions.
Research Scope
To help keep things manageable, the primary focus of this research was on the Data Catalog and how users interact with it. Nonetheless, pain points for other forms of configuration in Kedro were also captured and will be discussed later. Therefore, elements like parameters, credentials, etc. were not explicitly user tested. Furthermore, custom solutions created by teams may be referenced but will not be considered in the overall solution as they are not Kedro native features.
4. User Interview Matrix
In total, 19 interviews (lasting 1 hour each) across personas and experience levels were conducted to capture a spectrum of views. The user matrix breakdown is shown below.
Note: External users were sourced from Kedro Discord
5. Configuration Synthesis
`kedro run`

**What technology is currently used to support this configuration?**

• Jinja

**Where in the Kedro project can the user make this configuration?**

• `src/<project-package>/settings.py`
• `pyproject.toml`
• `kedro run --config **.yml`
• `export KEDRO_ENV=xyz`
• `src/<project-name>/hooks.py`
• `conf/**/credentials.yml`
• `conf/base/**.yml`
• `conf/local/**.yml`
• `conf/**/**.yml`
• `export KEDRO_ENV=**`
• `conf/**/parameters.yml`
• `kedro run --params param_key1:value1,param_key2:2.0`
• `kedro run --config **.yml`
• `conf/**/catalog.yml`

**Who is the lead user responsible for this configuration?**

**How does the user feel about this approach?**

**What do users like about this approach?**
• Standard project structure
• Easy to collaborate with others
• Provides great defaults out of the box
• Easy to ramp up a Kedro project
• Can use the `--pipeline` flag to run specific branches of code
• Can git commit a config.yml file to reduce run errors
• Overall, one of the easiest things to work with
• Enables automation and scaling of Kedro
• Easy to collaborate with others
• A properly written hook can save lots of time
• Works as it should and is seamless
• Can handle a variety of credentials out of the box
• Each person can have their own setup to access data
• Enables a structured approach to dev/qa/prod
• Globals.yml can be different for each environment
• Decouples code and config
• Helps teams test and prototype in environments in a risk-free way
• Easy and straightforward to use
• Easy to read and maintain
• Like the “params:” prefix to quickly identify them in code
• Declarative syntax makes it easy to use, read and debug
• Simplification of I/O
• Decouples code and I/O
• Already has many data connectors built in
• Transcoding datasets
**What are the pain points of this configuration?**
• Running into issues with `kedro install` on Windows
• Changes to hooks and pipeline registry between versions
• Arguments in the terminal are not version controlled
• `--nodes` is `node_names` in the YAML file
• You need some knowledge to setup - not easy for beginners
• Can reduce transparency of code.
• Users might have the idea - but they don't always find it easy to implement
• Jinja was not well received by clients
• For beginners, can be a little hard to grasp why credentials are separated from the Data Catalog or code
• Feels misaligned with CI/CD tooling
• The inheritance pattern of local / custom / base can be hard for new users to pick up
• Parameters not inheriting base keys and you need to overwrite the entire entry
• Repetition and duplication of files
• Can grow into large files, leading to a very nested dictionary
• Cannot have ranges or step increments
• Little IDE support means you need to follow the logic yourself
• Duplication of files
• Minor changes to entries need to be applied everywhere - can be difficult to sync
• Not easy to write a custom class for unsupported datasets
• For some teams, YAML anchors are beyond their skillset
• Very long catalog files
**What new features are users requesting to support their work?**
• More documentation for migrations with breaking changes
• Hooks for when a model starts and ends
• Have nested dependencies in globals.yml
• Enable flexible inheritance across environments
• Provision to separate use cases and environments
• Implement namespaces to parameters
• More dynamic entries i.e., ranges
• Address the repetition and duplication of catalogs
• More guidance on picking the best datatype for an entry
• Support more upcoming datasets i.e., TensorFlow
Overall, configuration in Kedro is well received and liked by users. No column had a particularly negative response, and users largely understood and appreciated the approach Kedro has taken so far. During this exercise, it became clear that configuration scaling impacts a small set of use cases, summarised in the table below.

This would indicate that large configuration files are mostly seen internally, often on large analytics projects. This stems from Kedro not supporting multiple use cases in a monorepo, therefore forcing the user to use config environments as a stop-gap solution. This, however, then prevents teams from using environments for their intended purpose of separating development environments.
6. GitHub Analysis
To support the qualitative insights from user research, a custom GitHub query was created to gather quantitative data on the Data Catalog.
At the time of running (18 Aug 2021), this presented 411 results, of which 138 were real Kedro Data Catalog files. Note: empty Data Catalogs, spaceflights or iris examples, and non-Kedro projects were manually filtered out. This query assumes that these files are representative of open-source users and that Data Catalogs follow the `/conf/` folder structure. Furthermore, it's impossible to determine whether these are complete files of finished projects or still under development.

From this, it was found that only 9% of users were using YAML anchors and only 2% were using `globals.yml`. However, 89% of users were using some type of namespacing in their catalog entries. Furthermore, the number of Data Catalog entries per file was counted. From the histogram below, Data Catalog entries peak around 10.

7. Data Catalog Generator
To better understand what users need from the Data Catalog, users were presented with possible options using prototype code. Participants were presented with two existing options for the Data Catalog, and two possible solutions: pattern matching and Jinja templating (with users favouring the former of the two). Users were also asked about their feelings about moving the Data Catalog entirely to Python. Here, participants were universally against the idea of moving the Data Catalog into Python as it would fundamentally go against the principles of Kedro.
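For illustration, a sketch of what a pattern-matched entry might look like next to the vanilla entries it would replace. The `{name}` placeholder syntax shown here is hypothetical (a comparable syntax later shipped in Kedro as dataset factories), and the dataset names are invented:

```yaml
# vanilla: one entry per dataset
cars:
  type: pandas.CSVDataSet
  filepath: data/01_raw/cars.csv

boats:
  type: pandas.CSVDataSet
  filepath: data/01_raw/boats.csv

# pattern-matched: one entry covers both
"{name}":
  type: pandas.CSVDataSet
  filepath: data/01_raw/{name}.csv
```

The trade-offs participants raised below (masking the true number of datasets, unintended matches, raw datasets with differing schemas) all follow directly from collapsing many explicit entries into one pattern.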
• Declarative syntax makes it easy to use, read and debug
• Simplification of I/O
• Decouples code and I/O
• Already has many data connectors built in
• Transcoding datasets
• Still easy to read and debug
• Built in YAML feature, so used in other tools that use YAML
• Still somewhat declarative
• Drastically reduces the number of lines
• Viewed as beginner-friendly
• Takes away the additional step of having to declare new files in the Data Catalog
• Somewhat established in the Python world, so users may have already used it elsewhere
• Reduces the number of lines, but not as much as pattern matching
• Greater control between memory and file datasets
• Access to StackOverflow to help debug issues
• Duplication of files
• Minor changes to entries need to be applied everywhere - can be difficult to sync
• Not easy to write a custom class for unsupported datasets
• For some teams, YAML anchors are beyond their skillset
• Very long Data Catalog files
• Users were using it without knowing they were using it
• The notation can take a while to learn and fully understand
• Sub-keys are declared elsewhere which impacts readability
• Concern about the order of operations
• Doesn't work for raw datasets
• Breaks when the files have different schema definitions in the Data Catalog entries
• Concern about unintended consequences
• Doesn't solve the file duplication problem
• Same naming structure doesn't mean files have the same structure
• Doesn't work for raw datasets
• User experience suggests beginners struggle to use and understand it - some teams have even removed it completely from their work
• Can overcomplicate the Data Catalog with logic
• Breaks when the files have different schema definitions in the Data Catalog entries
• Doesn't solve the file duplication problem
• Bigger learning curve compared to previous options
• Whitespace control can be difficult to manage
• Mixes code and I/O which goes against Kedro principles
• Considered very unfriendly - especially for non-tech users
• Huge concerns about giving too much freedom to users who might abuse this flexibility
8. Solution Criteria
While it was important to test the ideas, it was even more important to understand the criteria of a successful solution that would improve the experience of using the Data Catalog. Therefore, users identified the following 7 components: