Open astrojuanlu opened 1 month ago
Good question! The schema was originally a local setting where you could point to a file (and thus edit the file to include new datasets). This is no longer possible. Ideally I would like it to version against kedro-datasets, but it's not possible to ship different versions of the schema with the extension.
I would love to access the API of https://github.com/redhat-developer/vscode-yaml so I could potentially support these in a flexible way, but unfortunately it's not well documented, so I didn't pursue it.
The current solution relies on RedHat's YAML extension. There are two other options:
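For background, schema association with the RedHat YAML extension is typically configured through its `yaml.schemas` setting, which maps a schema file or URL to a file glob. A hypothetical sketch (the schema path and glob below are illustrative, not the extension's actual values):

```json
{
  "yaml.schemas": {
    "./schemas/kedro-catalog.json": "conf/**/catalog*.yml"
  }
}
```

This is the mechanism the user-editable local setting relied on; bundling the schema with the extension is what removed that flexibility.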
Having this problem too when using custom datasets from another package.
The problem also applies to YAML anchors/aliases:
```yaml
_spark_parquet: &spark_parquet
  type: spark.SparkDataset
  save_args:
    mode: overwrite
```
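To illustrate why anchor entries shouldn't be validated as datasets: aliases and merge keys are resolved by the YAML loader itself, so by the time Kedro sees the catalog, the underscore-prefixed entry has already been merged into the concrete dataset. A minimal sketch using PyYAML (the `my_table` entry is illustrative):

```python
import yaml

catalog_src = """
_spark_parquet: &spark_parquet
  type: spark.SparkDataset
  save_args:
    mode: overwrite

my_table:
  <<: *spark_parquet
  filepath: data/my_table.parquet
"""

catalog = yaml.safe_load(catalog_src)
# The alias is resolved at load time; my_table inherits the anchored keys.
print(catalog["my_table"]["type"])  # spark.SparkDataset
```

The `_spark_parquet` key itself remains in the parsed mapping, which is why the schema needs an explicit carve-out for underscore-prefixed entries.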
Thanks @michal-mmm, I can reproduce.
Entries starting with an underscore aren't treated as datasets in general (I couldn't find this documented, @noklam?)
Spotted this with https://github.com/Galileo-Galilei/kedro-mlflow: `kedro_mlflow.io.models.MlflowModelTrackingDataset`
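For concreteness, a catalog entry along these lines gets flagged because the type isn't in the bundled schema's dataset list (the entry name and `flavor` value are illustrative, not taken from the report above):

```yaml
my_model:
  type: kedro_mlflow.io.models.MlflowModelTrackingDataset
  flavor: mlflow.sklearn
```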
Not sure if this should be fixed on the extension or on the schema.