Currently the server `type` can only be one of the hard-coded values defined in the schema:
```yaml
servers:
  production:
    type: MyCustomServerType
    description: "Access point of the data x, y, z .. "
```
YAML [schema](https://datacontract.com/datacontract.schema.json) validation fails with:

```
Line 16: Value is not accepted. Valid values: "bigquery", "BigQuery", "s3", "sftp", "redshift", "azure", "sqlserver", "snowflake", "databricks", "dataframe", "glue", "postgres", "oracle", "kafka", "pubsub", "kinesis", "trino", "local".
```
It is understandable that the predefined values make it easier to develop tooling around the specification, such as datacontract-cli, but the datacontract-specification should be decoupled from this.

The contract should allow any custom type to be specified (Kusto, SQLite, MariaDB, ..): there will always be more server types in existence than any single tool can support, but we should have a common language for describing what data we offer, and from where.
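For illustration, one way the schema could be relaxed (a sketch only; the actual property definition in the schema file may be structured differently): an `anyOf` that keeps the known values for documentation and editor completions, while a plain `string` branch admits any custom type.

```json
{
  "type": {
    "description": "The type of the server: a well-known value or any custom type.",
    "anyOf": [
      {
        "type": "string",
        "enum": ["bigquery", "BigQuery", "s3", "sftp", "redshift", "azure", "sqlserver", "snowflake", "databricks", "dataframe", "glue", "postgres", "oracle", "kafka", "pubsub", "kinesis", "trino", "local"]
      },
      {
        "type": "string"
      }
    ]
  }
}
```

With this shape, validation accepts any string; the enum branch survives purely as a hint for editors and documentation. Tools like datacontract-cli could still special-case the values they know and treat everything else as opaque.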