
Make bundle JSON schema modular with `$defs` #1700

Closed shreyas-goenka closed 2 weeks ago

shreyas-goenka commented 1 month ago

Changes

This PR makes sweeping changes to the way we generate and test the bundle JSON schema. The main benefits are:

  1. More modular JSON schema. Every definition in the schema is now one level deep and points to `$ref`s instead of inlining the entire schema for a field. This unblocks PyDABs from taking a dependency on the JSON schema.

  2. Generate the JSON schema during CLI code generation, and stream it directly instead of computing it at runtime whenever a user calls `databricks bundle schema`. This is nice because we no longer need to embed a partial OpenAPI spec in the CLI. Down the line, we can add a `Schema()` method to every struct in the Databricks Go SDK and remove the dependency on the OpenAPI spec altogether. This will become more important once we decouple Go SDK structs and methods from the underlying APIs.

  3. Add enum values for Go SDK fields in the JSON schema, giving better autocompletion and validation for these fields. As a follow-up, we can add enum values for non-Go-SDK enums as well (an internal ticket has been created to track this).

  4. Use "packageName.structName" as the key to read JSON schemas from the OpenAPI spec for Go SDK structs. Previously, we used an unrolled representation of the JSON schema (stored in bundle_descriptions.json), which was complex to parse and include in the final JSON schema output. This also means that loading values from the OpenAPI spec for the target schema works automatically and no longer needs custom code.

  5. Support recursive types (e.g. `for_each_task`). Now that we use `$ref`s everywhere, supporting them is trivial.

  6. Fix a validation bug: complex variables were considered invalid by the schema generated before this PR. Adding more custom rules will also be easier in the future thanks to the single-level nature of the JSON schema.
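To make the new shape concrete, here is a hypothetical fragment of a modular schema in the style described above (the definition keys follow the "packageName.structName" convention; the field names and enum values are illustrative, not copied from the real schema). Each property is one level deep and points into `$defs` via `$ref`, which is also what makes recursive types like `for_each_task` representable, since the cycle is never inlined:

```json
{
  "$defs": {
    "jobs.Task": {
      "type": "object",
      "properties": {
        "run_if": { "$ref": "#/$defs/jobs.RunIf" },
        "for_each_task": { "$ref": "#/$defs/jobs.ForEachTask" }
      }
    },
    "jobs.ForEachTask": {
      "type": "object",
      "properties": {
        "task": { "$ref": "#/$defs/jobs.Task" }
      }
    },
    "jobs.RunIf": {
      "type": "string",
      "enum": ["ALL_SUCCESS", "AT_LEAST_ONE_SUCCESS", "NONE_FAILED"]
    }
  }
}
```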

Since this is a complete change of approach in how we generate the JSON schema, there are a few (very minor) regressions worth calling out.

  1. We'll lose a few custom descriptions for non-Go-SDK structs that were part of bundle_descriptions.json. Support for those can be added in a follow-up.
  2. Since the final JSON schema is now a static artefact, we lose some lead time on the signal that JSON schema integration tests are failing. This is acceptable because the existing unit tests already provide a lot of coverage.
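As a sketch of why the one-level-deep layout matters for consumers like PyDABs: with every definition behind a local `$ref`, reading the schema reduces to a plain JSON-pointer lookup, even across recursive cycles. A minimal Python illustration (the schema fragment and key names are invented for this example, not taken from the real bundle schema):

```python
# Invented schema fragment in the modular style: every nested type is a
# one-level-deep "$ref" into "$defs", so recursion is just a pointer cycle.
schema = {
    "$defs": {
        "jobs.Task": {
            "type": "object",
            "properties": {
                "for_each_task": {"$ref": "#/$defs/jobs.ForEachTask"},
            },
        },
        "jobs.ForEachTask": {
            "type": "object",
            "properties": {"task": {"$ref": "#/$defs/jobs.Task"}},
        },
    }
}


def resolve(ref: str, root: dict) -> dict:
    """Follow a local JSON pointer like '#/$defs/jobs.Task' into the schema."""
    node = root
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node


task = resolve("#/$defs/jobs.Task", schema)
inner = resolve(task["properties"]["for_each_task"]["$ref"], schema)
# The recursive cycle is represented without inlining anything:
print(inner["properties"]["task"]["$ref"])  # prints "#/$defs/jobs.Task"
```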

Tests

Unit tests. End to end tests are being added in this PR: https://github.com/databricks/cli/pull/1726

The previous unit tests were all deleted because they were bloated. The new unit tests were written to provide (almost) equivalent coverage.