frictionlessdata / datapackage

Data Package is a standard consisting of a set of simple yet extensible specifications to describe datasets, data files and tabular data. It is a data definition language (DDL) and data API that facilitates findability, accessibility, interoperability, and reusability (FAIR) of data.
https://datapackage.org
The Unlicense

Yaml as well as JSON for Data Package descriptor files #292

Open rufuspollock opened 7 years ago

rufuspollock commented 7 years ago

Idea: allow data package descriptor files to be in yaml as well as json.

Why: YAML is easier for ordinary people to create and read. JSON is easy to get wrong.

Why not: adds complexity for all implementors of tools, as they need to support an additional format.

I'm creating this for discussion. Very tentative idea atm.
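For illustration, here is a hypothetical minimal descriptor in both formats (the package name and resource path are made up; field names follow the Data Package spec):

```yaml
name: example-package
resources:
  - name: data
    path: data.csv
```

```json
{
  "name": "example-package",
  "resources": [
    {"name": "data", "path": "data.csv"}
  ]
}
```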


pwalsh commented 7 years ago

I'm neutral on this.

We programmers like to think YAML is easier for ordinary folk, but in my experience the importance of whitespace in YAML is actually a killer for ordinary folk - it is just a different type of problem to that of JSON.

However, I personally think YAML is a fine format, and supporting it as a first class citizen seems a reasonable choice, except in the browser where adding additional dependencies can actually matter (after we add YAML, we add TOML, etc., whatever the favoured serialisation format of the day, and suddenly we have bloat).

danfowler commented 7 years ago

Agreeing here that both YAML and JSON are easy to get wrong. Probably worth keeping an eye on the popularity of CSVY as a method of applying JSON Table Schema as YAML frontmatter to a CSV file.

rufuspollock commented 7 years ago

WONTFIX.

OK. If yaml is not really easier I think I'm going to close this as WONTFIX for now. It imposes additional costs on implementors without making it much easier for publishers.

Definitely open to reconsidering if this is raised in the future.

jgmill commented 7 years ago

I use YAML internally when writing the metadata for my data package because it makes for more readable code. For the output I then convert it to JSON as this is the standard you guys specified.

I think it makes sense to stick with one format for the standard. I'm neutral on whether that should be YAML or JSON as I'm not familiar with the specific pros and cons.

danfowler commented 7 years ago

@muehlenpfordt to support your point, @akariv made datapackage-pipelines, which allows a pipeline creator to describe the Table Schema in YAML.

danfowler commented 7 years ago

Another point to be made in this closed issue is that datapackage.json is also expressible in this Metatab format.

rufuspollock commented 6 years ago

I'm reopening this as I think YAML support would be really nice and simple. YAML is now really familiar (e.g. from Jekyll etc.) and, in my experience, is a lot easier to write than JSON.

rufuspollock commented 5 years ago

@pwalsh @akariv what do you think about going for this as an option going forward?

micimize commented 5 years ago

I don't think a valid data package should ever lack a datapackage.json, but maybe we could make _datapackage.json a location convention for generated package descriptors, and have tooling for generating them.

This way developers can use whatever source-of-truth (yaml, graphql schema, classes), and use/write generators for their specific use-case. We can always include these sources in packages for documentation purposes.

rufuspollock commented 4 years ago

YAML definitely looks like it is becoming a default for writing human-writable but machine-parsable config, e.g. look at CI tooling. I think it is time we supported YAML, perhaps even as the default.

peterdesmet commented 2 years ago

My 2 cents: as a developer I like YAML very much as a human-readable config format, but I agree with @pwalsh's comment in https://github.com/frictionlessdata/specs/issues/292#issuecomment-246169480: the importance of whitespace is not intuitive. Other than familiarity with the format, I'm not sure it offers that much benefit to publishers, while placing quite a burden on implementors, with potentially more requests to come:

  1. Being able to mix JSON (e.g. datapackage.json) with YAML (e.g. schema.yaml)
  2. Supporting TOML, XML, ...
  3. Being able to write extensions to specs (which are expressed as JSON schemas) in YAML

Since Data Packages are a container format for publishing and archiving data, I think it is good to keep a long term perspective in mind and be restrictive/conservative when it comes to specs and only support JSON.

ezwelty commented 5 months ago

I would have argued YAML is easier for a non-programmer to read, but in any case I've now made it a habit of also including a markdown and/or pdf rendering. I do find it easier to maintain in two specific cases: complex pattern constraints and long package/resource/field descriptions that span more than one line.

```yaml
- pattern: https?:\/\/.+
- description: |-
    Drilling method:

    - mechanical
    - thermal
```

```json
{
  "pattern": "https?:\\/\\/.+",
  "description": "Drilling method:\n\n- mechanical\n- thermal"
}
```
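The doubled backslashes in the JSON version are just string escaping. A quick check with Python's stdlib `json` module (a sketch, not part of the thread) confirms the two spellings encode the same strings:

```python
import json

# The same descriptor fragment as a Python dict
# (the raw string avoids Python-level backslash escaping)
meta = {
    "pattern": r"https?:\/\/.+",
    "description": "Drilling method:\n\n- mechanical\n- thermal",
}

encoded = json.dumps(meta, indent=2)
print(encoded)  # backslashes appear doubled in the JSON text

# Round-tripping recovers the original strings exactly
assert json.loads(encoded) == meta
```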
rufuspollock commented 2 months ago

If I am understanding correctly, this was closed as WONTFIX in https://github.com/frictionlessdata/datapackage-v2-draft/pull/50

I'd just add my 2c that, from my experience of the last half dozen years, using YAML is very attractive. I understand the issue of placing a burden on implementors; perhaps the burden could be lowered if one forbade e.g. "mixing and matching".

So flagging this for consideration in v2.1 or similar - this wouldn't be a breaking change and it could be an opt-in for tools gradually? (or, perhaps we have some method for optional extensions that can be tried out for a time and we see how it goes).

roll commented 2 months ago

@rufuspollock In my opinion, one of the most important outcomes of the v2 work was establishing a process whereby any change can be proposed by anyone and then voted on by the Working Group. So yes, the current decision was to keep JSON-only, but if there is still demand it totally needs to be re-opened following this process :+1:

roll commented 2 months ago

BTW please take a look at the current wording regarding YAML - https://datapackage.org/standard/glossary/#descriptor

vkhodygo commented 1 month ago

A colleague of mine told me about your project recently, and it seems to be a life-saver. However, I'm surprised that YAML is not the default option considering how widespread it is nowadays.

I'd like to add a few comments regarding what's been said here already:

One of the clear advantages of YAML is that it allows the use of tags/labels as well as anchors/references, which is a nice feature for reducing data duplication and enforcing data type compliance.
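As a hypothetical sketch of that feature (the field and resource names here are made up for illustration): an anchor (`&`) defines a reusable node and an alias (`*`) reuses it, so a shared field definition only has to be written once:

```yaml
definitions:
  id_field: &id_field
    name: id
    type: integer
    constraints:
      required: true

resources:
  - name: patients
    schema:
      fields:
        - *id_field
  - name: visits
    schema:
      fields:
        - *id_field
```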

Note that this reply is opinionated mostly because I had to manually create metadata files for large datasets basically implementing a trimmed-down version of datapackage, and YAML was my first choice for that.

khusmann commented 1 month ago

@vkhodygo I agree YAML can be nice for authoring datapackages, but as you say:

Mixing JSON with YAML is a bit ambiguous, sticking with one format that's both human- and machine-readable is the best option.

So the current guidance in data package v2 is:

A descriptor MAY be serialized using alternative formats like YAML or TOML as an internal part of some project or system if supported by corresponding implementations. A descriptor SHOULD NOT be externally published in any other format rather than JSON.

I think this gives us the best of both worlds -- you can enjoy all the benefits of YAML you mention as you build the data package internally, but then when you publish, you simply render to JSON. This way publishers and data consumers can enjoy all the benefits of JSON's simple parse-ability & unambiguous standard, and gives us one standard format for easy exchange.
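That render step can be a few lines of Python: a sketch assuming the third-party PyYAML package is installed (the descriptor content is a made-up minimal example):

```python
import json

import yaml  # third-party PyYAML package, assumed available

# Hypothetical internal source of truth, authored by hand in YAML
yaml_text = """
name: example-package
resources:
  - name: data
    path: data.csv
"""

descriptor = yaml.safe_load(yaml_text)

# Publish step: render the canonical JSON descriptor
json_text = json.dumps(descriptor, indent=2, ensure_ascii=False)
print(json_text)
```

In practice the YAML would be read from a file and the result written to `datapackage.json` as part of a build or release script.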

ezwelty commented 4 weeks ago

I think this gives us the best of both worlds -- you can enjoy all the benefits of YAML you mention as you build the data package internally, but then when you publish, you simply render to JSON

I'm wondering whether this is really the best of both worlds, as I'm faced with exactly this step. First, it requires a custom build step. For a data package maintained on GitHub with a YAML descriptor, this rules out publishing directly to Zenodo via the standard GitHub-Zenodo integration. Second, I would argue that it increases the need for an additional file containing a more human-friendly rendering of the JSON.

Here is a side-by-side comparison of YAML and JSON for a more complex data package. I'd argue the YAML can stand as a basic text-based readme, but the JSON looks more like machine code? Attachments: datapackage.json.pdf, datapackage.yaml.pdf