jamesls closed this issue 1 year ago
While refactoring the spec repo I changed the format of the JEP: https://github.com/jmespath-community/jmespath.spec/blob/efbdbd9ee3292285b65f1f8b38fbcf56760087fa/TEMPLATE.md?plain=1#L8. The semver level of the change is declared in the JEP. When the JEP is accepted, the version is incremented at that level conceptually, but no actual version tag is created until there are enough changes to justify one. Once the repo has hit some sort of milestone, it is tagged with the current semver version. The version tags are not necessarily contiguous. The tags give implementers reasonable milestone targets when choosing a point to update their implementations, without preventing implementers from going ahead and keeping up with the bleeding edge of the spec.
I also significantly reworked the way tests are tracked in the spec repo. The tests were changed to YAML format to make them more readable and to leverage YAML features to de-duplicate data. The concept of a 'suite' in the original testing structure was done away with, as it was mostly just a means of de-duplicating data. The function tests were split out into separate YAML documents, one for each function. Along with the specific tests, these documents describe each function's signature in a structured format that lends itself to decoupling the presentation on the website and to meta-programming. Ultimately, the idea was that if an implementation wanted to be included in the list, it would need to provide a CLI that the compliance tests could be run against.
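As a rough sketch of what one of those per-function documents might look like (the field names here are illustrative, not an actual schema from the repo):

```yaml
# Hypothetical per-function test document; field names are illustrative.
function: abs
signature:
  args:
    - types: [number]
  returns: number
tests:
  - given: {foo: -1}
    expression: abs(foo)
    result: 1
  - given: {foo: 2}
    expression: abs(foo)
    result: 2
```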
I combined the tests repo, the JEPs, and the spec into one repo so that changes are tracked in the same commit history. The expectation was that a JEP would be accompanied by all of the required tests, grammar changes, and documentation in the same PR.
In our effort with @innovate-invent we settled on using version number `2015-09-05-9e8d0e3` for the currently widely known version of JMESPath. It tracks the date and SHA-1 commit identifier of the last commit that updated the spec in this repository.
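That style of version string can be derived directly with git. Here is a self-contained sketch run against a throwaway repo (in practice you would run it in the spec repo, restricted to the spec paths, which are not shown here):

```python
import subprocess
import tempfile

# Sketch: derive a date-plus-SHA version string in the 2015-09-05-9e8d0e3
# style from the last commit. A throwaway repo is created so the example
# is self-contained.
repo = tempfile.mkdtemp()

def git(*args):
    return subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

git("init", "-q")
git("-c", "user.name=t", "-c", "user.email=t@example.com",
    "commit", "-q", "--allow-empty", "-m", "spec update")

# %ad honors --date, %h is the abbreviated commit SHA
version = git("log", "-1", "--date=format:%Y-%m-%d", "--format=%ad-%h")
print(version)
```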
Our plan was to go with semantic versioning from there onwards.
> The semver level change is declared in the JEP, then when it is accepted the version is incremented at that level conceptually but there is no actual version tag until there are enough changes to justify it. Once the repo has hit some sort of milestone, it is tagged with the current semver version. The version tags are not necessarily contiguous. The tags are to provide implementers reasonable milestone targets when choosing a point to update their implementations, without preventing implementers from going ahead and keeping up with the bleeding edge of the spec.
I like the idea, couple of questions:
> I also significantly reworked the way tests are tracked in the spec repo.
It's an interesting idea that I'd like to look over more. One concern I have is that many of the existing implementations have test runners for the existing test suite format, and I'm hesitant to require them to rewrite their runners unless there's significant benefit in doing so (not that there isn't; I haven't had a chance to look them over yet).
> Our plan was to go with semantic versioning from there onwards.

To clarify, are you saying that the `2015-09-05-9e8d0e3` style version is intended to be temporary until the switch to semver?
> > Our plan was to go with semantic versioning from there onwards.
>
> To clarify, are you saying that the `2015-09-05-9e8d0e3` style version is intended to be temporary until the switch to semver?

When we debated this, I was leaning towards using this style going forward, but we landed on the consensus that semantic versioning was better in the long run. Since no version 1.0 was ever released, I also wanted a clear track from where we diverged from the official repository. Hence, we would have kept the date-style versioning scheme only temporarily, until version 1.0, which would have been our first release.
- What would determine a milestone for tagging the current semver version? The breadth of changes? Amount of time?
I hadn't gotten that far. I was thinking of just kinda winging it and tagging a version any time an "important" JEP was merged. I expect that JEPs would somewhat (but not necessarily) accumulate while being drafted and would be part of some sort of milestone.
- In the time between a JEP being accepted/merged and the time the semver is bumped, would the JEP be considered part of the spec (i.e., are library authors free to implement the JEP)?
The semver is incremented by a JEP being merged. It is occurring to me, though, that most JEPs would be a MINOR change, so I am not too sure how to actually work this. I think the MAJOR version would be incremented once per batch of breaking changes rather than per breaking JEP. Perhaps the same should be done for the MINOR version as well.
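The conceptual bookkeeping can be sketched as a small helper; this is a hypothetical sketch, assuming versions are tracked as `(major, minor, patch)` tuples and each JEP declares its semver level:

```python
# Sketch of the process described above: each merged JEP declares a
# semver level and the conceptual version is bumped at that level, even
# though no git tag is necessarily created yet. Names are illustrative,
# not part of any actual tooling.
def bump(version, level):
    major, minor, patch = version
    if level == "major":
        return (major + 1, 0, 0)
    if level == "minor":
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)

# Three JEPs merged since the last tag: the conceptual version advances
# each time, but a tag might only be pushed at the milestone, 1.3.0.
v = (1, 0, 0)
for jep_level in ["minor", "minor", "minor"]:
    v = bump(v, jep_level)
print(v)  # (1, 3, 0)
```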
> One concern I have is that many of the existing implementations have test runners for the existing test suite format
We wanted to push for implementations to provide a CLI rather than maintain their own test pipelines. That said, I can very easily provide a script that converts the new tests back to the original format. Implementers can clone the repo and run this script to generate the JSON on demand. I don't expect it would be too difficult to swap from a JSON parser to a YAML parser in their test suites, though.
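As a sketch of what such a conversion script could look like: this assumes a hypothetical per-function document shape (PyYAML would parse the YAML into plain dicts like the one below) and emits suite-style objects with `given` and `cases` keys, as in the original JSON test format:

```python
import json

# Hypothetical per-function YAML document, shown here already parsed into
# Python data; the field names are illustrative, not the actual schema.
doc = {
    "function": "abs",
    "tests": [
        {"given": {"foo": -1}, "expression": "abs(foo)", "result": 1},
        {"given": {"foo": 2},  "expression": "abs(foo)", "result": 2},
    ],
}

def to_legacy_json(doc):
    """Restructure a per-function document into suite-style JSON."""
    suites = [
        {"given": t["given"],
         "cases": [{"expression": t["expression"], "result": t["result"]}]}
        for t in doc["tests"]
    ]
    return json.dumps(suites, indent=2)

print(to_legacy_json(doc))
```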
I'm sure this will evolve over time so I wanted to start with as lightweight a process as possible.
I will also backfill the existing JEPs to this repo in a separate PR.
Unresolved Issues
There are still a few things we need to figure out in the overall process, but I don't think that needs to block reviewing JEPs.
Interested in hearing other ideas people may have.