By way of example, the following pull request filters out (mostly) the static html content currently located within the master branch. This comparison is a start: 300+ file changes reduced to 26, and 400,000 lines of change reduced to fewer than 10,000: https://github.com/perlboy/standards/pull/6
Thank you for these suggestions. The DSB will consider them for the new year.
Just a note that the inclusion of static pages for previous versions is only being done to prevent broken links where notifications for specific versions have been published on the CDR web site. Tagging and static zips of versions will not address this need.
Tagging, and auto-publishing based on those tags as per suggested change (5), does address this need and would result in essentially the same output that is already being statically committed, albeit without the cruft of static output inside primary source control.
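As a minimal sketch of what suggested change (5) could look like (the tag name and CI trigger here are illustrative assumptions, not an existing DSB process):

% git tag -a v1.1.0 -m "Standards v1.1.0"   # cut the release as a tag on master
% git push origin v1.1.0                    # tag push triggers an automated build
# CI checks out the exact tagged source, runs the build, and publishes the
# static html under a per-version path, so previously notified CDR links
# keep resolving without any compiled output living in master.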
Preamble: Biza currently has 42 open issues within Standards Maintenance. In an effort to optimise our own backlog, we are closing those which have no actual response from the DSB. They may be reopened at a later time or referenced when the issues are highlighted by third parties.
5 months have passed since the DSB indicated they would look at this "in the new year".
In today's implementation call representatives of the DSB stated they had listened to feedback regarding Standards releases. I wholeheartedly disagree: in the past 2 years the DSB has adopted none of the suggestions in this thread, while embarking on a bespoke and foreign release process that has resulted in a complete mess of traceability, with binary overwrites of various data and an expectation that participants will scour statically published versions of the standards when changes are proposed. The result is that in the past 4 weeks there have been 800,000 lines added to, and 85,000 lines removed from, the standards repository across 4 releases.
This makes effective source analysis extremely difficult, especially given a history of undocumented or invalid modifications, including untagged changes made post-release. This is combined with a hodge-podge of snowflake shell scripts that makes reproducing the Standards the technical equivalent of knowing the secret handshake and combination.
The DSB appears to continue to think that git, easily one of the world's most capable source control systems, is nothing more than a file system. There is proven evidence in the wild of how this process can be done correctly, resulting in a 99% reduction in source code changes while maintaining traceability of changes. There is also evidence in the CDR ecosystem that changes are being missed because of the DSB's approach to source management.
As the maintainer of parallel versions of the Standards used by more than 10% of Active Data Holders, the Founder of the largest SaaS CDR Holder provider in the ecosystem, a verifying party of over 50% of all active holders, a software developer seeking to easily understand and write code using the APIs in modern development environments (Outcome Principle 4), and the largest non-government contributor to the Standards process, I formally request that the Data Standards Body re-evaluate the options for consistent and comprehensible Standards publishing and adopt a modern release pattern for the publishing of the Standards.
Adatree supports @perlboy's recommended approach. As a SaaS provider on the other end of the value chain we are facing similar difficulties in identifying changes from release to release, and will often resort to Biza.io's own parallel version of the standards as a cross-check to ensure we haven't missed anything. This is a symptom of a broken process, but it's an easy thing to fix. It would save a lot of time and effort for participants, so it is surely worth doing.
In the last year we have:
- Begun using an open change process with a documented and transparent branching and merge strategy in the standards staging repository
- Trialled the use of ReDoc for the draft energy standards
- Worked to improve the release notes documentation
- Restructured the deployment process to make builds consistent and automated
As stated on the implementation call, we have been trying to listen, adapt and improve, and we remain committed to continuous improvement. It's worth noting that we didn't adopt ReDoc after the trial due to usability concerns reported by the community. Participants actively preferred the current layout.
To help us address these concerns, specific incremental suggestions for improvement would be welcome. The feedback we have received to date has been either general (things could be better) or overly impactful (you should change everything to work this way). It is difficult for us to address these suggestions successfully.
@perlboy and @ShaneDoolanAdatree - are there any specific incremental changes we could make that would meaningfully address some of the concerns you raise?
Honestly, James, it seems you simply don't understand what is being asked here.
- Begun using an open change process with a documented and transparent branching and merge strategy in the standards staging repository
The DSB was clearly told this was a divergent engineering strategy with no real support, and it proceeded anyway.
- Trialled the use of ReDoc for the draft energy standards
Deploying a piece of software stock, while investing zero time beyond importing the spec, can hardly be called an effective trial when compared against a piece of software that has been massaged over 3 years.
- Worked to improve the release notes documentation
Which continue to miss changes you've deemed "immaterial".
- Restructured the deployment process to make builds consistent and automated
Automated? Where? The CI (which is disabled) builds a Docker image which is never used for deployment. Automation appears to be your finger on the merge button.
So here we go, let's go and compile the Standards.
Clone them:
% git clone https://github.com/ConsumerDataStandardsAustralia/standards.git
Cloning into 'standards'...
remote: Enumerating objects: 12373, done.
remote: Counting objects: 100% (5307/5307), done.
remote: Compressing objects: 100% (1459/1459), done.
remote: Total 12373 (delta 4238), reused 4837 (delta 3813), pack-reused 7066
Receiving objects: 100% (12373/12373), 107.84 MiB | 12.24 MiB/s, done.
Resolving deltas: 100% (7379/7379), done.
Updating files: 100% (2673/2673), done.
No docs, so I guess let's use this "build.sh" thing:
% ./build.sh
*** Starting Full Docs Build ***
/tmp/standards
*** Input Swagger: api/cds_banking.json
*** Output Format: swagger
*** Output Extension json
*** Output Dir: ../slate/source/includes/swagger
*** Checking Swagger Validator ***
Error: Unable to access jarfile /Users/stuart/swagger-codegen/swagger-codegen-cli.jar
OK, so I need something; let's look at swagger-gen/swagger_generate.sh:
% cat swagger-gen/swagger_generate.sh | grep swagger-codegen-cli.
VALID_SWAGGER=$(java -jar $SWAGGER_CODEGEN/swagger-codegen-cli.jar validate -i $INPUT_SWAGGER )
#java -jar $SWAGGER_CODEGEN/swagger-codegen-cli.jar validate -i $INPUT_SWAGGER
java -jar $SWAGGER_CODEGEN/swagger-codegen-cli.jar generate -i $INPUT_SWAGGER -l $OUTPUT_FORMAT -o $SWAGGER_CODEGEN_OUTPUT
Cool, this must be what "they" mean:
% wget -O ~/swagger-codegen/swagger-codegen-cli.jar https://repo1.maven.org/maven2/io/swagger/swagger-codegen-cli/2.4.23/swagger-codegen-cli-2.4.23.jar
--2021-11-05 17:08:30-- https://repo1.maven.org/maven2/io/swagger/swagger-codegen-cli/2.4.23/swagger-codegen-cli-2.4.23.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.196.209, 199.232.192.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.196.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15240552 (15M) [application/java-archive]
Saving to: ‘/Users/stuart/swagger-codegen/swagger-codegen-cli.jar’
/Users/stuart/swagger-codegen/swagger-codegen-cli.jar 100%[===================>] 14.53M 7.05MB/s in 2.1s
2021-11-05 17:08:33 (7.05 MB/s) - ‘/Users/stuart/swagger-codegen/swagger-codegen-cli.jar’ saved [15240552/15240552]
Also there's something about openapi generator in there too:
% cat swagger-gen/swagger_generate.sh | grep openapi
# wget http://central.maven.org/maven2/org/openapitools/openapi-generator-cli/3.3.4/openapi-generator-cli-3.3.4.jar -O openapi-generator-cli.jar
#Location of openapi codegen install
OAS_CODEGEN=$HOME/openapi-codegen
VALID_OAS=$(java -jar $OAS_CODEGEN/openapi-generator-cli.jar validate -i $INPUT_SWAGGER)
% wget http://central.maven.org/maven2/org/openapitools/openapi-generator-cli/3.3.4/openapi-generator-cli-3.3.4.jar -O ~/openapi-codegen/openapi-generator-cli.jar
--2021-11-05 17:03:58-- http://central.maven.org/maven2/org/openapitools/openapi-generator-cli/3.3.4/openapi-generator-cli-3.3.4.jar
Resolving central.maven.org (central.maven.org)... failed: nodename nor servname provided, or not known.
wget: unable to resolve host address ‘central.maven.org’
# That's ok, I know about maven too
% wget https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/3.3.4/openapi-generator-cli-3.3.4.jar -O ~/openapi-codegen/openapi-generator-cli.jar
--2021-11-05 17:04:51-- https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/3.3.4/openapi-generator-cli-3.3.4.jar
Resolving repo1.maven.org (repo1.maven.org)... 199.232.192.209, 199.232.196.209
Connecting to repo1.maven.org (repo1.maven.org)|199.232.192.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17172626 (16M) [application/java-archive]
Saving to: ‘/Users/stuart/openapi-codegen/openapi-generator-cli.jar’
/Users/stuart/openapi-codegen/openapi-generator-cli.jar 100%[===================>] 16.38M 7.32MB/s in 2.2s
2021-11-05 17:04:54 (7.32 MB/s) - ‘/Users/stuart/openapi-codegen/openapi-generator-cli.jar’ saved [17172626/17172626]
So, good to go now, right?
% ./build.sh
*** Starting Full Docs Build ***
/tmp/standards
*** Input Swagger: api/cds_banking.json
*** Output Format: swagger
*** Output Extension json
*** Output Dir: ../slate/source/includes/swagger
*** Checking Swagger Validator ***
*** Swagger Validator: Validating spec file (api/cds_banking.json)
*** Checking OAS Validator ***
*** OAS Validator: Validating spec (api/cds_banking.json) No validation issues detected.
*** Generating swagger
[main] INFO io.swagger.parser.Swagger20Parser - reading from api/cds_banking.json
[main] WARN io.swagger.codegen.ignore.CodegenIgnoreProcessor - Output directory does not exist, or is inaccessible. No file (.swagger-codegen-ignore) will be evaluated.
[main] INFO io.swagger.codegen.DefaultGenerator - writing file /tmp/cds_swagger_gen/README.md
[main] INFO io.swagger.codegen.AbstractGenerator - writing file /tmp/cds_swagger_gen/.swagger-codegen-ignore
[main] INFO io.swagger.codegen.AbstractGenerator - writing file /tmp/cds_swagger_gen/.swagger-codegen/VERSION
# Swagger JSON This is a swagger JSON built by the [swagger-codegen](https://github.com/swagger-api/swagger-codegen) project.
*** Moving to output dir ../slate/source/includes/swagger
*** Outfile: ../slate/source/includes/swagger/cds_banking.json
*** Removing temporary swagger gen output dir /tmp/cds_swagger_gen
*** Complete ***
*** Input Swagger: api/cds_energy.json
*** Output Format: openapi
*** Output Extension json
*** Output Dir: ../slate/source/includes/swagger
*** Checking OAS Validator ***
[main] ERROR i.s.parser.SwaggerCompatConverter - failed to read resource listing
com.fasterxml.jackson.core.JsonParseException: Unexpected character ('}' (code 125)): was expecting double-quote to start field name
at [Source: (StringReader); line: 129, column: 30]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1804)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddName(ReaderBasedJsonParser.java:1757)
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextFieldName(ReaderBasedJsonParser.java:907)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:247)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:255)
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:68)
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:15)
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4044)
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539)
at io.swagger.parser.SwaggerCompatConverter.readResourceListing(SwaggerCompatConverter.java:210)
at io.swagger.parser.SwaggerCompatConverter.read(SwaggerCompatConverter.java:123)
at io.swagger.parser.SwaggerCompatConverter.readWithInfo(SwaggerCompatConverter.java:94)
at io.swagger.parser.SwaggerParser.readWithInfo(SwaggerParser.java:42)
at io.swagger.v3.parser.converter.SwaggerConverter.readLocation(SwaggerConverter.java:92)
at io.swagger.parser.OpenAPIParser.readLocation(OpenAPIParser.java:19)
at org.openapitools.codegen.cmd.Validate.run(Validate.java:46)
at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:62)
Exception in thread "main" java.lang.NullPointerException
at java.base/java.util.HashSet.<init>(HashSet.java:119)
at org.openapitools.codegen.cmd.Validate.run(Validate.java:48)
at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:62)
NOPE!
As stated on the implementation call, we have been trying to listen, adapt and improve, and we remain committed to continuous improvement. It's worth noting that we didn't adopt ReDoc after the trial due to usability concerns reported by the community. Participants actively preferred the current layout.
If this is a reference to the comments noted here, it appears the author didn't even bother to read the configuration guide.
Nonetheless, ReDoc vs. Slate is kind of irrelevant to this discussion, because this discussion is about modern engineering practices and reproducible builds without dumping binary content into source control as if it were some extension of OneDrive.
To help us address these concerns, specific incremental suggestions for improvement would be welcome. The feedback we have received to date has been either general (things could be better) or overly impactful (you should change everything to work this way). It is difficult for us to address these suggestions successfully.
Well, that's complete rubbish. I'd suggest it's effectively the inverse: every time suggestions have been made, the DSB has rejected them by stating that internal release processes are "not subject to formal consultation", or has hypothesised entirely bespoke release processes.
@perlboy and @ShaneDoolanAdatree - are there any specific incremental changes we could make that would meaningfully address some of the concerns you raise?
That's exactly what these suggestions, from December 2019, were:
Incremental improvements, each with minimal impact on the Standards and significant gains for those reading source control. This starts with a CI process that actually works, which the current one doesn't.
As I stated above, @JamesMBligh, you simply don't seem to understand what is being asked. Consequently I recommend the DSB consult with a qualified DevOps engineer, who would probably be able to deliver at least (1) -> (4) within a few days. Clearly the DSB hasn't done this, because it has spent more time arguing about doing nothing at all.
I repeat the following:
As the maintainer of parallel versions of the Standards used by more than 10% of Active Data Holders, the Founder of the largest SaaS CDR Holder provider in the ecosystem, a verifying party of over 50% of all active holders, a software developer seeking to easily understand and write code using the APIs in modern development environments (Outcome Principle 4), and the largest non-government contributor to the Standards process, I formally request that the Data Standards Body re-evaluate the options for consistent and comprehensible Standards publishing and adopt a modern release pattern for the publishing of the Standards.
And additionally note, as per Part 8, Division 8.3, 8.10, that:
When making or amending a data standard, the Data Standards Chair must have regard to [...] submissions (if any) received during the public consultation (if any) that was undertaken in relation to the consultation draft in accordance with rule 8.9 [and] any advice from any other relevant committee, advisory panel or consultative group that has been established by the Chair (see paragraph 56FH(2)(a) of the Act)
So here we are: the Data Standards Chair is making Standards despite advice and feedback that the way these Standards are being made is causing participants on both sides of the ecosystem to fail in their implementations.
@JamesMBligh my agreement with @perlboy is on the point that the release diff of the official standards is difficult to follow, and that his proposed solution would work, making standards changes easier for participants to absorb. That's not discounting the work done to improve the standards, or suggesting that feedback hasn't been incorporated. I don't think the two topics are related.
The feedback we have received to date has been general (things could be better)
Honestly, I think there's a fair bit of detail in the original post, unless I'm missing something.
or overly impactful (you should change everything to work this way).
I think the impact of implementing what is proposed should be low, since Biza.io has provided a reference implementation. I would even call it low-hanging fruit, but that's just my opinion of course.
In summary, Adatree agrees that there is an opportunity to make standards changes easier to absorb, as opposed to the current state, which requires a complete re-read of the standards whenever there is a release. We believe this would reduce the risk of participant non-conformance, which is critical for ecosystem success, and that's why we're here. We think the solution proposed in this change request is a good approach. We believe the existence of parallel standards that are widely used among participants to be evidence of that.
@ShaneDoolanAdatree, I've taken some time to think about this to ensure I understand the various issues to the best of my ability before responding.
The original post focused on some specific changes to the build process for the standards but, if I understand correctly, your real issue with the way the standards are published is that it is difficult to be sure that specific, detailed changes are not missed (hence the need for a full re-read). If that is the case, this is definitely something we need to improve.
This issue won't be fully resolved by making the changes suggested in this issue, as a reader would still be required to look at the underlying code diffs to see what changes are required. If that were a sufficient solution to the root problem then you could do it now by diffing only the source files (i.e. the .md and .json files that are used to generate the docs).
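For illustration, such a source-only diff might look like the following (the pathspec is an assumption about which files are hand-edited rather than generated):

% git diff <previous-release> <new-release> --stat -- '*.md' '*.json' ':!docs'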
As a result I have raised the following items on standards-staging to improve the publishing process, with the intention of incorporating this into v1.15.0 as a start:
If there is something I've missed let me know and I will raise more issues on standards-staging to address (or you can raise them directly if you like).
@perlboy, I won't be responding to your long response above but I certainly hope it was cathartic writing it. I've done my best to extract useful feedback from it.
Just a suggestion, but now that you're the CEO of a company with a serious future, you may want to consider counting to ten before publicly posting responses like this.
@JamesMBligh This reply doesn't seem to be contributing to the conversation at hand nor on-topic so I won't pass comment on it.
Broadly speaking, the proposals above appear to be the inverse of traditional engineering practice.
It is unclear to me why you seem so personally opposed to the proposals supported by members of the community.
Just an update on this. v1.15.0 is now staged at: https://consumerdatastandardsaustralia.github.io/standards-staging
It incorporates improved change deltas in context (via a tab on the right-hand side, where the non-normative examples are shown). Archived versions have also been moved to a separate location so they will no longer be part of the main repository.
"Staged" went to "Prod" 2 hrs after publishing. Still 50,000 deletions and 56,000 insertions when excluding archives courtesy of rewrites of compiled html.
"Version Delta" feature is Ok for those who do nothing but browse websites but effectively useless for an engineer trying to perform a reliable differential that removes the possibility of human error (which is high since it appears to be manually created). It's also problematic for anything beyond Release-1 which is very often given the DSBs propensity to release every 2-4 weeks and typical Holder dev cycles being measured in months.
Diff excluding archives:
% git diff release/1.14.0 release/1.15.0 --stat -- ':!docs/archive' ':!slate/source/archive' | tail -n 1
171 files changed, 56274 insertions(+), 49117 deletions(-)
Build.sh now completes but only after priming random jar (same as previous comment):
% git checkout release/1.15.0
Already on 'release/1.15.0'
Your branch is up to date with 'origin/release/1.15.0'.
% ./build.sh
*** Starting Full Docs Build ***
~/git/standards-staging
*** Input Swagger: api/cds_banking.json
*** Output Format: swagger
*** Output Extension json
*** Output Dir: ../slate/source/includes/swagger
*** Checking Swagger Validator ***
*** Swagger Validator: Validating spec file (api/cds_banking.json)
*** Checking OAS Validator ***
*** OAS Validator: Validating spec (api/cds_banking.json) No validation issues detected.
*** Generating swagger
# Swagger JSON This is a swagger JSON built by the [swagger-codegen](https://github.com/swagger-api/swagger-codegen) project.
*** Moving to output dir ../slate/source/includes/swagger
*** Outfile: ../slate/source/includes/swagger/cds_banking.json
*** Removing temporary swagger gen output dir /tmp/cds_swagger_gen
*** Complete ***
*** Input Swagger: api/cds_energy.json
*** Output Format: openapi
*** Output Extension json
*** Output Dir: ../slate/source/includes/swagger
*** Checking OAS Validator ***
*** OAS Validator: Validating spec (api/cds_energy.json) No validation issues detected.
*** Generating openapi
Vagrant up fails with a Ruby Gem version issue:
Based on the Vagrantfile, Ruby 2.4 is the preferred builder. Ruby 2.4 reached end of support in April 2020, and installing Ruby via rubyenv results in:
WARNING: ruby-2.4.10 is past its end of life and is now unsupported.
The Gemfile, however, does not align with this version: it is pinned to a specific and out-of-date Ruby 2.6 version:
Your Ruby version is 2.4.10, but your Gemfile specified 2.6.3
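A minimal reconciliation sketch, assuming rbenv with ruby-build is the version manager in use (whatever "rubyenv" tool is actually in play may differ):

% rbenv install 2.6.3    # install the Ruby version the Gemfile actually pins
% rbenv local 2.6.3
% gem install bundler
% bundle install         # should now satisfy the Gemfile's 2.6.3 constraint

The longer-term fix would be updating the Vagrantfile and the Gemfile to agree on a single, currently supported Ruby version.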
In summary, while the archives have now been removed and there's a fresh coat of paint for non-technical people to look at, the release branch still contains static assets (html etc.), people relying on source are still trawling 50,000 lines (for ~65 changes), and the published version has no traceable or reproducible build process.
"Staged" went to "Prod" 2 hrs after publishing. ...
Yes, that is correct. I referred this thread to staging as the staging repo has the full commit and PR history and that may be of value to the discussion. Also, any suggestions for changes to the publishing process should really be raised there.
... Still 50,000 deletions and 56,000 insertions when excluding archives courtesy of rewrites of compiled html.
It is unclear why a diff of the compiled distribution files and intermediate files is relevant. If you want a clear view of what has changed that is material to the standards then I would recommend looking only at the source files. To assist, the following command may be helpful as it picks up manually edited files only and excludes any automatically generated files.
% git diff release/1.14.0 release/1.15.0 --stat -- ':slate/source/includes' ':swagger-gen/api' ':!slate/source/includes/swagger' ':!slate/source/includes/cds_*.md' | tail -n 1
78 files changed, 8836 insertions(+), 759 deletions(-)
This still shows that the team has done a lot of work in this release incorporating a lot of complex change (I think they did a great job). It is an order of magnitude less than the numbers you quote, however. If you also exclude the obsolete folder (where old API versions are inserted as brand-new files, so each line is counted as an insertion) the insertion count drops to 2467.
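For illustration, excluding it means one more pathspec term, something like the following (the obsolete path pattern is an assumption about the repository layout):

% git diff release/1.14.0 release/1.15.0 --stat -- ':slate/source/includes' ':swagger-gen/api' ':!slate/source/includes/swagger' ':!slate/source/includes/cds_*.md' ':!*obsolete*' | tail -n 1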
Build.sh now completes but only after priming random jar (same as previous comment):
As stated previously, we haven't built this repository to support external teams doing builds (which is why there is no doco giving guidance). Also, the future intent is to separate the presentation from the actual input files that define the standards, so that is where any time investment will go.
Have a great Christmas and New Years Stu.
(Oops, this is James BTW. Logged in with the wrong account)
"Staged" went to "Prod" 2 hrs after publishing. ... Yes, that is correct. I referred this thread to staging as the staging repo has the full commit and PR history and that may be of value to the discussion. Also, any suggestions for changes to the publishing process should really be raised there.
This DP is the place it has been raised, and standards-maintenance is the documented location for change requests. The assertion that the method of generating documentation isn't in scope of the documentation itself seems to ignore the fact that the Chair has a legislated obligation to consider the feedback given by participants before making Standards.
... Still 50,000 deletions and 56,000 insertions when excluding archives courtesy of rewrites of compiled html.
It is unclear why a diff of the compiled distribution files and intermediate files is relevant.
For starters, because there are changes that are made post-build via .sh scripts.
If you want a clear view of what has changed that is material to the standards then I would recommend looking only at the source files. To assist, the following command may be helpful as it picks up manually edited files only and excludes any automatically generated files.
% git diff release/1.14.0 release/1.15.0 --stat -- ':slate/source/includes' ':swagger-gen/api' ':!slate/source/includes/swagger' ':!slate/source/includes/cds_*.md' | tail -n 1
78 files changed, 8836 insertions(+), 759 deletions(-)
This demonstrates exactly point (2) of this original DP, made a year ago:
(2) Removal of static content from the master branch
Since the above isn't documented outside of this thread, and the DSB has regularly altered the repository structure, it cannot be relied upon as a means of assessing change.
This still shows that the team has done a lot of work in this release incorporating a lot of complex change (I think they did a great job). It is an order of magnitude less than the numbers you quote, however. If you also exclude the obsolete folder (where old API versions are inserted as brand-new files, so each line is counted as an insertion) the insertion count drops to 2467.
All of these are context-specific suggestions which it cannot be assumed an engineer will ever read or understand. The default position will always be to execute a git-based differential. This is a fact of modern software development.
Build.sh now completes but only after priming random jar (same as previous comment): As stated previously we haven't built this repository to support external teams doing builds (which is why there is no doco giving guidance). Also the future intent is to separate the presentation from the actual input files that define the standards so that is where any time investment will go.
The DSB is being told, by multiple parties, that the ability to track changes in the way it is releasing Standards is challenging to follow. The course of action the DSB has decided to take has zero support from participants despite perfectly legitimate suggestions to align with common engineering practice.
Have a great Christmas and New Years Stu. (Oops, this is James BTW. Logged in with the wrong account)
You too; enjoy your eggnog and Christmas carols.
Despite numerous attempts at resolving this, I note that 1.16.1, a minor point release after all the "improvements" mooted above, now features an additional 87,864 lines added and 32,908 lines removed. Ironically, some of that diff is the "changeset" from the previous release being merged. 1.16.1 went from "Initial build" to passing the threshold for "Complete review" within 5 hours; the reviewers must indeed be Johnny 5.
Since maintaining a usable copy of the Standards is a significant burden, consuming literally weeks of engineering time, we've determined it must have a commercial return, and consequently the public Biza.io DSB Standards have been removed for the foreseeable future.
To the DSB: congratulations, this thread of attrition has taken its toll. I've run out of energy to attempt to right this ship and will abandon the idea of trying to help implementers successfully follow the standards.
To the well-meaning engineers who may come across this thread looking for a specification that sanely succeeds at codegen ("Inline enum? Lolzzzz") or simply trying to understand the actual change set: I'm sorry, I have failed you. I tried reason. I tried stick. I tried escalating to the Chair, and all we got was cute words and niceties.
Closing ticket.
Description
Each release, old versions of the Standards documentation are rotated into the docs/ directory when the version changes. Each time a release occurs (such as v1.1.0 yesterday) the diff created is huge and difficult to process easily with respect to the changes that have occurred. A diff since the last release has been made here to illustrate the issue: https://github.com/perlboy/standards/pull/3/files

Area Affected
Standards documentation.
Change Proposed
We propose the following:
- Removal of static content from the master branch
- Publishing to gh-pages of master at root and tags within the docs directory at trigger time, resulting in all static and binary data not being required within the master branch
- Tagging of master aligned with the published copy

Through the above changes new releases would contain only source Slate, significantly reducing the diff produced on release (currently exceeding 400,000 lines).
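For concreteness, a minimal sketch of the publish step these changes imply, assuming a gh-pages publishing branch and that the Slate build lands in slate/build (both are assumptions):

# Run by CI when a release tag is pushed; master itself stays source-only.
TAG=$(git describe --tags --exact-match)
./build.sh                                       # produce the static site
git worktree add /tmp/pages gh-pages
cp -R slate/build/. /tmp/pages/                  # latest version at root
mkdir -p "/tmp/pages/docs/${TAG}"
cp -R slate/build/. "/tmp/pages/docs/${TAG}/"    # tagged copy for stable links
git -C /tmp/pages add -A
git -C /tmp/pages commit -m "Publish ${TAG}"
git -C /tmp/pages push origin gh-pages
git worktree remove /tmp/pages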