Closed: jdickey closed this issue 8 years ago.
The earlier PR (#49), specifically the UseCases::RespondToSingleProposal class, hadn't gotten as far as actually making the updated Article (item 6 in the "Success/For Meldd 0.5/Gem Release 1.0" list) available to the implementation layer (via an attribute on the Result class). We got diverted and sideswiped before getting that far in the requirements list. Second (or is it fourth?) try's the charm…
Well, that's another fine mess we've gotten ourselves into!
After looking at the current AcceptSingleProposal class, the question isn't so much "why did our most recent new test fail" but, rather, "how on earth did anything call #call and (apparently) work?"
We've a bit of a chicken-and-egg problem, which we should have seen as soon as we decided to use a Dry::Types::Struct for attributes rather than our new old reliable Dry::Types::Value. Take a look at the #update_attributes_from method and see if you see the problem.
Basically, we rushed in with a reasonable strategy, fell back on cookie-cutter tactics, and Bad Things Happened. Think of this code spike as "Dunkirk Without the Ships".
Right. What exactly are we trying to accomplish again? Broadly speaking: given an input Proposal (aka Proposed Contribution), produce a record of that Proposal being Accepted, and produce an Article entity whose body content reflects that the formerly-Proposed Contribution has been Accepted (by no longer having Proposed Contribution marker tag pairs (MTPs), but instead a single Accepted Contribution marker tag pair). "What's so hard about that," you may well wonder. Nothing at all, once you start thinking of things correctly. The Result object, as originally conceived and specified, was meandering make-work that simply tripped us up.
What is this "record of that Proposal being Accepted", you ask. It doesn't exist yet; that's a significant part of the problem. Right, then; let's create it.
An Accepted Contribution should have the attributes enumerated in the sketch below; that is, our entity should look something like:
class Entities::Contribution::Accepted < Dry::Types::Value
  attribute :article_id, Types::ArticleIdentV
  attribute :identifier, Types::UUID
  attribute :proposal_id, Types::UUID
  attribute :responded_at, Types::Strict::DateTime.default { DateTime.now }
  attribute :response_text, Types::Strict::String.default('')
  attribute :updated_body, Types::Strict::String.constrained(min_size: 1)
end
"Hang on," I can hear you thinking. "Why only the Article body content, rather than a complete Article entity as before?" Remember how these use cases are going to be used: under the direct or indirect control of a delivery-system implementation feature such as a Rails controller action method. Many such frameworks, including Rails, have efficient means for updating individual fields or combinations of fields within a single ORM record. By making only the updated body content (and the responded_at
, which can be interpreted as "updated at") available from this entity, Similarly, we include the :article_id
attribute (a duplicate of that in the entity referenced by :proposal_id
) to save one Contribution-repository (database) lookup. "Hang on, then; why not simply include the Article
itself, and save two lookups?" Ease of development, testing, and integration, mostly; there's a lot more data involved in setting up an Article than simply an Article ID, and the ID is much less likely to change in form over the life of the app than the entire Article entity is.
Our Result value object is going to look somewhat different than it does as of Commit 5ea3155:
class Prolog::UseCases::AcceptSingleProposal::Result < Dry::Types::Value
  attribute :entity, Types::Class # instance of `Entities::Contribution::Accepted`
  attribute :errors, Types::ErrorArray
  attribute :original_content, Types::Strict::String.constrained(min_size: 1)
  attribute :proposal, Types::Class # instance of `Entities::Contribution::Proposed`
  # ... helper methods as in existing value object
end
This should give the persistence mechanism, and the UI, everything they need to persist and represent the result of the use-case action properly.
You knew it wasn't quite that simple.
Consider the current Prolog::UseCases::AcceptSingleProposal#original_content method:
def original_content
  article.body[endpoints]
end
The article being queried is read from the article_repo, so it's the pre-existing article (with proposed-contribution marker tag pairs). The endpoints are from the proposal sent to #call. What's wrong with this picture?
What's wrong, as a read through the ProposeEditContribution code demonstrates, is that the endpoints are the values given as the original selection when initiating that use case. Why is that a problem, you ask?
Because we've added begin- and end-proposed-contribution marker tag pairs to the body content, which has the effect of shifting the endpoints you'd use to select the same content by the length of the begin-proposed-contribution tag pair. We even have test code that verifies that the endpoints in the contribution proposal are the same endpoints as specified to the "Propose Edit Contribution" use case.
The endpoints persisted along with the Proposed Contribution need to be offset by the length of the begin-proposed-contribution marker tag pair in the (updated) body content. Thought was given to the alternative of persisting endpoints that include the begin- and end-proposed-contribution MTPs, but that doesn't seem appropriate, does it?
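A minimal sketch of that offset, assuming a hypothetical begin-tag format (the real MTP markup is defined by ProposeEditContribution):

```ruby
# Hypothetical begin-proposed-contribution marker tag; inserting it before the
# selection shifts every subsequent character index by the tag's length.
begin_tag = '<a id="contribution-5-begin"></a>'
selected  = 100...110          # endpoints as originally specified by the proposer
offset    = begin_tag.length   # 33 for this example tag

# Endpoints that select the same content in the updated (tagged) body:
shifted = (selected.begin + offset)...(selected.end + offset)
# shifted is 133...143
```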
Nothing is more effective than pair programming at preventing rabbit-holing.
— Sarah Mei (@sarahmei) March 20, 2015
My understanding was that there would need to be several adjustments made to the endpoints up until author decision:
- upon submission (propose_edit_contribution use case);
- upon decision (accept_single_proposal use case);
- upon viewing pending decisions (accept_single_proposal use case).

So if we start off with a set of endpoints 100 and 110 in the article selected by the contributor, after submission they would become 132 and 142 due to the tag <a id="contribution-5-begin"></a> being inserted, and be persisted.
Then, upon decision (or viewing pending decisions?) the endpoints in the Proposed entity would have to be updated, say by +50 due to decision made on contributions 2, +62 due to a new contribution 6 that's now pending, and -20 due to a new contribution 7 that's been submitted and accepted. So as the author makes the decision, the original content would now have endpoints 132+50+62-20 and 142+50+62-20.
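The cumulative adjustment in that example can be sketched directly (the individual numbers are the illustrative ones above):

```ruby
# Endpoints as persisted after submission, per the example above:
endpoints = [132, 142]
# Subsequent adjustments: +50 (decision on contribution 2), +62 (new pending
# contribution 6), -20 (contribution 7 submitted and accepted):
adjustments = [50, 62, -20]

adjusted = endpoints.map { |e| e + adjustments.sum }
# adjusted is [224, 234], i.e. 132+50+62-20 and 142+50+62-20
```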
I would propose that we 1) include an :original_content attribute in the Proposed entity, which would facilitate verification of the endpoints and could be displayed to the author when viewing contributions pending decision, without having to update the endpoints in the Proposed entity; and 2) add a :proposed_offset attribute to the Proposed entity, which would reflect the length of the begin and end tags added for that proposed contribution, and an :accepted_offset attribute to the Accepted entity, which would reflect the sum of (the length of the accepted tag minus the :proposed_offset in the corresponding Proposed entity; this should always be negative) and (the length of the proposed_content minus the length of the original_content; this could be either positive or negative).
Upon decision, the logic would be:
- find all contributions with a timestamp (:proposed_at or :responded_at) later than the :proposed_at attribute in the newly-decided-upon Proposed, and sort them in time sequence;
- apply each :proposed_offset or :accepted_offset value to the endpoints of the newly-decided-upon Proposed;
- verify the result against :original_content.
Right. Endpoints are, in hindsight, useless once the Proposed Contribution has been persisted. This is the kind of thing you can't think through in a blind rush at 2 AM Sunday morning with post-nasal drip measured in litres per hour.
I like your solution from an ease-of-implementation standpoint; the problem with it is that it requires numerous reads of the contribution repo with the distinct possibility that most if not all of those results will be tossed. If the entire underlying table is locked, then that's fine (if inconvenient to other users); if it's not locked, then we have lots of opportunity to wind up with invalid data. Minimising the number of times we hit the repo/database is, I suspect, key to increasing our reliability.
I think we can leave the Proposed entity as it is. Rather, when Accepting, we build the updated_body attribute of the Accepted-Contribution entity, to which each of the following will be appended in turn:
- … identifier;
- … Result entity.
- … #call.

Why is this so flipping hard?
Hmm, yes, I see how that would work for building up updated_body upon Accept. However, we would still need to capture :original_content in the Proposed entity anyway, to show the original content when the author is viewing pending decisions.
Not sure I understand your concern about the endpoint-update scheme's contribution-repo reads. As I see it, we only need to read the repo once, upon handling the result of an Accept/Reject, update the endpoints, then proceed per use case?
Of course we need :original_content in Proposed and…

shuffling of papers, then feet

…I see we don't have it. Oops. Another iteration through ProposeEditContribution is in our imminent future. Now if I could just remember what else I needed to do there… 😴
:P I remember our previous conversation, when you decided to remove it as it was deemed redundant given the endpoints. Anyway, I think the endpoint-update-upon-Accept/Reject scheme is easier. How would you see it requiring multiple reads of the contribution repo?
hmmm. We could query for all contributions against the article. For popular articles and small contributions, that could be costly. I was envisioning parsing the article body, building a list of UUIDs of contributions overlapping/enclosing the selected range, and then querying for each of those — which is also inefficient as hell for popular articles and large selections.
The underlying intent was to use the article body for the single source of as much truth as practical, supplementing that as needed from the repo.
Actually, if we are able to "Scan the body content for the begin- and end-proposal MTPs for the current proposal (using the proposal's UUID)", then yeah, we don't need to care about endpoints.
I would go further and argue that we wouldn't need to worry about any other contributions, pending or decided, as no pending contributions would overlap this one, and it would be the contributor/author's job to preserve historical MTPs through the contribution/decision process. In any case, even if historical MTPs are lost from the article body it's no big deal, as the contributions are still associated with the article, and just won't be available for determination of position within the article (a post-1.0 feature).
Just take the current article, find the MTPs corresponding to our UUID in question, do the swap (if Accepted).
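A rough plain-Ruby sketch of that scan-and-swap, with hypothetical marker-tag formats (the real MTP markup is defined by ProposeEditContribution):

```ruby
# Hypothetical MTP formats, keyed by the proposal's UUID.
uuid      = '3b9ca163-54f3-4d66-b4ee-bfc765dd6fae'
begin_tag = %(<a id="contribution-#{uuid}-begin"></a>)
end_tag   = %(<a id="contribution-#{uuid}-end"></a>)

body = "Lead-up text. #{begin_tag}the original selection#{end_tag} Trailing text."

# Grab the content between the begin- and end-proposed-contribution MTPs ...
original_content = body[/#{Regexp.escape(begin_tag)}(.*?)#{Regexp.escape(end_tag)}/m, 1]

# ... and, on Accept, swap the proposed-contribution MTPs for a (hypothetical)
# accepted-contribution marker tag pair around the same content.
accepted_begin = %(<a id="accepted-#{uuid}-begin"></a>)
accepted_end   = %(<a id="accepted-#{uuid}-end"></a>)
updated_body = body.sub("#{begin_tag}#{original_content}#{end_tag}",
                        "#{accepted_begin}#{original_content}#{accepted_end}")
```

No endpoints needed: the proposal's UUID alone locates the selection.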
it would be the contributor/author's job to preserve historical MTPs through the contribution/decision process.
I'd understood that to be a big part of this UC's job; it's not appropriate for the submit-proposal stage since, if the author rejects the contribution, the original positions of the contained/overlapped (and resolved) contributions are lost. I agree that the endpoints needn't be preserved. So there's still going to be some scanning/parsing required.
Persisting the historical MTP for the current contribution being decided is a main objective. Preserving historical MTPs from the past that may be in the contribution text is not. Have to trust the users not to mess up, like with the proposed HTML.
…trust the users…
We're doomed!
But at least the current setup opens up the technical ability to revert changes, as you noted at some point earlier. I have a sneaking suspicion that we're going to expose that as a user-level feature well before 1.0…
You'll notice that we've fallen into a toxic anti-pattern of late. We make some changes to AcceptSingleProposal that require/assume changes to ProposeEditContribution; we go back and make those changes; then repeat the cycle. After a while, as you'd expect, both classes start looking like Frankenstein monsters put through a wood chipper and rebuilt: they may work, given a suitably flexible definition of "work", but they aren't anything you want to touch afterward, let alone put in your public portfolio; they're the sort of code that the adjective "craptastic" was invented for.
So, after playing a life-size game of Jenga while drunk and blindfolded, using life-size, reinforced-concrete blocks rather than the standard tiny wooden ones, we're tired of the tower crashing down about our heads. How we've been going about trying to move forward is, if not entirely broken, at least highly suboptimal. We don't want to do it that way again (and again).
We've been engaged in expedient, tactical, procedural thinking masquerading as strategic architecture, grafting more bags on the side of bags on the side of our components. We've noted this before, but a combination of factors (pressure, exhaustion, pressure…) has continued to dog us, and we're not going to play that game anymore. We might not have new commits on our timeline for a couple of days; we're certainly going to have to move our interim deadlines (e.g., for 0.11.0) again; but we've relearned a valuable lesson: as Agile preaches, it's better to start with known dependents and use those to drive the design of their dependencies ("outside-in development") than the reverse (which our whole road map basically assumes).
Going forward here, we're going to develop a test proxy, or "shim", for our ProposeEditContribution class; we will then rework ProposeEditContribution such that it implements the API specified by our test proxy. Doing so while keeping our blood pressure down to merely theoretically-possible bowling-score numbers simply adds to the challenge. 💥
Changes that we know of now for the Contribution::Proposed entity:
- add an :original_content attribute, containing the markup selected by the Proposer prior to Proposing the Contribution;
- remove the :endpoints attribute (which wouldn't have the same values as originally submitted to "Propose Edit Contribution", since the proposed-contribution marker tag pair has been inserted into the body content).

Just as a reminder, the updated Article body content is delivered via the :updated_body attribute of the Contribution::Accepted entity returned in the :entity attribute of the Result instance. On failure, the :entity attribute of the Result instance must have the value :error, with the appropriate as-previously-specified values in the :errors attribute.

We'll temporarily create a new Contribution::ProposedWithContent entity that for the moment will live alongside the existing Contribution::Proposed entity and iterate from there.
Getting back to this after a much-too-long burnout delay…
Endpoints need to die. They're useful in the ProposeEditContribution use case, but the values specified there are invalid afterwards, and we have a more reliable way of identifying the original content for a proposed contribution — grabbing the body content between the begin- and end-proposed-contribution marker tag pairs for a specific proposal's ID. Bye, bye, endpoints. 👋
We had been wondering whether to remove the existing proposed contribution from the repository when persisting the new accepted-contribution entity. That's not our call here, as our current (post-0.9 or -0.10) convention calls for the new accepted-contribution entity and the updated body content (if not the entire article entity) being returned via the Result object. Use cases do not modify persisted state; they "merely" inform the delivery mechanism (e.g., a Rails, Sinatra, or Roda app) whether data-crunching was successful and, if so, what data should be persisted. As a reminder, this is meant to simplify error handling by removing most if not all causes of failure (e.g., an unexpected I/O error) from the UC and relocating it to the delivery mechanism, which has better baked-in support for such things (e.g., transactions).
Given that, do we need to inject the contribution repository at all, given that the proposal being responded to is a parameter of the #call method? We think not, but we're going to sleep on it before yanking stuff out.
The result.updated_body attribute needs to be broken out as a computed value, rather than the current dummy placeholder. Obvious, but here for the record.

This should be injected into the #call method along with the proposal. There's no other way to acquire it.
Once we fix what's in the previous comment, all that separates us from completion of 0.11 is detecting the failure modes specified for 0.5 and ensuring that the Result follows our specified convention for reporting such errors.
Fina-bloody-ly!
Updating and adding to the discussion earlier:
Endpoints are useful as inputs to the ProposeEditContribution use case, but must not be propagated afterwards. The implementation code which calls that use case is responsible, on success, for persisting the Article with proposed-contribution marker tag pairs added, and for persisting the proposed contribution itself. Once that is accomplished (which is before this use case is invoked, obviously), endpoints are no longer relevant and should not in fact be persisted as part of the proposed contribution.
There is no need for this use case to have access to the Contribution Repository itself; the existing Proposed Contribution is passed as a parameter to its #call method and, on success, a new Accepted Contribution entity is returned as part of the Result returned by that method.
But where does the factory come from? Per our standard Repository API, the #create method functions as a factory method; rather than passing in the repository per se, a proxy to a repository instance's #create method should be passed in to the use-case constructor.
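The "proxy to #create" idea can be sketched with a plain Ruby Method object (FakeRepository is a stand-in; the real Repository API is project-specific):

```ruby
# A Method object responds to #call, so the use case can treat the factory as
# a bare callable without ever seeing the repository itself.
class FakeRepository
  def create(**attribs)
    attribs.dup.freeze # a real repo would build and return an entity instance
  end
end

factory = FakeRepository.new.method(:create)
entity  = factory.call(proposal_id: 42, response_text: 'Looks good.')
```

The use case then needs only `factory.call(...)`, keeping the repository itself out of its collaborator list.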
… #call.

A couple of weeks ago, we noted the need to add original content to the persisted proposed contribution; as noted immediately above, this obviates the need for the endpoints going forward.
So what's the new, new, new plan?
- revisit the ProposeEditContribution use-case code. Again;
- inject what's needed into the #call method, as mentioned above;
- replace the #original_content method with a delegation to the method of that name on the proposed contribution;
- write the RejectProposedContribution class;
- fold what's common to AcceptProposedContribution and RejectProposedContribution into a new class, hypothetically BaseResponseToSingleProposal;

😩
Hang on; do we really need the :original_content attribute as such in the Contribution::Proposed entity? No future proposal may overlap an existing proposal until that existing proposal is responded to, so it's not a question of losing the current content (though the accepted- or future rejected-contribution entities should explicitly preserve the original content).
How are we going to use it? Obvious uses would be in the article-viewing and proposer/author dashboard UI contexts, where "original content" needs to be presented as part of a proposal summary. As part of the Accept or Reject Proposed Contribution workflows? To populate the created entity as mentioned, sure, but those use cases won't grep the body content for it; they'll look for the proposal marker tag pairs instead, and take what's between those as the authoritative source of original content.
To answer our opening question, having :original_content in the Contribution::Proposed entity is a convenience for later workflows' UI, and is likely sufficiently useful that it should be part of the entity. Endpoints, on the other hand, still need to go away.
Yep, agreed; :original_content in the Contribution::Proposed entity is a convenience to save looking up the tags.
Thinking about the second listed error condition:

2. If the specified Contribution has already been Responded to, then the :errors Array-like attribute must contain a Hash-like instance with the key of :contribution and the value of :responded_to.
To make this work, we need to hit the Contribution Repository, obviously.
But, hang on; when can this possibly occur? J Random Author has a lot of pages open; numerous windows and tabs in each of multiple browsers. That happens. She might be about to Respond to a Proposed Contribution, then get an urgent interrupt to go work on something else (in another tab/window) right now. Later, she's forgotten completely about the open Prolog session, fires up a new session in another browser entirely. After going to her dashboard, she Accepts a Contribution. She then goes on to do other things.
At some later point, she happens to flip past the first tab/window again. "I thought I Accepted that", she thinks, and activates the appropriate UI element to do so.
What should happen?
The Use Case should query the Contribution Repository for any Contributions with a :proposal_id matching the Proposed Contribution's :identifier. This takes time, especially in a heavily-used app instance, and is likely to have app-level implementation implications, such as transactions/locking the Contribution and Article Repositories' persistence implementations.
Additionally, or alternately, we may wish to replace our one-way association, where a Responded Contribution "knows" which Proposed Contribution it's associated with, with a two-way association. This is likely more reliable in theory than in practice, as it would require re-reading the proposal (currently a parameter to #call) from the repo.
The real question is, is our existing use case granular enough? We've already broken out the initially-proposed "Respond to Proposed Contribution" into separate "Accept" and "Reject" use cases. Should we have an additional "Verify Ability to Respond to Contribution" use case that gets executed before Accept or Reject? That would be responsible for checking all error conditions described in the Wiki page for "Accept a Proposed Contribution" (that would operate identically for Reject as well), leaving "Accept a Proposed Contribution" and "Reject a Proposed Contribution" even more streamlined than they are now.
What could possibly go wrong? (…go wrong? go wrong? go snap!)
Hmm, good point; that is a possible scenario. "Verify Ability to Respond to Contribution" would check the 2 preconditions prior to Accept/Reject:
- Author must be the current ("logged-in") Member;
- Author must not have previously Responded to the Proposed Contribution using this or any other Use Case.

This use case would be run after the user clicks Accept/Reject, not as she first opens up the decision to be made? Failure would entail informing the user of the problem (if not logged in, then she can log in and try again).
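The two preconditions could be sketched using the thread's error-reporting convention (an Array of Hash-like errors); the method name, parameters, and the :not_author error key here are made up for illustration, while { contribution: :responded_to } matches the second listed error condition:

```ruby
# Hypothetical precondition check for "Verify Ability to Respond":
# returns an Array of Hash-like errors, empty when the response may proceed.
def authorisation_errors(article_author:, current_user:, prior_responses:)
  errors = []
  errors << { current_user: :not_author } unless article_author == current_user
  errors << { contribution: :responded_to } unless prior_responses.empty?
  errors
end
```

An empty return signals that the Accept or Reject use case may run its single success path.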
On further consideration, we'll leave the "user not logged in" test in there but it's difficult to visualise how it'd be triggered in a reasonably competent, non-malicious app.
Yes, successful execution of "Authorise Contribution Response"[1] would be precursor to either accept-/reject-contribution use case. Let's limp with that and see where it gets us.
[1] I think of better names when I don't have to come up with them at zero-dark-thirty, yes?
Right. As of Commit 3e64614, we have an AuthoriseContributionResponse use case that is responsible for verifying that the AcceptSingleProposal or RejectSingleProposal use cases can do their thing without worrying about error detection/reporting, leaving a single path of execution through each of the Respond-to-Proposal use cases. As far as that goes, so far, so good.
We have one potential error condition that isn't being checked by AuthoriseContributionResponse (and isn't in its now-misnamed spec): should we verify that the begin- and end-Proposed-Contribution marker tag pairs are actually in the Article body?
And if we do that, should we then pass the (current, with-Proposal) Article entity back as part of the AuthoriseContributionResponse::Result, and then pass it into Accept or Reject in lieu of the Article Repository (which we're not going to write to in any use case anyway)?
The alternative, closer-to-original-vision implementation would instead look like:
- Authorise does not access the Article Repository and does not do this additional validation;
- the AuthoriseContributionResponse#initialize method no longer takes an article_repo parameter, which is thus removed from the AuthoriseContributionResponse::Collaborators value object.

As of Commit 3e64614, we have a new AuthoriseContributionResponse use case, which performs specified before-Acceptance or -Rejection validation so that the actual Respond use cases don't have to; they are reduced to a single logic path. (Remember that use-case classes don't write to persistence or do anything else that can fail. Naive, but "good enough for 0.5".)
There is one possible validation error that the new use case completely ignores, as does its spec: what happens when the specified Proposal does not have corresponding begin- and/or end-Proposed-Contribution marker tag pairs in the Article body? The existing code never even queries the Article Repository, even though it's passed in as a parameter to AuthoriseContributionResponse#initialize! We foresee two possible courses of action:
Under the first option, things stay essentially as they are:
- the AuthoriseContributionResponse class has no additional validation/verification added to it beyond what currently exists;
- the :article_repository parameter is removed from the initialiser for that class, as it is never used in the existing class;
- alternatively, pass the :article_repository parameter to the AcceptSingleProposal::Collaborators value object instantiated by AuthoriseContributionResponse#initialize, in lieu of creating a new AuthoriseContributionResponse::Collaborators value object;
- the AcceptSingleProposal class and imminent RejectSingleProposal class continue to be responsible for retrieving their Article entities from persistent storage (in the former class' case, via delegation through its AcceptSingleProposal::BuildUpdatedBody method).

Under the second, "Single Access Point to Article Repository", option:
- the marker-tag-pair validation is added to the AuthoriseContributionResponse class;
- the (current, with-Proposal) Article entity is passed back as part of the AuthoriseContributionResponse::Result value object;
- that Article entity is passed into AcceptSingleProposal#initialize in place of the current ArticleRepository;
- the AcceptSingleProposal::Collaborators value object is updated appropriately.

In either case, the Result object returned from AcceptSingleProposal#call (and, in future, RejectSingleProposal#call) would have a newly-created Article entity instance based on the existing Article (whether passed in or queried).
As we look at our existing and imminent use-case specs and code, we see entities such as Article which are distinct from the record-like objects defined in prolog_core, such as Prolog::Core::Article. In hindsight, we see what even a cursory code or process review while writing that earlier Gem would have discovered: that it's far better to identify the entities and value objects needed by domain logic (e.g., use cases), and only finally to determine which and how those (or proxies for those) will be written to and read from persistent storage. Doing so would likely have saved developer-months of flail.
An access point to the article repository would be necessary; but with regard to the test for properly-formatted Proposed Edit Contribution marker tag pairs in AuthoriseContributionResponse, shouldn't that be the responsibility of ProposeEditContribution?
After out-of-line discussion, the consensus appears to be going with the "Single Access Point to Article Repository" option above, omitting the initial "properly-formatted...marker tag pair" check. Proceeding on that basis.
Progress, we think.
Commit 27a7978 put the AuthoriseContributionResponse class into final (for now) form, and Commit c88e2c7 did likewise for AcceptSingleProposal. The #call method is downright trivial, assigning attributes based on incoming parameters and letting the Result-building method indirectly build the accepted-contribution entity based on computed attributes.
Single-path execution. All error conditions should have been taken care of in AuthoriseContributionResponse; that was the whole point. On to RejectProposedContribution and further progress. Finally.
It appears, on initial inspection, that the RejectSingleProposal use case would be precisely identical to the existing AcceptSingleProposal, except for the hard-coded fields in the Result value object (#accepted?, #rejected?, and #response). The entity presently coded as Entities::Contribution::Accepted should probably be renamed Entities::Contribution::EditResponse, as no changes are needed for the rejected-contribution handling. (This is almost certainly a Good Thing.) Essentially all remaining code in and below AcceptSingleProposal is shared between the two.
The Prolog::Entities::Contribution::Accepted entity has an :updated_body attribute that (obviously) is not needed for the Rejected entity. This suggests a simple class mini-hierarchy:
# ... module nesting ...
class Responded < Dry::Types::Value
  # ... common attributes ...
end
# ...
class Accepted < Responded
  attribute :updated_body, Types::Strict::String.constrained(min_size: 1)
end
# ...
class Rejected < Responded
end
# ... module nesting
Composition, as far as we can tell, isn't an idiomatic option here.
See Issue #59 for an unsolved bugbear that we are not going to allow to delay 0.11 further.
In breaking news, this just in…

Newly-released Dry::Types 0.8.0, released on 1 July, deeply freezes all Value objects, courtesy of their PR #99. This is actually a good thing overall (for the reasons cited in the accompanying issue), but it means that (at least temporarily) our Collaborators which include Repositories must be Struct instances, not Value objects. Bother.
Well, there appear to have been some major changes as several dry-* Gems got updated in the last week; a certain Bugs Bunny clip comes to mind. We'll sort it out in the morning; the upshot, apparently, is that updating to the current (as of early 4 July) dry-validation dependency stack appears to have broken things that are going to require more effort to fix than to simply roll back the dependencies to that point.
For the record, these are the dry-rb Gems that are installed when specifying dry-validation version 0.8.0 (the default as of 4 July 2016, 03:00 SGT):
Changes to Collaborator objects as described in the immediately previous comment have been rolled back, as they are not sufficient to resolve the updated-Gem problem.
😩
To configure as on 23 May (prior to dry-container 0.3.4):
gem 'dry-validation', '0.7.4' # not current 0.8.0
Then:
gem install dry-configurable
gem install dry-container -v 0.3.3
gem install dry-equalizer
gem install dry-logic -v 0.2.3
gem install dry-monads -v 0.0.1
gem install dry-types -v 0.7.2
gem install dry-validation -v 0.7.4
rm Gemfile.lock
bundle install --local
rbenv rehash
bundle binstub --force flay flog pry rake reek rubocop
rbenv rehash
See Issue #60.
Once these Gems are installed (in this order and to these version specifications), the code in Commit 1fda588 passes tests, giving confidence that it would work as expected in that environment. The Gemspec should probably be modified to specify each of the specific versions listed above, and to explicitly reference Issue #60 as a FIXME note.
Note that Commit 2b2a885 did not update the activemodel dependency from 4.2 to 5.0. Rails 5.0 dropped less than a week ago; we're not ready to go there yet, even if we had an otherwise-compelling reason (which, at present, we do not).
This and the companion "Reject a Single Proposed Contribution" Use Case (Issue #56) supersede and simplify what was specified in Issue #40 and PR #49 et al. It implements this specification in the Wiki.