TheProlog / prolog-use_cases

Use-case layer for Meldd/Prolog application.

Implement the "Summarise Own Submitted Proposals" Use Case. #63

Closed jdickey closed 8 years ago

jdickey commented 8 years ago

Also known as "Summarise Own Contribution History". As specified in the Wiki page, an authenticated Member may request a summary of Contributions she has previously Proposed, which will return an enumeration of Proposals previously Proposed and/or Responded to.

mitpaladin commented 8 years ago

Question: Does this include Contributions Proposed (and therefore immediately Accepted) to an Author's own Articles?

Yes, it should by default include all Contributions, with future filtering options to narrow down the types of interest.

Question: Can/should filtering be applied to this list and, if so, how?

For 0.5, let's leave it in chronological order by submission time, with no filtering.

jdickey commented 8 years ago

That's pretty much what I figured; the use case would deliver the unfiltered list, and the calling app could do with it as it will. Sorting/presentation isn't domain logic and therefore has no place in the use-case Gem.
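Roughly this shape is what I have in mind; every name below (the class, #call, and the repository's find_for query) is illustrative rather than an agreed API:

```ruby
# Purely illustrative: the class name, constructor, and #call signature
# are placeholders, not an agreed API.
class SummariseOwnContributionHistory
  def initialize(repository:)
    @repository = repository
  end

  # Hands back the unfiltered enumeration of the Member's Contributions;
  # sorting and presentation stay with the calling application.
  # (find_for is a hypothetical repository query.)
  def call(member:)
    @repository.find_for(member)
  end
end
```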

jdickey commented 8 years ago

The Standard Repository API is fine as far as it goes…but among other things, it assumes an ORM-style 1:1 relationship between the entities returned by the #find method and the records persisted to storage somewhere.

But we have different entities for proposed and responded (accepted/rejected) Contributions. We haven't made any real attempt (yet) to unify them, or even to extract a "common core" that might be persisted in one store, with the respective additions persisted in others. But even with such a scheme, we'd still need more than a simple ContributionRepository#find method. Either we treat the different states (types) of Contributions as coming from separate repositories, or we provide an intermediate "adapter" layer with methods such as #find_proposed_for, #find_approved_by, and #find_rejected_by. What these methods' "live" implementation does is irrelevant to the use case (though implementation and tests obviously need building, eventually).
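For concreteness, the second alternative might be sketched like this; the criteria the adapter passes to the repository are placeholders, since how the "live" implementation queries storage is beside the point here:

```ruby
# Sketch only: an intermediate "adapter" layer in front of the (single)
# repository. The criteria keywords passed through here are placeholders.
class ContributionQueryAdapter
  def initialize(repository:)
    @repository = repository
  end

  def find_proposed_for(member)
    @repository.find(proposed_by: member, status: :proposed)
  end

  def find_approved_by(member)
    @repository.find(responded_by: member, status: :accepted)
  end

  def find_rejected_by(member)
    @repository.find(responded_by: member, status: :rejected)
  end
end
```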

The second of the two alternatives (an intermediate layer) has a certain amount of appeal; certainly it's more consistent with the assumption heretofore that there existed a single Contribution Repository that could be injected from the implementation layer and have done with it. And yet…if this use case had separate "contribution repositories" for each contribution status, each would be individually much simpler (including implementing the Standard Repository API, whose #find method would simply hand back a set of Contribution entities of the relevant status).

And one more thing: there's nothing in the Data Mapper API (which is essentially what we're implementing here) that says that different repositories can't be facades for the same persistence container; some layer of coordination and such would obviously be required, but that may be an acceptable additional effort to present a simpler, more consistent interface to the use-case code.
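A hypothetical shape for that, with stand-in names for both the entities and the shared store:

```ruby
# Hypothetical sketch: a per-status repository that is really a facade
# over a shared persistence container. Entity and store names are
# stand-ins; the real proposed/responded entities differ in attributes.
ProposedContribution = Struct.new(:record)

class ProposedContributionRepository
  def initialize(store:)
    @store = store
  end

  # Standard Repository API #find, narrowed to one status. Accepted- and
  # Rejected-Contribution repositories would mirror this shape against
  # the very same underlying store.
  def find(**criteria)
    @store.find(**criteria.merge(status: :proposed))
          .map { |record| ProposedContribution.new(record) }
  end
end
```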

mitpaladin commented 8 years ago

A single Contribution Repository seems the most consistent approach as we add different states and entity types later.

jdickey commented 8 years ago

A single physical repository is almost certainly what we'd use. The question was how to model a source of heterogeneous data, as the attributes of the two entities are not identical (nor is one a strict superset of the other).

For reference, consider how ROM repositories work, where different functional interfaces to a single persistent store are feasible based on the logical needs of the client code. We already have an example of this sort of customisation, where use cases that require only the ability to query for all articles have a "finder" passed into them, rather than the full repo object. (That is implemented in tests as we expect it would be in implementation: by passing in a method object acquired from the repo.)
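In sketch form (the class names here are invented for illustration; only the shape of the injection matters):

```ruby
require 'ostruct'

# Stand-in repository; only the "query for all Articles" part matters.
class FakeArticleRepo
  def all
    [OpenStruct.new(title: 'First'), OpenStruct.new(title: 'Second')]
  end
end

# Illustrative use case that needs only that one query, so it takes a
# bare "finder" callable rather than the whole repository object.
class ListArticles
  def initialize(finder:)
    @finder = finder
  end

  def call
    @finder.call
  end
end

# The finder is simply a method object acquired from the repo.
use_case = ListArticles.new(finder: FakeArticleRepo.new.method(:all))
use_case.call.each { |article| puts article.title }
```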

Is that clearer?

jdickey commented 8 years ago

From the commit message for Commit b9e8f06:

Note that this implementation uses a single Repository instance that hands back three different entity types based on the status of the Contributions modeled by those entities. GOOST reminds us that pain in tests is a reliable indicator of a code smell. Here, the Repository must implement branching internally that more tightly couples the entity classes to the underlying data. Possible use of "entity factories" within that Repository class is an attractive option for removing the messiest details from the Repository itself; however, that coupling has to occur somewhere, and the Repository is (arguably) a less bad place for it than the use case itself, or than any immediately obvious dependency-injection scheme would provide.
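In rough outline, the branching being described looks like this (all names below are placeholders, not the actual implementation):

```ruby
# Placeholder names throughout. The single Repository branches on the
# status stored with each Contribution record to pick an entity class;
# an "entity factory" extraction would move this table out of the class.
ProposedContribution = Struct.new(:record)
AcceptedContribution = Struct.new(:record)
RejectedContribution = Struct.new(:record)

class ContributionRepository
  ENTITY_CLASSES = {
    proposed: ProposedContribution,
    accepted: AcceptedContribution,
    rejected: RejectedContribution
  }.freeze

  def initialize(store:)
    @store = store
  end

  def find(**criteria)
    @store.find(**criteria).map do |record|
      ENTITY_CLASSES.fetch(record[:status]).new(record)
    end
  end
end
```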

However, a proper IoC container...

We are becoming far more amenable to components such as dry-container and dry-component than we had previously been.
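For reference, registration and resolution with dry-container look roughly like this; the repository registered here is a stand-in, and the container name is invented:

```ruby
require 'dry/container'

# The registered class here is a stand-in, not our actual repository.
class ContributionRepository
  def find(**_criteria)
    [] # a real implementation would query the persistence container
  end
end

class PrologContainer
  extend Dry::Container::Mixin

  # A container like this would own the wiring of repositories (and the
  # coupling to entity classes) instead of the use-case code owning it.
  register(:contribution_repository) { ContributionRepository.new }
end

repo = PrologContainer.resolve(:contribution_repository)
repo.find(status: :proposed)
```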