useblocks / sphinx-needs

Adds needs/requirements to sphinx
https://sphinx-needs.readthedocs.io/en/latest/index.html
MIT License

Trigger reviews for dependent 'needs' #685

Open danieleades opened 1 year ago

danieleades commented 1 year ago

In my organisation we need a way to trigger reviews of dependent objects.

Say, for example, we have a high-level user story capturing a customer requirement. This is decomposed into multiple system requirements for components, which may in turn be further decomposed into subsystem requirements.

If the user requirement is modified, then any children of that requirement need to be reviewed to ensure that they are still consistent with their parents. This review process needs to cascade transitively all the way down through the graph.

It would be useful if sphinx-needs could support a workflow like this, though I'm not sure how this could best be achieved. This workflow is supported by other plain-text requirements management tools like doorstop or strictdoc.

Conceptually, each 'need' would have to somehow track the version/hash of the parents it is linked to, and this would become invalidated once a parent is modified. The review/approval process would need to update the version/hash on each link.

I can imagine the functionality to approve an edit and update the version/hash could end up living in a CLI script or something.
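Conceptually it could look something like the sketch below (all names are hypothetical; sphinx-needs does not currently store fingerprints on links):

```python
# Hypothetical sketch: each link to a parent stores the fingerprint (content
# hash) of that parent as it looked when the link was last reviewed.
import hashlib
from dataclasses import dataclass, field


def fingerprint(content: str) -> str:
    """Hash a need's content so that any edit changes the fingerprint."""
    return hashlib.md5(content.encode("utf-8")).hexdigest()


@dataclass
class Need:
    id: str
    content: str
    # Maps parent need-id -> fingerprint of the parent as last reviewed.
    links: dict[str, str] = field(default_factory=dict)


def suspect_links(need: Need, all_needs: dict[str, Need]) -> list[str]:
    """Return the parent ids whose content changed since the last review."""
    return [
        parent_id
        for parent_id, reviewed_fp in need.links.items()
        if fingerprint(all_needs[parent_id].content) != reviewed_fp
    ]


def approve(need: Need, parent_id: str, all_needs: dict[str, Need]) -> None:
    """The review/approval step: re-pin the link to the parent's current state."""
    need.links[parent_id] = fingerprint(all_needs[parent_id].content)
```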

Curious to get others' thoughts on this.

PhilipPartsch commented 1 year ago

How do you want to save the review status?

  1. In an attribute of a need.
  2. Via e.g. git pull requests, so your reviewed needs only exist on defined branches (maybe with review policies maintained by your DevOps tool).

For 1, a scripting solution: we could fetch the commit history of the file with a script and compare it with the commit history of the files containing the linked needs.

For 1, a data solution: could we store the review date alongside the status? Then we could check within sphinx-needs that all linked elements have a review date later than the date the "stakeholder use case" was changed.

For 2: could we continuously reuse the same needs.json file, with the different version information stored inside it? Each build would then add a new version, and by comparing the needs.json versions we could see that a change happened and that a review has to happen again. Hint: not sure how to enforce a review of the linked needs here.
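A rough sketch of that comparison for option 2, assuming the usual needs.json layout with a top-level `versions` dict (field names may vary between sphinx-needs releases):

```python
# Sketch: compare two versions stored in the same needs.json and report which
# needs changed, so a review of their linked needs can be triggered again.
import json


def changed_needs(
    needs_json_path: str,
    old_version: str,
    new_version: str,
    relevant_fields: tuple[str, ...] = ("content", "status"),
) -> list[str]:
    with open(needs_json_path, encoding="utf-8") as f:
        data = json.load(f)

    old = data["versions"][old_version]["needs"]
    new = data["versions"][new_version]["needs"]

    changed = []
    for need_id, need in new.items():
        previous = old.get(need_id)
        if previous is None:
            changed.append(need_id)  # new need, nothing to compare against
        elif any(need.get(f) != previous.get(f) for f in relevant_fields):
            changed.append(need_id)
    return changed
```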

I'll stop here to get some feedback: what do you prefer? How do you like my input?

twodrops commented 1 year ago

We call this use case impact analysis or "suspect link" analysis. It is a very important one, and we have it in our backlog to find a solution for it next year. Since we are on it, we can start discussing it here already. That's great :)

I guess instead of a fully automated solution we can think of a partially automated one which:

  1. Identifies "impacting" changes in needs
  2. Creates a report, or marks the impacted needs in the generated HTML.

In both cases, we have to store meta-data on the needs indicating whether there is an impact or not.

To identify an impact, we can use a previous version of needs.json, or the same needs.json with multiple versions as @PhilipPartsch said. Triggering a review can then be a manual step based on the impact information in the generated report / rendered HTML.
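As an illustration of the second step, a sketch that marks impacted needs from a set of changed ones (the link data is assumed here to be a plain mapping of need id to outgoing link targets, as found in needs.json):

```python
# Sketch: given outgoing links (need id -> linked parent ids) and the set of
# changed need ids, mark every need that directly or transitively depends on
# a changed need, so it can be flagged for re-review in a report.
def impacted_needs(links: dict[str, list[str]], changed: set[str]) -> set[str]:
    impacted: set[str] = set()
    grew = True
    while grew:  # iterate until the impact has propagated transitively
        grew = False
        for need_id, targets in links.items():
            if need_id in impacted:
                continue
            if any(t in changed or t in impacted for t in targets):
                impacted.add(need_id)
                grew = True
    return impacted


# Example: SPEC_3 -> REQ_2 -> STORY_1; if STORY_1 changed, both are impacted.
links = {"REQ_2": ["STORY_1"], "SPEC_3": ["REQ_2"], "REQ_4": []}
print(impacted_needs(links, {"STORY_1"}))  # {'REQ_2', 'SPEC_3'}
```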

I haven't thought about it end to end, but this is what is coming to my mind as a potential light-weight solution without too much integration of Sphinx-Needs into the surrounding CI.

danwos commented 1 year ago

I think this feature can only be based on data from needs.json files and the version data that may be stored inside them. As a rough idea, you could configure Sphinx so that it uses the date as the version. In this case, the latest build on the same day would define what the need-version information looks like.

However, we need to make a need aware of its past first, before we can react to it and create certain analysis views.

So the first step would be to import version-data from a needs.json by default and store it under the specific need. Maybe some regex for selecting the versions to use makes sense.
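Such an import step might look roughly like this (purely illustrative; no such loading hook exists in sphinx-needs today):

```python
# Sketch: pull the history of a single need out of a needs.json file, keeping
# only the versions that match a configurable regex.
import json
import re


def need_history(
    needs_json_path: str, need_id: str, version_pattern: str = r".*"
) -> dict[str, dict]:
    with open(needs_json_path, encoding="utf-8") as f:
        data = json.load(f)

    pattern = re.compile(version_pattern)
    history = {}
    for version, payload in data["versions"].items():
        if pattern.fullmatch(version) and need_id in payload["needs"]:
            history[version] = payload["needs"][need_id]
    # This dict could then be stored under the need for later display or diffing.
    return history
```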

If this data is available, we can easily present it in the need already. Maybe as tabs, where the first tab shows the current status (default) and the other tabs show the specific version data of the need (including red/green colors for diffs :)).

After that, some extension of the filter-string makes sense, to filter for specific changes (e.g. status changed between current and 4.2.1). I have no idea yet what this could look like.
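Just to illustrate the filter idea, a hypothetical helper that such a filter-string extension could expose (the name and its availability inside filter strings are invented):

```python
# Hypothetical helper for a filter-string extension: did a field of this need
# change between a stored old version and the current build?
def changed_between(need: dict, history: dict[str, dict],
                    field: str, old_version: str) -> bool:
    old = history.get(old_version, {})
    return old.get(field) != need.get(field)


# Usage idea inside a filter expression (hypothetical syntax):
#   changed_between(need, history, "status", "4.2.1")
```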

In the end, we should provide a new directive to identify the "suspect links" and present the result somehow. Maybe this "suspect link" check should also first be an extension of the filter-string algorithm.

This all sounds very interesting. Happy to have it one day :)

BTW: the needs.json location does not necessarily have to be the git repo. A network folder or Artifactory could store the "released" needs.json files, and Sphinx-Needs could pull them during a build (nearly the same mechanism as external_needs uses).

danieleades commented 1 year ago

I had something much simpler in mind. The idea isn't quite fleshed out, but something like:

  1. the syntax for specifying a 'parent' link is extended to include the option to specify the 'fingerprint' of the parent (probably an md5 hash of the parent content or something). Something like `:links_outgoing: req_001:{hash}, ...`
  2. if the fingerprint is specified but doesn't match the parent, throw a Sphinx warning which shows a link to the requirement, the correct fingerprint, and instructions to silence it.
  3. a reviewer can check the warnings, perform their review, and then set the fingerprint to match the value given in the warning to silence it
  4. you can set a dummy value on new needs in order to trigger the warning and generate the correct value
  5. add a config option to elevate the warnings to hard errors, so they could fail the build in CI or something
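
A minimal sketch of what the check in steps 2 and 5 could look like, assuming a hypothetical `errors_fatal` switch (not an existing sphinx-needs option):

```python
# Sketch: compare the fingerprint pinned on a link with the parent's current
# fingerprint and warn (or fail hard) on a mismatch.
import hashlib
import logging

logger = logging.getLogger(__name__)


def check_link(child_id: str, parent_id: str, pinned_fp: str,
               parent_content: str, errors_fatal: bool = False) -> None:
    current_fp = hashlib.md5(parent_content.encode("utf-8")).hexdigest()
    if pinned_fp == current_fp:
        return
    msg = (
        f"{child_id}: link to {parent_id} is suspect; expected fingerprint "
        f"{current_fp}, found {pinned_fp}. Review {parent_id} and update the "
        f"link to silence this warning."
    )
    if errors_fatal:
        # Step 5: a config option could elevate the warning to a hard error in CI.
        raise RuntimeError(msg)
    # Steps 2/3: by default, emit a warning that tells the reviewer what to do.
    logger.warning(msg)
```
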
twodrops commented 1 year ago

@danieleades

> is extended to include the option to specify the 'fingerprint' of the parent

I feel we should not bring the complexity of having to specify a fingerprint/md5 to the user. The needs graph is already available, right? Why do we need this extra information from the user?

The diff, for example, can surely be calculated. The question is: are all links in the needs graph "impactable links"? I would say by default yes. Ideally, a change in the content also counts as an impact. The idea is to find out if a user changed something in a need that could impact a related need. As @danwos said, we could have a filter-string/custom function which controls the impact calculation.

PhilipPartsch commented 1 year ago

What should count as a change? All changes to attributes of the need, links and content? I believe there are attributes and links which are not so relevant and should not count as a change, e.g. a meta attribute that informs about where the need is being used or that is only there for tagging.

What about changes coming from templates? I believe these should not always require a complete review. Other opinions?

What about changes coming from needextend?

danwos commented 1 year ago

I would look at the final data only, i.e. after all other "manipulation" mechanisms have been applied (templates, needextend, dynamic functions).

And I agree that not every change is relevant. Maybe this can be made configurable, to define which options shall be taken into account for this kind of analysis.
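For example, this could be a conf.py option plus a change check restricted to the configured fields (the option name is invented for illustration):

```python
# conf.py (hypothetical option): only these fields count as a relevant change.
needs_suspect_link_fields = ["content", "status", "links"]


# In the analysis, compare only the configured fields of the final, fully
# resolved need data (after templates, needextend, dynamic functions).
def is_relevant_change(old_need: dict, new_need: dict, fields: list[str]) -> bool:
    return any(old_need.get(f) != new_need.get(f) for f in fields)
```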

I have created an internal issue for defining a "history" concept, but I will also present it here once I'm done.

danieleades commented 1 year ago

> I feel we should not bring the complexity of having to specify a fingerprint/md5 to the user. The needs graph is already available right? Why do we need this extra information from user?

What does the approval chain look like in that case?

I agree that you can diff the graph to determine what changes have occurred. But what then? That's transient build data, no? If you run the same build twice, does that clear the errors? What if you want a different person to clear all the suspect links as part of an approval workflow?

danieleades commented 1 year ago

By the way, I made a proof of concept of this workflow here: https://github.com/danieleades/sphinx-graph. Feedback welcome.