melexis / sphinx-traceability-extension

Traceability extension for Sphinx documentation generator
GNU General Public License v3.0

Add "Changed" status information? #182

Closed bavovanachte closed 3 years ago

bavovanachte commented 4 years ago

Some context:

In order to do verification reviews on some of our projects, we use the JSON output of the traceability plugin to generate spreadsheets of all SWRQT, UTEST and DESIGN items (in separate sheets). We then go over this list and review each item in the scope of ISO 26262 verification reviews.

As this is a time-consuming process, we've recently streamlined it by adding a "Changed" status to the output. We take a "baseline" JSON file (from the project the "new" project is based on, in case it isn't started from scratch) and use it to determine whether a given item is "New", "Changed" or "Unchanged". This information allows us to massively speed up the verification review process.

An excerpt from the script:

# item_list holds the items from the new project's JSON export;
# baseline_dict maps item IDs from the baseline export to their items.
for item in item_list:
    baseline_item = baseline_dict.get(item['id'])
    if baseline_item is None:
        item['status'] = "New"
    elif item['content-hash'] == baseline_item['content-hash']:
        item['status'] = "Unchanged"
    else:
        item['status'] = "Changed"
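
For completeness, item_list and baseline_dict come from the two JSON exports. A minimal sketch of that setup, assuming both exports are plain JSON lists of item objects with 'id' and 'content-hash' keys (the file names are just placeholders):

import json

# Export of the baseline project (placeholder file name).
with open('baseline.json') as baseline_file:
    baseline_dict = {item['id']: item for item in json.load(baseline_file)}

# Export of the project under review (placeholder file name).
with open('new.json') as new_file:
    item_list = json.load(new_file)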

Now, I was wondering if it makes sense to embed this feature directly in the traceability plugin, instead of handling it in postprocessing. This "Changed" status could then be (optionally) included in the item view or in a sort of attribute matrix.

I don't have a concrete suggestion on how to handle this (make it an attribute, a new kind of property, ...), as I first want to gauge whether you think this is a meaningful addition.

@Letme @JasperCraeghs @dryodon

Letme commented 3 years ago

How would you determine in the traceability plugin (and Sphinx) whether something has changed? That would mean we would need to provide a reference .json file through conf.py, and even then the comparison could only be made just before the output is written.
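
Just to illustrate what that would imply, a hypothetical conf.py sketch; the traceability_baseline_json option does not exist, it is only made up here to show the idea:

# conf.py -- hypothetical sketch; 'traceability_baseline_json' is not a real
# option of mlx.traceability, it only illustrates the idea of a reference file.
extensions = ['mlx.traceability']

# JSON export of the baseline project. The plugin would have to compare
# content hashes against this file late in the build, just before output.
traceability_baseline_json = 'baselines/PROJ12345_AAA.json'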

In my view this is why we have the .rst files in git, so that we can easily diff the items. Maybe sorting or something needs to be agreed upon, so that the comparison can be made with a simple text diff? I see this as quite a stretch, and it seems like this feature would implement some sort of version control on top of the JSON output.

JasperCraeghs commented 3 years ago

@bavovanachte Can this be closed?

bavovanachte commented 3 years ago

Hi all,

First off, fine for me to close. I can achieve what I need using an external script and the two JSON files. The rest of this comment just explains in more detail what I meant to do with this.

That would mean we would need to provide a reference .json file through conf.py, and even then the comparison could only be made just before the output is written.

Indeed. Concretely, we used this to speed up verification reviews and FMEA for incremental product releases. For example: we would take the JSON of PROJ12345_AAA as the reference when extracting the verification review template for PROJ12345_AAB. This simplifies the review process, as the generated template also indicates whether a given item is unchanged versus the baseline (and may therefore not need a review).
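
As a rough sketch of that last step (assuming the items have already been tagged with a status as in the excerpt above, and that item IDs start with their type prefix; the CSV output is just a stand-in for our actual spreadsheet generation):

import csv

# Write one review sheet per item type, carrying the status next to each ID.
for prefix in ('SWRQT', 'UTEST', 'DESIGN'):
    with open(f'review_{prefix}.csv', 'w', newline='') as sheet:
        writer = csv.writer(sheet)
        writer.writerow(['ID', 'Status'])
        for item in item_list:
            if item['id'].startswith(prefix):
                writer.writerow([item['id'], item['status']])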

In my view this is why we have the .rst files in git, so that we can easily diff the items. Maybe sorting or something needs to be agreed upon, so that the comparison can be made with a simple text diff?

In theory I guess, but in practice a big no. Especially with complex (conditionally compiled) documentation builds for multiple products, where items tend to move to different places, or where components are made custom for specific products but not others. At least that was the case for the projects I've worked on so far. The proposed approach is at least robust against items moving elsewhere, or against requirements being duplicated into custom components and then changed.

@JasperCraeghs I won't press the issue any further as I can already do the same thing in postprocessing. If you don't see the value, feel free to close.