inventree / InvenTree

Open Source Inventory Management System
https://docs.inventree.org
MIT License

[FR] Test statistics #5995

Closed martonmiklos closed 3 months ago

martonmiklos commented 11 months ago

Please verify that this feature request has NOT been suggested before.

Problem statement

It would be great if statistics could be generated from test results easily. I am not only thinking of the per-part view suggested in #821, but also of a higher-level perspective (per test template, for example) or statistics in general (see below).

What I am thinking about:

Test failure type analysis: While reading the test report documentation I realized that we use the test result concept somewhat differently from how it was originally designed: we run test sequences (composed of several test steps). If any of the test steps fails, the sequence terminates immediately and is logged in the following manner: I store the name of the failed step in the value field. If all steps pass, a passed result is logged without any value. I do not know whether this usage pattern should be supported, but logging each step as a separate test result would not make much sense in our case (we have looped test steps with a few thousand iterations, for example). In any case, it would be good if test results could be grouped and counted by failure reason. Whether we use the value field for this calculation or introduce a dedicated failure reason field for the test run is up for discussion. The calculation could be filtered by test run date (time interval), part type or test template.

Retest statistics: Sometimes it is important to know how "smoothly" testing is going on the manufacturing floor. It would be great if test results could include a retest field, which would be false for the first test result saved for a specific stock item and test template, and set to true if the test is executed again for the same stock item. From this retest flag we could derive metrics to see how stable testing is on the floor.

Test equipment statistics: It would be great if test equipment information (test station number, for example) could be logged with each test result, and some metrics could be created to see how heavily the specific stations are utilized. (See next point.)

Execution time statistics: Some tests require operator intervention, which affects the capacity of the testing process. It would be great if the test execution time (start/stop) could be stored for each test result, and some statistics (min, max, average, distribution of the test time) could be created per test template (with filtering by time interval or specific build orders). It would also be useful to accumulate the test times grouped by test station to see a given station's utilization.

Suggested solution

First I would like to discuss with the other developers whether storing more data (test time, station info, failure step) makes sense. I know that our "InvenTree test adoption" went in a somewhat different direction than how InvenTree is used elsewhere, but I think these additions would make sense for many other users. If we agree on something in this area, I think this is something I can help move forward.

I know the timing for new UI features is not the best at the moment (in the middle of the UI transition); however, for my own usage I do not need the web UI at first: I would like to create a daily/weekly test statistics digest e-mail, generated by an external tool using the InvenTree API.

The API design/discussion/implementation is something I think I could help with.

Describe alternatives you've considered

-

Examples of other systems

I am not aware of any.

Do you want to develop this?

SchrodingersGat commented 11 months ago

@martonmiklos some very interesting ideas here, it aligns somewhat with our proposed expansion of MES functionality.

Some of the particulars here would be good as a plugin, but there is certainly room for expansion and improvement on the existing functionality. The stock item testing framework has received very little development over the last couple of years.

First I would like to discuss if storing more data (test time, station info, failure step) makes sense for the fellow developers.

This could be added in the metadata for the test result itself, as there may be a wide range of data that different companies want to implement. However if there are some generic data which we can agree would be generally useful, happy to add those in.
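
As a rough illustration of the metadata approach (not a confirmed API layout): assuming the test result metadata can be patched through a `.../metadata/` sub-endpoint as with other InvenTree models, an external test runner could attach station and timing data along these lines. The endpoint path and the key names are assumptions for the sake of the example:

```python
import requests

# Rough illustration only: the metadata sub-endpoint path and the key names
# (station, started, finished, failed_step) are assumptions, not a confirmed
# InvenTree API layout.
SERVER = 'https://inventree.example.com'
HEADERS = {'Authorization': 'Token <api-token>'}

payload = {
    'metadata': {
        'station': 'TEST-STATION-03',        # which bench ran the sequence
        'started': '2024-01-15T08:12:30Z',   # sequence start time
        'finished': '2024-01-15T08:14:02Z',  # sequence end time
        'failed_step': 'flash_firmware',     # step that terminated the sequence
    }
}

# Patch the metadata of an existing test result (pk 1234 is a placeholder)
response = requests.patch(
    f'{SERVER}/api/stock/test/1234/metadata/',
    json=payload,
    headers=HEADERS,
    timeout=10,
)
response.raise_for_status()
```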

It would be great if the test results could include a retest field which would be zero for the first test results saved for a specific stock item for the given test template, and would be set to true in the case if the test is executed again for the same stock item.

The existing implementation keeps all recorded test results, so you can already determine how many pass / fail results there are against a particular test case. Unless I am misreading this?


In general, I am very keen to expand functionality here, and providing some measure of statistical analysis would be a great feature. If you have some specific implementation ideas that you want to suggest (in terms of fields, API access, etc) then I'd suggest making a PR proposal and we can work from there?

martonmiklos commented 11 months ago

The existing implementation keeps all recorded test results, so you can already determine how many pass / fail results there are against a particular test case. Unless I am misreading this?

Yes, but I would like to "cache" the retest flag to be able to query it efficiently, i.e. if I want retest statistics for, say, the past 2 months (10 parts, 3000 test results), then it is just a matter of a count in SQL. If it is not cached, we would need to go through each stock item and count the individual results per test template, etc.
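
A minimal sketch of the kind of query this cached flag would enable, assuming a hypothetical `retest` boolean field on `StockItemTestResult` (no such field exists in InvenTree today; the field names here are illustrative):

```python
# Minimal sketch, assuming a hypothetical cached `retest` boolean field on
# StockItemTestResult -- no such field exists in InvenTree today.
from datetime import timedelta

from django.utils import timezone

from stock.models import StockItemTestResult

since = timezone.now() - timedelta(days=60)

results = StockItemTestResult.objects.filter(date__gte=since)
total = results.count()
retests = results.filter(retest=True).count()  # one aggregate query, no per-item loop

print(f'{retests} retests out of {total} results in the last 2 months')
```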

If you have some specific implementation ideas that you want to suggest (in terms of fields, API access, etc) then I'd suggest making a PR proposal and we can work from there?

Sounds like a plan!

I missed one thing that was on my mind previously: a test status view for a build output:

This would be a view listing the test templates for the part, and for each test template the tested/passed/failed/retested count would be shown for a given build order. With this view we could easily get an overview of how the build outputs are progressing through the tests.

github-actions[bot] commented 9 months ago

This issue seems stale. Please react to show this is still important.

martonmiklos commented 9 months ago

Not stale

github-actions[bot] commented 7 months ago

This issue seems stale. Please react to show this is still important.

martonmiklos commented 7 months ago

I am still working on it

martonmiklos commented 7 months ago

Hi @SchrodingersGat @matmair

I have started implementing new API endpoints for generating a pass/fail statistics table broken down by part, build and test template.

As the test results belong under stock, I added the following API endpoints: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1765

I added stock/test-statistics/[by-part/by-build/by-test-template] endpoints.

What do you think about this approach? Shall I move the first two API endpoints to part/build?

On the other hand, I am struggling with the filtering: I would like the test statistics to be filterable by date. I tried out everything here: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1360

But the filtering UI did not become available in the REST UI (screenshot), and the URL filtering did not work either.

I am not even sure that the approach I am taking is the correct way to implement this. The actual "result matrix" generation is done here: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/models.py#L2421

I appreciate any feedback!

SchrodingersGat commented 6 months ago

@martonmiklos I have not had time to review this fully but a few points I'd like to make initially:

But the filtering UI did not become available in the REST UI:

You need to use a FilterSet - anything you add in the filter_queryset method is hidden from introspection by the Django REST framework
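
A minimal sketch of that pattern, assuming date filtering on the test result date field; the class, filter and serializer names are illustrative, not the actual implementation:

```python
# Minimal sketch of the FilterSet pattern; class, filter and serializer names
# are illustrative, not the actual InvenTree implementation.
from django_filters import rest_framework as rest_filters
from rest_framework import generics

from stock.models import StockItemTestResult
from stock.serializers import StockItemTestResultSerializer


class TestStatisticsFilter(rest_filters.FilterSet):
    """Filters declared here are introspectable by DRF and drf-spectacular."""

    finished_after = rest_filters.DateFilter(field_name='date', lookup_expr='gte')
    finished_before = rest_filters.DateFilter(field_name='date', lookup_expr='lte')

    class Meta:
        model = StockItemTestResult
        fields = []  # only the explicitly declared filters above are exposed


class TestStatisticsList(generics.ListAPIView):
    """Attaching the FilterSet via filterset_class makes it show up in the schema."""

    queryset = StockItemTestResult.objects.all()
    serializer_class = StockItemTestResultSerializer
    filter_backends = [rest_filters.DjangoFilterBackend]
    filterset_class = TestStatisticsFilter
```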

martonmiklos commented 6 months ago
* Rather than expose the same API view multiple times, can you do the same thing by exposing a single endpoint and providing different query parameters to achieve different outputs?

Done, I changed to use the api/stock/test-statistics/by-part and api/stock/test-statistics/by-build API base URLs.

* For filtering, look at how we use the [FilterSet](https://github.com/inventree/InvenTree/blob/40e867896bb8a4fb8fce84a0003bdac323ff8666/src/backend/InvenTree/part/api.py#L922) approach - it greatly simplifies things!

But the filtering UI did not become available in the REST UI:

You need to use FilterSet - anything you add in the filter_queryset method is hidden from introspection by the django rest framework

Yes, I tried to replicate that: I created a very basic FilterSet with two additional filter fields on the finished date: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1312

And attached it with filterset_class: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1336

However, it is not shown in the API UI or in the schema. What did I do wrong?

SchrodingersGat commented 6 months ago

@martonmiklos that looks correct to me - but it does not show up in the DRF browsable API?

martonmiklos commented 6 months ago

@SchrodingersGat yes: (screenshot)

I am under the impression that I am going against Django's queryset/filterset paradigm.

I expose the queryset for StockItemTestResult here: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1333

However, what I do "back in the model" after TestStatistic::get -> TestStatisticSerializer::to_representation -> StockItemTestResult::part/build_test_statistics is the following:

I run queries by part/build to retrieve StockItemTestResults and then count them with a manually built query: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/models.py#L2423

If I understand correctly, I should somehow expose the queries built here via TestStatistic::queryset and then only apply the passed/failed/part/build filters there.

I think I could work around the whole issue by flipping everything upside down and moving most of the logic to the client side: query the test templates for a part, then query the passed/failed results for each template.

SchrodingersGat commented 6 months ago

I'm at a loss - your implementation looks fine, not sure why it is not working as is :|

martonmiklos commented 6 months ago

Well, I could access the query parameters and "smuggle" them down to the model method, but it would still be good to expose them in the API. Originally I wanted to create a "weekly test digest email" with an external tool, and it would be good if the API schema could be generated.

martonmiklos commented 6 months ago

Okay, I dug a bit deeper: the issue is caused by "awkward" request handling.

If I bypass the serializer, the filters appear (screenshot).

However, I do not know how this could be solved. Maybe @matmair has some recommendation.

matmair commented 6 months ago

On a quick look through this seems to be correct, I will try to debug it before the release window closes

martonmiklos commented 6 months ago

On a quick look through this seems to be correct, I will try to debug it before the release window closes

It is not that urgent, I have not even opened a PR yet as I wanted to include a few more related things:

martonmiklos commented 6 months ago

Ah folks you won't believe it...

The reason it does not work is that I returned the data from the model in a dict, not in an array (screenshot).

After wrapping the data in an array, it works like a charm (screenshot).
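
For reference, a stripped-down illustration of the difference (not the actual InvenTree code): a serializer used with many=True, and the DRF filter backends in front of it, expect an iterable of rows, so the aggregated statistics have to be wrapped in a list.

```python
# Stripped-down illustration, not the actual InvenTree code: a serializer
# used with many=True expects an iterable of rows, so the aggregated
# statistics need to be wrapped in a list before serialization.

def build_statistics() -> dict:
    """Pretend aggregation result: counts keyed by test template name."""
    return {
        'Firmware test': {'pass': 120, 'fail': 4},
        'Burn-in': {'pass': 118, 'fail': 6},
    }

# What was returned originally: a plain dict, which DRF treats as one object
broken = build_statistics()

# What works: one row per test template, wrapped in a list
fixed = [{'template': name, **counts} for name, counts in build_statistics().items()]

print(fixed)
# [{'template': 'Firmware test', 'pass': 120, 'fail': 4},
#  {'template': 'Burn-in', 'pass': 118, 'fail': 6}]
```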

martonmiklos commented 6 months ago

@SchrodingersGat, @matmair we have reached the most important question (screenshot):

What icon shall we put on the "Test statistics" menu?

matmair commented 6 months ago

@martonmiklos TBH I think the current icon is fine? Do we really need another one?

martonmiklos commented 6 months ago

@martonmiklos TBH I think the current icon is fine? Do we really need another one?

@matmair having the same icon as the Test templates menu item above felt somewhat strange to me, but I am fine with that.

matmair commented 6 months ago

I have no preference either way

SchrodingersGat commented 6 months ago

@martonmiklos for the existing user interface you could use something like:

* fas fa-vials
* fas fa-chart-line

For the React interface, which uses tabler.io icons, perhaps:

* <IconReportAnalytics />

martonmiklos commented 6 months ago

@martonmiklos for the existing user interface you could use something like:

@SchrodingersGat Cool, thanks, I will use the last two.

One more technical question: I just discovered that the response fields do not get propagated to the API descriptor (screenshot).

Can you give me some guidance on how to do this?

I saw some @extend_schema magic like this in the code:

@extend_schema_view(
    post=extend_schema(
        responses={200: OpenApiResponse(description='User successfully logged out')}
    )
)

but I have not found an example for adding fields. Maybe I am thinking in the wrong direction?

wolflu05 commented 6 months ago

You can do this like this:

https://github.com/inventree/InvenTree/blob/7e9d2f79ab247f459b483922a553744bb9bf69cd/src/backend/InvenTree/machine/api.py#L83-L85
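
Following that pattern, a hedged sketch of declaring the response shape with a small serializer so that drf-spectacular picks it up; the serializer and its fields are placeholders, not the actual implementation:

```python
# Hedged sketch: declaring the response shape for drf-spectacular.
# TestStatisticsSerializer and its fields are placeholders for illustration.
from drf_spectacular.utils import extend_schema
from rest_framework import generics, serializers
from rest_framework.response import Response


class TestStatisticsSerializer(serializers.Serializer):
    """Describes one row of the statistics table in the generated schema."""

    template = serializers.CharField()
    passed = serializers.IntegerField()
    failed = serializers.IntegerField()


class TestStatisticsView(generics.GenericAPIView):
    serializer_class = TestStatisticsSerializer

    @extend_schema(responses={200: TestStatisticsSerializer(many=True)})
    def get(self, request, *args, **kwargs):
        rows = [{'template': 'Firmware test', 'passed': 120, 'failed': 4}]
        return Response(self.get_serializer(rows, many=True).data)
```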

martonmiklos commented 6 months ago

Hi @matmair , @SchrodingersGat

May I ask for some React/mantine-datatable help? For the test statistics I would need dynamic columns; however, I have not found any example in the wild for that use case with mantine-datatable.

I have tried to alter the table's columns in the dataFormatter callback, but if I do that I get an error (screenshot) which I have no clue how to track down (I have virtually zero experience with TypeScript/VSCode). If I use static columns (as committed right now), the table loads fine.

wolflu05 commented 6 months ago

Can you please show us how you tried to modify the table columns? I can only guess that you tried to mutate the return value of the useMemo, but that should be immutable; you would need to put it into a useState and use the second return value (the setter function). And regarding your error, I think that is something on the mantine-datatable side, so don't worry. It only affects you if you loop over a list to render elements and do not provide a unique key prop for the component you render in the loop.

Edit: you may also want to take a look at how the parametric part table is done, because it has dynamic columns.

martonmiklos commented 6 months ago

Can you please show us how you tried to modify the table columns?

@wolflu05 many thanks for your quick response, I really appreciate it!

I forgot to include the link to my code, here it is; I have not used any useState/useMemo: https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/src/frontend/src/tables/stock/TestStatisticsTable.tsx#L106

Edit: you may also want to take a look at how the parametric part table is done, because it has dynamic columns.

Which table do you mean exactly? This one? (screenshot)

wolflu05 commented 6 months ago

Have a look at https://github.com/inventree/InvenTree/blob/master/src/frontend/src/tables/part/ParametricPartTable.tsx

And yes, the way you are doing it will not work; that's not how React works. But the linked example should pretty much cover what you are trying to do.

martonmiklos commented 6 months ago

And yes, the way you are doing it will not work; that's not how React works. But the linked example should pretty much cover what you are trying to do.

Yeah, I am trying to get into React/TypeScript "monkey style" and it is hard sometimes. 🤣

Anyway, I got it rolling (screenshot).

I will do some polishing and then I will open a PR.

github-actions[bot] commented 4 months ago

This issue seems stale. Please react to show this is still important.

SchrodingersGat commented 4 months ago

Not stale