So, since the data was being grouped by support_type and then summed, the result was just wrong: a sloppy copy and paste from the pure dedupe section.
The correct way to identify dedupes is to count the components per data element/disagg (DSD/DSD or TA/TA pairs) for pure duplication, and for crosswalks, to determine whether there is any DSD/TA combination for the same data element/disagg. There is no need at the identification phase to worry about what the allocation is. It's better to just count how many potential data element/disaggs overlap, and then filter for the 100% allocations afterwards.
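As a rough illustration of that counting approach, here is a minimal pandas sketch. The column names (data_element, disagg, support_type) and the sample values are hypothetical stand-ins, not the real DATIM identifiers; the point is only that identification reduces to counting rows per support type within each data element/disagg group, with allocation checks deferred.

```python
import pandas as pd

# Hypothetical rows; real data would use DATIM-style identifiers.
df = pd.DataFrame({
    "data_element": ["HTS_TST", "HTS_TST", "HTS_TST", "TX_NEW", "TX_NEW"],
    "disagg":       ["15+ M",   "15+ M",   "15+ M",   "<15 F",  "<15 F"],
    "support_type": ["DSD",     "DSD",     "TA",      "DSD",    "TA"],
    "value":        [10, 5, 7, 3, 4],
})

# Count rows per support type for each data element/disagg,
# rather than grouping by support_type and summing values.
counts = (
    df.groupby(["data_element", "disagg", "support_type"])
      .size()
      .unstack("support_type", fill_value=0)
)

# Pure duplication: two or more DSD (or two or more TA) rows
# reporting against the same data element/disagg.
pure_dupes = counts[(counts.get("DSD", 0) >= 2) | (counts.get("TA", 0) >= 2)]

# Crosswalk duplication: at least one DSD and one TA row for the
# same data element/disagg; the allocation itself is filtered later.
crosswalks = counts[(counts.get("DSD", 0) >= 1) & (counts.get("TA", 0) >= 1)]
```

In this toy data, HTS_TST/15+ M is flagged as both a pure dupe (two DSD rows) and a crosswalk (DSD and TA present), while TX_NEW/<15 F is a crosswalk only.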