One of the biggest challenges with this tool is false positives: differences that get flagged not because of a functional defect, but because the data or content on the two servers drifted apart due to timing (say, an extract refresh or a publish that landed on one server but not the other). To avoid as many of these as possible, you typically have to take a backup of production, restore it to Test Server 1, stop background jobs there, take another backup, and restore that one to Test Server 2. Only then would you run TabCompare against the two test servers.
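For illustration, here is a rough Python sketch of that setup. It assumes tsm is available on each server and that the machines are reachable over ssh; the host names and backup file names are placeholders, and pausing background jobs and copying the .tsbak files between servers are environment-specific, so they appear only as comments.

```python
# Hypothetical orchestration of the "clean pair" setup described above.
# Host names and backup file names are placeholders.
import subprocess

def run(host, *cmd):
    """Run a command on a remote host over ssh (illustrative helper)."""
    subprocess.run(["ssh", host, *cmd], check=True)

# 1. Back up production and restore that backup to Test Server 1.
run("prod", "tsm", "maintenance", "backup", "-f", "prod-snapshot")
# (copy prod-snapshot.tsbak into Test Server 1's backup directory here)
run("test1", "tsm", "maintenance", "restore", "-f", "prod-snapshot.tsbak")

# 2. Stop background jobs on Test Server 1 so its content stops changing
#    (disable schedules/backgrounders; not shown), then back it up and
#    restore that second backup to Test Server 2.
run("test1", "tsm", "maintenance", "backup", "-f", "test1-frozen")
# (copy test1-frozen.tsbak into Test Server 2's backup directory here)
run("test2", "tsm", "maintenance", "restore", "-f", "test1-frozen.tsbak")

# 3. TabCompare can now be pointed at test1 and test2.
```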
All of that setup is a lot of rigmarole. One feature that would be cool is a flag that would auto-skip, or at least note in the output, comparisons of vizzes that we already know will differ because of data or a publishing action. We could compare the published version of each viz on the two servers to see whether they are the same. Using the Metadata API, we could determine which data sources feed each viz and whether their extracts differ (matching on LUID and refresh date, I'd guess).
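As a sketch of what that lookup might look like, here is some Python using tableauserverclient's Metadata API endpoint. The GraphQL field names (upstreamDatasources, extractLastRefreshTime, and so on) are from memory and would need to be checked against the actual Metadata API schema, and the server URLs and credentials are placeholders.

```python
# Sketch: pull upstream data source extract info for each workbook via the
# Metadata API, so vizzes whose extracts differ between servers can be flagged.
import tableauserverclient as TSC

QUERY = """
{
  workbooks {
    luid
    name
    upstreamDatasources {
      luid
      hasExtracts
      extractLastRefreshTime
    }
  }
}
"""

def extract_signatures(server_url, token_name, token_value, site=""):
    """Return {workbook LUID: {(datasource LUID, last refresh time), ...}}."""
    auth = TSC.PersonalAccessTokenAuth(token_name, token_value, site_id=site)
    server = TSC.Server(server_url, use_server_version=True)
    with server.auth.sign_in(auth):
        result = server.metadata.query(QUERY)
    sigs = {}
    for wb in result["data"]["workbooks"]:
        sigs[wb["luid"]] = {
            (ds["luid"], ds.get("extractLastRefreshTime"))
            for ds in wb["upstreamDatasources"]
        }
    return sigs

# Workbooks whose extract signatures differ between the two servers are the
# ones we'd skip, or at least annotate, in the comparison output.
prod_sigs = extract_signatures("https://prod.example.com", "token-name", "token-secret")
test_sigs = extract_signatures("https://test.example.com", "token-name", "token-secret")
known_diffs = {luid for luid in prod_sigs if prod_sigs[luid] != test_sigs.get(luid)}
```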
We should definitely note this info in the output CSV, but I think adding a skip flag argument would be worthwhile as well: it would save time when you just want to disregard known differences.
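Here is a minimal sketch of what that argument could look like, using argparse and a made-up flag name (--skip-known-diffs). The default behavior stays annotate-only; the flag switches to skipping.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical flag name; the real CLI wiring would live in TabCompare itself.
    parser = argparse.ArgumentParser(description="Compare vizzes across two Tableau servers")
    parser.add_argument(
        "--skip-known-diffs",
        action="store_true",
        help="skip vizzes whose extracts or published versions already differ, "
             "instead of just flagging them in the output CSV",
    )
    return parser.parse_args(argv)

def should_compare(wb_luid, known_diffs, args):
    """Annotate by default; skip only when the flag is passed."""
    return not (wb_luid in known_diffs and args.skip_known_diffs)
```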
The great thing about a feature like this is that it wouldn't just cut down on false positives. You could also get away with testing against "dirty" server pairs, say production against a single test server running a recent restore, because as long as the restore happened within the last 12 hours or so, there would probably be enough vizzes whose extracts still match to provide adequate test coverage.