may be interesting to report the file pair metrics produced by different algorithms (or the same algorithm with different parameter settings) as separate columns in the same "file pairs" view
if the "file pairs" view visualises pairs that are flagged as "confirmed plagiarism" (e.g. based on manual inspection guided by one metric), it might also be useful to sort the pairs by another metric and check whether any pairs that have not been flagged as plagiarism (yet) show up in between pairs that have already been flagged; this would be the case if the metrics/algorithms measure the effects of plagiarism in complementary ways (as the existing global and local similarity metrics already do)
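The idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not the tool's actual data model: the pair records, metric names (`global_sim`, `local_sim`), and the `confirmed` flag are all assumptions made up for this example.

```python
# Hypothetical file-pair table: metrics from two algorithms as columns,
# plus a "confirmed" flag set during manual inspection guided by global_sim.
pairs = [
    {"pair": ("a.py", "b.py"), "global_sim": 0.92, "local_sim": 0.95, "confirmed": True},
    {"pair": ("c.py", "d.py"), "global_sim": 0.35, "local_sim": 0.88, "confirmed": False},
    {"pair": ("e.py", "f.py"), "global_sim": 0.90, "local_sim": 0.85, "confirmed": True},
]

# Re-sort by the second metric and look for unconfirmed pairs that rank
# between confirmed ones: these are candidates the first metric may have
# missed, which would suggest the two metrics are complementary.
ranked = sorted(pairs, key=lambda p: p["local_sim"], reverse=True)
candidates = [
    p for i, p in enumerate(ranked)
    if not p["confirmed"]
    and any(q["confirmed"] for q in ranked[:i])      # a confirmed pair ranks above
    and any(q["confirmed"] for q in ranked[i + 1:])  # and another ranks below
]
```

Here `("c.py", "d.py")` scores low on the first metric but sits between two confirmed pairs when ranked by the second, so it would surface as a candidate for inspection.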
In the future we want to experiment with different algorithms and compare them with each other. This will require a few steps: