Closed Jiaxin-Wen closed 11 months ago
The only code needed to compute the LDS metric, beyond the scripts for training models and recording margins, is available in the quickstart notebook and in the tests: https://github.com/MadryLab/trak/blob/39bf22ac0a803dec6dab7f3cd29c168ecdc069a7/tests/utils.py#L177-L205
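For readers who don't want to dig through the linked test file, here is a minimal sketch of how an LDS computation typically looks. This is not the repo's exact code; the function name `compute_lds` and the array shapes are assumptions. The idea (per the TRAK paper's definition) is: for each retraining subset, predict the model output on a validation example as the sum of attribution scores over the subset's training examples, then take the Spearman rank correlation between these predictions and the actual retrained-model outputs, averaged over validation examples.

```python
import numpy as np
from scipy.stats import spearmanr


def compute_lds(attribution_scores, masks, margins):
    """Sketch of the linear datamodeling score (LDS).

    attribution_scores : (n_train, n_val) array of attribution scores
                         from a data attribution method.
    masks              : (n_subsets, n_train) binary array; each row marks
                         which training examples belong to one retraining
                         subset.
    margins            : (n_subsets, n_val) array of recorded model outputs
                         (e.g. margins) after retraining on each subset.

    Returns the Spearman correlation between predicted and actual outputs,
    averaged over validation examples.
    """
    # Predicted output for subset S is the sum of scores over examples in S.
    preds = masks @ attribution_scores  # (n_subsets, n_val)
    rhos = [
        spearmanr(preds[:, j], margins[:, j]).correlation
        for j in range(preds.shape[1])
    ]
    return float(np.mean(rhos))
```

In practice the expensive part is not this function but producing `margins`: it requires retraining the model many times on random subsets, which is why the repo only ships the metric itself and leaves the retraining scripts to the user.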
In case there's community interest in more explicit support for LDS (and possibly counterfactuals as well), I'm leaving this as a discussion.
Section 2 of the TRAK paper proposes LDS as the main metric for evaluating data attribution methods, reducing the reliance on manual inspection and providing a more automated, objective assessment. Would you consider supporting the LDS metric in this repository?