pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License

Improve LLM attribution test cases + add type annotations #1323

Closed: craymichael closed this pull request 3 months ago

craymichael commented 3 months ago

Summary: The existing LLM attribution test cases only validate output shapes and basic attributes. This change adds test cases that verify the correctness of attribution methods for LLMs against known ground truths. Additionally, some attention mask logic is brought in from test_llm_attr_gpu.py (D58897547); eventually these two test files should be merged. Finally, type annotations are added for all functions.

Differential Revision: D61052364
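For context, the "known ground truth" testing pattern the summary describes can be illustrated outside the LLM setting. The sketch below is not code from this PR; it uses Captum's FeatureAblation on a hand-computable linear model, where ablating feature i to a zero baseline changes the output by exactly w_i * x_i, so the expected attributions are known exactly. The test class and method names are illustrative only, not part of the Captum test suite.

```python
import unittest

import torch
from captum.attr import FeatureAblation


class ToyGroundTruthTest(unittest.TestCase):
    # Illustrative example of testing an attribution method against a
    # known ground truth; not taken from the Captum test suite.
    def test_feature_ablation_matches_ground_truth(self) -> None:
        weights = torch.tensor([[1.0, 2.0, 3.0]])

        def forward(x: torch.Tensor) -> torch.Tensor:
            # Linear model: y = 1*x0 + 2*x1 + 3*x2.
            return (x * weights).sum(dim=-1)

        inp = torch.tensor([[4.0, 5.0, 6.0]])
        # Ablating feature i (replacing it with the 0.0 baseline) lowers
        # the output by exactly w_i * x_i, so that product is the exact
        # expected attribution for each feature.
        attrs = FeatureAblation(forward).attribute(inp, baselines=0.0)
        torch.testing.assert_close(attrs, weights * inp)


if __name__ == "__main__":
    unittest.main()
```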

facebook-github-bot commented 3 months ago

This pull request was exported from Phabricator. Differential Revision: D61052364

facebook-github-bot commented 3 months ago

This pull request has been merged in pytorch/captum@25c3cc8797aeabed5ae3809b0d8df1b5d5c5fb72.