Closed DianjingLiu closed 1 month ago
This pull request was exported from Phabricator. Differential Revision: D62894898
This pull request has been merged in pytorch/captum@4f8caeb920486cff6d6dc949cae8d9d74a82bd81.
Summary: Our current unit tests for LLM Attribution use mocked models that resemble Hugging Face transformer models (e.g. Llama, Llama2) but can differ from the real models in unexpected ways. To validate coverage and ensure compatibility with future model changes, we add tests that run Hugging Face models directly against LLM Attribution, which will help us quickly catch any breaking changes.
So far we only test the model type `LlamaForCausalLM`.
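A minimal sketch of what such a test might set up, assuming the `transformers` library is available: instantiate a tiny, randomly initialized `LlamaForCausalLM` from a config (so no checkpoint download is needed) and run a forward pass, which the attribution test would then wrap. The specific config values here are illustrative, not taken from the PR.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Tiny random Llama; hyperparameters are arbitrary placeholders
# chosen only to keep the model small and fast for unit tests.
config = LlamaConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
)
model = LlamaForCausalLM(config)
model.eval()

# Forward pass on a dummy token sequence; the logits tensor has
# shape (batch, sequence_length, vocab_size).
input_ids = torch.randint(0, config.vocab_size, (1, 8))
with torch.no_grad():
    out = model(input_ids)
print(out.logits.shape)  # torch.Size([1, 8, 1000])
```

An attribution test would pass a model like this (plus a tokenizer) to Captum's LLM Attribution wrappers and assert on the shape and type of the returned attributions.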
Differential Revision: D62894898