Closed by DianjingLiu 1 month ago
This pull request was exported from Phabricator. Differential Revision: D63016032
This pull request has been merged in pytorch/captum@fc910e5e0289ffd856d40503d5504d73e8b28b95.
Summary: When setting `use_cached_outputs=False`, `LLMAttribution` failed to run on some old versions of pytorch/python.

Root cause: The `attention_mask` was not updated to adapt to the growth of the input size. For the error message, see the test plan.

Impacted versions:
{F1876426564}
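A minimal sketch of the failure mode and the fix (hypothetical helper names, not Captum's actual code): when tokens are generated one at a time without cached outputs, the attention mask must be extended alongside the input ids, or the two tensors fall out of sync and some torch/transformers versions reject the mismatched shapes.

```python
# Sketch of the bug: the attention mask must grow with the input sequence
# when generating without cached outputs. Names here are illustrative only.

def generate_step(input_ids, attention_mask, next_token):
    """Append a new token and extend the attention mask to match."""
    input_ids = input_ids + [next_token]
    # The fix: without this line, attention_mask stays shorter than
    # input_ids, which triggers a shape error on affected versions.
    attention_mask = attention_mask + [1]
    return input_ids, attention_mask

ids, mask = [101, 2009], [1, 1]
for tok in [2003, 1037, 3231]:
    ids, mask = generate_step(ids, mask, tok)

assert len(ids) == len(mask) == 5
```

The same invariant holds with real tensors: after each decoding step, `attention_mask` and `input_ids` must have equal sequence length.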
Differential Revision: D63016032