rachtibat/LRP-eXplains-Transformers
Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]
Docs: https://lxt.readthedocs.io
License: Other
100 stars · 12 forks
Issues
enforced attention class to be 'eager' in order to enable usage of torch>2.1 (#18) · effingpaul · opened 1 day ago · 0 comments
updated llama model to support llama3 (#17) · egolimblevskaia · closed 1 week ago · 0 comments
Differences when using lxt.models.llama.LlamaForCausalLM vs. transformers.LlamaForCausalLM (#16) · GenK-ai · closed 1 week ago · 3 comments
Mixtral quantized does not seem to work? (#15) · aymeric-roucher · closed 1 week ago · 2 comments
Making Llama 3 quantized work (#14) · aymeric-roucher · closed 1 week ago · 3 comments
Example usage for Vision Transformers? (#13) · sidgairo18 · opened 1 month ago · 2 comments
How to backward twice for lxt.model.llama (#12) · cyzkrau · opened 2 months ago · 1 comment
phi3 configuration (#11) · Aakriti23 · closed 3 months ago · 1 comment
LLaMA family issues (#10) · dvdblk · closed 1 week ago · 6 comments
add lrp for gpt2 (#9) · Tomsawyerhu · opened 4 months ago · 2 comments
How can I get each layer's LRP score? (#8) · Patrick-Ni · closed 3 months ago · 2 comments
Error with torch.dtype=float16 (#7) · Patrick-Ni · closed 4 months ago · 3 comments
pip install ./lxt (#6) · GeorgeRodinos · closed 4 months ago · 4 comments
added BERT implementation (#5) · pkhdipraja · closed 5 months ago · 4 comments
Classification tasks example (#4) · dvdblk · closed 5 months ago · 6 comments
LLaMA Quickstart repro with Inseq and compatibility question (#3) · gsarti · opened 5 months ago · 6 comments
How to extract the relevance of neurons in FFN layers (#2) · ChangWenhan · closed 6 months ago · 2 comments
[Desiderata] Captum-like implementation for Inseq compatibility (#1) · gsarti · opened 9 months ago · 6 comments