jalammar / ecco

Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, BERT, RoBERTa, T5, and T0).
https://ecco.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Support for LLMs like LLAMA-2 and Vicuna? #98

Closed BiEchi closed 1 year ago

BiEchi commented 1 year ago

Dear @jalammar , Greetings! I'm writing to check whether there are any updates on whether you'll implement the LLM support these days. I can contribute to this part if you think it might be valuable.

Jack

jalammar commented 1 year ago

That would be valuable indeed, if you have the bandwidth for it! Sure!

BiEchi commented 1 year ago

Hi @jalammar, Ecco is actually compatible with the current LLAMA-2 model because it's still causal. The config looks like this:

model_config = {
    'embedding': "model.embed_tokens",
    'type': 'causal',
    'activations': ['down_proj'],  # this is a regex
    'token_prefix': '_',
    'partial_token_prefix': ''
}

However, as models get larger and larger, Ecco occupies a significant amount of GPU memory. I'd like to contribute some memory optimization options. Could you point me to where Ecco allocates GPU memory?
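
For anyone else profiling this, torch's built-in CUDA counters give a quick picture of where memory builds up during an attributed generation (a generic sketch, nothing Ecco-specific):

import torch

def report(tag):
    # Allocator stats for the current CUDA device, in GB
    alloc = torch.cuda.memory_allocated() / 1e9
    peak = torch.cuda.max_memory_allocated() / 1e9
    print(f"{tag}: {alloc:.2f} GB allocated, {peak:.2f} GB peak")

report("after model load")
# ... run the attributed generation here ...
report("after attributed generation")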

verazuo commented 1 year ago

Thanks for this comment! I tried this model config with the ecco library pulled from @BiEchi's repo, and it works well on Vicuna.

EricPeter commented 1 year ago

Hi, I am trying to install ecco in Google Colab but am getting this error:

Collecting ecco
  Using cached ecco-0.1.2-py2.py3-none-any.whl (70 kB)
Collecting transformers~=4.2 (from ecco)
  Using cached transformers-4.31.0-py3-none-any.whl (7.4 MB)
Requirement already satisfied: seaborn~=0.11 in /usr/local/lib/python3.10/dist-packages (from ecco) (0.12.2)
Collecting scikit-learn~=0.23 (from ecco)
  Using cached scikit-learn-0.24.2.tar.gz (7.5 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  error: subprocess-exited-with-error

  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.  

And I can't seem to find a way to fix it.

Dongximing commented 1 year ago

Hi, I was just wondering: should 'embedding' be "model.embed_tokens" or 'model.embed_tokens.weight'? Could you give me some sample code showing how you used it?

thanks a lot

BiEchi commented 1 year ago

@EricPeter Pls open a separate issue for this.

BiEchi commented 1 year ago

@Dongximing


text = """The first presient of US is """

print("===== Attribution Method =====")
attribution_method = 'dl'
print(attribution_method)
tokenizer = AutoTokenizer.from_pretrained(model, torch_dtype=dtype)

model_config = {
    'embedding': "model.embed_tokens",
    'type': 'causal',
    'activations': ['down_proj'],
    'token_prefix': '_',
    'partial_token_prefix': ''
}

lm = ecco.from_pretrained(model,
                          activations=False,
                          model_config=model_config,
                          gpu=False
                          )
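
From there, an attributed generation looks roughly like this (going from memory of Ecco's generate / primary_attributions interface, so double-check the arguments against the Ecco docs):

# Generate a few tokens with the chosen attribution method, then render the
# per-token input attributions in the notebook.
output = lm.generate(text, generate=5, attribution=[attribution_method])
output.primary_attributions(attr_method=attribution_method)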

BiEchi commented 1 year ago

Hi @jalammar, I'm closing this issue as I've reached some conclusive results. Gradient-based saliency methods are computationally heavy for LLM generation: each output token requires a backward pass, which takes a lot of GPU memory, and the time cost is also very high. The quality of gradient-based saliency is also poor on LLMs, because these methods assume the model is well approximated by its first-order Taylor expansion, i.e., a locally linear (affine) function. As the model gets more complex and less linear, the results of gradient-based methods degrade significantly. I've tested plain gradient, integrated gradients, and input * gradient, and none of them perform well on LLAMA-2 (7B), while the results are quite good on GPT-2 (1.5B). So I'd suggest that people following this thread give up on applying gradient-based methods to LLAMA-2. Perturbation-based methods may make a difference, but I'll open a separate thread to discuss that.
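
To spell out the linearity assumption: gradient-based saliency implicitly treats the model as its first-order Taylor expansion around the input,

$$ f(x) \approx f(x_0) + \nabla f(x_0)^\top (x - x_0), $$

so input * gradient scores the i-th input dimension as roughly $x_i \cdot \partial f / \partial x_i$. The further the model is from locally linear at the input, the less those scores reflect its actual behavior.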

BiEchi commented 1 year ago

@verazuo Pls correct me if you see any good results applying these methods on LLMs. I can reopen this issue if it's still promising.

Dongximing commented 1 year ago

> Hi @jalammar, I'm closing this issue as I've reached some conclusive results. Gradient-based saliency methods are computationally heavy for LLM generation: each output token requires a backward pass, which takes a lot of GPU memory, and the time cost is also very high. The quality of gradient-based saliency is also poor on LLMs, because these methods assume the model is well approximated by its first-order Taylor expansion, i.e., a locally linear (affine) function. As the model gets more complex and less linear, the results of gradient-based methods degrade significantly. I've tested plain gradient, integrated gradients, and input * gradient, and none of them perform well on LLAMA-2 (7B), while the results are quite good on GPT-2 (1.5B). So I'd suggest that people following this thread give up on applying gradient-based methods to LLAMA-2. Perturbation-based methods may make a difference, but I'll open a separate thread to discuss that.

Hi @BiEchi, the result looks like just a normal distribution. For example, for "please tell me whether this sentence is positive or negative: I love you. the answer is [model output]", I got the result attached below. Is there a benchmark to evaluate the quality of a model explanation? And what do you think of this result? Thank you very much. By the way, since you said the linear assumption is not good, could you recommend some other algorithms in this ecco library?

[attached image: t1]

BiEchi commented 1 year ago

@Dongximing this result makes sense for an LLM, because saliency / integrated-gradients methods perform extremely badly on complex models. They simply weren't developed to interpret LLMs: when they were proposed, they were applied to small models like CNNs and at most LSTMs. They later worked reasonably on GPT-2 because that model is still not too complex. When it comes to LLAMA, backprop becomes extremely expensive and the results unreliable due to the linearity assumption behind saliency methods. For other algorithms, I'll come back to you in a few days.

Dongximing commented 1 year ago

Hi @BiEchi, is it possible that this causes out-of-memory errors as the output grows, e.g. when max_new_tokens = 1000 or the input size is 1000? Also, do you know of other open-source tools for model explanation in NLP? Thanks.

BiEchi commented 1 year ago

Hi @Dongximing, yes. Large models are significantly more costly than small ones because the backward pass is several times larger and more computationally heavy; GPT-2 is 1.5B parameters, while LLAMA-2 is 7B. When the input sequence gets longer, each output token has to be attributed to more input tokens, so memory usage increases. For the output sequence it's even more direct: memory and time grow with its length because you run one backward pass per output token. For other saliency methods, I'd suggest the perturbation-based LIME or SHAP provided by Captum; you can write some glue code in Ecco to support them. I've got this working on my end using Ecco. If you're interested, please leave comments here; if there's enough interest in this comment, I'll try to clean up my code and release it.
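
As a rough outline of the direction (not the cleaned-up code mentioned above; the model id, baseline token, and scoring function are placeholders, so double-check against the Captum docs), a perturbation-based attribution with Captum looks something like this:

# Perturbation-based attribution (Shapley value sampling) with Captum on a
# causal LM. No gradients are needed, so it avoids the backprop cost discussed
# above. Placeholders: model id, baseline token, number of samples.
import torch
from captum.attr import ShapleyValueSampling
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; swap in a LLAMA-2 checkpoint if memory allows
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

text = "The first president of the US is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Pick the model's top next token as the prediction we want to explain.
with torch.no_grad():
    target_id = model(input_ids).logits[0, -1].argmax().item()

def forward_func(ids):
    # Score each (possibly perturbed) input by the target token's logit
    # at the last position.
    with torch.no_grad():
        return model(ids).logits[:, -1, target_id]

# Perturbation baseline: replace dropped tokens with the EOS token.
baseline = torch.full_like(input_ids, tokenizer.eos_token_id)

svs = ShapleyValueSampling(forward_func)
attributions = svs.attribute(input_ids, baselines=baseline, n_samples=25)

for tok, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), attributions[0]):
    print(f"{tok:>12}  {score.item():+.4f}")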

Dongximing commented 1 year ago

If possible, please share it. Thanks!

jalammar commented 11 months ago

A new method that could work better with these models is Contrastive Explanations (https://arxiv.org/abs/2202.10419). You can try an implementation of it in Inseq: https://github.com/inseq-team/inseq
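
For anyone wanting a quick start, a minimal Inseq call looks roughly like this (the model and attribution method here are placeholders; see the Inseq docs for the contrastive-explanation setup specifically):

# Minimal Inseq sketch: load a model with an attribution method and visualize
# per-token attributions. Swap in your own checkpoint and method.
import inseq

model = inseq.load_model("gpt2", "input_x_gradient")
out = model.attribute("The first president of the US is")
out.show()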