opengear-project / GEAR

GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM
MIT License

Evaluation Code Produces Identical Results with Different Caching Methods #17

Open mohsenhariri opened 1 month ago

mohsenhariri commented 1 month ago

Title: Evaluation Code Produces Identical Results with Different Caching Methods

Description:

It seems the evaluation code produces the same result with different caching methods. I ran the models mistralai/Mistral-7B-v0.1 and mistralai/Mistral-7B-Instruct-v0.2

with three different caching methods: --compress_method KCVT, --compress_method GEAR, and --compress_method KIVI_V2. In all cases, the result is:

| Model | KIVI accuracy |
| --- | --- |
| Mistral-7B-v0.1 | 0.4245640636846095 |
| Mistral-7B-Instruct-v0.2 | 0.4761182714177407 |


Steps to Reproduce:

  1. Run the evaluation script with the models mistralai/Mistral-7B-v0.1 and mistralai/Mistral-7B-Instruct-v0.2.
  2. Use the following caching methods: --compress_method KCVT, --compress_method GEAR, and --compress_method KIVI_V2.
  3. Observe that the reported KIVI accuracy is identical across all three methods.
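The steps above can be sketched as a loop over the three methods. This is a hypothetical invocation: the actual evaluation script name (`evaluation.py` here) and any other required flags may differ in the GEAR repo.

```shell
# Assumed script name and flags; adjust to the repo's actual evaluation entry point.
for model in mistralai/Mistral-7B-v0.1 mistralai/Mistral-7B-Instruct-v0.2; do
  for method in KCVT GEAR KIVI_V2; do
    python evaluation.py \
      --model "$model" \
      --compress_method "$method"
  done
done
```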

Expected Behavior: Different caching methods should produce different accuracy results.

Additional Information: I checked the input arguments, and the evaluation script reads them correctly, so I am sure the three setups were different.
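A common cause of this symptom (flags parsed correctly but identical results) is that the parsed value is never forwarded into the model configuration, so every run falls back to the same default path. This is a minimal sketch of that bug class, not the actual GEAR code; `build_config` and its fixed variant are hypothetical names.

```python
import argparse

def build_config(args):
    # Bug pattern suspected in the issue: the flag is parsed upstream
    # but ignored here, so every method yields the same behavior.
    return {"compress_method": "KIVI"}

def build_config_fixed(args):
    # Fixed variant: the parsed flag is actually forwarded.
    return {"compress_method": args.compress_method}

parser = argparse.ArgumentParser()
parser.add_argument("--compress_method")
a = parser.parse_args(["--compress_method", "GEAR"])
print(build_config(a), build_config_fixed(a))
```

With the buggy variant, all three `--compress_method` values produce the same config, which would explain identical accuracies; unsupported models silently falling back to a default cache (as the maintainer's reply below suggests for Mistral) has the same effect.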

HaoKang-Timmy commented 1 month ago

Let me check that.

HaoKang-Timmy commented 1 month ago

This version of the code does not support Mistral yet, but you can try it with Llama 2 and Llama 3. Mistral support will be added soon.

CUHKSZzxy commented 1 month ago

> This version of the code does not support Mistral yet, but you can try it with Llama 2 and Llama 3. Mistral support will be added soon.

Does this mean the current version cannot reproduce the GEAR results on Mistral models reported in the paper draft? If that is not the case, could you provide some suggestions? I could not find the related shell scripts.

Thanks!

@HaoKang-Timmy