explodinggradients / ragas

Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
https://docs.ragas.io
Apache License 2.0

AttributeError: 'PhiForCausalLM' object has no attribute 'generate_prompt' #960

Open TheDominus opened 1 month ago

TheDominus commented 1 month ago

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
AttributeError: 'PhiForCausalLM' object has no attribute 'generate_prompt'. The same error occurs with several other LLM models.

Ragas version: 0.1.7
Python version: 3.10

Code to Reproduce

from langchain_core.language_models import BaseLanguageModel
from langchain_core.embeddings import Embeddings
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper
from langchain_community.embeddings import HuggingFaceEmbeddings

print("evaluator")
model_id = "microsoft/phi-2"
access_token = "hf_yourToken"

# Quantization settings: load the model in 4-bit NF4 with double quantization.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
)

tokenizer = AutoTokenizer.from_pretrained(model_id, token=access_token)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    token=access_token
)

print("hello")
langchain_llm = LangchainLLMWrapper(model)
langchain_embeddings = HuggingFaceEmbeddings(model_name=model_id)

data_samples = {
    'question': ['When was the first super bowl?', 'Who won the most super bowls?'],
    'answer': ['The first superbowl was held on Jan 15, 1967', 'The most super bowls have been won by The New England Patriots'],
    'contexts': [
        ['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,'],
        ['The Green Bay Packers...Green Bay, Wisconsin.', 'The Packers compete...Football Conference'],
    ],
    'ground_truth': ['The first superbowl was held on January 15, 1967', 'The New England Patriots have won the Super Bowl a record six times'],
}
dataset = Dataset.from_dict(data_samples)

results = evaluate(dataset, metrics=[faithfulness, answer_correctness], llm=langchain_llm, embeddings=langchain_embeddings)
print("evaluation is done")
print(results)

Error trace

File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/executor.py", line 96, in run
    results = self.loop.run_until_complete(self._aresults())
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/executor.py", line 84, in _aresults
    raise e
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/executor.py", line 79, in _aresults
    r = await future
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one
    return f.result()  # May raise f.exception().
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/executor.py", line 38, in sema_coro
    return await coro
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/executor.py", line 112, in wrapped_callable_async
    return counter, await callable(*args, **kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/metrics/base.py", line 116, in ascore
    raise e
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/metrics/base.py", line 112, in ascore
    score = await self._ascore(row=row, callbacks=group_cm, is_async=is_async)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/metrics/_answer_relevance.py", line 152, in _ascore
    result = await self.llm.generate(
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/llms/base.py", line 110, in generate
    return await loop.run_in_executor(None, generate_text)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/ragas/llms/base.py", line 147, in generate_text
    result = self.langchain_llm.generate_prompt(
  File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'GemmaForCausalLM' object has no attribute 'generate_prompt'

Expected behavior
The script should print the results of the evaluation.

Additional context
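For context, the final frames of the trace show ragas calling self.langchain_llm.generate_prompt(...) in ragas/llms/base.py. generate_prompt is a method of LangChain's BaseLanguageModel; a raw transformers model defines generate but not generate_prompt, so the attribute lookup falls through to torch.nn.Module.__getattr__ and raises. A minimal sketch of a possible workaround, assuming langchain_community's HuggingFacePipeline and reusing the model and tokenizer loaded above (max_new_tokens=512 is an illustrative choice, not from the original report):

from transformers import pipeline
from langchain_community.llms import HuggingFacePipeline
from ragas.llms import LangchainLLMWrapper

# Build a text-generation pipeline around the already-loaded model and tokenizer.
hf_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
)

# HuggingFacePipeline is a LangChain LLM, so it provides the generate_prompt()
# method that ragas' LangchainLLMWrapper calls internally.
langchain_llm = LangchainLLMWrapper(HuggingFacePipeline(pipeline=hf_pipeline))

The rest of the script stays the same; only the object handed to LangchainLLMWrapper changes.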

jjmachan commented 1 month ago

hey @TheDominus were you able to fix this?

TheDominus commented 1 month ago

Nope, I was not able to fix this issue.
