Open theinhumaneme opened 8 months ago
refer to: https://github.com/zilliztech/GPTCache/issues/585#issuecomment-1972720103
you should give the `cache_obj` param for the `init` func, like:

```python
def init_gptcache(cache_obj: Cache, llm: str):
    print(cache.has_init)
    cache.init(
        cache_obj=cache_obj,
        pre_embedding_func=get_content_func,
        embedding_func=OpenAIEmbeddings(model="text-embedding-3-small").embed_query,
        data_manager=data_manager,
        similarity_evaluation=SearchDistanceEvaluation(),
    )
    print(cache.has_init)
```
Okay, that works, thank you! But now I get this error:
```
Traceback (most recent call last):
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/development/scripts/chatbot-postgres-test.py", line 129, in <module>
    execution_time = timeit.timeit(lambda: llm.invoke("Tell me a joke"), number=1)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/timeit.py", line 237, in timeit
    return Timer(stmt, setup, timer, globals).timeit(number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/timeit.py", line 180, in timeit
    timing = self.inner(it, self.timer)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<timeit-src>", line 6, in inner
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/development/scripts/chatbot-postgres-test.py", line 129, in <lambda>
    execution_time = timeit.timeit(lambda: llm.invoke("Tell me a joke"), number=1)
                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 153, in invoke
    self.generate_prompt(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 546, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 579, in _generate_with_cache
    cache_val = llm_cache.lookup(prompt, llm_string)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_community/cache.py", line 813, in lookup
    res = get(prompt, cache_obj=_gptcache)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/gptcache/adapter/api.py", line 124, in get
    res = adapt(
          ^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/gptcache/adapter/adapter.py", line 78, in adapt
    embedding_data = time_cal(
                     ^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^
TypeError: OpenAIEmbeddings.embed_query() got an unexpected keyword argument 'extra_param'
```
Is this because of the new OpenAI endpoints, or am I doing something wrong?
@theinhumaneme This seems to be the wrong format for the custom embedding function. You can refer to: https://github.com/zilliztech/GPTCache/blob/main/gptcache/embedding/openai.py
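For context, GPTCache's built-in embedding classes (such as the one in the file linked above) expose a `to_embeddings(data, **kwargs)` method plus a `dimension` property. A minimal sketch of that shape follows; the class name and the dummy vector it returns are illustrative only, and no real embedding API is called:

```python
class MyEmbedding:
    """Custom embedding in the shape GPTCache's built-in classes follow."""

    def __init__(self, dim: int = 3):
        self.__dimension = dim

    def to_embeddings(self, data, **_kwargs):
        # A real implementation would call an embedding API here;
        # we return a fixed-size dummy vector for illustration.
        return [0.0] * self.__dimension

    @property
    def dimension(self) -> int:
        # GPTCache uses this to size the vector index.
        return self.__dimension


emb = MyEmbedding()
print(emb.dimension)                     # 3
print(len(emb.to_embeddings("hello")))   # 3
```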
@theinhumaneme Or, you can show the embed_query func; maybe I can give you some advice.
There is no `to_embeddings` function in the `OpenAIEmbeddings` class now. We have `embed_query` and `embed_documents`. Here's the link: https://api.python.langchain.com/en/latest/_modules/langchain_openai/embeddings/base.html#OpenAIEmbeddings.embed_query
@theinhumaneme You cannot put LangChain's embedding methods into GPTCache because they are incompatible, and GPTCache will not be considered when LangChain is modified.
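The `TypeError` above arises because GPTCache's adapter calls the configured `embedding_func` with extra keyword arguments (such as `extra_param`), while LangChain's `embed_query(text)` accepts only the text. One possible workaround is a thin wrapper that discards the extra kwargs; a sketch, where `FakeEmbedder` is a stand-in for a LangChain-style embedder and not a real class:

```python
class FakeEmbedder:
    """Stand-in for a LangChain embedder exposing embed_query(text)."""

    def embed_query(self, text: str):
        # Dummy one-dimensional "embedding" based on text length.
        return [float(len(text))]


def make_gptcache_embedding_func(embedder):
    """Wrap embed_query so it tolerates extra keyword arguments."""

    def embedding_func(data, **_kwargs):  # swallows extra_param, etc.
        return embedder.embed_query(data)

    return embedding_func


# A GPTCache-style call with an extra keyword argument no longer raises:
func = make_gptcache_embedding_func(FakeEmbedder())
print(func("Tell me a joke", extra_param=None))  # [14.0]
```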
Okay, thank you. I will look into the OpenAI library.
Current Behavior
I get a stack trace.
Expected Behavior
I should be able to use the cache normally.
Steps To Reproduce
Environment
No response
Anything else?
I get this error when I use `set_llm_cache()` from LangChain. It works fine when I use it normally (i.e. `init`), but it fails when I try to embed my text using the OpenAI embeddings: I get an error stating that `to_embeddings` doesn't exist. When I change the code in the function to `embed_query`, I get unexpected `extra_param` passed. Thank you :D