Closed: kishanios123 closed this issue 1 year ago
Is it convenient to provide the completed test code?
I will post this issue in another open bug ... where the source code is mentioned. Is this good practice or not?
> Is it convenient to provide the completed test code?

In fact, what I mean is that you can write a simple demo to reproduce the error you encountered.
ok
import time

from gptcache import cache
from gptcache.core import Cache
from gptcache.manager import manager_factory
from gptcache.embedding import Onnx
from gptcache.processor.post import temperature_softmax
from gptcache.similarity_evaluation.onnx import OnnxModelEvaluation
from gptcache.adapter import openai
from gptcache.processor.pre import get_openai_moderation_input

cacheTemperature = 1.0
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# LLM cache: sqlite + faiss, onnx embeddings, onnx similarity evaluation
llm_cache = Cache()
onnx = Onnx()
data_manager = manager_factory("sqlite,faiss", data_dir="llm_cache", vector_params={"dimension": onnx.dimension})
llm_cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=OnnxModelEvaluation(),
    post_process_messages_func=temperature_softmax,
)

# Separate exact-match cache for moderation results
moderation_cache = Cache()
moderation_data_manager = manager_factory(data_dir="moderation_cache")
moderation_cache.init(
    data_manager=moderation_data_manager,
    pre_embedding_func=get_openai_moderation_input,
)

sysPrompt = ""

def getOpenAiResponse(question):
    start = time.time()
    if not question.strip():
        print("empty question ,,, empty answer ,,, " + str(round(time.time() - start, 2)) + " ,,, 0", flush=True)
        return "No input received. Please enter a valid input and try again."
    try:
        modRes = openai.Moderation.create(input=question, cache_obj=moderation_cache)
        print(str(modRes), flush=True)
        modOutput = modRes["results"][0]["flagged"]
    except Exception as e:
        print("got error once - " + str(e), flush=True)
        modRes = openai.Moderation.create(input=question, cache_obj=moderation_cache)
        modOutput = modRes["results"][0]["flagged"]
        print("after error modOutput = " + str(modOutput), flush=True)
    if not modOutput:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            temperature=cacheTemperature,  # change temperature here
            messages=[
                {"role": "system", "content": sysPrompt},
                {"role": "user", "content": question},
            ],
            cache_obj=llm_cache,
        )
        answer = response["choices"][0]["message"]["content"]
        token = str(response["usage"]["total_tokens"])
        print(question + " ,,, " + answer + " ,,, " + str(round(time.time() - start, 2)) + " ,,, " + token, flush=True)
        return answer
    print(question + " ,,, moderation failed ,,, " + str(round(time.time() - start, 2)) + " ,,, 0", flush=True)
    return "Your question violates the policy. Please ask only relevant and appropriate questions."

question = ""
answer = getOpenAiResponse(question)
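For context, a minimal sketch of what a temperature-based post-processor like temperature_softmax is expected to do with the cached candidates: scale the similarity scores by the temperature, softmax them, and sample one answer. The function name temperature_softmax_sketch and the toy scores below are illustrative, not GPTCache's actual implementation.

```python
import numpy as np

def temperature_softmax_sketch(messages, scores, temperature=1.0):
    # Scale scores by temperature: higher temperature flattens the
    # distribution, so lower-scored cached answers get picked more often.
    scaled = np.array(scores, dtype=float) / temperature
    e = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs = e / e.sum()
    # Sample one cached answer according to the softmax probabilities.
    return np.random.choice(messages, p=probs)

print(temperature_softmax_sketch(["answer A", "answer B"], [0.9, 0.4]))
```

This is why the post-processor needs the scores as a flat 1-D sequence: the softmax operates over one score per candidate answer.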
I am getting some other errors too, mentioned below:
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", temperature=1.0,  # Change temperature here

  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/openai.py", line 100, in create
    return adapt(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 172, in adapt
    return_message = time_cal(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 161, in post_process
    return_message = chat_cache.post_process_messages_func(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/processor/post.py", line 84, in temperature_softmax
    scores = softmax([x / temperature for x in scores])
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/softmax.py", line 6, in softmax
    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
AssertionError: Expect to get a shape of (len,) but got (2, 1).
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", temperature=1.0,  # Change temperature here

  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/openai.py", line 100, in create
    return adapt(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 172, in adapt
    return_message = time_cal(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 161, in post_process
    return_message = chat_cache.post_process_messages_func(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/processor/post.py", line 84, in temperature_softmax
    scores = softmax([x / temperature for x in scores])
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/softmax.py", line 6, in softmax
    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
AssertionError: Expect to get a shape of (len,) but got (1, 1).
@kishanios123 I will fix the error in a new pull request. This is caused by the onnx similarity evaluation:

    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
AssertionError: Expect to get a shape of (len,) but got (2, 1).
OK, thanks. Will the other two issues also be fixed?
    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
AssertionError: Expect to get a shape of (len,) but got (2, 1).

    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
AssertionError: Expect to get a shape of (len,) but got (1, 1).
Yes, they are all caused by the same problem.
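The failure can be reproduced without GPTCache. The softmax below mirrors the helper shown in the traceback (gptcache/utils/softmax.py); the scores list is a made-up example assuming the onnx evaluator returns a length-1 array per cached candidate, which is exactly what produces the (2, 1) shape instead of (2,).

```python
import numpy as np

def softmax(x):
    # Mirrors the gptcache helper from the traceback: insists on 1-D input.
    x = np.array(x)
    assert len(x.shape) == 1, f"Expect to get a shape of (len,) but got {x.shape}."
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Each score is a length-1 array, so stacking two of them yields (2, 1).
scores = [np.array([0.9]), np.array([0.4])]
try:
    softmax([s / 1.0 for s in scores])
except AssertionError as e:
    print(e)  # Expect to get a shape of (len,) but got (2, 1).

# Flattening each score to a plain float restores the expected 1-D shape.
flat = [float(s) for s in scores]
probs = softmax([s / 1.0 for s in flat])
print(probs.shape)  # (2,)
```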
I have thoroughly run your test code and, after the fix, found no issues. You may also want to try installing the dev version to confirm.
Thank you once again for your inquiries and support!
@kishanios123 the issue should be fixed in the latest version
OK, I have updated the package and will let you know if I find any bugs.
Current Behavior
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/openai.py", line 100, in create
    return adapt(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 172, in adapt
    return_message = time_cal(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/time.py", line 9, in inner
    res = func(*args, **kwargs)
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/adapter/adapter.py", line 161, in post_process
    return_message = chat_cache.post_process_messages_func(
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/processor/post.py", line 84, in temperature_softmax
    scores = softmax([x / temperature for x in scores])
  File "/root/anaconda3/envs/openai/lib/python3.9/site-packages/gptcache/utils/softmax.py", line 5, in softmax
    x = np.array(x)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
Expected Behavior
No response
Steps To Reproduce
No response
Environment
GPTCache==0.1.32
Anything else?
No response