zilliztech / GPTCache

Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
https://gptcache.readthedocs.io
MIT License

[DOCS]: Does the onnx model support Chinese when used to determine similarity? #634

Open qiangruoyu opened 4 months ago

qiangruoyu commented 4 months ago

Documentation Link

No response

Describe the problem

Does the onnx model support Chinese when used to determine similarity?

Describe the improvement

There is no detailed documentation about this model. Could you please provide a detailed explanation of how it was trained?

Anything else?

No response

SimFG commented 4 months ago

The onnx model only supports English, and it is a very small model, only suitable for demos.
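
For Chinese questions, the bundled onnx model can be replaced with a multilingual embedding model. A minimal sketch (not from this thread), assuming a multilingual sentence-transformers model and a faiss vector store; the model name is illustrative:

from gptcache import cache
from gptcache.embedding import SBERT
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# assumption: any multilingual sentence-transformers model can be used here
encoder = SBERT("paraphrase-multilingual-MiniLM-L12-v2")
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=encoder.dimension),
)
cache.init(
    embedding_func=encoder.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)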

xiaolele1314 commented 3 months ago

Use sbert and a custom similarity_evaluation. Example:

import time
from typing import Any, Dict, Tuple

import torch
from gptcache.similarity_evaluation import SimilarityEvaluation
# assumption: BertSimilarity comes from an sbert-based similarity package
from similarities import BertSimilarity


class JxyEvaluation(SimilarityEvaluation):
    model_instance = None

    def __init__(self):
        if JxyEvaluation.model_instance is None:
            # use torch.no_grad() to reduce memory usage (inference only)
            with torch.no_grad():
                JxyEvaluation.model_instance = BertSimilarity()
        self.model = JxyEvaluation.model_instance

    def evaluation(
        self, src_dict: Dict[str, Any], cache_dict: Dict[str, Any], **_
    ) -> float:
        try:
            src_question = src_dict["question"]
            cache_question = cache_dict["question"]
            # identical questions get the maximum score
            if src_question.lower() == cache_question.lower():
                return 2
            start_time = time.time()
            score = float(self.model.similarity(src_question, cache_question))
            print("Cache Time consuming: {:.2f}s".format(
                time.time() - start_time))
            # shift the model score into the (0, 2) range used by this evaluation
            return score + 1
        except Exception:  # pylint: disable=W0703
            return 0

    def range(self) -> Tuple[float, float]:
        return 0.0, 2.0
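
To plug the custom evaluation into GPTCache, pass an instance to cache.init. A short sketch, assuming the embedding function and data manager are configured elsewhere (my_encoder and my_data_manager are hypothetical placeholders):

from gptcache import cache

cache.init(
    embedding_func=my_encoder.to_embeddings,  # hypothetical encoder configured elsewhere
    data_manager=my_data_manager,             # hypothetical data manager configured elsewhere
    similarity_evaluation=JxyEvaluation(),
)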