Langboat / mengzi-retrieval-lm

An experimental implementation of the retrieval-enhanced language model
Apache License 2.0

Customize knowledge db #6

Open ii35322 opened 1 year ago

ii35322 commented 1 year ago

Hello, thanks for the valuable repo. I have already tried running this code and it worked very well! It looks like the db can be downloaded through Hugging Face. Can we build our own customized knowledge database without downloading it from Hugging Face? Thanks!

Ag2S1 commented 1 year ago

Of course. I have added simple documentation to the index-server folder; please refer to it and give it a try: https://github.com/Langboat/mengzi-retrieval-lm/blob/main/index-server/README.md
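
For anyone looking for the general shape of such a database, here is a minimal sketch of the usual idea (embedding text chunks and storing them in a FAISS index). This is not the repo's actual pipeline; the embedding model name and output paths are placeholders, so please follow index-server/README.md for the real scripts, file formats, and paths.

```python
# Minimal sketch of a custom knowledge db: embed fixed-size text chunks
# and store them in a FAISS index, plus an id -> text mapping.
# NOTE: not the repo's actual pipeline -- see index-server/README.md.
import json
import faiss                                  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

chunks = [
    "Client progress notes are written by staff of a company ...",
    "A knowledge base chunk is typically a fixed-length span of text ...",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")    # placeholder encoder
embeddings = encoder.encode(chunks, convert_to_numpy=True).astype("float32")

faiss.normalize_L2(embeddings)                       # cosine similarity
index = faiss.IndexFlatIP(embeddings.shape[1])       # inner-product index
index.add(embeddings)

faiss.write_index(index, "my_knowledge.index")       # placeholder paths
with open("my_chunks.json", "w") as f:
    json.dump(chunks, f)                             # keep id -> text mapping
```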

ii35322 commented 1 year ago

Hello, thank you for your reply! Following your steps above, I can run the experiment with the customized database! Now I want to evaluate the model with retrieval. I saw there is a "generate.py" file, which can generate text with the model, but I ran into two issues:

  1. If my input length is less than 64 tokens, can I still generate text with retrieval? (e.g., can I use padding to bring the sentence up to 64 tokens?)
  2. There is a hyperparameter flag '--retrieval' that can be set, but I don't know what "retrieval list" I need to input there. For example, if I set the input text to "Client progress notes are written by staff of a company about a specified client. It includes a client's achievements, status and any other details about a client. Client progress note is aimed at reflecting", what retrieval list do I need to set? Thanks again if you have time to take a look.

bling0830 commented 1 year ago

  1. If the input length is less than 64, the input can be padded until its length is greater than 64. The input needs to be at least 65 tokens long.

  2. We expose the retrieval parameter so that users can supply their own retrievals, but since the similarity between a user-defined retrieval and the input text cannot be verified, there is no guarantee that the model will make good use of it.

    If the input text is "Client progress notes are written by staff of a company about a specified client. It includes a client's achievements, status and any other details about a client. Client progress note is aimed at reflecting", there will be 40 tokens after tokenization.

    The first step is to pad the left side of the input text so that the number of input tokens is greater than 64, which activates retrieval.

    The retrieval list is a two-dimensional list: the length of the first dimension equals the number of chunks, and the length of the second dimension equals the number of neighbors. If a retrieval entry has fewer than 128 tokens, it is padded during tokenization; if it has more than 128 tokens, it is truncated.

    Therefore, for this input text there will be 1 chunk after padding, and the current model uses 2 neighbors, so the retrieval list should be set to something like [['--------','--------']]. A sketch of both steps is shown below.
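
For reference, here is a minimal sketch of the two steps above: left-padding a short input past 64 tokens and building the [chunks][neighbors] retrieval list. It assumes a Hugging Face tokenizer is available; the tokenizer name, pad-token choice, and neighbor strings are illustrative placeholders, not the actual defaults used by generate.py.

```python
# Sketch: prepare an input shorter than 64 tokens for retrieval-augmented
# generation, following the padding + retrieval-list description above.
# The tokenizer and pad-token handling are assumptions for illustration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder tokenizer
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

text = ("Client progress notes are written by staff of a company about a "
        "specified client. It includes a client's achievements, status and "
        "any other details about a client. Client progress note is aimed at "
        "reflecting")

ids = tokenizer(text)["input_ids"]                  # ~40 tokens for this text
min_len = 65                                        # at least 65 tokens needed
pad_len = max(0, min_len - len(ids))
padded_ids = [tokenizer.pad_token_id] * pad_len + ids   # pad on the left

# One chunk after padding, and the model uses 2 neighbors, so the retrieval
# list has shape [num_chunks][num_neighbors] = [1][2]. Each neighbor is a
# plain string; entries under 128 tokens are padded and longer ones are
# truncated when the pipeline tokenizes them.
retrieval = [
    ["Client progress notes document a client's achievements and status.",
     "Progress notes are written by staff to reflect a client's condition."],
]
```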