Hi @tiandong1234 – I've removed this constraint. Can you try running the code from the latest branch?
It's working now. I have a small suggestion for improvement: if someone like me loads the LLM from a local path, the function `load_embedder_and_tokenizer` in `model_utils.py` might not correctly identify the model type. Could you add a parameter or argparse option that lets users specify the model type explicitly?
@tiandong1234 Can you give an example of what this would look like? Also, feel free to submit a pull request.
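For context, here is a rough sketch of what the suggestion could look like. This is illustrative only, not the repository's actual code: the `model_type` parameter, the CLI flag names, and the fallback heuristic are assumptions.

```python
# A minimal sketch of the suggested change, not the project's actual code.
# The parameter name `model_type`, the CLI flag names, and the fallback
# heuristic below are all assumptions for illustration.
import argparse
from typing import Optional

from transformers import AutoModel, AutoTokenizer


def load_embedder_and_tokenizer(name_or_path: str, model_type: Optional[str] = None):
    """Load an embedder and its tokenizer.

    model_type: optional override (e.g. "t5", "llama") for cases where
    `name_or_path` is a local directory and the architecture cannot be
    inferred from the name alone.
    """
    if model_type is None:
        # Existing behavior: guess the type from the (possibly local) name.
        model_type = "llama" if "llama" in name_or_path.lower() else "t5"
    tokenizer = AutoTokenizer.from_pretrained(name_or_path)
    model = AutoModel.from_pretrained(name_or_path)
    # ...the real function would branch on `model_type` here as it does today.
    return model, tokenizer


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--embedder_model_name", type=str, required=True)
    parser.add_argument(
        "--embedder_model_type",
        type=str,
        default=None,
        help="Explicit model type, useful when loading from a local path.",
    )
    args = parser.parse_args()
    model, tokenizer = load_embedder_and_tokenizer(
        args.embedder_model_name, model_type=args.embedder_model_type
    )
```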
Hi, when loading the `jxm/t5-base__llama-7b__one-million-instructions__correct` model, it appears that I need the file `/home/jxm3/research/retrieval/inversion/llama_unigram.pt`. Could you please release this file, or explain how to generate it?