Closed gmpetrov closed 1 year ago
I can't reproduce this, @gmpetrov. Can you provide more details about your env (pip list) or anything else that could help us diagnose the root cause?
Sure
pip list aiohttp 3.8.3 aiorwlock 1.3.0 aiosignal 1.3.1 alembic 1.9.2 appdirs 1.4.4 appnope 0.1.3 astroid 2.13.5 asttokens 2.2.1 async-generator 1.10 async-timeout 4.0.2 attrs 22.2.0 audioread 3.0.0 Authlib 1.2.0 azure-ai-formrecognizer 3.2.0 azure-common 1.1.28 azure-core 1.26.3 backcall 0.2.0 backoff 1.11.1 beautifulsoup4 4.11.2 beir 1.0.1 black 23.1.0 bleach 6.0.0 blobfile 2.0.1 cachetools 5.3.0 cattrs 22.2.0 certifi 2022.12.7 cffi 1.15.1 cfgv 3.3.1 chardet 5.1.0 charset-normalizer 2.1.1 ci-sdr 0.0.2 click 8.0.4 cloudpickle 2.2.1 colorama 0.4.6 coloredlogs 15.0.1 ConfigArgParse 1.5.3 contourpy 1.0.7 coverage 7.1.0 cryptography 39.0.1 ctc-segmentation 1.7.4 cycler 0.11.0 Cython 0.29.33 databind 1.5.3 databind.core 1.5.3 databind.json 1.5.3 databricks-cli 0.17.4 datasets 2.9.0 decorator 5.1.1 defusedxml 0.7.1 Deprecated 1.2.13 dill 0.3.6 Distance 0.1.3 distlib 0.3.6 dnspython 2.3.0 docker 6.0.1 docopt 0.6.2 docspec 2.0.2 docspec-python 2.0.2 docstring-parser 0.11 einops 0.6.0 elasticsearch 7.9.1 entrypoints 0.4 espnet 202209 espnet-model-zoo 0.1.7 espnet-tts-frontend 0.0.3 exceptiongroup 1.1.0 executing 1.2.0 faiss-cpu 1.7.2 farm-haystack 1.14.0rc0 /Users/gpetrov/workspace/haystack fast-bss-eval 0.1.3 fastjsonschema 2.16.2 filelock 3.9.0 Flask 2.2.2 flatbuffers 23.1.21 fonttools 4.38.0 frozenlist 1.3.3 fsspec 2023.1.0 g2p-en 2.1.0 ghp-import 2.1.0 gitdb 4.0.10 GitPython 3.1.30 grpcio 1.37.1 grpcio-tools 1.37.1 gunicorn 20.1.0 h11 0.14.0 h5py 3.8.0 huggingface-hub 0.12.0 humanfriendly 10.0 identify 2.5.17 idna 3.4 importlib-metadata 4.13.0 inflect 6.0.2 iniconfig 2.0.0 ipython 8.9.0 isodate 0.6.1 isort 5.12.0 itsdangerous 2.1.2 jaconv 0.3.3 jamo 0.4.1 jarowinkler 1.2.3 jedi 0.18.2 Jinja2 3.1.2 joblib 1.2.0 jsonschema 4.17.3 jupyter_client 8.0.2 jupyter_core 5.2.0 jupytercontrib 0.0.7 jupyterlab-pygments 0.2.2 kaldiio 2.17.2 kiwisolver 1.4.4 langdetect 1.0.9 lazy-object-proxy 1.9.0 librosa 0.9.2 llvmlite 0.39.1 loguru 0.6.0 lxml 4.9.2 Mako 1.2.4 Markdown 3.3.7 
MarkupSafe 2.1.2 matplotlib 3.6.3 matplotlib-inline 0.1.6 mccabe 0.7.0 mergedeep 1.3.4 mistune 2.0.5 mkdocs 1.4.2 mlflow 2.1.1 mmh3 3.0.0 monotonic 1.6 more-itertools 9.0.0 mpmath 1.2.1 msgpack 1.0.4 msrest 0.7.1 multidict 6.0.4 multiprocess 0.70.14 mypy 1.0.0 mypy-extensions 1.0.0 nbclient 0.7.2 nbconvert 7.2.9 nbformat 5.7.3 networkx 3.0 nltk 3.8.1 nodeenv 1.7.0 nr.util 0.8.12 num2words 0.5.12 numba 0.56.4 numpy 1.23.5 oauthlib 3.2.2 onnx 1.12.0 onnxruntime 1.13.1 onnxruntime-tools 1.7.0 opensearch-py 2.1.1 outcome 1.2.0 packaging 22.0 pandas 1.5.3 pandocfilters 1.5.0 parso 0.8.3 pathspec 0.11.0 pdf2image 1.16.2 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pinecone-client 2.1.0 pip 22.2.2 platformdirs 3.0.0 pluggy 1.0.0 pooch 1.6.0 posthog 2.3.0 pre-commit 3.0.4 prompt-toolkit 3.0.36 protobuf 3.20.1 psutil 5.9.4 psycopg2-binary 2.9.5 ptyprocess 0.7.0 pure-eval 0.2.2 py-cpuinfo 9.0.0 py3nvml 0.2.7 pyarrow 10.0.1 pycparser 2.21 pycryptodomex 3.17 pydantic 1.10.4 pydoc-markdown 4.6.4 pydub 0.25.1 Pygments 2.14.0 PyJWT 2.6.0 pylint 2.15.10 pymilvus 2.0.2 pyparsing 3.0.9 pypinyin 0.44.0 pyproject_api 1.5.0 pyrsistent 0.19.3 PySocks 1.7.1 pytesseract 0.3.10 pytest 7.2.1 pytest-asyncio 0.20.3 pytest-custom-exit-code 0.3.0 python-dateutil 2.8.2 python-docx 0.8.11 python-dotenv 0.21.1 python-frontmatter 1.0.0 python-magic 0.4.27 python-multipart 0.0.5 pytorch-wpe 0.0.1 pytrec-eval 0.5 pytz 2022.7.1 pyworld 0.3.2 PyYAML 5.4.1 pyyaml_env_tag 0.1 pyzmq 25.0.0 quantulum3 0.8.1 querystring-parser 1.2.4 rank-bm25 0.2.2 rapidfuzz 2.7.0 ray 1.13.0 rdflib 6.2.0 regex 2022.10.31 requests 2.28.2 requests-cache 0.9.8 requests-oauthlib 1.3.1 resampy 0.4.2 responses 0.18.0 scikit-learn 1.2.1 scipy 1.10.0 selenium 4.8.0 sentence-transformers 2.2.2 sentencepiece 0.1.97 seqeval 1.2.2 setuptools 63.2.0 shap 0.41.0 six 1.16.0 slicer 0.0.7 smmap 5.0.0 sniffio 1.3.0 sortedcontainers 2.4.0 soundfile 0.11.0 soupsieve 2.3.2.post1 SPARQLWrapper 2.0.0 SQLAlchemy 1.4.46 SQLAlchemy-Utils 0.39.0 
sqlparse 0.4.3 stack-data 0.6.2 sympy 1.11.1 tabulate 0.9.0 threadpoolctl 3.1.0 tika 2.6.0 tiktoken 0.2.0 tinycss2 1.2.1 tokenize-rt 5.0.0 tokenizers 0.13.2 tomli 2.0.1 tomli_w 1.0.0 tomlkit 0.11.6 torch 1.13.1 torch-complex 0.4.3 torchvision 0.14.1 tornado 6.2 tox 4.2.6 tqdm 4.64.1 traitlets 5.9.0 transformers 4.25.1 trio 0.22.0 trio-websocket 0.9.2 typeguard 2.13.3 typing_extensions 4.4.0 ujson 5.1.0 Unidecode 1.3.6 url-normalize 1.4.3 urllib3 1.26.14 validators 0.19.0 virtualenv 20.19.0 watchdog 2.2.1 wcwidth 0.2.6 weaviate-client 3.10.0 webdriver-manager 3.8.5 webencodings 0.5.1 websocket-client 1.5.1 Werkzeug 2.2.2 wrapt 1.14.1 wsproto 1.2.0 xmltodict 0.13.0 xxhash 3.2.0 yapf 0.32.0 yarl 1.8.2 zipp 3.12.0
I just tried the example from a colab notebook and I had no issues 🤔
No luck on my side. It doesn't seem like an emergency. Let's see if others encounter this error before digging deeper.
Python 3.10.9 (main, Jan 11 2023, 09:18:18) [Clang 14.0.6] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from haystack.nodes.prompt import PromptNode, PromptTemplate
>>> prompt_node = PromptNode(model_name_or_path="google/flan-t5-large")
>>> prompt_node("What's the capital of Germany?")
['berlin']
I couldn't reproduce this either. Do you have any updates here, @gmpetrov?
@bilgeyucel @gmpetrov
Same issue for me, in Colab:
from haystack.nodes import PromptNode
prompt_node = PromptNode(model_name_or_path="google/flan-t5-base")
Error:
Currently supported invocation layers are: [<class 'haystack.nodes.prompt.invocation_layer.open_ai.OpenAIInvocationLayer'>, <class 'haystack.nodes.prompt.invocation_layer.azure_open_ai.AzureOpenAIInvocationLayer'>, <class 'haystack.nodes.prompt.invocation_layer.chatgpt.ChatGPTInvocationLayer'>, <class 'haystack.nodes.prompt.invocation_layer.azure_chatgpt.AzureChatGPTInvocationLayer'>, <class 'haystack.nodes.prompt.invocation_layer.hugging_face.HFLocalInvocationLayer'>, <class 'haystack.nodes.prompt.invocation_layer.hugging_face_inference.HFInferenceEndpointInvocationLayer'>]
You can implement and provide custom invocation layer for google/flan-t5-small by subclassing PromptModelInvocationLayer.
There is an ongoing HF infra outage; see if this resolves once their service is back up and running.
This issue is reproducible when accessing a model stored on the local system, rather than downloading it live from Hugging Face:
PromptNode(model_name_or_path="google/flan-t5-large") ... this works
PromptNode(model_name_or_path="c:/xxx/flan-t5-large") ... this shows the error
Hey @eswarthammana, can you share the exact error you get?
Hey, I'm also getting a similar issue when trying to use the google/flan-t5-large model locally (downloaded from Hugging Face).
The error reads: Model google/flan-t5-large is not supported - no matching invocation layer found, Currently supported invocation layers are ........
You can implement and provide custom invocation layer for google/flan-t5-large by subclassing PromptModelInvocationLayer.
I think it would be great if there were an example of how to create a custom invocation layer. Is there one somewhere?
Thanks.
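For reference, the subclassing pattern looks roughly like the sketch below. It is self-contained and uses a stand-in base class so it runs without Haystack installed; the real `PromptModelInvocationLayer` lives in `haystack.nodes.prompt.invocation_layer`, and while the `invoke`/`supports` method names follow the Haystack v1 API, check the Haystack source for the exact signatures before relying on this.

```python
from abc import ABC, abstractmethod
from typing import List


# Stand-in for haystack.nodes.prompt.invocation_layer.PromptModelInvocationLayer,
# included here only so the sketch runs without Haystack installed.
class PromptModelInvocationLayer(ABC):
    def __init__(self, model_name_or_path: str, **kwargs):
        self.model_name_or_path = model_name_or_path

    @abstractmethod
    def invoke(self, *args, **kwargs) -> List[str]:
        """Run the model on a prompt and return a list of generated strings."""

    @classmethod
    @abstractmethod
    def supports(cls, model_name_or_path: str, **kwargs) -> bool:
        """Return True if this layer can handle the given model identifier."""


class EchoInvocationLayer(PromptModelInvocationLayer):
    """Toy layer that echoes the prompt back, showing the shape of a subclass."""

    def invoke(self, *args, **kwargs) -> List[str]:
        prompt = kwargs.get("prompt", "")
        return [f"echo: {prompt}"]

    @classmethod
    def supports(cls, model_name_or_path: str, **kwargs) -> bool:
        # PromptModel tries supports() on each registered layer and uses the
        # first one that returns True; "no matching invocation layer found"
        # means every layer returned False for your model string.
        return model_name_or_path.startswith("echo/")


layer = EchoInvocationLayer("echo/demo")
print(layer.invoke(prompt="hello"))  # ['echo: hello']
```

The `echo/` prefix check is purely illustrative; a real layer would load the model in `__init__` and decide `supports` based on the model type or path.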
Hi @ArmstrongEML, can you try passing HFLocalInvocationLayer to PromptModel? I believe you don't need a custom invocation layer for this:
from haystack.nodes import PromptModel, PromptNode
from haystack.nodes.prompt.invocation_layer import HFLocalInvocationLayer
local_model = PromptModel(
    model_name_or_path=LOCAL_PATH,
    invocation_layer_class=HFLocalInvocationLayer,
)
prompt_node = PromptNode(model_name_or_path=local_model)
Thanks @bilgeyucel !!
I'm now getting the error: Instantiating a pipeline without a task set raised an error: Repo id must be alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' or '.' Cannot start or end the name, max length is 96.
The path I am using is less than 96 characters long, and I have tried changing the name of the folder the model sits in.
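That "Repo id must be alphanumeric…" error comes from the Hugging Face Hub's repo-id validation, which fires when a string is treated as a hub repo ID rather than a local path. A rough, self-contained approximation of that rule (the real check lives in huggingface_hub; this is illustrative only) shows why a Windows-style path trips it regardless of length:

```python
import re

# Rough approximation of the HF Hub repo-id rule: alphanumerics plus
# '-', '_', '.'; no leading/trailing '-' or '.'; no '--' or '..';
# at most one '/' (namespace/name); max 96 characters overall.
SEGMENT_RE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?$")


def looks_like_repo_id(name: str) -> bool:
    if len(name) > 96 or "--" in name or ".." in name:
        return False
    parts = name.split("/")
    if not 1 <= len(parts) <= 2:
        return False
    return all(SEGMENT_RE.match(p) for p in parts)


print(looks_like_repo_id("google/flan-t5-large"))  # True
print(looks_like_repo_id("c:/xxx/flan-t5-large"))  # False: extra '/' and ':'
```

So the fix is not shortening the path but making Haystack treat the string as a local model (e.g. via HFLocalInvocationLayer and an appropriate task name, as below).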
@ArmstrongEML How about passing a task_name to PromptNode via model_kwargs={'task_name': 'text2text-generation'}?

prompt_node = PromptNode(model_name_or_path=local_model, model_kwargs={'task_name': 'text2text-generation'})
Sorry, I got the error above when I ran:
local_model = PromptModel(
    model_name_or_path=LOCAL_PATH,
    invocation_layer_class=HFLocalInvocationLayer,
)
You can pass the same model_kwargs to PromptModel:

local_model = PromptModel(
    model_name_or_path=LOCAL_PATH,
    invocation_layer_class=HFLocalInvocationLayer,
    model_kwargs={'task_name': 'text2text-generation'},
)
Oh wait, I added the model_kwargs to the PromptModel, and that looks to have worked!
Everything looks to be working now. Thank you so much!
Great to hear @ArmstrongEML :)
Error message: ValueError: Could not load model google/flan-t5-large with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>, <class 'transformers.models.t5.modeling_t5.T5ForConditionalGeneration'>).
To Reproduce:
prompt_node = PromptNode(model_name_or_path="google/flan-t5-large")
print(prompt_node(f"Hello"))
System: M1 Macbook
Works with google/flan-t5-base