Hi, thanks for the report!
I wonder whether this is because `SimpleFrozenDict` isn't part of the `__all__` declaration in https://github.com/explosion/confection/blob/main/confection/__init__.py. It ran for us locally and on the CI though. We'll look into it.

Just to double check: which version of `confection` do you have installed? If it's a lower version, can you try upgrading to `confection==0.1.2`?
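(A quick way to check the installed version from Python — a minimal sketch using only the standard library:)

```python
from importlib.metadata import version

# Print the installed confection version; versions before 0.1.2
# may not export SimpleFrozenDict from the package root.
print(version("confection"))
```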
Hi Sofie, thanks for the quick reply!
I had `confection` 0.0.4 (the default version installed). I upgraded to 0.1.2 and that solved the error, but now I'm getting a new one (see the full stack trace below):

```
ValueError: The specified model 'gpt-4' is not available
```
My guess is that a weaker model (that doesn't require authentication) should be specified in the config file.
Best, Ron
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python /Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py
2023-09-08 11:29:08.348957: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py", line 3, in <module>
ValueError: The specified model 'gpt-4' is not available

Process finished with exit code 1
```
Happy to hear the issue with `confection` is resolved! We'll make sure to adjust the lower pin so others don't get bitten by this, too. I also released a new version 0.1.3 of `confection` that properly exports the frozen structures as part of the `__all__` in the main module, just in case.
For this new issue, can you copy-paste the config file you're using? Is this with a freshly installed `spacy-llm` 0.5.0? Can you try a different model, e.g.

```ini
[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
```

and let us know if that does work?
I'm using the config file from the README. Here it is, updated as you suggested:

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.TextCat.v3"
labels = ["COMPLIMENT", "INSULT"]

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
```
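(For reference, the README drives a config like this with a short script along the following lines; the config path and example text are illustrative:)

```python
from spacy_llm.util import assemble

# Build the pipeline from the config file above; spacy-llm resolves
# the task and model sections at assembly time.
nlp = assemble("config.cfg")

# The TextCat task writes its label scores to doc.cats.
doc = nlp("You look gorgeous!")
print(doc.cats)
```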
Now I'm getting a different error (see the full stack trace below):
```
ConnectionError: API could not be reached after 31.684 seconds in total and attempting to connect 5 times. Check your network connection and the API's availability. 429 Too Many Requests
```
Looks like a server is being overloaded with requests.
Ron
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python /Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py
2023-09-08 11:55:33.184692: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py", line 4, in <module>
ConnectionError: API could not be reached after 31.684 seconds in total and attempting to connect 5 times. Check your network connection and the API's availability. 429 Too Many Requests

Process finished with exit code 1
```
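(Aside: the retry behavior in that message is tunable on spacy-llm's REST models — a sketch assuming the 0.5.x parameters `max_tries`, `interval`, and `max_request_time`. This only helps with transient rate limiting, not with an account that's out of quota, as suggested below.)

```ini
[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
# Assumed 0.5.x knobs: retry more patiently on transient 429s.
max_tries = 10
interval = 3.0
max_request_time = 60.0
```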
Could be the same problem as I have (no credits on my OpenAI account). See #286
I believe you are right. This should probably be mentioned upfront in the README.
Does spacy_llm work with any models that don't require a paid OpenAI account?
@rkatriel: if you want to use a model from OpenAI, we do assume that you have your API keys set as environment variables, as explained in the first sentences of the quickstart, with a reference to more info over at the API docs. If you see any ways of clarifying that further, feel free to submit a pull request :-)
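(For completeness, a minimal sketch of that setup — the variable names follow the spacy-llm docs; the values are placeholders:)

```python
import os

# spacy-llm's OpenAI models read the key from the environment.
# Set it before assembling the pipeline; values here are placeholders.
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ["OPENAI_API_ORG"] = "<your-org-id>"  # only if your key is org-scoped
```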
With respect to issue #286, we'll definitely have a look to ensure that the error messages don't cause further confusion.
> Does spacy_llm work with any models that don't require a paid OpenAI account?
Yes, you can use an open-source model through Huggingface. Note that you'll need sufficient memory and GPU power on your system to be able to run one. You can have a look at this specific example and the API docs.
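(Concretely, swapping the model section for a HuggingFace-hosted one looks roughly like this — registry and model names per the spacy-llm API docs:)

```ini
[components.llm.model]
@llm_models = "spacy.Dolly.v1"
name = "dolly-v2-3b"
```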
I stand corrected, should have read that section more carefully. Thanks for the pointer to the example using Huggingface.
Unfortunately, dolly-v2-3b doesn't work on my Intel-based Mac (no GPU support). I get the following error:
```
TypeError: Trying to convert BFloat16 to the MPS backend but it does not have support for that dtype.
```
It seems the only solution for this is to use a GGML model (not included in the models provided as part of the spaCy core library) or upgrade to the latest Apple Silicon (M2).
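(One possible workaround — assuming spacy-llm's HF wrappers forward `config_init` to `transformers`' `from_pretrained` as the API docs describe, and that your `transformers` version accepts a string `torch_dtype` — is to force a dtype that MPS does support. Untested on this setup:)

```ini
[components.llm.model]
@llm_models = "spacy.Dolly.v1"
name = "dolly-v2-3b"

[components.llm.model.config_init]
# Hypothetical: load weights as float32 instead of bfloat16,
# since the MPS backend lacks bfloat16 support.
torch_dtype = "float32"
```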
I tried other spaCy-supported Huggingface models: Llama, Falcon, StableLM, and OpenLLaMA. All (except Llama) installed successfully, but each fails on a different error when running the README example.
Suggestions for making spacy-llm work on my Mac would be much appreciated.
Thanks! Ron
> All (except Llama) installed successfully, but each fails on a different error when running the README example.

Please include the error messages here.
Here's, for example, the error I'm getting when loading `spacy.StableLM.v1` (stablelm-base-alpha-3b):
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python /Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py
2023-09-11 11:47:15.660511: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/models/hf/base.py:99: UserWarning: Couldn't find a CUDA GPU, so the setting 'device_map:auto' will be used, which may result in the LLM being loaded (partly) on the CPU or even the hard disk, which may be slow. Install cuda to be able to load and run the LLM on the GPU instead.
  warnings.warn(
Traceback (most recent call last):
  File "/Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py", line 3, in <module>
ValueError: The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.

Process finished with exit code 1
```
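(If the failure really is the disk-offload path, one possible fix — again assuming `config_init` is forwarded to `from_pretrained` — is to supply the folder the error asks for:)

```ini
[components.llm.model]
@llm_models = "spacy.StableLM.v1"
name = "stablelm-base-alpha-3b"

[components.llm.model.config_init]
# Hypothetical: give accelerate a scratch directory for weights
# that don't fit in memory, as the ValueError requests.
offload_folder = "offload"
```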
Hi,
Both examples in the README file fail to run. I get the following error:
```
ImportError: cannot import name 'SimpleFrozenDict' from 'confection' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/confection/__init__.py)
```
It happens when the following is executed (see the full stack trace below):
```python
from spacy_llm.util import assemble
```
The code is run from within PyCharm on a MacBook Pro (Intel core i7) under macOS Monterey (Version 12.5.1).
Thanks, Ron
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python /Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py
2023-09-08 10:55:19.576814: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/Users/ron.katriel/PycharmProjects/Transformer/spacy-llm-test.py", line 1, in <module>
    from spacy_llm.util import assemble
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/__init__.py", line 1, in <module>
    from . import cache # noqa: F401
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/cache.py", line 11, in <module>
    from .ty import LLMTask, PromptTemplateProvider
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/ty.py", line 14, in <module>
    from .models import langchain
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/models/__init__.py", line 1, in <module>
    from .hf import dolly_hf, openllama_hf, stablelm_hf
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/models/hf/__init__.py", line 2, in <module>
    from .dolly import dolly_hf
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/spacy_llm/models/hf/dolly.py", line 3, in <module>
    from confection import SimpleFrozenDict
ImportError: cannot import name 'SimpleFrozenDict' from 'confection' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/confection/__init__.py)
```