python3 server.py --classification-model joeddav/distilbert-base-uncased-go-emotions-student --enable-modules summarize,chromadb,classify,rvc,coqui-tts, --coqui-gpu --summarization-model=tuner007/pegasus_summarizer
Using torch device: cpu
Initializing a text summarization model...
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.61k/1.61k [00:00<00:00, 14.5MB/s]
Downloading spiece.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.91M/1.91M [00:00<00:00, 19.3MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.34k/1.34k [00:00<00:00, 13.1MB/s]
Traceback (most recent call last):
File "/home/rexommendation/Programs/SillyTavern-Extras/server.py", line 219, in <module>
summarization_tokenizer = AutoTokenizer.from_pretrained(summarization_model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1811, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1965, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/models/pegasus/tokenization_pegasus_fast.py", line 142, in __init__
super().__init__(
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/convert_slow_tokenizer.py", line 1288, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/convert_slow_tokenizer.py", line 445, in __init__
from .utils import sentencepiece_model_pb2 as model_pb2
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/transformers/utils/sentencepiece_model_pb2.py", line 91, in <module>
_descriptor.EnumValueDescriptor(
File "/home/rexommendation/Programs/SillyTavern-Extras/venv/lib/python3.11/site-packages/google/protobuf/descriptor.py", line 796, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
Downgrade the protobuf package to 3.20.x or lower.
Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
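The error message itself names two workarounds. A minimal sketch of the environment-variable route, assuming the variable is set before protobuf (or anything that imports it, such as transformers) is first imported — e.g. at the very top of server.py or in the shell before launching:

```python
import os

# Force the pure-Python protobuf parser, per the second workaround in the
# error message. This only takes effect if it runs before the first
# protobuf import, so it must precede any transformers import.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# Only after setting the variable, import the tokenizer machinery, e.g.:
# from transformers import AutoTokenizer
# summarization_tokenizer = AutoTokenizer.from_pretrained("tuner007/pegasus_summarizer")
```

The pure-Python parser is noticeably slower; the first workaround (downgrading protobuf inside the venv, e.g. `pip install "protobuf==3.20.*"`) is the more common fix, at the cost of pinning an older package.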