nikipi opened this issue 1 year ago
I updated numpy using the command `pip install -U numpy`, then tried to run the notebook cell:
```python
import ecco

lm = ecco.from_pretrained('t5-small')
review = """Denis Villeneuve's Dune looks and sounds amazing -- and once the (admittedly slow-building) story gets you hooked, you'll be on the edge of your seat for the sequel."""
output = lm.generate(f"sst2 sentence: {review}", generate=1, do_sample=False)
```
But it's giving the following error:
```
TypeError                                 Traceback (most recent call last)
1 frames
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py in _prepare_decoder_input_ids_for_generation(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device)
    654 if model_kwargs is not None and "decoder_input_ids" in model_kwargs:
    655     decoder_input_ids = model_kwargs.pop("decoder_input_ids")
--> 656 elif "input_ids" in model_kwargs and model_input_name != "input_ids":
    657     decoder_input_ids = model_kwargs.pop("input_ids")
    658 else:

TypeError: argument of type 'NoneType' is not iterable
```
Fixed:

`!pip uninstall numpy -y`
`!pip install numpy==1.21.5`
I solved this problem with

`pip install -U numpy`

and, most importantly, you must restart your notebook kernel afterwards.
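In case it's useful to anyone else, the whole sequence in a fresh Colab/Jupyter session looks roughly like this (a minimal sketch; whether you upgrade numpy or pin `numpy==1.21.5` as suggested above depends on your environment):

```python
# Cell 1: reinstall numpy (upgrade, or pin a known-good version as suggested above)
!pip install -U numpy

# Now restart the kernel (Runtime -> Restart runtime in Colab, Kernel -> Restart in Jupyter)
# so the freshly installed numpy binary is actually the one that gets imported.

# Cell 2, after the restart: confirm the version, then use ecco as usual
import numpy
print(numpy.__version__)

import ecco
lm = ecco.from_pretrained('t5-small')
```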
I tried the above options in both Colab and a local Jupyter notebook, but I get the same error: `TypeError: argument of type 'NoneType' is not iterable`
```python
!pip install ecco
!pip uninstall numpy -y
!pip install numpy==1.21.5

import ecco

lm = ecco.from_pretrained('t5-small')
review = """Denis Villeneuve's Dune looks and sounds amazing -- and once the (admittedly slow-building) story gets you hooked, you'll be on the edge of your seat for the sequel."""
output = lm.generate(f"sst2 sentence: {review}", generate=1, do_sample=False)
```
```
TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_912/1601604474.py in <module>
      6 of your seat for the sequel."""
      7
----> 8 output = lm.generate(f"sst2 sentence: {review}", generate=1, do_sample=False)

~/miniconda3/envs/py3.8/lib/python3.8/site-packages/ecco/lm.py in generate(self, input_str, max_length, temperature, top_k, top_p, do_sample, attribution, generate, beam_size, **generate_kwargs)
    202 assert len(input_ids.size()) == 2 # will break otherwise
    203 if version.parse(transformers.__version__) >= version.parse('4.13'):
--> 204     decoder_input_ids = self.model._prepare_decoder_input_ids_for_generation(input_ids.shape[0], None, None)
    205 else:
    206     decoder_input_ids = self.model._prepare_decoder_input_ids_for_generation(input_ids, None, None)

~/miniconda3/envs/py3.8/lib/python3.8/site-packages/transformers/generation/utils.py in _prepare_decoder_input_ids_for_generation(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device)
    654 if model_kwargs is not None and "decoder_input_ids" in model_kwargs:
    655     decoder_input_ids = model_kwargs.pop("decoder_input_ids")
--> 656 elif "input_ids" in model_kwargs and model_input_name != "input_ids":
    657     decoder_input_ids = model_kwargs.pop("input_ids")
    658 else:

TypeError: argument of type 'NoneType' is not iterable
```
SumitDasTR
Same
Yeah, the `pip install numpy==1.21.5` solution does not work for me either. Has anyone solved this problem?
```python
lm = ecco.from_pretrained('valhalla/t5-small-qa-qg-hl')  # , gpu=False)
output = lm.generate(text, generate=20, do_sample=True, attribution=['ig', 'grad_x_input'])
```

gives:
```
---> 14 output = lm.generate(text, generate=20, do_sample=True, attribution=['ig', 'grad_x_input'])
     16 output.primary_attributions(attr_method='ig')

File /opt/conda/envs/ecco/lib/python3.9/site-packages/ecco/lm.py:204, in LM.generate(self, input_str, max_length, temperature, top_k, top_p, do_sample, attribution, generate, beam_size, **generate_kwargs)
    202 assert len(input_ids.size()) == 2 # will break otherwise
    203 if version.parse(transformers.__version__) >= version.parse('4.13'):
--> 204     decoder_input_ids = self.model._prepare_decoder_input_ids_for_generation(input_ids.shape[0], None, None)
    205 else:
    206     decoder_input_ids = self.model._prepare_decoder_input_ids_for_generation(input_ids, None, None)

File /opt/conda/envs/ecco/lib/python3.9/site-packages/transformers/generation/utils.py:662, in GenerationMixin._prepare_decoder_input_ids_for_generation(self, batch_size, model_input_name, model_kwargs, decoder_start_token_id, bos_token_id, device)
    660 if model_kwargs is not None and "decoder_input_ids" in model_kwargs:
    661     decoder_input_ids = model_kwargs.pop("decoder_input_ids")
--> 662 elif "input_ids" in model_kwargs and model_input_name != "input_ids":
    663     decoder_input_ids = model_kwargs.pop("input_ids")
    664 else:

TypeError: argument of type 'NoneType' is not iterable
```
Just fixed the issue. Downgrading your `transformers` to 4.13.0 solves the problem. This is due to an API change in newer `transformers` releases.
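For anyone who wants the exact commands, the downgrade is just a pin; run it in the same environment ecco uses and restart the kernel afterwards:

```python
# Per the comment above: newer transformers releases changed this private
# generation API, so pin back to 4.13.0 for now.
!pip uninstall -y transformers
!pip install transformers==4.13.0
```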
@SumitDasTR @Peter-Zhoutuanjie cc @jalammar
Note that this solution is a quick fix. If you want to use models like OPT, LLaMA, etc., you still need to patch the source code to make it work.
Maybe another solution: weeks ago I got the ecco *.ipynb notebooks mostly working on Python 3.10 with transformers 4.29.2, locally, in a conda env with many other packages. I dimly remember having problems with the version pins of the dependencies; as a false beginner, I edited both of ecco's requirement files, setup.py and requirements.txt.
My notes are at https://github.com/martin12333/marti-onedrive/blob/main2/AI/pip----31pip310ecco.e.f8.sh
My setup.py and requirements.txt are at https://github.com/martin12333/marti-onedrive/tree/main2/AI/Jalammar
EDIT: additional info is at https://github.com/jalammar/ecco/issues/102#issuecomment-1675574099
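For what it's worth, the mechanical part of that change is just raising the `transformers` pin in the requirement files and reinstalling ecco from the edited checkout; something along these lines (illustrative only; the exact version set I ended up with is in the linked notes, and the local path is hypothetical):

```python
# After editing the transformers pin in ecco's setup.py / requirements.txt,
# reinstall from the local checkout so the edited requirements are picked up.
!pip install transformers==4.29.2
!pip install -e /path/to/ecco   # hypothetical path to the edited ecco checkout
```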
Hey,
I tried `pip install ecco` on Google Colab, then `import ecco`, but I get the error below:
```
RuntimeError                              Traceback (most recent call last)
__init__.pxd in numpy.import_array()

RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf .
Check the section C-API incompatibility at the Troubleshooting ImportError section at
https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility
for indications on how to solve this problem .

During handling of the above exception, another exception occurred:

ImportError                               Traceback (most recent call last)
in <cell line: 1>()
----> 1 import ecco
      2 lm = ecco.from_pretrained('distilgpt2')

5 frames
/usr/local/lib/python3.10/dist-packages/ecco/__init__.py in <module>
     14
     15 __version__ = '0.1.2'
---> 16 from ecco.lm import LM
     17 from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel, AutoModelForSeq2SeqLM
     18 from typing import Any, Dict, Optional, List

/usr/local/lib/python3.10/dist-packages/ecco/lm.py in <module>
     13 from torch.nn import functional as F
     14 from ecco.attribution import compute_primary_attributions_scores
---> 15 from ecco.output import OutputSeq
     16 from typing import Optional, Any, List, Tuple, Dict, Union
     17 from operator import attrgetter

/usr/local/lib/python3.10/dist-packages/ecco/output.py in <module>
      9 import torch
     10 from torch.nn import functional as F
---> 11 from sklearn import decomposition
     12 from typing import Dict, Optional, List, Tuple, Union
     13 from ecco.util import strip_tokenizer_prefix, is_partial_token

/usr/local/lib/python3.10/dist-packages/sklearn/__init__.py in <module>
     80 from . import _distributor_init  # noqa: F401
     81 from . import __check_build  # noqa: F401
---> 82 from .base import clone
     83 from .utils._show_versions import show_versions
     84

/usr/local/lib/python3.10/dist-packages/sklearn/base.py in <module>
     15 from . import __version__
     16 from ._config import get_config
---> 17 from .utils import _IS_32BIT
     18 from .utils._tags import (
     19     _DEFAULT_TAGS,

/usr/local/lib/python3.10/dist-packages/sklearn/utils/__init__.py in <module>
     20 from scipy.sparse import issparse
     21
---> 22 from .murmurhash import murmurhash3_32
     23 from .class_weight import compute_class_weight, compute_sample_weight
     24 from . import _joblib

sklearn/utils/murmurhash.pyx in init sklearn.utils.murmurhash()

__init__.pxd in numpy.import_array()

ImportError: numpy.core.multiarray failed to import
```
NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the "Open Examples" button below.