Closed JeremyOBrien16 closed 2 years ago
Same error here. It was working fine last week and now I cannot import.
I'll take a look ... either something in fastai or huggingface has changed. What versions of each are you both working with?
I tried both the default version from PyPI (0.0.18) and installing from source (0.0.19) with pip install -e ".[dev]". I got the same error on Colab and on my personal machine with both versions.
Thanks for the quick response!
fastai 1.0.61 transformers 4.0.0
Looks like huggingface pushed a transformers update a week ago: https://github.com/huggingface/transformers/releases
Rolled back to transformers==3.5.1 and install/imports are working fine.
Thanks for the info.
Btw, fastai needs to be >= 2.1.5 as well. I'll find some time to fix things to work against the latest this week.
wg
It worked here for me. Thanks 👍
How?
It's still failing for me... well, I installed from pip:
python : 3.8.5
fastai : 2.1.8
fastprogress : 0.2.7
torch : 1.7.0
transformers : 3.5.1
Oh yeah, it works with
pip install transformers==3.5.1
pip install ohmeow-blurr==0.0.18
Thx
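If you're unsure what actually got installed, a quick stdlib check (package names taken from this thread; note "ohmeow-blurr" is the PyPI name while the import name is "blurr"):

```python
from importlib import metadata

def installed_versions(packages=("transformers", "ohmeow-blurr", "fastai")):
    """Return installed version strings, or None for packages that are absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None  # not installed in this environment
    return versions

print(installed_versions())
```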
It's actually quite an easy fix, I think. Huggingface has just buried their model files in a couple of extra layers of directories since the src folder was getting quite big. So in ModelHelper's __init__() you need to split self._df.module_part_3 instead of self._df.module_part_1. I got it to work on my fork here: https://github.com/HenryDashwood/blurr/commit/dececd4e706129694b25ee80c15dfb61ffadf9a9#diff-45d6eb3657aeaa14c3dacc16d9278dda8d6aa21c61456f9a03d80e34d368eeec
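For context, a rough sketch of why the split index changes (the module strings below are illustrative of the transformers layout change; blurr's real attribute names may differ):

```python
# transformers < 4.0 exposed model modules at the top level; 4.x nests them
# under transformers.models.<arch>. The model-specific part of the dotted
# path therefore moves from index 1 to index 3 when splitting on ".".
old_module = "transformers.modeling_bert"              # transformers < 4.0
new_module = "transformers.models.bert.modeling_bert"  # transformers >= 4.0

print(old_module.split(".")[1])  # modeling_bert
print(new_module.split(".")[3])  # modeling_bert
```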
Unfortunately there are a few more version 4 breaking changes which come up when I try to test the notebooks. I might have some time to take a crack at them this week if it would be helpful?
Done.
The latest fast.ai and transformers libs actually broke a few things ... but all is working now; all tests are passing.
Closing this out ... lmk if anything is still broken for you all.
[Works fine with] python: 3.7.13, pytorch: 1.7.1+cu110, fastai: 2.2.5, transformers: 4.3.3, ohmeow-blurr==0.0.24
and to use BLURR_MODEL_HELPER instead of BLURR.
@bappctl : try installing the latest version of blurr ... pip install ohmeow-blurr==1.0.2
Some of the old imports were being cached in there that nbdev didn't clear out when I made the initial release. That's been fixed, and the namespaces have changed: all the NLP/text bits will be under blurr.text.
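A small sketch for telling the two layouts apart (the "blurr.text" submodule name comes from the comment above; the rest is an assumption, so check your installed version's docs):

```python
import importlib.util

def blurr_import_hint():
    """Guess which blurr import layout applies in this environment."""
    if importlib.util.find_spec("blurr") is None:
        return "blurr is not installed"
    if importlib.util.find_spec("blurr.text") is not None:
        return "blurr >= 1.0 layout: import from blurr.text.data / blurr.text.modeling"
    return "blurr < 1.0 layout: import from blurr.data.all / blurr.modeling.all"

print(blurr_import_hint())
```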
Feel free to close this out once things are working (or lmk if you are still having issues).
You are right: ohmeow-blurr==0.0.24 with BLURR_MODEL_HELPER instead of BLURR worked for me. Thanks ✌️
Closing this out as v.1 is out and the code above is obsolete.
After pip install in Colab, importing blurr.modeling.all and blurr.data.all throws errors. I didn't encounter errors last week, so I'm not sure when they began. Has there been a syntax change for setup?
ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/pandas/core/indexes/range.py in get_loc(self, key, method, tolerance)
    354         try:
--> 355             return self._range.index(new_key)
    356         except ValueError as err:

ValueError: 1 is not in range

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
6 frames
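For what it's worth, the chained ValueError comes from pandas looking up a position in a RangeIndex that doesn't contain it; a minimal stdlib illustration of the same failure mode (not blurr's or pandas' actual code):

```python
# pandas' RangeIndex.get_loc delegates to range.index, which raises
# ValueError for a missing key; pandas then re-raises it as a KeyError.
r = range(0, 1)  # covers only position 0, like a single-element RangeIndex
try:
    r.index(1)   # looking up position 1, which does not exist
except ValueError as err:
    msg = str(err)

print(msg)  # "1 is not in range"
```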