Jurys22 opened this issue 1 year ago
Hi, thanks for pointing this out. I have updated the version, and it should be fixed in the Colab now: https://colab.research.google.com/drive/1F9zW_nVkwfwIVXTOA_juFDrlPz5TLjpK?usp=sharing#scrollTo=izKXA4b6-oIv
Hi again! I tried to launch it this morning by making a copy of the Colab in my own Drive (using a GPU), but it still gives me the error.
For info, if I run the Colab directly from your link, without copying it to my Drive, it works. Any hints on why this is happening?
Hi, it may be a difference in environment. Is your notebook using a GPU or the CPU?
I think I'm using a GPU (a different one than yours).
When I'm on a Tesla T4 it seems to work. I don't know if there is a way to choose a specific GPU so I can test with another type, though. Any other reason why this could happen?
I'm not sure why errors occur on specific GPUs, but I have updated the pretrained weights and demo notebooks. Please `git pull` to the latest commit and try again. Thanks for raising the issue!
I think I found the issue, and it is still there:

```
A100-SXM4-40GB with CUDA capability sm_80 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75.
If you want to use the A100-SXM4-40GB GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
reading instances: 1it [00:00, 447.49it/s]
/content/span_model/predictors/spanmodel.py:69: UserWarning: Encountered a RunTimeError on document 0. Skipping this example. Error message: CUDA error: no kernel image is available for execution on the device.
  warnings.warn(msg)
```
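This warning explains the T4/A100 split: the installed PyTorch wheel was compiled for a fixed list of CUDA architectures, and the running GPU's compute capability must be in that list. A minimal pure-Python sketch of that check (the `arch_list` values are copied from the warning above; the capability tuples are the standard values for a T4 and an A100):

```python
def is_supported(capability, arch_list):
    """Return True if a GPU with the given (major, minor) compute
    capability is covered by the build's compiled CUDA architectures."""
    return f"sm_{capability[0]}{capability[1]}" in arch_list

# Architectures the failing PyTorch build was compiled for, per the warning:
arch_list = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75"]

print(is_supported((7, 5), arch_list))  # Tesla T4 (sm_75) -> True
print(is_supported((8, 0), arch_list))  # A100 (sm_80)     -> False
```

So the demo works when Colab assigns a T4 but fails on an A100; installing a PyTorch build compiled with sm_80 support (see the pytorch.org link in the warning) should resolve it.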
Hello, I am trying to run your demo (with either a premium or a standard GPU), but from the third block of code onward, something doesn't work:
```
ValidationError                           Traceback (most recent call last)
<ipython-input> in <module>
     18 text = "Did not enjoy the new Windows 8 and touchscreen functions ."
     19 model = SpanModel(save_dir="pretrained_14lap", random_seed=0)
---> 20 sent = predict_sentence(text, model)
     21
     22 for t in sent.triples:

3 frames
<ipython-input> in predict_sentence(text, model)
     12     data = Data(root=Path(), data_split=SplitEnum.test, sentences=[sent])
     13     data.save_to_path(path_in)
---> 14     model.predict(path_in, path_out)
     15     data = Data.load_from_full_path(path_out)
     16     return data.sentences[0]

/content/aste/wrapper.py in predict(self, path_in, path_out)
     89
     90         with open(path_temp_out) as f:
---> 91             preds = [SpanModelPrediction(**json.loads(line.strip())) for line in f]
     92         data = Data(
     93             root=Path(),

/content/aste/wrapper.py in <listcomp>(.0)
     89
     90         with open(path_temp_out) as f:
---> 91             preds = [SpanModelPrediction(**json.loads(line.strip())) for line in f]
     92         data = Data(
     93             root=Path(),

/usr/local/lib/python3.7/dist-packages/pydantic/main.cpython-37m-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for SpanModelPrediction
predicted_relations
  field required (type=value_error.missing)
```
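The `ValidationError` means the prediction file read back in `wrapper.py` contained a JSON line without a `predicted_relations` key, which is consistent with the document being skipped after the CUDA error earlier in the thread. A stdlib-only sketch (hypothetical data, mirroring the required-field check pydantic performs) of how a skipped document surfaces as this error:

```python
import json

REQUIRED = ("predicted_relations",)  # the field pydantic reports as missing

def find_invalid_lines(jsonl_text):
    """Return indices of JSON lines missing any required prediction field."""
    bad = []
    for i, line in enumerate(jsonl_text.splitlines()):
        record = json.loads(line.strip())
        if any(key not in record for key in REQUIRED):
            bad.append(i)
    return bad

# Hypothetical output file: the first document was skipped on the GPU error,
# so its record lacks "predicted_relations" and pydantic would reject it.
jsonl = '{"sentences": []}\n{"sentences": [], "predicted_relations": []}'
print(find_invalid_lines(jsonl))  # -> [0]
```

In other words, the `ValidationError` is a downstream symptom: fixing the GPU/PyTorch mismatch should make the prediction pass succeed and the field appear.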