hasanhuz / SpanEmo


Runtime Error #10

Closed Pegahyaftian closed 2 years ago

Pegahyaftian commented 2 years ago

`RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

Can you please let me know what GPU is required?
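(Not from the thread, but a common first debugging step for this error: `CUBLAS_STATUS_EXECUTION_FAILED` is often an asynchronous CUDA failure that gets reported at the wrong call site, frequently caused by out-of-memory or an index out of range. A minimal sketch to make the real failing op show up in the traceback:)

```python
# Hedged debugging sketch (not from this thread): setting CUDA_LAUNCH_BLOCKING=1
# *before* any CUDA context is created makes kernels launch synchronously, so the
# traceback points at the actual failing operation instead of a later cuBLAS call.
import os

os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must run before torch initializes CUDA
```

With this set, rerunning the script usually yields a more informative error (e.g. an out-of-memory or an embedding index out of range) than the generic cuBLAS message.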

hasanhuz commented 2 years ago

Hi there, you can find the relevant information in the README and requirements files. Please ensure that you keep the same dependencies, as changing them may cause errors similar to the one you mentioned here. Hope this helps, Hassan
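(The maintainer's advice about keeping pinned dependencies can be checked mechanically. A minimal sketch, not part of the SpanEmo repo, that compares installed package versions against a dict of pins taken from requirements.txt; the package name below is illustrative only:)

```python
# Hedged sketch: report every installed package whose version differs from the
# pinned one, so dependency drift can be ruled out before blaming the GPU.
from importlib.metadata import version, PackageNotFoundError

def check_pins(pins):
    """Return {package: (pinned, installed)} for every mismatched pin.

    installed is None when the package is not installed at all.
    """
    mismatches = {}
    for pkg, pinned in pins.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None
        if installed != pinned:
            mismatches[pkg] = (pinned, installed)
    return mismatches

# Illustrative pin -- in practice, fill this dict from requirements.txt:
print(check_pins({"this-package-should-not-exist-12345": "1.0.0"}))
```

An empty result means the environment matches the pins; anything else is a candidate cause for errors like the cuBLAS one above.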

ahmed-aleroud commented 2 years ago

Hi Hassan,

I was trying to run the code for your SpanEmo paper (interesting work, by the way). However, I am not sure why, when I run it on an Arabic dataset, it gives me the following errors and warnings.

`AttributeError: 'NoneType' object has no attribute 'update'`

Details are below

It looks like it is related to fastprogress, but I have tried every possibility, including upgrading and downgrading several libraries.

Any help is appreciated

Thanks

```
/usr/local/lib/python3.9/site-packages/google/colab/data_table.py:30: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
  from IPython.utils import traitlets as _traitlets
Currently using GPU: cuda:0
/usr/local/lib/python3.9/site-packages/ekphrasis/classes/tokenizer.py:225: FutureWarning: Possible nested set at position 2190
  self.tok = re.compile(r"({})".format("|".join(pipeline)))
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
/usr/local/lib/python3.9/site-packages/ekphrasis/classes/exmanager.py:14: FutureWarning: Possible nested set at position 42
  regexes = {k.lower(): re.compile(self.expressions[k]) for k, v in
Reading twitter_2018 - 1grams ...
PreProcessing dataset ...:   0% 0/178 [00:00<?, ?it/s]
/usr/local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2323: FutureWarning: The pad_to_max_length argument is deprecated and will be removed in a future version, use padding=True or padding='longest' to pad to the longest sequence in the batch, or use padding='max_length' to pad to a max length. In this case, you can give a specific length with max_length (e.g. max_length=45) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
  warnings.warn(
PreProcessing dataset ...: 100% 178/178 [00:01<00:00, 122.84it/s]
The number of training batches: 6
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
Reading twitter_2018 - 1grams ...
PreProcessing dataset ...:   0% 0/178 [00:00<?, ?it/s]
/usr/local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:2323: FutureWarning: The pad_to_max_length argument is deprecated and will be removed in a future version, use padding=True or padding='longest' to pad to the longest sequence in the batch, or use padding='max_length' to pad to a max length. In this case, you can give a specific length with max_length (e.g. max_length=45) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
  warnings.warn(
PreProcessing dataset ...: 100% 178/178 [00:01<00:00, 89.23it/s]
The number of validation batches: 6
Some weights of the model checkpoint at asafaya/bert-base-arabic were not used when initializing BertModel: ['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight']
```
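(Not an answer from the thread, but a hedged workaround sketch, assuming the SpanEmo trainer drives its progress bars through fastprogress: the traceback suggests a notebook bar object is `None` when `.update()` is called, which can happen in some Colab setups. fastprogress ships `force_console_behavior()`, which swaps the notebook bars for console ones; applying it before the trainer builds its bars may sidestep the `'NoneType' object has no attribute 'update'` error.)

```python
# Hedged workaround sketch: replace fastprogress's notebook bars with console
# bars, which do not depend on IPython widget state being available.
try:
    from fastprogress.fastprogress import force_console_behavior
    master_bar, progress_bar = force_console_behavior()
    bars = (master_bar, progress_bar)
except ImportError:
    bars = None  # fastprogress is not installed in this environment
```

One would then make sure the training script picks up these console bar classes (for example by running this before importing the learner module), rather than constructing notebook bars itself.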