cbaziotis / ntua-slp-semeval2018

Deep-learning models of NTUA-SLP team submitted in SemEval 2018 tasks 1, 2 and 3.
MIT License

Pretraining Getting Stuck #7

Open iNeil77 opened 5 years ago

iNeil77 commented 5 years ago

I am running the pretraining code the way you suggested, but it has been stuck at this point for two hours now. Is it supposed to take this long?

neilpaul77@NeilRig77:~/Downloads/ntua-slp-semeval2018$ python sentiment2017.py 
/home/neilpaul77/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp
Running on:cuda
loading word embeddings...
Loaded word embeddings from cache.
Reading twitter_2018 - 1grams ...
Reading twitter_2018 - 2grams ...
Reading twitter_2018 - 1grams ...
Building word-level datasets...
Loading SEMEVAL_2017_word_train from cache!
Total words: 1435889, Total unks:9700 (0.68%)
Unique words: 45397, Unique unks:3602 (7.93%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.47%', 'positive': '35.62%'}

Loading SEMEVAL_2017_word_val from cache!
Total words: 75465, Total unks:521 (0.69%)
Unique words: 9191, Unique unks:198 (2.15%)
Labels statistics:
{'negative': '18.91%', 'neutral': '45.46%', 'positive': '35.63%'}

Initializing Embedding layer with pre-trained weights!
ModelWrapper(
  (feature_extractor): FeatureExtractor(
    (embedding): Embed(
      (embedding): Embedding(804871, 310)
      (dropout): Dropout(p=0.1)
      (noise): GaussianNoise (mean=0.0, stddev=0.2)
    )
    (encoder): RNNEncoder(
      (rnn): LSTM(310, 150, num_layers=2, batch_first=True, dropout=0.3, bidirectional=True)
      (drop_rnn): Dropout(p=0.3)
    )
    (attention): SelfAttention(
      (attention): Sequential(
        (0): Linear(in_features=300, out_features=300, bias=True)
        (1): Tanh()
        (2): Dropout(p=0.3)
        (3): Linear(in_features=300, out_features=1, bias=True)
        (4): Tanh()
        (5): Dropout(p=0.3)
      )
      (softmax): Softmax()
    )
  )
  (linear): Linear(in_features=300, out_features=3, bias=True)
)
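
For readers trying to follow the dump above, here is a minimal PyTorch sketch of the forward pass it implies. Everything is reconstructed from the printed module tree; the exact wiring, and the omission of the custom GaussianNoise layer, are assumptions rather than the repo's actual code.

import torch
import torch.nn as nn

class SketchModel(nn.Module):
    """Embed -> 2-layer BiLSTM -> self-attention pooling -> linear classifier."""

    def __init__(self, vocab=804871, emb=310, hidden=150, classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab, emb)
        self.emb_dropout = nn.Dropout(0.1)
        # Bidirectional, so each timestep yields 2 * hidden = 300 features
        self.rnn = nn.LSTM(emb, hidden, num_layers=2, batch_first=True,
                           dropout=0.3, bidirectional=True)
        self.drop_rnn = nn.Dropout(0.3)
        self.attention = nn.Sequential(
            nn.Linear(2 * hidden, 2 * hidden), nn.Tanh(), nn.Dropout(0.3),
            nn.Linear(2 * hidden, 1), nn.Tanh(), nn.Dropout(0.3),
        )
        self.linear = nn.Linear(2 * hidden, classes)

    def forward(self, x):                               # x: (batch, seq) token ids
        h = self.emb_dropout(self.embedding(x))         # (batch, seq, 310)
        out, _ = self.rnn(h)                            # (batch, seq, 300)
        out = self.drop_rnn(out)
        scores = torch.softmax(self.attention(out).squeeze(-1), dim=1)  # (batch, seq)
        context = (out * scores.unsqueeze(-1)).sum(dim=1)               # (batch, 300)
        return self.linear(context)                     # (batch, 3) class logits

model = SketchModel()
logits = model(torch.randint(0, 804871, (8, 40)))  # e.g. a batch of 8 tweets, 40 tokens each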
cbaziotis commented 5 years ago

Pretraining should take seconds to minutes, depending on your hardware. Please upgrade ekphrasis and try again:

pip install ekphrasis -U
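
To confirm which version is actually being picked up after the upgrade, pip can report the installed one (a general pip command, nothing specific to this repo):

pip show ekphrasis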
iNeil77 commented 5 years ago

I have reinstalled ekphrasis, but the issue persists.

minkj1992 commented 5 years ago

I have the same issue. Is it correct to set SEMEVAL_2017's embedding file to "ntua_twitter_affect_310"?
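
As a quick sanity check (a sketch only: the file path and the word2vec-style text format are assumptions, based on the Embedding(804871, 310) in the log above), you can verify that the embeddings file really has 310 dimensions:

# Assumed location and plain-text word2vec format: "token v1 v2 ... v310" per line
emb_file = "embeddings/ntua_twitter_affect_310.txt"

with open(emb_file, encoding="utf-8") as f:
    first_line = f.readline().rstrip("\n").split(" ")

print("embedding dim:", len(first_line) - 1)  # expect 310 for this model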

agn-7 commented 4 years ago

I had the same problem. It is caused by an older version of ekphrasis, so you can either update the library with pip install -U ekphrasis, or remove the pinned version of ekphrasis from requirements.txt so that the latest release gets installed.
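
Concretely, the second option means editing requirements.txt so the pinned line (the version number below is illustrative, not the repo's actual pin) becomes unpinned, i.e. change

ekphrasis==0.4.8

to

ekphrasis

and then reinstall with pip install -r requirements.txt.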