openai / generating-reviews-discovering-sentiment

Code for "Learning to Generate Reviews and Discovering Sentiment"
https://arxiv.org/abs/1704.01444
MIT License

Is the sentiment neuron bi-modal on corpora other than IMDB? #23

Closed cerisara closed 7 years ago

cerisara commented 7 years ago

I've computed the mLSTM features on each of the 6920 sentences of the Stanford Sentiment Treebank with

text_features = model.transform(text)

and extracted the final 4096-dimensional feature vector for each sentence, then plotted the distributions of neurons 2387 (see below) and 2388. But they look unimodal. I don't have IMDB at hand to try it on, but did I do something wrong, or does the bi-modal distribution plotted in the paper hold only for IMDB?

Thanks !

[image: histogram of feature 2387 across the SST sentences]

(Sorry, the previous histogram showed all features pooled together; I've now uploaded the histogram of feature 2387.)
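
(For reference, a minimal sketch of the feature-extraction and histogram step described above; it assumes the repo's encoder.Model as used later in the thread, a pre-loaded list sst_sentences of SST sentence strings, and matplotlib for plotting. It is not the original poster's exact code.)

import numpy as np
import matplotlib.pyplot as plt
from encoder import Model  # the mLSTM encoder shipped with this repo

model = Model()

# sst_sentences: list of raw SST sentence strings, assumed to be loaded already
text_features = np.asarray(model.transform(sst_sentences))  # one 4096-dim vector per sentence

unit = 2387
plt.hist(text_features[:, unit], bins=50)
plt.xlabel('activation of unit %d' % unit)
plt.ylabel('number of sentences')
plt.show()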

ahirner commented 7 years ago

Interesting @cerisara, have you tried grouping the distribution by sentiment label? The two "Gaussians" might just add up to something that looks like a Student's t distribution.
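
(A sketch of that check, reusing text_features from the snippet above and assuming labels is an array of the binary SST sentiment labels, 0 = negative and 1 = positive; the labels array is my assumption, not something shown in the thread.)

import numpy as np
import matplotlib.pyplot as plt

# text_features: (n_sentences, 4096) array from model.transform(...)
# labels: (n_sentences,) array of 0/1 sentiment labels (assumed available)
labels = np.asarray(labels)

unit = 2387
# Overlay the per-class histograms: if the unit tracks sentiment, the two classes
# should separate into two modes rather than one pooled bump.
for value, name in [(0, 'negative'), (1, 'positive')]:
    plt.hist(text_features[labels == value, unit], bins=50, alpha=0.5, label=name)
plt.legend()
plt.xlabel('activation of unit %d' % unit)
plt.show()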

cerisara commented 7 years ago

Not yet, I'm going to do it. But I've printed the sign of the 2387 and 2388 activations for each character, and it suggests that 2387 just detects 'w' and 2388 just detects 'u'. My code:

print(text)
text_features = model.transform(text)
# '_' where unit 2388 is negative, '°' where it is non-negative, one symbol per row of text_features
print(''.join(['_' if text_features[i][2388] < 0 else '°' for i in range(len(text_features))]))

[image: output markers for neuron 2388 aligned with the text]

cerisara commented 7 years ago

I also looked at the distributions per class, and both are essentially identical to the merged global distribution. All this is very weird; I must have a nasty bug somewhere...

Newmu commented 7 years ago

Yes, it works similarly on other corpora, not sure what the bug is in your case. I've added an example in sst_binary_demo.py which produces the figure in the README.
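
(For context, the paper obtains its SST results by fitting an L1-regularized logistic regression on the 4096 mLSTM features and inspecting the most heavily weighted unit. A sketch of that recipe with scikit-learn follows; trX/trY and teX/teY are assumed feature/label arrays, the value of C is illustrative, and this is not the actual code of sst_binary_demo.py.)

import numpy as np
from sklearn.linear_model import LogisticRegression

# trX, teX: (n, 4096) feature arrays from model.transform(...); trY, teY: binary labels
clf = LogisticRegression(C=0.5, penalty='l1', solver='liblinear')  # C is illustrative; the paper tunes it
clf.fit(trX, trY)
print('test accuracy: %.4f' % clf.score(teX, teY))

# The unit with the largest-magnitude weight is the candidate "sentiment unit"
sentiment_unit = np.argmax(np.abs(clf.coef_[0]))
print('candidate sentiment unit: %d' % sentiment_unit)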