agoeroeg opened 6 years ago
That's weird, I'll take a look.
Thanks. I extracted these vectors:

egrep '^(afd|merkel|macron|meinung|rosen|aprikosen|pegida|komisch|erdbeeren|xdf6hkj|mirnixdirnix|fußballer|beschlussauffassung|geschäftsordnung|bus|bahn|fahren|kunde) ' wikiplus.de.vec > vectors_trained.txt
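For inspecting the result, the same extraction can be sketched in Python. The file name wikiplus.de.vec and the word list come from the egrep command above; the function name and dict layout are my own choices, and the sketch assumes the standard text .vec format (a "num_words dim" header line, then one "word v1 v2 ... vN" per line):

```python
# Words taken from the egrep pattern above.
WORDS = {"afd", "merkel", "macron", "meinung", "rosen", "aprikosen",
         "pegida", "komisch", "erdbeeren", "xdf6hkj", "mirnixdirnix",
         "fußballer", "beschlussauffassung", "geschäftsordnung",
         "bus", "bahn", "fahren", "kunde"}

def load_selected_vectors(path, words):
    """Read a fastText text-format .vec file and keep only the listed words."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "num_words dim" header line
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in words:
                vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors
```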
(translation: afd|merkel|macron|opinion|roses|apricots|pegida|strange|strawberries|xdf6hkj|mirnixdirnix|footballer|decision|rules of procedure|bus|train|to ride|customer) My training text will differ from yours, but I would still expect that if I train on a political text with no roses, apricots, football, or bus and train rides, then these words would keep their 'sense'/neighbourhood.
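The expectation above — that word pairs unrelated to the training text keep their similarity — can be checked directly with cosine similarity. This is only a sketch: old_vecs/new_vecs stand for vector dicts loaded from the original and the incrementally trained .vec files, and the function names are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def neighbourhood_shift(old_vecs, new_vecs, w1, w2):
    """How much the similarity of (w1, w2) changed after incremental training."""
    before = cosine(old_vecs[w1], old_vecs[w2])
    after = cosine(new_vecs[w1], new_vecs[w2])
    return before, after
```

If the neighbourhood really is preserved, the "before" and "after" similarities for pairs like (bus, bahn) should stay close.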
Hi, I downloaded the German wiki word vectors from fastText: https://fasttext.cc/docs/en/pretrained-vectors.html (bin + text). I copy-pasted some political text into a training file (around 180 MB) and cleaned it with:

cat train.txt | sed -e 'y/[]/()/' | sed -E "s|([.!?,'/()])| \1 |g" | tr "[:upper:]" "[:lower:]" | sed -e 'y/ÖÜÄ/öüä/' > train_clean.txt

(the punctuation substitution needs -E for the \1 group and a delimiter other than /, since the character class itself contains a slash)

Then I trained on top of wiki.de.bin:

./fasttext skipgram -input train_clean.txt -inputModel wiki.de.bin -output wikiplus -incr

I extracted the vectors from the *.vec files, made a cluster analysis, and was amazed at how stupid my new vectors are. E.g. the clustering with the original vectors would cluster bus and train together, while the new vectors don't make much sense. Do you have any idea why?
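The shell cleaning pipeline above can be mirrored in Python as a sanity check; this is a sketch under the assumption of plain UTF-8 input, and the function name is mine. Note that Python's str.lower() also covers Ö/Ü/Ä, which byte-oriented tr cannot lowercase — that is what the final sed y/ÖÜÄ/öüä/ step compensates for:

```python
import re

def clean_line(text):
    """Mirror of the shell pipeline: map square brackets to parentheses,
    pad punctuation with spaces, and lowercase (including umlauts)."""
    text = text.replace("[", "(").replace("]", ")")
    text = re.sub(r"([.!?,'/()])", r" \1 ", text)
    return text.lower()
```

Running both the shell pipeline and this function over a sample and diffing the outputs would quickly reveal whether the cleaning step itself mangles the training text.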