NaNoGenMo / 2016

National Novel Generation Month, 2016 edition.
https://nanogenmo.github.io

Almost-but-not-quite #10

Open · MichaelPaulukonis opened 7 years ago

MichaelPaulukonis commented 7 years ago

My main project will be to complete an npm module for getting texts that are almost-but-not-quite the same as the source text.

The idea is roughly the same as @dariusk's Harpooners and Sailors (here (source) and here (output+notes)) from last year - but wrapped up into a nice reusable package.

I think I would like to use such a module for other projects, so this is a good time to git-r-done.

Plus, I've been holding off the implementation of it until November, anyway.

MichaelPaulukonis commented 7 years ago

Link Dump

MichaelPaulukonis commented 7 years ago

Start of crude proof-of-concept code here.

Includes some not-quite-as-crude code from another project I've done.

That code uses the nlp-compromise package instead of natural; I'm going to look into swapping those out.

MichaelPaulukonis commented 7 years ago

Sooooooo.... the light dawns on Marblehead: I'm using Levenshtein (edit distance), whereas Kazemi used word2vec, which gives a semantic distance. Edit distance is purely an accident of orthography.

So, what I've got is not nearly as interesting as I was hoping for (as usual).

It is of some interest, and I'll post some examples later this week (I'm desperately short on time this year, le sigh).
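
A minimal sketch of the difference, using the natural package's LevenshteinDistance (assuming I'm reading its API right; the word pairs are just illustrative):

```js
// Edit distance counts character operations, not meaning.
var natural = require('natural');

console.log(natural.LevenshteinDistance('cat', 'cot'));    // 1 -- close in spelling, far in meaning
console.log(natural.LevenshteinDistance('cat', 'kitten')); // 5 -- far in spelling, close in meaning
```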

enkiv2 commented 7 years ago

If you could normalize both to a scale between 0 and 1 you could multiply them :)
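
Roughly like this, as a sketch (the semantic score is a hypothetical stand-in for whatever word2vec-style measure you can get, assumed already scaled to 0..1):

```js
// Scale raw Levenshtein distance to a 0..1 similarity, then combine
// it with a 0..1 semantic similarity by multiplying the two.
var natural = require('natural');

function editSimilarity(a, b) {
  var dist = natural.LevenshteinDistance(a, b);
  return 1 - dist / Math.max(a.length, b.length); // 1 = identical strings
}

// `semanticSimilarity` is hypothetical: any word2vec-style score in 0..1.
function combinedScore(a, b, semanticSimilarity) {
  return editSimilarity(a, b) * semanticSimilarity(a, b);
}
```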


MichaelPaulukonis commented 7 years ago

I think I'm going to do some overkill and play with retext and the nodes of its natural language concrete syntax tree (nlcst), which has some charms such as paragraph and sentence tokenization, and the ability to recreate the original text.

I find the online examples of using retext and nlcst to be sub-optimal.

Also, I'm curious why the project works asynchronously, when there are no asynchronous sub-elements.
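
Since the examples out there are sub-optimal, here's the minimal version I've pieced together (assuming retext, unist-util-visit, and nlcst-to-string behave the way their READMEs suggest):

```js
// Parse text into an nlcst tree, visit each SentenceNode, and
// stringify it back out -- round-tripping the original text.
var retext = require('retext');
var visit = require('unist-util-visit');
var toString = require('nlcst-to-string');

var tree = retext().parse('Call me Ishmael. Some years ago, never mind how long precisely.');

visit(tree, 'SentenceNode', function (sentence) {
  console.log(toString(sentence));
});
```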

MichaelPaulukonis commented 7 years ago

@enkiv2 - What would that do? Pretend I'm almost statistically innumerate....

There are libs that provide a 0..1 edit distance; I happened to pick a package that didn't.

We've got a baby coming in < 3 weeks, so I'm not going to get into too much craziness. Figuring out how to get retext going seems to be the high point of the month for me.

enkiv2 commented 7 years ago

If you had the two factors scaled the same way, and multiplied them, you would rank words that are a good match on both factors much higher than one that is a good match on one but a poor match on the other. So, you'd get a lot of heavily related words. The results might be much more interesting, or much less interesting; I'm not sure.
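
Something like this, as a sketch (both scoring functions are hypothetical stand-ins, each returning a value in 0..1):

```js
// Rank candidates by the product of two 0..1 similarity scores; a
// candidate has to do well on BOTH factors to end up near the top.
function rankCandidates(word, candidates, editSim, semanticSim) {
  return candidates
    .map(function (c) {
      return { word: c, score: editSim(word, c) * semanticSim(word, c) };
    })
    .sort(function (a, b) { return b.score - a.score; });
}
```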


MichaelPaulukonis commented 7 years ago

@enkiv2 we're ranking sentences, not words. I'm still not clear on what I would multiply.


Here is some sample output: https://gist.github.com/MichaelPaulukonis/2b2d47a5e22066e950c39841b9a6c889

It only took 11 hours, but that's also because the computer slept for much of that time.

enkiv2 commented 7 years ago

I guess if we're ranking sentences that's a much harder problem. I don't know how to get, say, a word2vec-style location in semantic space for a whole sentence. Adding all the vectors would probably produce some unrelated word, if anything.
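
For the record, the naive version I'm imagining would look something like this (the vectors object is a hypothetical { word: [floats] } lookup):

```js
// Average the word vectors to get a crude "sentence vector"; as
// noted above, this tends to wash the meaning out.
function sentenceVector(words, vectors) {
  var known = words.filter(function (w) { return vectors[w]; });
  var avg = new Array(vectors[known[0]].length).fill(0);
  known.forEach(function (w) {
    vectors[w].forEach(function (v, i) { avg[i] += v; });
  });
  return avg.map(function (v) { return v / known.length; });
}
```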


ikarth commented 7 years ago

There's been some work with vectors at the sentence, paragraph, and document level. Look into doc2vec.

MichaelPaulukonis commented 7 years ago

Kazemi's project last year used word2vec - which I missed when I started this one. I was trying to do a single-language (NodeJS) solution. That's not quite possible.

michelleful commented 7 years ago

@enkiv2, you may want to give skip-thought vectors a try.

MichaelPaulukonis commented 7 years ago

@ikarth part of this was NOT using doc2vec since that's not NodeJS. Another part was thinking that Kazemi had not used it, either.

Something I did discover is some word-vectors as JSON - https://igliu.com/word2vec-json/
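
A quick sketch of how I'd use those (assuming the JSON maps each word to an array of floats - I haven't verified the exact format at that link):

```js
// Load word vectors from JSON and compare two words by cosine
// similarity (closer to 1 means more semantically related).
var vectors = require('./word2vec.json'); // hypothetical local copy

function cosine(a, b) {
  var dot = 0, magA = 0, magB = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

console.log(cosine(vectors['cat'], vectors['kitten']));
```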


I'm going to call it quits for the month. I didn't hit my objective of a nicely packaged npm module, but I did generate a novel and learned some new things.

We've got another baby due on Dec 1, so I'm going to finish off the month focusing on that!

The entire novel has been appended to the gist at https://gist.github.com/MichaelPaulukonis/2b2d47a5e22066e950c39841b9a6c889