I originally wanted to first train a neural net on Gutenberg book data, then run it backwards (à la Google's Deep Dream) to see what came out, but it turns out that training neural nets is hard.
Instead, I took my Gutenberg book data, picked a random selection of 1,000 books, and grabbed random sentences from them until I had 50,000 words.
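A minimal sketch of that sampling loop in Python, assuming one plain-text book per file under a hypothetical books/ directory and a naive regex sentence splitter; the real code is in the repo linked below:

```python
import glob
import random
import re

BOOK_SAMPLE = 1000   # how many books to draw from
WORD_TARGET = 50000  # stop once we've collected this many words

# Hypothetical layout: one plain-text Gutenberg book per .txt file.
paths = random.sample(glob.glob("books/*.txt"), BOOK_SAMPLE)

sentences = []
for path in paths:
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    # Crude sentence split: break after ., !, or ? followed by whitespace.
    sentences.extend(s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip())

output, words = [], 0
while words < WORD_TARGET:
    sentence = random.choice(sentences)
    output.append(sentence)
    words += len(sentence.split())

print("\n".join(output))
```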
Code: https://github.com/katre/bibliodream/tree/random-lines
Output: https://github.com/katre/bibliodream/blob/random-lines/results/book.txt