When cobe doesn't recognize any word in reply()'s input, it generates replies based on random tokens in the database. The resulting replies are likely to be much longer than replies seeded with a known word.
This might be caused by:
1) The scorer rating those longer sentences higher?
2) Using random tokens rather than random words (i.e. babbling on punctuation/space tokens)?
Generating a single random token and using it as the pivot for all candidate replies seems to improve this behavior, but I'm not sure whether shorter replies are actually better.
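The single-pivot idea could be sketched roughly like this. This is a hypothetical illustration of the strategy, not cobe's actual code: `choose_pivots`, `known_tokens`, and `all_tokens` are made-up names for the sake of the example.

```python
import random

def choose_pivots(known_tokens, all_tokens, n_replies):
    """Pick one pivot token per candidate reply.

    If the input contained tokens the brain knows, pivot on those.
    Otherwise fall back to a single random token from the database,
    shared by all candidate replies, rather than drawing a fresh
    random token for each one.
    """
    if known_tokens:
        return [random.choice(known_tokens) for _ in range(n_replies)]
    # No recognized tokens: seed every reply from the same random pivot.
    pivot = random.choice(all_tokens)
    return [pivot] * n_replies
```

With a shared pivot, all candidate replies for an unrecognized input are seeded the same way, which should keep them from wandering onto punctuation or space tokens independently.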