pteichman / cobe

A Markov chain based text generation library and MegaHAL style chatbot
http://teichman.org/blog/
MIT License

Small customisations #4

Open ghost opened 12 years ago

ghost commented 12 years ago

It would be good if it were possible to set some minor options.

pteichman commented 12 years ago

What is your goal with support for threading? I'm a little reluctant to take on the responsibility for maintaining thread safety throughout the database code, especially on Python where the global interpreter lock means the benefits may be less than expected.

If you're wanting to run several replies in parallel, you may be better off putting your Brain object in a multiprocessing.Pool, which would avoid thread safety issues and take advantage of multiple CPU cores. I have plans to support Pools directly for reply search and scoring, but that's going to wait until I've finished some work to reduce database load.
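For the parallel-replies case, the pattern might look something like this minimal sketch. The string-reversing `generate_reply` is a hypothetical stand-in for real reply generation, not cobe's API; in practice each worker process would open its own Brain.

```python
from multiprocessing import Pool

def generate_reply(text):
    # Stand-in for real reply generation; each worker process would
    # open its own Brain instance (hypothetical, not cobe's API).
    return text[::-1]

if __name__ == "__main__":
    # Each Pool worker runs in its own process, so there is no shared
    # state to protect and each worker can use a full CPU core.
    with Pool(processes=2) as pool:
        replies = pool.map(generate_reply, ["hello", "world"])
    print(replies)  # ['olleh', 'dlrow']
```

Because the workers are separate processes, this sidesteps both thread-safety concerns and the GIL entirely.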

I also have some changes in master that will make it easier to provide an asynchronous API for replies. Maybe the best solution overall would be to have a Brain manage several replies in parallel, while using multiprocessing to accelerate the searches underneath.

Changing the reply loop time is good. That can now be specified in master as loop_ms (integer milliseconds, default 500). Setting it to zero will cause one candidate reply to be generated and returned. I generally don't recommend using master, but I can release 2.0.5 with that change.
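The loop_ms behavior described above can be sketched roughly like this; this is a hypothetical illustration of the idea (generate candidates until a time budget expires, return the best scorer, and with loop_ms=0 return after the first candidate), not cobe's actual implementation.

```python
import time

def best_reply(candidates, score, loop_ms=500):
    # Hypothetical sketch of a loop_ms-style reply search: keep
    # generating candidate replies until the millisecond budget is
    # spent, then return the best-scoring one.  A budget of zero
    # returns the first candidate generated.
    deadline = time.monotonic() + loop_ms / 1000.0
    best, best_score = None, float("-inf")
    for text in candidates:
        s = score(text)
        if s > best_score:
            best, best_score = text, s
        if loop_ms == 0 or time.monotonic() >= deadline:
            break
    return best
```

For example, `best_reply(iter(["a", "bb", "ccc"]), len, loop_ms=0)` returns `"a"`, the first candidate, while a nonzero budget lets the loop consider more candidates and keep the highest-scoring one.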

Thanks for the feedback!

ghost commented 12 years ago

The threading request is due to the way a bot of mine is structured: the brain is opened in one thread, while another thread is responsible for learning and replying to chat text. Only one thread ever interacts with the brain once it's opened; I'm not trying to do anything ambitious.
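That structure can be kept safe by confining all brain access to a single worker thread and talking to it over queues. Here's a minimal sketch of that pattern; the set standing in for the brain and the upper-casing "reply" are hypothetical, not cobe's API.

```python
import queue
import threading

def brain_worker(requests, responses):
    # All brain access happens in this one thread, so the brain
    # itself needs no locking (the set is a stand-in for a Brain
    # instance; hypothetical, not cobe's API).
    brain = set()
    while True:
        text = requests.get()
        if text is None:  # shutdown sentinel
            break
        brain.add(text)              # "learn"
        responses.put(text.upper())  # "reply"

requests, responses = queue.Ueue() if False else queue.Queue(), queue.Queue()
worker = threading.Thread(target=brain_worker, args=(requests, responses))
worker.start()

requests.put("hello")
reply = responses.get()

requests.put(None)  # ask the worker to exit
worker.join()
```

Other threads only ever touch the thread-safe queues, never the brain itself.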

Not sure if you want a new issue opened for this (and forgive me if I'm missing something), but lines 260-264 and 268-272 in brain.py appear to be doing the same thing.