stripe-archive / brushfire

Distributed decision tree ensemble learning in Scala

Bounded-memory local training #89

Closed avibryant closed 8 years ago

avibryant commented 8 years ago

Some work from the plane, r? @striation @tixxit

This has the same goal as #77 but is deliberately worse code (for now) in the interests of being less invasive and getting merged quickly. It completely avoids making any useful abstractions and instead just provides the most direct implementation of a single-node, single-pass-per-expansion local trainer.
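A minimal, self-contained sketch of what one such pass could look like; `Instance`, `routeToLeaf`, `expandOnce`, and the label-count statistics are illustrative assumptions, not Brushfire's actual types or API:

```scala
import scala.collection.mutable

// Toy instance type for the sketch: a feature vector and a class label.
final case class Instance(features: Map[String, Double], label: String)

// One expansion pass: stream the data once, route each instance to its
// current leaf, and accumulate statistics only for the leaves selected
// for this pass. Memory is proportional to the selected leaves, not to
// the training set.
def expandOnce(
    routeToLeaf: Map[String, Double] => Int, // current tree as a routing fn
    selected: Set[Int],                      // leaves chosen for this pass
    data: Iterator[Instance]
): Map[Int, Map[String, Long]] = {
  val stats = mutable.Map.empty[Int, mutable.Map[String, Long]]
  data.foreach { inst =>
    val leaf = routeToLeaf(inst.features)
    if (selected(leaf)) {
      val counts = stats.getOrElseUpdate(leaf, mutable.Map.empty[String, Long])
      counts(inst.label) = counts.getOrElse(inst.label, 0L) + 1L
    }
  }
  stats.view.mapValues(_.toMap).toMap
}
```

In a real trainer the accumulated statistics would be per (leaf, feature) split candidates rather than bare label counts, which is exactly the memory that the per-pass leaf cap described below bounds.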

The only interesting thing it does is this: by streaming over the training data we avoid using O(training set) memory, but if we try to expand an entire level at once, we still have an O(2^depth * features) memory problem for the split statistics. So expand takes a parameter for the maximum number of tree leaves to try to expand, per tree, in any one pass, and randomly picks which ones to expand (using something like reservoir sampling to get a uniform sample of the leaves that don't meet the stopping criteria). This lets you trade off memory use against performance: capping the number of leaves per pass bounds the memory, at the cost of forcing more passes.
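For the leaf selection, classic reservoir sampling (Algorithm R) gives a uniform sample of at most k leaves without ever materializing the full frontier; a hedged sketch, with `sampleLeaves` and the candidate iterator as assumed names:

```scala
import scala.util.Random
import scala.collection.mutable.ArrayBuffer

// Uniformly sample up to k leaf ids from a stream of candidates,
// using O(k) memory regardless of how many candidates there are.
def sampleLeaves(candidates: Iterator[Int], k: Int, rng: Random): Vector[Int] = {
  val reservoir = ArrayBuffer.empty[Int]
  var seen = 0
  candidates.foreach { leaf =>
    seen += 1
    if (reservoir.size < k) reservoir += leaf
    else {
      // Keep the new candidate with probability k / seen, evicting a
      // uniformly chosen slot; this keeps the sample uniform overall.
      val j = rng.nextInt(seen)
      if (j < k) reservoir(j) = leaf
    }
  }
  reservoir.toVector
}
```

Usage would be something like `sampleLeaves(frontier.iterator.filterNot(meetsStoppingCriteria), maxLeavesPerPass, new Random)`, where `frontier` and `meetsStoppingCriteria` are hypothetical stand-ins for the tree's current leaves and its stopping test.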

We still need to keep all of the trees in memory at once, so it's not truly constant memory, but that's harder to avoid. If it becomes a problem we can look at using bonsai representations even during training...

avibryant commented 8 years ago

Per IRL conversation, I'm going to merge this and @striation might do a follow-up PR with his mutable optimizations.