elda27 opened 7 years ago
Hi @elda27
That's weird, I tried with a larger MNIST dataset and didn't have that problem. Anyway, the official implementation of this algorithm was published some weeks ago, you should give it a try. See #6
Thanks!
OK, I will try the official implementation. Thanks for the quick reply.
I tried to use the official implementation of gcForest, but it requires TensorFlow and Python 2.7. On Windows, TensorFlow does not support Python 2.7, so I can't test the official implementation.
I have a question: have you tested this program on Windows? If not, I will try to debug this problem myself.
I ran some tests on this problem and found a solution. On my computer, Python spawned 4 processes with n_jobs=-1, which I think requires more memory than a single job. When I set n_jobs=1, the out-of-memory error was no longer raised!
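To illustrate the trade-off (a hedged sketch using scikit-learn's `RandomForestClassifier`, whose `n_jobs` parameter behaves the same way; the gcForest class in this repo may expose a comparable option):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Small synthetic dataset standing in for MNIST.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# n_jobs=-1 spawns one worker per CPU core, and each worker may hold
# its own copy of the data, multiplying peak memory usage.
# n_jobs=1 keeps a single process and a single copy of the data,
# trading training speed for a smaller memory footprint.
clf = RandomForestClassifier(n_estimators=50, n_jobs=1, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```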
Note: although the crash is fixed, training still requires a large amount of memory, roughly 30 to 40 GB. Is this memory usage normal or abnormal?
Thanks for your implementation!
When I try to train on a full dataset such as MNIST, the example code crashes with an out-of-memory error. I commented out the lines in the example that limit the size of the dataset.
Are these changes wrong somehow, or does my computer simply have insufficient memory?
My test machine has the following specs. OS: Windows 7, CPU: Core i7 970, Memory: 32 GB (physical RAM only).
If this problem is caused by insufficient memory, I'd like to know how to reduce memory usage (e.g. something like mini-batch training in a deep neural network).
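For reference, scikit-learn supports incremental (mini-batch-style) learning for some estimators via `partial_fit`. This is only an illustrative sketch with `SGDClassifier` on synthetic data; gcForest itself is not known to support incremental training, so this shows the general pattern, not a drop-in fix:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
classes = np.arange(10)          # all class labels must be declared up front
clf = SGDClassifier(random_state=0)

# Feed the model one small batch at a time, so only a single batch
# (not the whole dataset) needs to be held in memory.
for batch in range(10):
    X_batch = rng.rand(100, 64)            # synthetic features for this batch
    y_batch = rng.randint(0, 10, size=100) # synthetic labels for this batch
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict(X_batch[:5]).shape)
```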