Hi Simon,
The current graph generation doesn't scale very well when using more than a single decomposition. I'm not sure whether you are familiar with the exact meaning of `num_decomps`, but the decompositions are made recursively, so the size of the network grows exponentially with base `num_decomps` (roughly 2^n in the snippet above). Your SPNs will be more scalable once you fix `num_decomps` at 1.
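For illustration, a minimal sketch of what fixing it at 1 could look like (the `IVs` and `DenseSPNGenerator` names and parameters are assumed from the libspn version of that time, and the sizes are made up, so adjust everything to your model):

```python
import libspn as spn

# Hypothetical input layer: 16 binary variables (adjust to your data).
iv_x = spn.IVs(num_vars=16, num_vals=2, name="iv_x")

# num_decomps=1 keeps the recursive generation from branching, which
# avoids the exponential blow-up with base num_decomps described above.
dense_gen = spn.DenseSPNGenerator(num_decomps=1,
                                  num_subsets=2,
                                  num_mixtures=4)
root = dense_gen.generate(iv_x)
```

With more than one decomposition, every recursion level multiplies the number of generated sub-SPNs, which is where the roughly 2^n growth comes from.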
So yes, what you see is normal.
You can contact me at jos.vandewolfshaar@gmail.com. I can give you some proper advice on how to use the library right now; that will be easier than explaining everything in this thread.
Best, Jos
While experimenting with various SPN sizes I encountered the problem that the runtime quickly becomes very slow. For example, using the following SPN (24k nodes) on an i7 machine:
- `accumulate_updates = learning.accumulate_updates()` takes more than 6 minutes
- `mpe_state_gen.get_state(root, iv_x, latent)` takes nearly 3 minutes
- After generating all the required weights and ops, the process already takes about 17 GB of memory without any training data loaded yet.
Those numbers seem very high to me, especially the runtime of `accumulate_updates` and the total memory usage.
Am I underestimating the workload, or is there something wrong? For reference, I attached the IPython notebook I used for the time measurements.
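For context, the kind of measurement I mean is just wall-clock timing around the two graph-construction calls, along these lines (a sketch rather than the attached notebook; `learning`, `mpe_state_gen`, `root`, `iv_x`, and `latent` are assumed to be defined earlier in the session):

```python
import time

def timed(label, fn, *args, **kwargs):
    """Call fn once and print how long it took (wall-clock)."""
    start = time.time()
    result = fn(*args, **kwargs)
    print("%s: %.1f s" % (label, time.time() - start))
    return result

# Both calls only build TensorFlow ops; no training data is involved yet.
accumulate_updates = timed("accumulate_updates", learning.accumulate_updates)
mpe_states = timed("get_state", mpe_state_gen.get_state, root, iv_x, latent)
```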
I would be very glad if someone could take a look and tell me if I am doing anything wrong or if this is normal.
Thanks in advance!
Kind regards, Simon