SanchoGGP / ggp-base

The General Game Playing Base Package

Amazons Torus 10x10 is massive #310

Open arr28 opened 9 years ago

arr28 commented 9 years ago

Causes a meltdown, even on the more powerful of my machines.

I have to reduce my NODE_TABLE_SIZE to 5,000, which only just scrapes in. Interestingly, most of the process growth happens early on - I suspect before we even start searching. I wonder if some of the stuff we create during meta-gaming could be freed (by nulling out references).
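The "null out references after meta-gaming" idea can be sketched as follows. This is a minimal illustration, not the actual Sancho code: the field names (`fullPropNet`, `animatorInstance`) and the `endMetaGaming` hook are assumed for the example.

```java
// Hypothetical sketch: once meta-gaming is done, drop references to
// construction-time structures so the GC can reclaim them. Only the
// compact runtime state (the animator instance) is kept alive.
public class MetaGameCleanup {
    Object fullPropNet = new Object();            // stand-in for the large component-object propnet
    final Object animatorInstance = new Object(); // stand-in for the compact runtime tables

    void endMetaGaming() {
        // The animator instance is all we need during search; nulling the
        // propnet reference makes the component graph collectable.
        fullPropNet = null;
    }
}
```

The key point is that the JVM can only reclaim the construction-time structures if nothing reachable from the long-lived game state still points at them.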

The "animator instance state vector size" is 10x-100x bigger than for virtually anything else in my logs from the last week. The only games that come close are hex (1/3 of the size) and 4p guess 2/3rds (1/2 the size).

arr28 commented 9 years ago

Perhaps I could set the node table size dynamically, based on the animator instance state vector size and some other metrics (e.g. the max branching factor, which affects the size of the RAVE stats).
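One way to size the table dynamically is to estimate a per-node memory cost and divide it into a budget. A hedged sketch, with entirely assumed constants and names (`BASE_NODE_BYTES`, `RAVE_BYTES_PER_MOVE`, and the clamp bounds are illustrative, not Sancho's actual figures):

```java
// Hypothetical heuristic: size each node as a fixed base cost, plus the
// state vector, plus RAVE stats proportional to the max branching factor,
// then fit the node table into a fixed memory budget.
public class NodeTableSizer {
    static final long BASE_NODE_BYTES = 256;    // assumed per-node overhead
    static final long RAVE_BYTES_PER_MOVE = 16; // assumed per-move RAVE stats cost

    static int nodeTableSize(long memoryBudgetBytes,
                             long stateVectorBytes,
                             int maxBranchingFactor) {
        long perNode = BASE_NODE_BYTES
                     + stateVectorBytes
                     + (long) maxBranchingFactor * RAVE_BYTES_PER_MOVE;
        long nodes = memoryBudgetBytes / perNode;
        // Clamp to a workable range: 5,000 matches the floor mentioned above.
        return (int) Math.max(5_000, Math.min(nodes, 2_000_000));
    }
}
```

With a heuristic like this, a game with a huge state vector (Amazons Torus 10x10) automatically gets a much smaller table than a game with a compact one, instead of requiring a hand-tuned NODE_TABLE_SIZE.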

SteveDraper commented 9 years ago

A few observations/suggestions:

If the propnet is very large (as here, indicated by the animator instance size), then the main cost is NOT the animator tables (they are only a few megabytes even in this case) but rather the underlying propnet: in the tables each component takes a few tens of bytes, but in component-object form it takes far more. Consequently:

1) We could free a lot of memory by discarding the propnets themselves once the animator instances are constructed, keeping only the latter. This would require a small amount of re-engineering, since we currently reference a few things through their components (though at runtime we only use them to look up the animator ids) - we would need to change from storing component refs to directly storing animator component ids.

2) If the propnet is over some size threshold, perhaps we would do better not to split it into X/O/goal nets at all, and instead use the main instance directly for everything.

Of these, I think (1) is the most promising to explore.
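The refactoring in (1) amounts to resolving component-to-id lookups once, up front, and then releasing the map that keeps the components alive. A minimal sketch, with assumed class and method names (this is not the actual Sancho API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of suggestion (1): capture animator component ids
// at construction time, then drop the component references so the GC can
// reclaim the component-object form of the propnet.
class IdResolver {
    private Map<Object, Integer> componentToId = new HashMap<>();
    private int next = 0;

    // During construction: resolve (or assign) the animator id for a component.
    int idFor(Object component) {
        return componentToId.computeIfAbsent(component, c -> next++);
    }

    // After every runtime reference has been converted from a component ref
    // to a plain int id, release the map - and with it the components.
    void release() {
        componentToId = null;
    }
}
```

Once callers hold plain `int` ids instead of component references, nothing on the runtime path pins the propnet graph, so calling `release()` (and nulling the propnet itself) lets the whole structure be collected.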