realXtend / tundra

realXtend Tundra SDK, a 3D virtual world application platform.
www.realxtend.org
Apache License 2.0

Discussion: Custom memory allocators for Tundra #724

Open jonnenauha opened 11 years ago

jonnenauha commented 11 years ago

I know very little about custom allocators in practice. I'd like devs to take part in a discussion about whether we should use a custom memory allocator library in Tundra. I know, for example, that Ogre uses something by default.

Please share resources and libs that we could use if you know good ones! I'll start with a few:

jonnenauha commented 11 years ago

Input from @juj and @cadaver would be nice, as both of you have your own rendering engine projects and are experts in C++. Maybe you have already thought about the same thing in them?

Stinkfist0 commented 11 years ago

I would suspect that there would not be significant advantages in Tundra until the majority of the lower-level code (which Qt and esp. Ogre currently handle), where the majority of the "crucial" allocations happen, is in our hands. But I'm not any kind of expert in this area, so I could be wrong.

antont commented 11 years ago

Not an expert either, but it seems that Stinkfist made an excellent point there.

Just for information: Blender has its own malloc too, guardedalloc, or a "safe malloc" as it's sometimes called. It's handy because if you make errors in memory management it prints out warnings at exit while developing (yes, that's why I know this much about it; it caught some errors I made back then :)

It's in http://www.gitorious.org/blenderprojects/blender/blobs/master/blender/intern/guardedalloc/MEM_guardedalloc.h

It's written in and for C, but there's a C++ version too, the docs seemed to say.

I figure it works well for Blender as they don't use libs for GUI or rendering and such, so the point Stinkfist made doesn't apply there much.
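
To illustrate the idea, here's a minimal sketch of how such a guarded malloc could work in C++. This is not Blender's actual MEM_guardedalloc API; the names below are made up for illustration. Each allocation gets tagged with a name, and anything still live is reported at exit:

```cpp
// Minimal guarded-allocation sketch (hypothetical, not Blender's API):
// tag each allocation with a name, report anything still live at exit.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <map>

static std::map<void*, const char*>& LiveAllocations()
{
    static std::map<void*, const char*> allocations;
    return allocations;
}

void* GuardedMalloc(std::size_t size, const char* name)
{
    void* ptr = std::malloc(size);
    if (ptr)
        LiveAllocations()[ptr] = name;
    return ptr;
}

void GuardedFree(void* ptr)
{
    if (LiveAllocations().erase(ptr) == 0)
        std::fprintf(stderr, "GuardedFree: freeing unknown pointer %p\n", ptr);
    std::free(ptr);
}

// Call at shutdown (e.g. via atexit) to catch leaks while developing.
void ReportLeaks()
{
    for (std::map<void*, const char*>::const_iterator it = LiveAllocations().begin();
         it != LiveAllocations().end(); ++it)
        std::fprintf(stderr, "Leaked allocation '%s' at %p\n", it->second, it->first);
}

int main()
{
    std::atexit(ReportLeaks);
    void* buffer = GuardedMalloc(256, "ExampleBuffer");
    (void)buffer; // intentionally never freed: ReportLeaks prints a warning at exit
    return 0;
}
```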

cadaver commented 11 years ago

Indeed, like Stinkfist said, the majority of the low-level work (and thus also of the frequent allocations) is done in Qt and Ogre. Our largest memory allocations are mesh and texture resources, plus the working memory required by QScriptEngines. Comparatively, our own Tundra code does (and allocates) very little.

Using multiple custom allocators in one process can even be harmful depending on their interactions; they may each allocate their own (large) memory pools, and this would need careful investigation. In this regard adopting e.g. Ogre's allocator for Tundra code would be safe, but it would then increase coupling to Ogre.
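
For illustration, this is roughly what adopting Ogre's allocator in Tundra-side code could look like. I'm assuming Ogre 1.x's OGRE_MALLOC/OGRE_FREE macros and the MEMCATEGORY_GENERAL category here, so treat the exact names as an assumption. Even a class that has nothing to do with rendering would then depend on Ogre headers and Ogre's memory configuration:

```cpp
// Sketch of routing a Tundra-side allocation through Ogre's allocator.
// The macro and category names are assumed from Ogre 1.x; not verified here.
#include <Ogre.h>  // umbrella header; pulls in Ogre's allocator macros

class AssetBuffer  // hypothetical Tundra-side class, unrelated to rendering
{
public:
    explicit AssetBuffer(size_t bytes)
        : size_(bytes),
          // The allocation now goes through whatever allocator Ogre was
          // built with (nedmalloc by default), not the plain CRT heap.
          data_(static_cast<char*>(OGRE_MALLOC(bytes, Ogre::MEMCATEGORY_GENERAL)))
    {
    }

    ~AssetBuffer()
    {
        OGRE_FREE(data_, Ogre::MEMCATEGORY_GENERAL);
    }

    char* Data() { return data_; }
    size_t Size() const { return size_; }

private:
    size_t size_;
    char* data_;
};
```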

I see custom memory allocation as most beneficial when you have an in-house engine codebase with all of the code under your control and when you need explicit tracking of allocations, fragmentation, etc., for example on consoles/mobiles where memory is severely limited.
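
To make the tracking point concrete, here is a generic C++ sketch (not tied to any particular engine) of what explicit allocation tracking can look like when the global operator new/delete are replaced:

```cpp
// Generic sketch of allocation tracking via replaced global operator new/delete.
// A real in-house allocator would add per-subsystem categories, budgets and
// fragmentation statistics, and would pad the header to maximal alignment.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t g_liveAllocations = 0;
static std::size_t g_liveBytes = 0;

void* operator new(std::size_t size)
{
    // Store the requested size in a small header so delete can account for it.
    void* raw = std::malloc(size + sizeof(std::size_t));
    if (!raw)
        throw std::bad_alloc();
    *static_cast<std::size_t*>(raw) = size;
    ++g_liveAllocations;
    g_liveBytes += size;
    return static_cast<std::size_t*>(raw) + 1;
}

void operator delete(void* ptr) throw()
{
    if (!ptr)
        return;
    std::size_t* header = static_cast<std::size_t*>(ptr) - 1;
    --g_liveAllocations;
    g_liveBytes -= *header;
    std::free(header);
}

void PrintMemoryStats()
{
    std::printf("live allocations: %lu, live bytes: %lu\n",
                static_cast<unsigned long>(g_liveAllocations),
                static_cast<unsigned long>(g_liveBytes));
}

int main()
{
    double* value = new double(3.14);
    PrintMemoryStats();   // one allocation live
    delete value;
    PrintMemoryStats();   // back to zero
    return 0;
}
```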

Another possible benefit would be to unify allocation behavior across platforms, as a custom allocator can optionally rely less on the operating system or the C runtime and do more of the allocation strategy and memory pooling itself. However, taking Windows as an example, the default allocator's performance used to be notoriously poor, but at least from Windows 7 onwards it has improved a lot, so custom allocators like nedmalloc will have less of an effect.
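
As a rough example of the pooling idea (a generic sketch, not a proposal for Tundra as such): grab one large block up front and hand out fixed-size chunks from a free list, so the per-allocation behavior no longer depends on the platform's heap:

```cpp
// Generic fixed-size pool sketch: one upfront allocation from the system,
// after which allocate/free behavior is identical on every platform.
#include <cassert>
#include <cstddef>
#include <cstdlib>

class FixedPool
{
public:
    FixedPool(size_t chunkSize, size_t chunkCount)
        : chunkSize_(chunkSize < sizeof(void*) ? sizeof(void*) : chunkSize),
          memory_(static_cast<char*>(std::malloc(chunkSize_ * chunkCount))),
          freeList_(0)
    {
        // Thread all chunks into an intrusive free list.
        for (size_t i = 0; i < chunkCount; ++i)
        {
            void* chunk = memory_ + i * chunkSize_;
            *static_cast<void**>(chunk) = freeList_;
            freeList_ = chunk;
        }
    }

    ~FixedPool() { std::free(memory_); }

    void* Allocate()
    {
        if (!freeList_)
            return 0; // pool exhausted; a real allocator would grow or fall back
        void* chunk = freeList_;
        freeList_ = *static_cast<void**>(chunk);
        return chunk;
    }

    void Free(void* chunk)
    {
        *static_cast<void**>(chunk) = freeList_;
        freeList_ = chunk;
    }

private:
    size_t chunkSize_;
    char* memory_;
    void* freeList_;
};

int main()
{
    FixedPool pool(64, 1024);        // 1024 chunks of 64 bytes each
    void* a = pool.Allocate();
    void* b = pool.Allocate();
    pool.Free(a);
    pool.Free(b);
    assert(pool.Allocate() != 0);    // chunks are reused from the free list
    return 0;
}
```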

About one year ago I tested compiling Ogre with and without its custom allocator, and it didn't have any noticeable effect on total memory consumption or application performance when loading large, asset-heavy Tundra scenes (on Windows).