Ahajha opened this issue 2 years ago
This is a great feature to have, the two versions might really give both flexibility and ease of use :)
I can give you write permissions, and from there you can make a new branch off the development branch to work on this feature; I will join in from time to time too. What do you think about this method?
There are a couple of considerations to keep in mind for future work that might affect this implementation.
(Good to see you here from reddit! 💯)
That sounds good to me.
Perhaps there could be more configuration options relating to mutexes, perhaps the following:

- thread safe global (using thread_local internally)
- thread safe local
- unsafe global (I'm not sure how this one would work, but I'd need to think on it)
- unsafe local (fastest, but limited to a single thread)

I'm wondering how much of a performance difference there would be with thread safety; I'm curious to see benchmarks once it's implemented.
As for fixed-size allocations, that will be absolutely perfect for an allocator, as it should only ever allocate a single size.
Though, that does raise the idea of having a pool just for a single allocator type (where we can be certain we are doing fixed-size allocation) and a different pool that can spawn different types of allocators.
That on its own is 6-8 different configurations, so I think the way to implement these is a single allocator with templated options, then alias the combinations we need, something along the lines of:
```cpp
enum class ThreadSafe { safe, unsafe };
enum class GlobalPool { global, local };
enum class SharedPool { shared, unshared };

template<class T>
using GlobalPooledAllocator = PoolAllocatorImpl<T, ThreadSafe::safe, GlobalPool::global, SharedPool::shared>;

template<class T>
using GlobalSingleClassPooledAllocator = PoolAllocatorImpl<T, ThreadSafe::safe, GlobalPool::global, SharedPool::unshared>;

/* etc. */
```
Names probably need work, but that's probably a reasonable high-level look.
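To make the option dispatch a little more concrete, here's a rough, self-contained sketch of how `PoolAllocatorImpl` could pick a real or no-op lock at compile time from the `ThreadSafe` option. Everything below (the `NullMutex` type, the plain `new`/`delete` placeholder bodies) is purely illustrative, not actual pool logic:

```cpp
#include <cstddef>
#include <mutex>
#include <new>
#include <type_traits>

// The option enums from above, repeated so the sketch stands alone.
enum class ThreadSafe { safe, unsafe };
enum class GlobalPool { global, local };
enum class SharedPool { shared, unshared };

// No-op lock, so the unsafe variants pay no synchronization cost.
struct NullMutex {
    void lock() noexcept {}
    void unlock() noexcept {}
};

template<class T, ThreadSafe Safety, GlobalPool Scope, SharedPool Sharing>
class PoolAllocatorImpl {
    // Pick the lock type at compile time from the ThreadSafe option.
    using Mutex = std::conditional_t<Safety == ThreadSafe::safe, std::mutex, NullMutex>;
    inline static Mutex mutex_;  // one lock per specialization in this sketch

public:
    using value_type = T;

    // Containers rebind allocators to their internal node types; the non-type
    // template parameters defeat the default rebind, so spell it out.
    template<class U>
    struct rebind { using other = PoolAllocatorImpl<U, Safety, Scope, Sharing>; };

    T* allocate(std::size_t n) {
        std::lock_guard<Mutex> guard(mutex_);
        // Placeholder: a real implementation would hand out a block from the
        // global/local, shared/unshared pool selected by Scope and Sharing.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t) {
        std::lock_guard<Mutex> guard(mutex_);
        ::operator delete(p);  // placeholder: return the block to the pool
    }

    friend bool operator==(const PoolAllocatorImpl&, const PoolAllocatorImpl&) noexcept { return true; }
    friend bool operator!=(const PoolAllocatorImpl&, const PoolAllocatorImpl&) noexcept { return false; }
};
```

The nice property of this shape is that `ThreadSafe::unsafe` instantiates with the no-op lock, so skipping thread safety costs nothing at runtime, while the `GlobalPool`/`SharedPool` options only have to change which backing pool `allocate`/`deallocate` talk to.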
Hello, having worked with the STL's allocators before while writing my own implementations of data structures, I've taken an interest in your project after seeing your reddit post, and will try to implement an allocator which uses a single memory pool. I will fork the project and open a pull request once it's completed.
By the way, are you okay with the name `MemoryPoolAllocator`?
@Ahajha Sorry for the delay, I had a very long weekend.

> That sounds good to me.
Awesome! :) I'll give you write permission right now so you can implement this idea on a new branch :)
> Perhaps there could be more configuration options relating to mutexes, perhaps the following:
>
> - thread safe global (using thread_local internally)
> - thread safe local
> - unsafe global (I'm not sure how this one would work, but I'd need to think on it)
> - unsafe local (fastest, but limited to a single thread)
>
> As for fixed-size allocations, that will be absolutely perfect for an allocator, as it should only ever allocate a single size.
> Though, that does raise the idea of having a pool just for a single allocator type (where we can be certain we are doing fixed-size allocation) and a different pool that can spawn different types of allocators.
> That on its own is 6-8 different configurations, so I think the way to implement these is a single allocator with templated options, then alias the combinations we need, something along the lines of:
>
> ```cpp
> enum class ThreadSafe { safe, unsafe };
> enum class GlobalPool { global, local };
> enum class SharedPool { shared, unshared };
>
> template<class T>
> using GlobalPooledAllocator = PoolAllocatorImpl<T, ThreadSafe::safe, GlobalPool::global, SharedPool::shared>;
>
> template<class T>
> using GlobalSingleClassPooledAllocator = PoolAllocatorImpl<T, ThreadSafe::safe, GlobalPool::global, SharedPool::unshared>;
>
> /* etc. */
> ```
>
> Names probably need work, but that's probably a reasonable high-level look.
Your suggestion sounds good, though I still need to think about how it will play out in the long run when I add the automatic memory pool analysis and the building of multiple pools under a more abstract interface. In a broad sense, the way we allow others to configure the pool might affect performance when building more systems on top of it, but for now I think this approach is great and can be modified later if we see fit :)
> I'm wondering how much of a performance difference there would be with thread safety; I'm curious to see benchmarks once it's implemented.

From my tests on thread safety, adding locks on allocation and deallocation has a significant effect on performance: before, the memory pool was 20 times faster than standard new/delete, but after adding thread safety it became only 3-5 times faster.
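If anyone wants to reproduce that kind of comparison once the allocators land, a minimal harness along these lines should be enough; this is only a sketch, and a pool allocator would be plugged in exactly like `std::allocator` is here:

```cpp
#include <chrono>
#include <cstdio>
#include <memory>

// Time a burst of single-object allocations/deallocations through any allocator.
template<class Alloc>
long long bench_us(Alloc alloc, int iterations, long long& sink) {
    using Traits = std::allocator_traits<Alloc>;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        auto* p = Traits::allocate(alloc, 1);
        *p = i;        // touch the allocation so the loop isn't optimized away
        sink += *p;
        Traits::deallocate(alloc, p, 1);
    }
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main() {
    constexpr int n = 1'000'000;
    long long sink = 0;
    // Baseline: standard new/delete via std::allocator. A pool allocator
    // (thread-safe or not) would be benchmarked by swapping the type here.
    const long long us = bench_us(std::allocator<int>{}, n, sink);
    std::printf("std::allocator<int>: %lld us (checksum %lld)\n", us, sink);
}
```

A multi-threaded variant of the same loop would be worth adding too, since that's where lock contention actually shows up.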
@fvalasiad
> Hello, having worked with the STL's allocators before while writing my own implementations of data structures, I've taken an interest in your project after seeing your reddit post, and will try to implement an allocator which uses a single memory pool. I will fork the project and open a pull request once it's completed.
> By the way, are you okay with the name `MemoryPoolAllocator`?
Indeed a fitting, straightforward name :) If you want, you can even help @Ahajha and me create an allocator together on a new branch. Interested? Any help is appreciated, as it is an open source project ;)
It is better to start from the development branch, as it has the latest code, including the thread safety mechanism (though that still needs to be separated out as a different pool configuration).
@LessComplexity
I am willing to participate in the creation of global, shared, thread-safe MemoryPool allocators and anything else that you can think of. I just finished writing a simple one and I am currently running tests; should I push it to a new development branch for you to see?
> @LessComplexity
> I am willing to participate in the creation of global, shared, thread-safe MemoryPool allocators and anything else that you can think of. I just finished writing a simple one and I am currently running tests; should I push it to a new development branch for you to see?
Thank you for volunteering; I'm still trying to figure out the best way to implement those allocators. You already made a pull request and I saw your implementation, great! 💯
It would be useful to be able to use memory pools in existing containers, like `std::vector`, `std::deque`, etc. (along with their `std::pmr::*` counterparts). To do this, we would need an allocator that "wraps" the memory pool.

I'm thinking there should be two versions: one with user-managed pools (so a reference to the host allocator would need to be stored in the container, which allows flexibility at the cost of a bit of extra memory), and another with a global pool (less memory usage, less flexibility).
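To give a feel for the user-managed flavor, here's a rough sketch of what such a wrapper could look like. The `MemoryPool` type and `PoolAllocator` name below are just stand-ins for whatever interface the library ends up exposing, not a concrete proposal:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Stand-in for the library's pool type; only the interface matters for the sketch.
struct MemoryPool {
    void* allocate(std::size_t bytes)      { return ::operator new(bytes); }
    void  deallocate(void* p, std::size_t) { ::operator delete(p); }
};

// User-managed flavor: the allocator holds a reference to a pool owned by the
// caller, costing one pointer per container but allowing independent pools.
template<class T>
class PoolAllocator {
    MemoryPool* pool_;

public:
    using value_type = T;

    explicit PoolAllocator(MemoryPool& pool) noexcept : pool_(&pool) {}

    template<class U>
    PoolAllocator(const PoolAllocator<U>& other) noexcept : pool_(other.pool()) {}

    MemoryPool* pool() const noexcept { return pool_; }

    T* allocate(std::size_t n) { return static_cast<T*>(pool_->allocate(n * sizeof(T))); }
    void deallocate(T* p, std::size_t n) { pool_->deallocate(p, n * sizeof(T)); }

    template<class U>
    bool operator==(const PoolAllocator<U>& rhs) const noexcept { return pool_ == rhs.pool(); }
    template<class U>
    bool operator!=(const PoolAllocator<U>& rhs) const noexcept { return pool_ != rhs.pool(); }
};

int main() {
    MemoryPool pool;
    PoolAllocator<int> alloc(pool);
    std::vector<int, PoolAllocator<int>> v(alloc);
    v.push_back(42);  // element storage now comes from `pool`
}
```

The global-pool flavor would be the same idea with the stored pointer replaced by a lookup of a single process-wide (or thread_local) pool, which makes the allocator stateless and a little cheaper to store.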
If I were to work on such a feature, should I fork the dev branch and work from there? What is the preferred contributing method?
(Btw, hi from reddit!)