steinwurf / recycle

Simple resource pool for recycling resources in C++
BSD 3-Clause "New" or "Revised" License

add capacity() function #22

Open f18m opened 5 years ago

f18m commented 5 years ago

Hi, it would be nice to have a capacity() API function on the shared_pool class that tells me how many items have been allocated inside the pool. Another nice-to-have would be inuse_resources(). Of course, unused_resources() + inuse_resources() == capacity().
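
Roughly what I would like to be able to write (just a sketch: used_resources() and capacity() are the proposed additions and don't exist yet; I'm also assuming a default-constructed pool and that a resource goes back to the pool when its last shared_ptr is dropped):

```cpp
#include <recycle/shared_pool.hpp>

#include <cassert>

struct frame
{
    // some expensive-to-allocate resource
};

int main()
{
    recycle::shared_pool<frame> pool;

    auto a = pool.allocate();
    auto b = pool.allocate();
    (void)a;
    b.reset(); // b goes back to the pool

    // Proposed accessors (these don't exist yet):
    assert(pool.used_resources() == 1);   // only `a` is handed out
    assert(pool.unused_resources() == 1); // `b` is idle in the pool
    assert(pool.capacity() == 2);         // two resources allocated in total
    return 0;
}
```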

I can provide a PR to implement these 2 utilities...

jpihl commented 5 years ago

Great idea, a PR would be welcome, but I would prefer the name used_resources over inuse_resources.

Elukej commented 1 year ago

Hello! I have prepared the code for a pull request that adds the aforementioned capacity() and used_resources() functionality to the shared_pool class. I took note of the comments on the pull request that f18m provided, and I tried to write code that addresses those corrections. The main difference is that the relation used_resources() + unused_resources() == capacity() doesn't have to hold at all times, since the shared pool starts empty and its interface has a setCapacity() function which can break this relation temporarily. In the ideal situation, though, after enough allocate or recycle calls the relation will hold. If you are still interested in having this functionality in the library, I will send you a pull request with the code I've written for review. Kind Regards

jpihl commented 1 year ago

@Elukej sounds great, give it a go :)

I would argue that the setCapacity function shouldn't break the used_resources() + unused_resources() == capacity() relation. I would expect that setting the capacity of an empty pool to 10 would result in used_resources() == 0, unused_resources() == 10, and capacity() == 10. Also, I'm not sure what the point of a setCapacity function would be. Is it meant to be a limit, or is it for preallocating resources?

If it's for limiting the generation of resources, I think it would be better if the calling code just checks used_resources() before calling pool.allocate(), as it would need to handle this case anyway.
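
E.g. something along these lines (hypothetical calling code; frame and max_frames are just placeholders, and used_resources() is the accessor proposed above):

```cpp
#include <recycle/shared_pool.hpp>

#include <cstddef>
#include <memory>

struct frame
{
};

// The application enforces its own limit instead of the pool doing it:
std::shared_ptr<frame> try_allocate(recycle::shared_pool<frame>& pool,
                                    std::size_t max_frames)
{
    if (pool.used_resources() >= max_frames)
    {
        // Limit reached - the caller decides what to do (wait, drop, ...).
        return nullptr;
    }
    return pool.allocate();
}
```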

All the best,

Jeppe

Elukej commented 1 year ago

Thank you for your response @jpihl! My idea was to apply the relation that exists between the size and capacity of a std::vector to the relation between unused_resources() and capacity(). I didn't want to force preallocation at the moment of the pool's creation, since I don't want the pool to take more space than necessary; rather, it should grow as the demand for resources grows and be limited by the capacity as its maximum, as you guessed in your answer. setCapacity() would just move this upper limit, deallocating unused resources if unused_resources() is bigger than the new capacity, or simply changing the pool's maximum otherwise.

I built the check into the allocate() function. If used_resources() + unused_resources() has already reached capacity(), allocate() will just create a regular shared_ptr; otherwise it will create a recyclable one. The main thing I tried to enable is that the recycler can hand out plain shared_ptrs which, once the limit set by the capacity is exceeded, are not tracked by the recycler and work independently, so programs calling pool.allocate() don't necessarily break if the limit is exceeded. I'll try to upload the pull request as soon as possible so we can continue the discussion there. Kind Regards
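
P.S. Here is a standalone toy version of the allocate() behaviour I described (placeholder names like limited_pool and tracked(), not the actual code from my PR; the real shared_pool also has to keep itself alive for the deleters, which I'm ignoring here):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Toy model of a capacity-limited pool, just to show the allocate() logic.
template <class Value>
class limited_pool
{
public:
    explicit limited_pool(std::size_t capacity) : m_capacity(capacity)
    {
    }

    std::shared_ptr<Value> allocate()
    {
        if (!m_free.empty())
        {
            // Reuse an idle resource from the free list.
            Value* raw = m_free.back().release();
            m_free.pop_back();
            ++m_used;
            return tracked(raw);
        }
        if (m_used + m_free.size() >= m_capacity)
        {
            // Limit reached: hand out a plain shared_ptr that the pool
            // never sees again - callers keep working, nothing is recycled.
            return std::make_shared<Value>();
        }
        // Below the limit: create a new tracked, recyclable resource.
        ++m_used;
        return tracked(new Value());
    }

    std::size_t capacity() const { return m_capacity; }
    std::size_t used_resources() const { return m_used; }
    std::size_t unused_resources() const { return m_free.size(); }

private:
    // Attach a deleter that puts the resource back into the free list
    // instead of destroying it (the pool must outlive the pointers).
    std::shared_ptr<Value> tracked(Value* raw)
    {
        return std::shared_ptr<Value>(raw, [this](Value* v) {
            --m_used;
            m_free.emplace_back(v);
        });
    }

    std::size_t m_capacity;
    std::size_t m_used = 0;
    std::vector<std::unique_ptr<Value>> m_free;
};
```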

jpihl commented 1 year ago

Hi @Elukej, thanks for the detailed description. I need help finding a use case where this would make sense. Do you have one? I understand your concern regarding the preallocation, but it could make sense for a memory pool. This would allow your application to "prepare" for a future workload at the cost of memory. If you strive to keep the memory usage of an application to a minimum, using a memory pool would not be a good way to go. The intention of a memory pool is to reuse and minimize expensive allocations. At least, that's the way I see it :) If you think otherwise, I'd be happy to hear - maybe you know something I don't ;)

All the best,

Jeppe

Elukej commented 1 year ago

Hi @jpihl, sorry for my delay in answering. The use case I was thinking about is a scenario where I have many shared_pools for different types in an application and I have no clue what their actual usage rates will be. Preallocating in this scenario looks like it could create a problem for the application unnecessarily. Consider that this application is meant to run for a long time, so it will have plenty of opportunity to use its allocated resources, and the fact that it starts without preallocation does not affect performance in the long term. The important part is that the application gets to profile itself this way, so the shared_pools only grow as big as they need to be to accommodate the application's maximal requests, not hypothetical user guesses. In theory the user should have a good estimate of the maximal usage of their resources, but in practice I think this is often not the case. Also, if the application has rare sharp spikes in usage, the behavior becomes similar to what we have with preallocation.

I think it's pretty clear, as with almost everything in programming, that there is no clear winner between these two :D Rather, there are scenarios where one might work better than the other. To address that, I think it would be useful to give the user the option to set an allocation_policy for a shared_pool, in order to choose between PREALLOC and LIMIT behavior. I'll open a pull request now with the code I have, which doesn't include this option yet, but if you think it sounds like a good solution, I will do my best to provide it next! Kind Regards
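
Edit: a rough sketch of the policy option I mean (placeholder names, nothing here exists in the library yet):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

enum class allocation_policy
{
    prealloc, // create `capacity` resources up front, pay the cost once
    limit     // start empty, grow on demand, never track more than `capacity`
};

template <class Value>
class policy_pool
{
public:
    policy_pool(std::size_t capacity, allocation_policy policy)
        : m_capacity(capacity)
    {
        if (policy == allocation_policy::prealloc)
        {
            // PREALLOC: fill the free list immediately.
            for (std::size_t i = 0; i < capacity; ++i)
                m_free.push_back(std::make_unique<Value>());
        }
        // LIMIT: start empty; allocate() would grow the pool lazily and
        // fall back to untracked shared_ptrs once the limit is reached,
        // as in the sketch in my previous comment.
    }

    std::size_t capacity() const { return m_capacity; }
    std::size_t unused_resources() const { return m_free.size(); }

private:
    std::size_t m_capacity;
    std::vector<std::unique_ptr<Value>> m_free;
};
```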

hizukiayaka commented 2 weeks ago

I think this is a necessary feature. Like in GStreamer, you can allocate about N buffers for video and only recycle among those, because a video frame can be huge and the resources are limited (for DMA, only a few memory regions can be used). That is what the buffer pool in GStreamer is all about.

Also, a std container like std::vector<> can reserve memory; the reason I need an object pool is that allocation costs a lot of time.

@Elukej Could I take over your MR? I may drop the std::atomic<> since there is a lock policy template.

@jpihl I think we need a few things for this: reserve(), capacity() and max_sizes().
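
Something like this is what I mean (just a rough sketch; reserve() and max_sizes() do not exist in the library, video_frame is only an example, and I am assuming the default-constructed pool creates frames with make_shared and takes them back when the last shared_ptr is dropped):

```cpp
#include <recycle/shared_pool.hpp>

#include <memory>
#include <vector>

// Example only: a "video frame" that is expensive to allocate.
struct video_frame
{
    std::vector<unsigned char> data = std::vector<unsigned char>(1920 * 1080 * 4);
};

int main()
{
    recycle::shared_pool<video_frame> pool;

    // What I would like to write (none of this exists yet):
    //   pool.reserve(8);   // allocate 8 frames up front, like std::vector::reserve
    //   pool.max_sizes(8); // never track more than 8 frames

    // Today the closest workaround is to allocate N frames and drop them,
    // which fills the free list:
    {
        std::vector<std::shared_ptr<video_frame>> warmup;
        for (int i = 0; i < 8; ++i)
            warmup.push_back(pool.allocate());
    } // all 8 frames go back to the pool here

    auto frame = pool.allocate(); // reuses one of the 8 warmed-up frames
    (void)frame;
    return 0;
}
```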

I am a newbie in C++, so the code quality may not be good.