Open fefa4ka opened 1 year ago
Nice to meet you @fefa4ka! I would appreciate a review on my draft.
It is great to see contributions to the project, and I appreciate the time and effort that you have put into this. Thank you @ashwin-hari!
Regarding the proposed signature for the `lock()` and `unlock()` functions, it seems that these functions should be sufficient for providing thread-safety in most cases. However, it would be helpful to have more information about the specific use cases that these functions are intended to support. This would allow us to more thoroughly evaluate whether the proposed implementation is sufficient for the needs of the library.
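For concreteness, something roughly like the sketch below is what I picture when reading the PR; the names and exact signature are only an assumption on my side and may not match what is actually proposed:

```c
/* Hypothetical shape of caller-provided lock hooks; illustrative only,
 * not the signature from the PR. */
struct lr_lock_hooks {
    void *state;                 /* opaque lock object owned by the caller */
    void (*lock)(void *state);   /* acquire before touching the buffer     */
    void (*unlock)(void *state); /* release after the operation completes  */
};
```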
thanks for the feedback @fefa4ka! The implementation I provided is for testing; it's in the `test/` tree, not the `src/` tree.
A more concrete scenario is an application that stores logs; each log's owner field is set to a value that identifies which component of the application produced the log. Threads are used to read and process logs for a specific owner. The number of threads would be less than the number of components in the application. Latency doesn't matter here.
I've added the above use case to the PR description too.
@fefa4ka, I also wanted to clarify one thing:
My understanding is that this data structure is for a single producer and multiple consumers. So, we can assume that a single thread calls `lr_put` while many threads call `lr_get`. Is this right?
The documentation left me unsure on this point: it says the data structure is "a single allocated buffer for multiple producers", but also that the "buffer is only initialized once and all consumers can access the same buffer."
Historically, I solved this problem on a single-threaded MCU running bare-metal code, where many subsystems needed a ring buffer but there was little memory. @ashwin-hari, originally I used this buffer where there are several producers and several consumers in a single thread.
I think that if blocking occurs in the `lr_put()`, `lr_get()`, and `lr_count()` methods, this will allow the buffer to be used by multiple producers and multiple consumers in a multi-threaded application.
The only thing I will highlight about the implementation is that I want to be able to use it out of the box in bare-metal code, in a context where mutex support is not implemented.
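On a single-core Cortex-M target, for example, the "lock" can simply mask interrupts instead of using a mutex. A minimal sketch, assuming callback-style hooks as discussed above and a CMSIS device header for the intrinsics:

```c
/* Sketch: bare-metal critical section for a single-core Cortex-M MCU.
 * Assumes the buffer accepts lock/unlock callbacks of roughly this shape.
 * The target's CMSIS/device header must be included for __get_PRIMASK,
 * __set_PRIMASK and __disable_irq. */
#include <stdint.h>

static uint32_t lr_saved_primask;

static void lr_irq_lock(void *state)
{
    (void)state;                        /* no mutex object is needed      */
    lr_saved_primask = __get_PRIMASK(); /* remember the current IRQ mask  */
    __disable_irq();                    /* enter the critical section     */
}

static void lr_irq_unlock(void *state)
{
    (void)state;
    __set_PRIMASK(lr_saved_primask);    /* restore the previous IRQ state */
}
```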
@fefa4ka thanks for the context. yes, the current implementation in my PR adds blocking to `lr_put` and `lr_get`, so it will support multiple producers and consumers.
The current lock and unlock interfaces in the PR allow the caller to provide their own implementations, which do not need to use a mutex. So I think your use case of running bare-metal code is supported.
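For example, on a POSIX system the caller could back the hooks with a pthread mutex. This is only an illustration; how the callbacks are registered with the buffer depends on the API in the PR:

```c
/* Illustrative pthread-backed lock/unlock callbacks. */
#include <pthread.h>

static pthread_mutex_t lr_mutex = PTHREAD_MUTEX_INITIALIZER;

static void lr_pthread_lock(void *state)
{
    (void)state;                   /* the mutex is a global in this sketch */
    pthread_mutex_lock(&lr_mutex);
}

static void lr_pthread_unlock(void *state)
{
    (void)state;
    pthread_mutex_unlock(&lr_mutex);
}
```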
Let me know if there are any other changes you want me to make; my PR is ready for your review.
In the PR I have already written some thoughts about the implementation. @ashwin-hari, please check.
@fefa4ka I am not seeing the comments posted on the PR. Are your comments listed as "Pending"? If so, then you will need to click the green "Finish your review" button to post the comments.
@ashwin-hari oh sorry, sent now
The Linked Ring Buffer Library is currently not thread-safe, which means that it cannot be used in multithreaded environments without additional measures to protect against race conditions.
Discussion
One approach to adding thread-safety to the library could be to use mutexes or atomic operations to synchronize access to the buffer. However, it is important to consider the fact that the library may be used on microcontroller units (MCUs) where there are often no threads, but interrupts instead.
In this case, it might be better to provide a cross-platform abstraction or approach for capturing the lock. This could be implemented using a function pointer or function-like macro that can be set by the user to the appropriate locking mechanism for their platform.
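As a rough illustration of the macro flavour (the names below are hypothetical and do not exist in the library), the critical sections inside the library could be wrapped in overridable macros that default to no-ops:

```c
/* Hypothetical compile-time lock abstraction; illustrative only.
 * The no-op defaults match a single-threaded, bare-metal build. */
#ifndef LR_LOCK
#define LR_LOCK()   /* no-op: nothing to acquire */
#endif

#ifndef LR_UNLOCK
#define LR_UNLOCK() /* no-op: nothing to release */
#endif

/* A POSIX build could override them, e.g.
 *   -D'LR_LOCK()=pthread_mutex_lock(&lr_global_mutex)'
 *   -D'LR_UNLOCK()=pthread_mutex_unlock(&lr_global_mutex)'
 * where lr_global_mutex is defined by the application. */
```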
There is also the question of whether the lock should be set for individual buffer operations (such as `lr_put()`, `lr_get()`, and `lr_count()`) or for all operations at once. This could depend on the specific use case and the level of concurrency needed.