Description:
Thinking about adding an in-memory queue to decouple the API from SignalR. Currently, the API sends events directly to SignalR, which can cause resilience and scalability problems. This enhancement would offload event delivery to a background process so the system handles failures and high load more gracefully.
Motivation:
Directly coupling a REST API to SignalR for real-time message broadcasting can lead to challenges in certain scenarios:
Resilience issues: If SignalR is temporarily unavailable, events may be lost, or the API could experience errors that affect its stability.
Scalability limitations: As traffic increases, the API's direct responsibility for message delivery to SignalR can become a bottleneck.
Decoupling benefits: By introducing a queue, the API can focus solely on receiving and validating requests, while a separate background process handles message delivery to SignalR.
Proposed Implementation:
Add an in-memory queue: The API would enqueue events into this queue rather than sending them directly to SignalR.
Introduce a background worker: A worker service would process the queue and handle communication with SignalR. This worker could include retry mechanisms and logic to handle SignalR downtime gracefully.
Implement backoff logic: If SignalR is unavailable, the worker would use exponential backoff to minimize load while waiting for SignalR to become available again (see the sketch after this list).
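A minimal sketch of how this could fit together in ASP.NET Core, assuming the queue is backed by System.Threading.Channels and the worker is a BackgroundService that forwards events through IHubContext. The type names (BroadcastEvent, EventQueue, EventsHub, EventQueueWorker) and the client method name "ReceiveEvent" are hypothetical placeholders, not existing project types:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical event payload; the real shape would come from the existing API contracts.
public record BroadcastEvent(string Topic, object Payload);

// Thin wrapper around a bounded channel, registered as a singleton so the
// API endpoints and the background worker share the same in-memory queue.
public sealed class EventQueue
{
    private readonly Channel<BroadcastEvent> _channel =
        Channel.CreateBounded<BroadcastEvent>(new BoundedChannelOptions(10_000)
        {
            // Drop the oldest events when full so a slow consumer never blocks the API.
            FullMode = BoundedChannelFullMode.DropOldest
        });

    public ValueTask EnqueueAsync(BroadcastEvent evt, CancellationToken ct = default) =>
        _channel.Writer.WriteAsync(evt, ct);

    public IAsyncEnumerable<BroadcastEvent> DequeueAllAsync(CancellationToken ct) =>
        _channel.Reader.ReadAllAsync(ct);
}

public sealed class EventsHub : Hub { }

// Background worker: drains the queue and forwards each event to SignalR,
// retrying with capped exponential backoff when a send fails.
public sealed class EventQueueWorker : BackgroundService
{
    private readonly EventQueue _queue;
    private readonly IHubContext<EventsHub> _hub;
    private readonly ILogger<EventQueueWorker> _logger;

    public EventQueueWorker(EventQueue queue, IHubContext<EventsHub> hub,
        ILogger<EventQueueWorker> logger)
    {
        _queue = queue;
        _hub = hub;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            await foreach (var evt in _queue.DequeueAllAsync(stoppingToken))
            {
                await SendWithRetryAsync(evt, stoppingToken);
            }
        }
        catch (OperationCanceledException)
        {
            // Normal shutdown: the stopping token cancelled the read loop.
        }
    }

    private async Task SendWithRetryAsync(BroadcastEvent evt, CancellationToken ct)
    {
        var delay = TimeSpan.FromSeconds(1);
        var maxDelay = TimeSpan.FromSeconds(60);

        while (!ct.IsCancellationRequested)
        {
            try
            {
                await _hub.Clients.All.SendAsync("ReceiveEvent", evt, ct);
                return; // delivered; move on to the next queued event
            }
            catch (OperationCanceledException) when (ct.IsCancellationRequested)
            {
                return; // application is shutting down
            }
            catch (Exception ex)
            {
                _logger.LogWarning(ex, "SignalR send failed; retrying in {Delay}", delay);
                await Task.Delay(delay, ct);
                delay = delay * 2 > maxDelay ? maxDelay : delay * 2; // exponential backoff, capped
            }
        }
    }
}
```

The API would then take EventQueue as a dependency and call EnqueueAsync instead of using the hub context directly; wiring it up would be roughly:

```csharp
builder.Services.AddSignalR();
builder.Services.AddSingleton<EventQueue>();
builder.Services.AddHostedService<EventQueueWorker>();
```

How much the retry loop matters depends on how the hub is hosted: with an in-process hub a send rarely fails, while with a backplane such as Azure SignalR Service transient outages are exactly the case the backoff is meant to cover.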