Open andreamancuso opened 1 month ago
@coderabbitai you got ideas?
> [!TIP]
> For best results, initiate chat on the files or code changes.
@andreamancuso, to implement throttling of requests in the `MapGenerator` class using the `m_tileRequestLimit` variable, you can follow these steps:

- Hold incoming tile requests in a queue instead of dispatching them immediately.
- Use the `m_tileRequestLimit` variable to control the number of concurrent requests.
- When a request completes, start the next one from the queue.

Here is a basic implementation outline in pseudocode:
```cpp
#include <algorithm>
#include <chrono>
#include <future>
#include <queue>
#include <vector>

// Placeholder request type; the real project defines its own.
struct Request { int x; int y; int zoom; };

class MapGenerator {
private:
    int m_tileRequestLimit;
    std::queue<Request> requestQueue;              // pending requests
    std::vector<std::future<void>> activeRequests; // in-flight requests

public:
    MapGenerator() : m_tileRequestLimit(2) {}

    void fetchTile(Request request) {
        requestQueue.push(request);
        processQueue();
    }

    void processQueue() {
        // NOTE: this outline does not yet synchronize access to requestQueue
        // or activeRequests; see the mutex-based variant further down.
        while (!requestQueue.empty() &&
               static_cast<int>(activeRequests.size()) < m_tileRequestLimit) {
            Request request = requestQueue.front();
            requestQueue.pop();
            auto future = std::async(std::launch::async, [this, request]() {
                // Perform the fetch request
                fetch(request);
                // Drop any futures that have already finished
                activeRequests.erase(
                    std::remove_if(activeRequests.begin(), activeRequests.end(),
                                   [](const std::future<void>& f) {
                                       return f.wait_for(std::chrono::seconds(0)) ==
                                              std::future_status::ready;
                                   }),
                    activeRequests.end());
                // Process the next request in the queue
                processQueue();
            });
            activeRequests.push_back(std::move(future));
        }
    }

    void fetch(Request request) {
        // Implementation of the actual fetch logic
    }
};
```
This outline ensures that at most `m_tileRequestLimit` requests are processed concurrently. Once a request completes, the next request in the queue is picked up.
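For example, a short hypothetical driver (it assumes the placeholder `Request` struct above, not the project's real request type) would exercise the throttling like this:

```cpp
#include <thread> // only for the crude wait below

// Hypothetical usage of the outline above; at most two fetches are in flight.
int main() {
    MapGenerator generator;
    for (int i = 0; i < 6; ++i) {
        generator.fetchTile(Request{i, i, 3});
    }
    // Crude stand-in for completion tracking: give in-flight fetches time to
    // drain before the generator (and its stored futures) is destroyed.
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
```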
Would you like me to dive deeper into any specific part of this implementation?
@coderabbitai do you have specific suggestions based on the current implementation, preferably using a Mutex to prevent concurrency issues?
@andreamancuso, based on the current implementation of the `MapGenerator` class, here are specific suggestions for throttling fetch requests using a `std::mutex` to prevent concurrency issues:

- Keep pending tile requests in a queue and in-flight requests in a vector of futures.
- Use the `m_tileRequestLimit` variable to control the number of concurrent requests.
- Guard every access to the queue and the active-request list with a `std::mutex`.

Here is an implementation outline:
```cpp
#include <algorithm>
#include <chrono>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class MapGenerator {
private:
    std::mutex m_tileRequestsMutex;                // guards requestQueue and activeRequests
    int m_tileRequestLimit = 2;                    // Default limit
    std::queue<int> requestQueue;                  // Queue to manage tile requests
    std::vector<std::future<void>> activeRequests; // Vector to store active requests
    std::vector<int> m_tileDescriptors = {0, 1, 2, 3, 4}; // placeholder for the real descriptor list
    // Other members...

public:
    // Constructor and other methods...

    void DownloadTiles() {
        // Example method to add requests to the queue
        {
            std::lock_guard<std::mutex> lock(m_tileRequestsMutex);
            for (int i = 0; i < static_cast<int>(m_tileDescriptors.size()); ++i) {
                requestQueue.push(i);
            }
        }
        processQueue();
    }

    void processQueue() {
        std::lock_guard<std::mutex> lock(m_tileRequestsMutex);
        while (!requestQueue.empty() &&
               static_cast<int>(activeRequests.size()) < m_tileRequestLimit) {
            int requestId = requestQueue.front();
            requestQueue.pop();
            auto future = std::async(std::launch::async, [this, requestId]() {
                // Perform the fetch request
                fetchTile(requestId);
                {
                    // Remove finished requests from activeRequests. The lock is
                    // scoped so it is released before re-entering processQueue(),
                    // which locks the same (non-recursive) mutex.
                    std::lock_guard<std::mutex> innerLock(m_tileRequestsMutex);
                    activeRequests.erase(
                        std::remove_if(activeRequests.begin(), activeRequests.end(),
                                       [](const std::future<void>& f) {
                                           return f.wait_for(std::chrono::seconds(0)) ==
                                                  std::future_status::ready;
                                       }),
                        activeRequests.end());
                }
                // Process the next request in the queue
                processQueue();
            });
            activeRequests.push_back(std::move(future));
        }
    }

    void fetchTile(int requestId) {
        // Implementation of the actual fetch logic
        // Simulate fetch with a sleep
        std::this_thread::sleep_for(std::chrono::seconds(1));
        MarkTileRequestFinished(requestId, true);
    }

    void MarkTileRequestFinished(int id, bool successOrFailure) {
        // Implementation of marking the request as finished
    }

    // Other methods...
};
```
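A hypothetical driver to exercise this sketch (it relies on the placeholder `m_tileDescriptors` above; the real project populates its descriptors elsewhere) might look like:

```cpp
// Hypothetical driver, not the project's real setup: queues five tiles and
// lets at most two simulated 1-second fetches run at any one time.
int main() {
    MapGenerator generator;
    generator.DownloadTiles();
    // Crude stand-in for proper completion tracking: five tiles at a limit of
    // two finish in roughly three seconds, so wait a little longer than that.
    std::this_thread::sleep_for(std::chrono::seconds(5));
}
```

In production code you would track completion explicitly, for example by waiting on the stored futures or signalling through `MarkTileRequestFinished`, rather than sleeping.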
In this implementation:

- A `std::queue<int>` is used to manage the tile requests.
- The `processQueue` method ensures that at most `m_tileRequestLimit` requests are processed concurrently.
- A `std::mutex` is used to prevent concurrency issues when accessing the queue and the active requests.

Would you like further details or adjustments to this implementation?
Fabulous, I shall try it out, thank you @coderabbitai
@andreamancuso, I'm glad you found the suggestions helpful! Feel free to reach out if you need any further assistance or adjustments. Happy coding! 🐇
Currently all fetch requests are processed in parallel without any throttling. There is a member variable in `MapGenerator` called `m_tileRequestLimit` (defaulting to 2); however, this isn't being used.
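For context, unthrottled dispatch looks roughly like the sketch below (hypothetical; the actual fetch code in `MapGenerator` may differ): every tile is launched at once, so `m_tileRequestLimit` never comes into play.

```cpp
#include <future>
#include <vector>

// Hypothetical illustration of the current behaviour: every request is
// launched immediately, so nothing enforces m_tileRequestLimit.
void DownloadAllTilesUnthrottled(int tileCount) {
    std::vector<std::future<void>> requests;
    for (int i = 0; i < tileCount; ++i) {
        requests.push_back(std::async(std::launch::async, [i]() {
            (void)i; // fetch tile i ...
        }));
    }
    // All tileCount fetches run in parallel; the futures block on destruction.
}
```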