andreamancuso / osm-static-map-generator

MIT License

Implement throttling of requests #3

Open andreamancuso opened 1 month ago

andreamancuso commented 1 month ago

Currently, all fetch requests are processed in parallel without any throttling. There is a member variable in MapGenerator called m_tileRequestLimit (defaulting to 2); however, it isn't being used.

andreamancuso commented 1 month ago

@coderabbitai you got ideas?

coderabbitai[bot] commented 1 month ago

[!TIP] For best results, initiate chat on the files or code changes.

@andreamancuso, to implement throttling of requests in the MapGenerator class using the m_tileRequestLimit variable, you can follow these steps:

  1. Create a Request Queue: Use a queue to manage the fetch requests.
  2. Limit Concurrent Requests: Use the m_tileRequestLimit variable to control the number of concurrent requests.
  3. Process Requests: Process requests from the queue, ensuring that the number of active requests does not exceed the limit.

Here is a basic implementation outline (Request stands in for your request type):

```cpp
class MapGenerator {
private:
    int m_tileRequestLimit;
    std::queue<Request> requestQueue;
    std::vector<std::future<void>> activeRequests;

public:
    MapGenerator() : m_tileRequestLimit(2) {}

    void fetchTile(Request request) {
        requestQueue.push(request);
        processQueue();
    }

    void processQueue() {
        // NOTE: this outline ignores synchronization; a thread-safe,
        // mutex-guarded variant is shown further down in this thread.
        while (!requestQueue.empty() &&
               activeRequests.size() < static_cast<std::size_t>(m_tileRequestLimit)) {
            Request request = requestQueue.front();
            requestQueue.pop();
            auto future = std::async(std::launch::async, [this, request]() {
                // Perform the fetch request
                fetch(request);
                // Prune futures whose tasks have already completed
                // (the running task's own future is not yet ready, so it stays)
                activeRequests.erase(
                    std::remove_if(activeRequests.begin(), activeRequests.end(),
                        [](const std::future<void>& f) {
                            return f.wait_for(std::chrono::seconds(0)) ==
                                   std::future_status::ready;
                        }),
                    activeRequests.end());
                // Process the next request in the queue
                processQueue();
            });
            activeRequests.push_back(std::move(future));
        }
    }

    void fetch(Request request) {
        // Implementation of the actual fetch logic
    }
};
```

This outline ensures that at most m_tileRequestLimit requests are processed concurrently. Once a request completes, the next request in the queue is picked up.

Would you like me to dive deeper into any specific part of this implementation?


andreamancuso commented 1 month ago

@coderabbitai do you have specific suggestions based on the current implementation, preferably using a Mutex to prevent concurrency issues?

coderabbitai[bot] commented 1 month ago


@andreamancuso, based on the current implementation of the MapGenerator class, here are specific suggestions to implement throttling of fetch requests using a std::mutex to prevent concurrency issues:

  1. Create a Request Queue: Use a queue to manage the fetch requests.
  2. Limit Concurrent Requests: Use the m_tileRequestLimit variable to control the number of concurrent requests.
  3. Process Requests: Process requests from the queue, ensuring that the number of active requests does not exceed the limit.

Here is an implementation outline:

```cpp
#include <algorithm>
#include <chrono>
#include <future>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class MapGenerator {
private:
    std::mutex m_tileRequestsMutex;  // Guards requestQueue and activeRequests
    int m_tileRequestLimit = 2;      // Default limit
    std::queue<int> requestQueue;    // Queue of pending tile request IDs
    std::vector<std::future<void>> activeRequests; // In-flight requests

    // Other members (including m_tileDescriptors)...

public:
    // Constructor and other methods...

    void DownloadTiles() {
        // Example method to add requests to the queue
        {
            std::lock_guard<std::mutex> lock(m_tileRequestsMutex);
            for (std::size_t i = 0; i < m_tileDescriptors.size(); ++i) {
                requestQueue.push(static_cast<int>(i));
            }
        }
        processQueue();
    }

    void processQueue() {
        std::lock_guard<std::mutex> lock(m_tileRequestsMutex);
        while (!requestQueue.empty() &&
               activeRequests.size() < static_cast<std::size_t>(m_tileRequestLimit)) {
            int requestId = requestQueue.front();
            requestQueue.pop();
            auto future = std::async(std::launch::async, [this, requestId]() {
                // Perform the fetch request
                fetchTile(requestId);
                {
                    std::lock_guard<std::mutex> innerLock(m_tileRequestsMutex);
                    // Prune futures whose tasks have finished. The running
                    // task's own future is not yet ready, so it is skipped
                    // here and removed by a later pass.
                    activeRequests.erase(
                        std::remove_if(activeRequests.begin(), activeRequests.end(),
                            [](const std::future<void>& f) {
                                return f.wait_for(std::chrono::seconds(0)) ==
                                       std::future_status::ready;
                            }),
                        activeRequests.end());
                } // Release the lock *before* recursing: std::mutex is not
                  // re-entrant, so calling processQueue() while holding it
                  // would deadlock.
                // Process the next request in the queue
                processQueue();
            });
            activeRequests.push_back(std::move(future));
        }
    }

    void fetchTile(int requestId) {
        // Implementation of the actual fetch logic.
        // Simulate the fetch with a sleep:
        std::this_thread::sleep_for(std::chrono::seconds(1));
        MarkTileRequestFinished(requestId, true);
    }

    void MarkTileRequestFinished(int id, bool success) {
        // Implementation of marking the request as finished
    }

    // Other methods...
};
```

In this implementation:

  1. m_tileRequestsMutex guards both requestQueue and activeRequests, so they can be accessed safely from the worker threads.
  2. processQueue starts new fetches only while the number of in-flight futures is below m_tileRequestLimit.
  3. Each completed fetch prunes finished futures, releases the lock, and then calls processQueue again so the next queued request can start.

Would you like further details or adjustments to this implementation?

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Locate and display the implementation of the MapGenerator class.
ast-grep --lang cpp --pattern $'class MapGenerator { $$$ }'
```

```shell
#!/bin/bash
# Search for the MapGenerator class definition in the codebase.
rg 'class MapGenerator' -A 10
```

```shell
#!/bin/bash
# Display the contents of the cpp/mapgenerator.h file to understand the current implementation.
cat cpp/mapgenerator.h
```


andreamancuso commented 1 month ago

Fabulous, I shall try it out, thank you @coderabbitai

coderabbitai[bot] commented 1 month ago


@andreamancuso, I'm glad you found the suggestions helpful! Feel free to reach out if you need any further assistance or adjustments. Happy coding! 🐇

