bloomberg / bde

Basic Development Environment - a set of foundational C++ libraries used at Bloomberg.
Apache License 2.0

Question: As per ChatGPT, creating 1 million threads in Boost C++ is slightly better than with BDE. Do you agree? #296

Closed qmarsun closed 6 months ago

qmarsun commented 6 months ago

Here is a ChatGPT discussion about creating 1 million threads with the Boost vs. BDE library. Do you agree with the conclusion?

https://chat.openai.com/share/6fa46aa5-1b02-4137-a2be-b0ff6f84717c


reset my context
ChatGPT
Sure, I'll reset our conversation context. Feel free to let me know how I can assist you!

sample of multiple threads using Boost C++, give me only code
ChatGPT
Sure, here's a simple example of using multiple threads in C++ with Boost.Thread library:

```cpp
#include <iostream>
#include <boost/thread.hpp>

// Function to be executed by each thread
void threadFunction(int threadID) {
    std::cout << "Thread " << threadID << " is running" << std::endl;
}

int main() {
    const int numThreads = 5;
    boost::thread threads[numThreads];

    // Creating and starting threads
    for (int i = 0; i < numThreads; ++i) {
        threads[i] = boost::thread(threadFunction, i);
    }

    // Waiting for all threads to finish
    for (int i = 0; i < numThreads; ++i) {
        threads[i].join();
    }

    std::cout << "All threads have finished executing" << std::endl;

    return 0;
}
```
This code creates 5 threads, each executing the threadFunction function with a different thread ID. The main function then waits for all threads to finish execution using the join() method.

give multiple thread code using the Bloomberg C++ library
ChatGPT
Sure, here's an example of using multiple threads in C++ with the Bloomberg BDE library:

```cpp
#include <iostream>
#include <bdlmt_threadutil.h>

// Function to be executed by each thread
void threadFunction(int threadID) {
    std::cout << "Thread " << threadID << " is running" << std::endl;
}

int main() {
    const int numThreads = 5;
    bdlmt::ThreadUtil::Handle threads[numThreads];

    // Creating and starting threads
    for (int i = 0; i < numThreads; ++i) {
        threads[i].start(&threadFunction, i);
    }

    // Waiting for all threads to finish
    for (int i = 0; i < numThreads; ++i) {
        threads[i].join();
    }

    std::cout << "All threads have finished executing" << std::endl;

    return 0;
}
```
In this code, we use bdlmt::ThreadUtil::Handle to manage threads. The start() method is used to start a thread, passing the thread function threadFunction and its argument i. Then, we wait for all threads to finish using the join() method.

compare boost code with bloomberg code
ChatGPT
Sure, here's a comparison between the provided Boost.Thread code and the Bloomberg BDE code for creating and managing multiple threads:

Boost.Thread:

```cpp
#include <iostream>
#include <boost/thread.hpp>

// Function to be executed by each thread
void threadFunction(int threadID) {
    std::cout << "Thread " << threadID << " is running" << std::endl;
}

int main() {
    const int numThreads = 5;
    boost::thread threads[numThreads];

    // Creating and starting threads
    for (int i = 0; i < numThreads; ++i) {
        threads[i] = boost::thread(threadFunction, i);
    }

    // Waiting for all threads to finish
    for (int i = 0; i < numThreads; ++i) {
        threads[i].join();
    }

    std::cout << "All threads have finished executing" << std::endl;

    return 0;
}
```
Bloomberg BDE:

```cpp
#include <iostream>
#include <bdlmt_threadutil.h>

// Function to be executed by each thread
void threadFunction(int threadID) {
    std::cout << "Thread " << threadID << " is running" << std::endl;
}

int main() {
    const int numThreads = 5;
    bdlmt::ThreadUtil::Handle threads[numThreads];

    // Creating and starting threads
    for (int i = 0; i < numThreads; ++i) {
        threads[i].start(&threadFunction, i);
    }

    // Waiting for all threads to finish
    for (int i = 0; i < numThreads; ++i) {
        threads[i].join();
    }

    std::cout << "All threads have finished executing" << std::endl;

    return 0;
}
```
Comparison:

- Library dependency:
  - Boost.Thread: requires the Boost library.
  - Bloomberg BDE: requires the Bloomberg BDE library.
- Thread creation:
  - Boost.Thread: uses boost::thread to create and manage threads.
  - Bloomberg BDE: uses bdlmt::ThreadUtil::Handle to manage threads.
- Thread function invocation:
  - Both libraries pass a thread function to be executed by each thread.
  - Boost.Thread uses function pointers directly.
  - Bloomberg BDE also uses function pointers but requires the function to be a member of a class in order to use it with bdlmt::ThreadUtil::Handle.
- Thread joining:
  - Both libraries wait for all threads to finish using the join() method.

Overall, both libraries provide similar functionalities for creating and managing multiple threads, but they have differences in their usage and dependencies.

compare runtime and execution speed for 1 million threads, make it short
ChatGPT
Boost.Thread tends to have slightly better performance and lower overhead compared to Bloomberg BDE for managing large numbers of threads, such as 1 million threads. This difference is due to Boost's lightweight design and efficient thread management. However, the exact performance difference may vary depending on the specific use case and system configuration.

phalpern commented 6 months ago

Thank you for your interest in the BDE library. ChatGPT's conclusions are certainly interesting. If I were writing this code using BDE, I would not use bdlmt::ThreadUtil::Handle, as such a type does not exist, but who am I to doubt ChatGPT? Not having done the experiment, we can neither agree nor disagree with the conclusion. Please share your own experience with this code and how well it ran when you increased numThreads to 1,000,000. In the meantime, this is ChatGPT 4's analysis of ChatGPT's own conclusions (link):

The claim that Boost.Thread outperforms Bloomberg BDE in handling a large number of threads (e.g., 1 million threads) lacks empirical evidence and detailed analysis necessary to validate such a statement. Key issues with the provided data include:

  1. Lack of Empirical Data: The conclusion is not supported by specific benchmarks or performance metrics, which are critical for substantiating claims about software performance.
  2. Oversimplification: Performance can vary significantly based on factors like hardware, operating system, and the nature of tasks being executed, none of which are addressed in the claim.
  3. Practicality: The scenario of managing 1 million threads is highly theoretical and not reflective of typical use cases, questioning the applicability of the comparison.
  4. Use Case Specificity: Different use cases may favor different threading libraries, and without context, comparisons are not particularly informative.

For a meaningful comparison, empirical benchmarks on defined hardware, operating systems, and workloads would be needed.

In the absence of such comprehensive analysis, the comparative assessment of Boost.Thread and Bloomberg BDE for managing 1 million threads remains speculative and unsubstantiated.

mversche commented 6 months ago

This has been interesting, but closing this issue.

qmarsun commented 6 months ago

Thank you very much for your response. I will try the suggested approach of not using bdlmt::ThreadUtil::Handle.