IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Access & control any devices in a multicam setup #2219

Closed AntoineWefit closed 6 years ago

AntoineWefit commented 6 years ago
Required Info
Camera Model: D400
Firmware Version: 05.09.14.00
Operating System & Version: Win 10 / Ubuntu 18.04
Kernel Version (Linux Only):
Platform: PC
SDK Version: legacy / 2.0
Language: C++

Issue Description

What is the best way to access and control one or more devices in a multicam setup?

I wrote code based on the examples in the SDK, which is supposed to take control of the cameras in a multiple-D415 setup. "Supposed" because right now only one of the cameras is selected to capture data. To be clear, the problem is my code and not the hardware or the OS, because I'm learning as I go.

The idea is simple: I would like to write a function that selects one camera after another to capture a textured pointcloud for each camera (and save it to disk), and ideally another one that selects both cameras to capture an image for each (and save it to disk). What I've got so far is a program that does the different captures, but it doesn't handle the camera selection and is far from selecting both cameras simultaneously.

I reviewed the rs-multicam example, which should have helped me, but I don't understand how to select the cameras the way I want. I'm also guessing that the sensor-control example provides some relevant information. Unfortunately, I can't share the code I wrote (which doesn't work anyway).

Naïvely, I thought that replacing auto dev = devices[0]; with auto dev = devices[1]; in this example would select the second device, but it's not working... [EDIT] It does work for other cameras (platform cameras): with two D415s and a webcam, I can select the webcam with auto dev = devices[2];, but both auto dev = devices[0]; and auto dev = devices[1]; select the same D415...

I don't really care whether the cameras are synchronized or not, because I don't want to stream continuously and I want each capture to happen independently.

Any help is appreciated !

MartyG-RealSense commented 6 years ago

An interesting information source is a script posted a week ago by the user Westa. He describes it as follows: "It explains how the RealSense context works - and how to iterate over all the current devices. It also shows a mechanism for resetting a found Intel device - and a mechanism to wait for the hardware to finish resetting before continuing."

https://communities.intel.com/message/558651#558651
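Roughly, the idea described there would look something like the sketch below. This is my own rough reconstruction, not Westa's actual code; it assumes rs2::device::hardware_reset() and simply polls query_devices() until the same serial number re-appears:

#include <librealsense2/rs.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

int main()
{
    rs2::context ctx;

    // Iterate over every device the context currently knows about
    for (auto&& dev : ctx.query_devices())
    {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        std::cout << "Resetting " << dev.get_info(RS2_CAMERA_INFO_NAME)
                  << " (" << serial << ")" << std::endl;

        dev.hardware_reset(); // Ask the firmware to reboot this device
        std::this_thread::sleep_for(std::chrono::seconds(2)); // Give it a moment to go down

        // Poll until a device with the same serial number re-enumerates
        bool back = false;
        while (!back)
        {
            for (auto&& d : ctx.query_devices())
                if (serial == d.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER))
                    back = true;
            if (!back)
                std::this_thread::sleep_for(std::chrono::milliseconds(500));
        }
        std::cout << serial << " is back online" << std::endl;
    }
    return 0;
}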

TAREK-ELOUARET commented 6 years ago

First of all, thanks for your idea and this new issue. After reading all of this, I understand that you want to capture and save frames simultaneously from both the D415 and the webcam. In the program below we can see that the condition works with D400 series cameras:

auto dev = devices[0];

// Check if current device supports advanced mode
if (dev.is<rs400::advanced_mode>())
{
    // Get the advanced mode functionality
    auto advanced_mode_dev = dev.as<rs400::advanced_mode>();
    const int max_val_mode = 2; // 2 - Get maximum values
    // Get the maximum values of depth controls
    auto depth_control_group = advanced_mode_dev.get_depth_control(max_val_mode);

    // Apply the depth controls maximum values
    advanced_mode_dev.set_depth_control(depth_control_group);
}

So, as we see, I think that if you want to detect the connectivity of your webcam you should create another variable ( auto dev2 = devices[1] ) and write the appropriate condition to verify it. My guess is that the OpenCV function ( VideoCapture cap(0) ) will handle this nicely.

NB:

The program posted by Westa sweeps all connected devices, but only the RealSense series, and I don't think the webcam will appear in the scan.

I hope that helps with something; keep us posted if you have any further questions.

AntoineWefit commented 6 years ago

Thank you @MartyG-RealSense, I took a look at that post but unfortunately, I don't think that my difficulties are related to hardware reset (I'll keep the code anyway, it might come in handy someday!). I managed to go through the different devices with this code (which is working):

// Create a context
rs2::context cxt;

// Get all devices connected
auto device = cxt.query_devices();
size_t device_count = device.size();

cout << "Device list " << endl;
cout << "----------------------------------------------------------- " << endl;
for (auto i = 0; i < device_count; i++)
    {
        auto hardware = device[i];
        auto devSerialNumber = hardware.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        cout << "Camera " << i << ": " << hardware.get_info(RS2_CAMERA_INFO_NAME) << endl;
        cout << "Serial number : " << devSerialNumber << endl;
    }

But as soon as I start a pipeline, only the first device is selected...

I went through the rs-multicam code I mentioned earlier (again) and noticed something. In this bit:

void enable_device(rs2::device dev)
    {
        std::string serial_number(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
        std::lock_guard<std::mutex> lock(_mutex);

        if (_devices.find(serial_number) != _devices.end())
        {
            return; //already in
        }

        // Ignoring platform cameras (webcams, etc..)
        if (platform_camera_name == dev.get_info(RS2_CAMERA_INFO_NAME))
        {
            return;
        }
        // Create a pipeline from the given device
        rs2::pipeline p;
        rs2::config c;
        c.enable_device(serial_number);
        // Start the pipeline with the configuration
        rs2::pipeline_profile profile = p.start(c);
        // Hold it internally
        _devices.emplace(serial_number, view_port{ {},{},{}, p, profile });

    }

It seems that the devices are selected with their respective serial numbers, is that correct ?

Until now, I based my algorithm on this code and tried to iterate over the index in auto dev = devices[0] to manage the different cameras, but I didn't manage to make it work:

int main(int argc, char** argv) try
{
    // Obtain a list of devices currently present on the system
    context ctx;
    auto devices = ctx.query_devices();
    size_t device_count = devices.size();
    if (!device_count)
    {
        cout <<"No device detected. Is it plugged in?\n";
        return EXIT_SUCCESS;
    }

    // Get the first connected device
    auto dev = devices[0];

    // Check if current device supports advanced mode
    if (dev.is<rs400::advanced_mode>())
    {
        // Get the advanced mode functionality
        auto advanced_mode_dev = dev.as<rs400::advanced_mode>();
        const int max_val_mode = 2; // 2 - Get maximum values
        // Get the maximum values of depth controls
        auto depth_control_group = advanced_mode_dev.get_depth_control(max_val_mode);

        // Apply the depth controls maximum values
        advanced_mode_dev.set_depth_control(depth_control_group);
    }
    else
    {
        cout << "Current device doesn't support advanced-mode!\n";
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}
AntoineWefit commented 6 years ago

Thank you for your interest @TAREK-ELOUARET! My bad for the lack of clarity: I don't actually want to do anything with the webcam, I mentioned it just to demonstrate that I managed to select it in a list of devices while, at the same time, I couldn't select the D415 I wanted (ironic, isn't it?).

MartyG-RealSense commented 6 years ago

My intention was that the 'iteration through all devices' part might be the main part that was of potential use. Hardware reset just happened to be part of that script. smiles

I have seen examples where the camera is selected either through a device number - e.g. (0) or (1) - or by providing the camera's full serial number. Both approaches are likely valid. For example, the multiple-connection aspect of the unofficial SDK 2.0 Unity wrapper extension takes the approach of asking for the serial number to be provided.

https://user-images.githubusercontent.com/20081122/38487218-6ab82dce-3c1a-11e8-9cd7-31d742cb5bba.png
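For illustration, a minimal sketch of the two approaches side by side (my own rough example, assuming at least two cameras are connected, and using only calls that appear elsewhere in this thread):

#include <librealsense2/rs.hpp>
#include <iostream>
#include <string>

int main()
{
    rs2::context ctx;
    auto devices = ctx.query_devices();
    if (devices.size() < 2) { std::cout << "Need at least two cameras" << std::endl; return 0; }

    // Approach 1: pick a device by its index in the enumeration order
    rs2::device dev = devices[1]; // Note: the enumeration order is not guaranteed to be stable

    // Approach 2: bind a pipeline to that exact camera by serial number
    std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
    rs2::config cfg;
    cfg.enable_device(serial);

    rs2::pipeline pipe;
    pipe.start(cfg); // This pipeline now streams only from the chosen camera
    pipe.stop();
    return 0;
}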

AntoineWefit commented 6 years ago

@MartyG-RealSense

My intention was that the 'iteration through all devices' part might be the main part that was of potential use.

I agree with that ! The bit you are talking about :

// Iterate through the device context looking for Intel sensors
// Note that other devices like webcams can appear here too depending on their device type
for (auto&& dev : rs2Ctx.query_devices()) // Query the list of connected RealSense devices
{
    std::string devName = "";
    std::string devSerialNumber = "";
    std::string devFirmware = "";
    std::string devProdId = "";
    devProdId = dev.get_info(RS2_CAMERA_INFO_PRODUCT_ID);
    devSerialNumber = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
    devName = dev.get_info(RS2_CAMERA_INFO_NAME);

    if (gotDev == 0 && (devName == "Intel RealSense D415" || devName == "Intel RealSense D435"))
    {
        devFirmware = dev.get_info(RS2_CAMERA_INFO_FIRMWARE_VERSION);
        // Assume there is only one Intel camera for now

        // Save the device for future use
        rs2Dev = dev;
        gotDev = 1;
    }
    std::cout << "System Dev: " << devName << " Ser: " << devSerialNumber << " Firmware: " << devFirmware << " ProdID " << devProdId << std::endl;
}

is based on .query_devices() to iterate through the devices, which is (if I'm not mistaken) the same thing I used: auto devices = ctx.query_devices().

And you are right! It does work when I want to display the different devices' information (see my 2nd post).

I have seen examples where the camera is selected either through a device number - e.g (0) or (1) - or by providing the camera's full serial number. Both approaches are likely valid.

I hope it'll work with the serial numbers, because it looks like I don't know how to use the device number! Anyway, I think my best bet at the moment is to modify the rs-multicam example, because it is also useful for accessing cameras simultaneously (I want to do that too).

AntoineWefit commented 6 years ago

I've got it to work with the serial number selection (finally), so I can now select the camera I want (yay), but now I'm struggling with the "simultaneously" part. For now, the main function has a loop going through the devices and calls another function to do the capture with a custom ".json". Obviously, the captures happen one after another...

So my question now is: what is the best way to do the captures simultaneously?

By that I mean that I want to save to disk 2 different pointclouds/images from 2 different cameras corresponding to the same instant (more or less; the closer the captures are to each other, the better). I'm guessing that I need to start 2 different pipelines, but then I don't see how to capture/save data from both of them...

Any idea is welcome !

For those who want to know, here are some bits of code (yes, it could be way better). The function:

    void enable_device(rs2::device dev, std::string json_file_name)
    {
        // Declare pointcloud object, for calculating pointclouds and texture mappings
        rs2::pointcloud pc;

        // We want the points object to be persistent so we can display the last cloud when a frame drops
        rs2::points points;

        string serial_nb = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        string filename = serial_nb + ".ply";

        std::string serial_number(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
        std::lock_guard<std::mutex> lock(_mutex);

        if (_devices.find(serial_number) != _devices.end())
        {
            return; //already in
        }

        // Ignoring platform cameras (webcams, etc..)
        if (platform_camera_name == dev.get_info(RS2_CAMERA_INFO_NAME))
        {
            return;
        }

        // Create a pipeline from the given device
        rs2::pipeline p;
        rs2::config c;
        c.enable_device(serial_number);
        cout << "Camera : " << serial_nb << endl;

        auto advanced_mode_dev = dev.as<rs400::advanced_mode>();

        // Check if advanced-mode is enabled
        if (!advanced_mode_dev.is_enabled())
        {
            // Enable advanced-mode
            advanced_mode_dev.toggle_advanced_mode(true);
            cout << "Advanced mode enabled. " << endl;
        }

        // Select the configuration file
        std::ifstream t(json_file_name);
        std::string preset_json((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>());
        advanced_mode_dev.load_json(preset_json);
        rs2::pipeline_profile pipe_profile = c.resolve(p);

        // Start the pipeline with the configuration
        rs2::pipeline_profile profile = p.start(c);
        // Hold it internally
        _devices.emplace(serial_number, view_port{ {},{},{}, p, profile });

        // Capture 30 frames to give autoexposure, etc. a chance to settle
        for (auto i = 0; i < 30; ++i) p.wait_for_frames();

        // Wait for the next set of frames from the camera
        auto frames = p.wait_for_frames();
        auto depth = frames.get_depth_frame();
        auto color = frames.get_color_frame();

        // Tell pointcloud object to map to this color frame
        pc.map_to(color);

        // Generate the pointcloud and texture mappings
        points = pc.calculate(depth);

        // Stop the stream
        p.stop();

        // Save the acquisition in a .ply file (with texture)
        cout << "Saving 3D model to " << filename << endl;
        points.export_to_ply(filename, color);

    }

(part of) The main :

        device_container connected_devices;

        rs2::context ctx;    // Create librealsense context for managing devices

        // Register callback for tracking which devices are currently connected
        ctx.set_devices_changed_callback([&](rs2::event_information& info)
        {
            connected_devices.remove_devices(info);
            for (auto&& dev : info.get_new_devices())
            {
                connected_devices.enable_device(dev, "9.json");
            }
        });

        // Initial population of the device list
        for (auto&& dev : ctx.query_devices()) // Query the list of connected RealSense devices
        {
            connected_devices.enable_device(dev,"9.json");
        }

Please tell me if it can be improved, I'm here to learn.

MartyG-RealSense commented 6 years ago

I was sure someone had asked recently about saving both streams separately, and after much searching I found that it was you, at the start of this discussion. Shows the importance of going back to the start and re-reading to refresh the memory when a discussion thread gets long. :)

The Software Support Model doc page provides some useful insights about how Librealsense2 handles multi streaming, in the section headed "Multi-Streaming Model".

https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md

AntoineWefit commented 6 years ago

Ok, so from what I read, rs400_support.md says that multi-streaming is possible (and easier than before), which is not surprising given the rs-multicam example. However, do you know where I can find the function(s) related to that @MartyG-RealSense? I'm, of course, interested in this bit:

librealsense2 eliminates limitations imposed by previous versions with regard to multi-streaming:

Multiple applications can use librealsense2 simultaneously, as long as no two users try to stream from the same camera endpoint.
In practice, this means that you can:
    Stream multiple cameras within a single process

I took a quick look at the SDK documentation but I still can't find those functions. Your document says it's possible, but not how :( (unless I missed something?). Some bits of code are provided, but if I'm not wrong, they are only useful for sensor (not device) control.

And yes, I try to describe the difficulties I'm facing in as much detail as I can, so it is quite likely that these threads will become unreasonably long (!).

MartyG-RealSense commented 6 years ago

Nothing unreasonable about being long. It takes as long as it takes. I've had cases go to 6 or 7 pages sometimes, so don't worry! It's helpful to be given as much detail as possible, so thanks muchly for that.

TAREK-ELOUARET commented 6 years ago

@AntoineWefit Thanks for sharing your experience with us :). For the "simultaneously" problem, I think you should use multithreading in C++: put each capture on its own thread in order to get the fast results you want.
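The general pattern would be something like this minimal sketch (capture_one here is just a hypothetical placeholder for whatever per-camera work needs to be done):

#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical placeholder for the per-camera capture routine
void capture_one(const std::string& serial)
{
    std::cout << "Capturing from camera " << serial << std::endl;
}

int main()
{
    // Hypothetical serial numbers, one per camera
    std::vector<std::string> serials = { "012345678901", "012345678902" };
    std::vector<std::thread> threads;

    for (const auto& s : serials)
        threads.emplace_back(capture_one, s); // Launch one thread per camera; they run in parallel

    for (auto& t : threads)
        t.join(); // Block until every capture has finished

    return 0;
}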

dorodnic commented 6 years ago

Hi @TAREK-ELOUARET @MartyG-RealSense @AntoineWefit Sorry for the confusion, let me know how the documentation can be improved.

The basic context/device/sensor API allows fine-tuned control over everything that is connected, but is a bit more complex. Every sensor can be started and stopped independently, somewhat covered by the sensor-control example.

pipeline is a wrapper on top of this, and it can support multi-cam with less granular control (all sensors of one device are configured together, with synchronisation between them always on). This is covered by the multicam sample.

The following snippet shows how to control individual sensors:

#include <librealsense2/rs.hpp>
#include <thread>
#include <vector>
#include <chrono>
#include <map>
#include <iostream>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace std::chrono;

    // Using context.query_devices():
    context ctx;
    std::vector<sensor> sensors;
    for (auto dev : ctx.query_devices())
    {
        for (auto sensor : dev.query_sensors())
        {
            sensor.open(sensor.get_stream_profiles().front()); // TODO: Find the correct profile
            sensor.start([sensor](frame f){
                std::cout << "New frame from " << sensor.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)
                          << " of type " << f.get_profile().stream_name() << std::endl;
                         });
            sensors.push_back(sensor); // Save & keep-alive for later clean-up
        }
    }

    this_thread::sleep_for(chrono::seconds(3));

    for (auto sensor : sensors) {
        sensor.stop();
        sensor.close();
    }
}

I'm trying to follow the post to understand how best to help.

So my question is now, what is the best way to do the captures simultaneously ?

The multicam example is probably the best reference for what you are describing. To prevent different cameras from blocking each other, it is best to use poll_for_frames instead of wait_for_frames, but you could also create a std::thread for each pipeline and work as usual within each thread.

MartyG-RealSense commented 6 years ago

Thanks for coming in to help at the weekend, Dorodnic. :)

What @AntoineWefit was looking for documentation clarification on was how to stream multiple cameras within a single process (something the Software Support Model documentation page says is possible). Thanks muchly for providing the script above.

dorodnic commented 6 years ago

I must be missing something - isn't that what multicam is doing?

We should probably collect all these snippets into the wiki / some other document. Happy to help. Here's the same idea with pipelines (somewhat similar to multicam):

#include <librealsense2/rs.hpp>
#include <iostream>
#include <map>
#include <chrono>
#include <thread>
#include <string>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    map<string, pipeline> pipes;
    for (auto dev : ctx.query_devices())
    {
        auto serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        config cfg;
        cfg.enable_device(serial);
        pipeline p;
        p.start(cfg); 
        pipes[serial] = p;
    }
    auto start = steady_clock::now();
    while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
    {
        for (auto kvp : pipes)
        {
            frameset fs;
            if (kvp.second.poll_for_frames(&fs))
            {
                std::cout << fs.size() << " synchronized frames from camera " << kvp.first << std::endl;
            }
        }
        this_thread::sleep_for(milliseconds(1)); // Otherwise we get 100% CPU
    }
}
MartyG-RealSense commented 6 years ago

In regard to documentation additions, top of my own personal wish-list would be a doc on performing hardware sync with D435s using an external signal generator (a similar setup to the January 2018 volumetric capture with four D435s at the Sundance tech shack), or alternatively a D415 master to provide the sync pulse and all-D435 slaves. Thanks again for your very hard work, Dorodnic.
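(On the software side, my understanding is that the sync mode is exposed as an option on the depth sensor. The sketch below is only an assumption based on RS2_OPTION_INTER_CAM_SYNC_MODE, with 1 = master and 2 = slave, so please correct me if the values differ.)

#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    bool first = true; // Treat the first camera found as the sync master

    for (auto&& dev : ctx.query_devices())
    {
        for (auto&& sensor : dev.query_sensors())
        {
            if (sensor.supports(RS2_OPTION_INTER_CAM_SYNC_MODE))
            {
                // 1 = master (drives the sync signal), 2 = slave (listens for it)
                sensor.set_option(RS2_OPTION_INTER_CAM_SYNC_MODE, first ? 1.f : 2.f);
                std::cout << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)
                          << (first ? " set as master" : " set as slave") << std::endl;
                first = false;
                break; // Only one sensor per device needs this option
            }
        }
    }
    return 0;
}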

dorodnic commented 6 years ago

I see, we will try to add more information on that. The multi-camera white paper is constantly being updated, but I agree that it says less about the software side.

Here is another snippet, this time with pipeline and threads:

#include <librealsense2/rs.hpp>
#include <iostream>
#include <map>
#include <chrono>
#include <thread>
#include <string>
#include <vector>

int main()
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    vector<thread> threads;

    for (auto dev : ctx.query_devices())
    {
        string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        config cfg;
        cfg.enable_device(serial);
        pipeline p;
        p.start(cfg);

        threads.emplace_back([p, serial](){
            auto start = steady_clock::now();
            while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
            {
                frameset fs = p.wait_for_frames();
                std::cout << fs.size() << " synchronized frames from camera " << serial << std::endl;
            }
        });
    }

    for (auto& t : threads) t.join(); // Must join / detach all threads
}
MartyG-RealSense commented 6 years ago

I'll post a link to this new material over on the Intel Support RealSense site. Cheers. ^_^

AntoineWefit commented 6 years ago

Thank you for your help @dorodnic @MartyG-RealSense @TAREK-ELOUARET ! I appreciate it.

I must be missing something, isn't that what multicam doing?

Yes, I think so! Due to my lack of coding knowledge, I understood how to start multiple streams (kind of), but I had difficulties handling them correctly: selecting the right camera, saving data to disk, selecting the right pipeline, for example... I'm, in fact, trying to combine different examples from the documentation; I managed to make them work separately, but all together is another story! I'll take a look at your snippets @dorodnic, they should do the trick! I'll update the thread once I'm done.

Out of curiosity, I saw that you used string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER) in both your snippets to select the cameras (which I'll be using too from now on). Is it possible to do the same thing with an array of devices returned by ctx.query_devices(), to get access to the devices like in this document? The related snippet:

    // Obtain a list of devices currently present on the system
    context ctx;
    auto devices = ctx.query_devices();
...
    // Get the first connected device
    auto dev = devices[0];

As I said in my first and second posts, I tried it but without success...

AntoineWefit commented 6 years ago

[EDIT]: Turns out I should have put string filename = serial + ".ply"; inside the if block to get 2 different files... (facepalm). I'll post cleaner code.

Ok, so now, with your "pipeline & threads" snippet @dorodnic I managed to start both cameras simultaneously.

More questions :

The code I tried really quickly; please feel free to suggest improvements (FYI I have 2 cameras, which explains the pipeline p[2]):

#include <librealsense2/rs.hpp>
#include <C:/Program Files (x86)/Intel RealSense SDK 2.0/include/librealsense2/rs_advanced_mode.hpp>
#include <iostream>
#include <fstream>
#include <map>
#include <chrono>
#include <thread>
#include <string>
#include <vector>

int main() try
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    vector<thread> threads;
    string json_file_name = "9.json";
    int i = 0;
    pipeline p[2];

    for (auto dev : ctx.query_devices())
    {
        string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        string filename = serial + ".ply";

        auto advanced_mode_dev = dev.as<rs400::advanced_mode>();

        // Check if advanced-mode is enabled
        if (!advanced_mode_dev.is_enabled())
        {
            // Enable advanced-mode
            advanced_mode_dev.toggle_advanced_mode(true);
            cout << "Advanced mode enabled. " << endl;
        }

        config cfg;

        // Select the configuration file
        std::ifstream t(json_file_name);
        std::string preset_json((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>());
        advanced_mode_dev.load_json(preset_json);

        cfg.enable_device(serial);

        p[i].start(cfg);

        threads.emplace_back([i, p, serial]() {
            auto start = steady_clock::now();
            while (duration_cast<seconds>(steady_clock::now() - start).count() < 1)
            {
                frameset fs = p[i].wait_for_frames();
                std::cout << fs.size() << " synchronized frames from camera " << serial << std::endl;
            }
        });
        cout << "iteration : " << i << endl;
        system("pause");

        if (i == 1) // Only entering this bit when both cameras are already streaming
        {
            for (int j = 0; j < 2; j++)
            {
                // Declare pointcloud object, for calculating pointclouds and texture mappings
                rs2::pointcloud pc;

                // We want the points object to be persistent so we can display the last cloud when a frame drops
                rs2::points points;

                // Wait for the next set of frames from the camera
                auto frames = p[j].wait_for_frames();
                auto depth = frames.get_depth_frame();
                auto color = frames.get_color_frame();

                // Tell pointcloud object to map to this color frame
                pc.map_to(color);

                // Generate the pointcloud and texture mappings
                points = pc.calculate(depth);

                // Save the acquistion in a .ply file (with texture)
                cout << "Saving 3D model to " << filename << endl;
                points.export_to_ply(filename, color);
                p[j].stop();
            }
        }

        i++;

    }

    system("pause");

    for (auto& t : threads) t.join(); // Must join / detach all threads
}

catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    system("pause");
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    system("pause");
}

I basically tried to differentiate the 2 pipelines by giving them an index, and then use this index to save the data from each camera once both of them have started streaming. It kind of worked, except that it seems I can't access the pipelines with this method, as the program saves 2 files with the same name (which means only 1 pipeline is selected).

I have difficulty understanding "where" the data actually is when you start a pipeline, and how to access it. In my case, do I have to retrieve it from the pipelines or from the frames?

Please pardon my poor coding skills/knowledge of the SDK. Thanks again for your help !

dorodnic commented 6 years ago

Is it normal that the snippet exits every time with an error?

Trying to conserve lines of code, I left out return EXIT_SUCCESS; at the end of main.

Can you explain what exactly the ... bit is doing ?

// Since we want to use wait_for_frames (which waits until frames from a specific camera are ready)
// but we don't want the two cameras to block each other, we will run both pipelines in parallel.

// We define a new function (it can be a function outside main, but it is convenient to do it in place)
// This function will be executed for every pipeline
auto what_to_do = [p, serial]() { // The [p, serial] part is the list of variables we want to share with the function
    // We need to define some "exit" condition, otherwise threads will keep waiting for frames
    // This specific condition will exit after 3 seconds
    // Alternatively, you can create a std::atomic_bool to_exit variable in your main function, initialise it to false,
    // run while(!to_exit), and before thread::join set it to true.
    // You can also just run the content of the while loop once
    auto start = steady_clock::now();
    while (duration_cast<seconds>(steady_clock::now() - start).count() < 3)
    {
        // Block this specific thread until frames for this camera are ready
        frameset fs = p.wait_for_frames();
        // Here you can put any processing you want to do
        // For example:

        rs2::pointcloud pc;
        rs2::points points;

        auto depth = fs.get_depth_frame();
        auto color = fs.get_color_frame();

        pc.map_to(color);
        points = pc.calculate(depth);

        string filename = serial + ".ply";
        cout << "Saving 3D model to " << filename << endl;
        points.export_to_ply(filename, color);
    }
};

// Once this function is defined, we can create a thread to run it in parallel to everything else
// This will start executing the function right away
std::thread t(what_to_do);
// If we don't save the thread object somewhere, it will be destroyed instantly, so we add it to the threads collection
// (note: std::thread is move-only, so it has to be moved into the vector)
threads.push_back(std::move(t));
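For completeness, the std::atomic_bool alternative mentioned in the comments above would look roughly like this (just a sketch):

#include <librealsense2/rs.hpp>
#include <atomic>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

int main()
{
    rs2::context ctx;
    std::vector<std::thread> threads;
    std::atomic_bool to_exit(false); // Shared stop flag, written by main, read by the worker threads

    for (auto&& dev : ctx.query_devices())
    {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        rs2::config cfg;
        cfg.enable_device(serial);
        rs2::pipeline p;
        p.start(cfg);

        threads.emplace_back([p, serial, &to_exit]() {
            while (!to_exit) // Keep grabbing frames until main flips the flag
            {
                rs2::frameset fs = p.wait_for_frames();
                std::cout << fs.size() << " frames from camera " << serial << std::endl;
            }
        });
    }

    std::this_thread::sleep_for(std::chrono::seconds(3));
    to_exit = true;                   // Ask every worker to stop after its current wait
    for (auto& t : threads) t.join(); // Then wait for them all to finish
    return 0;
}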

Hope this helps

AntoineWefit commented 6 years ago

Hope this helps

@dorodnic Yes, quite a bit! If I'm not mistaken, you suggest defining a function that does the capture and running that function 2 times in parallel.

With your help, I think I'm really close to my goal. However, I'm still stuck with the error I mentioned earlier, which might be triggered by the thread. As I also have an integrated webcam, I thought it might be the problem, but I checked with the code below and it doesn't seem to be the case. The program seems to do the process for only one of the cameras, and the other one doesn't start the capture. I tried to swap the cameras/USB ports, but only one camera is selected every time, and always the same one. I'm not sure if I understood

Trying to conserve lines of code I missed return EXIT_SUCCESS; in the end of main.

correctly.

(screenshot)

Can't start using the second camera (the blurred-out text is the path to the .exe): (screenshot)

The code :

#include <librealsense2/rs.hpp>
#include <C:/Program Files (x86)/Intel RealSense SDK 2.0/include/librealsense2/rs_advanced_mode.hpp>
#include <iostream>
#include <fstream>
#include <map>
#include <chrono>
#include <thread>
#include <string>
#include <vector>
#include <sstream> // for std::stringstream
#include <cstring> // for strcmp

// 3rd party header for writing png files
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

int main() try
{
    using namespace rs2;
    using namespace std;
    using namespace chrono;

    context ctx;
    vector<thread> threads;
    string json_file_name = "9.json";
    pipeline p;

    // Get all devices connected
    auto device = ctx.query_devices();
    // Get number of devices
    size_t device_count = device.size();
    cout << device_count << " connected device(s). " << endl;

    for (auto i = 0; i < device_count; i++)
    {
        auto hardware = device[i];
        cout << "Name of the camera " << i << ": " << hardware.get_info(RS2_CAMERA_INFO_NAME) << endl;
    }

    for (auto dev : ctx.query_devices())
    {
        if (strcmp(dev.get_info(RS2_CAMERA_INFO_NAME), "Intel RealSense D415") == 0)
        {
            string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
            auto advanced_mode_dev = dev.as<rs400::advanced_mode>();

            // Check if advanced-mode is enabled
            if (!advanced_mode_dev.is_enabled())
            {
                // Enable advanced-mode
                advanced_mode_dev.toggle_advanced_mode(true);
                cout << "Advanced mode enabled. " << endl;
            }

            config cfg;

            // Select the configuration file
            std::ifstream t(json_file_name);
            std::string preset_json((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>());
            advanced_mode_dev.load_json(preset_json);

            cfg.enable_device(serial);

            p.start(cfg);

            threads.emplace_back([p, serial]() {
                auto start = steady_clock::now();
                int i = 0;
                while (i < 1) // We only need 1 image/camera
                {
                    cout << "Entering While loop. " << endl;
                    frameset fs = p.wait_for_frames();
                    rs2::pointcloud pc;
                    auto color = fs.get_color_frame();
                    pc.map_to(color);

                    std::stringstream png_file;
                    png_file << "Image_" << serial << ".png";
                    stbi_write_png(png_file.str().c_str(), color.get_width(), color.get_height(), color.get_bytes_per_pixel(), color.get_data(), color.get_stride_in_bytes());
                    cout << "Saved " << png_file.str() << endl;
                    i++;
                }

            });
        }
    }

    system("pause");

    for (auto& t : threads) t.join(); // Must join / detach all threads

    return EXIT_SUCCESS;

}

catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    system("pause");

    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    std::cerr << "RealSense error "  << std::endl;

    system("pause");

    return EXIT_FAILURE;
}
AntoineWefit commented 6 years ago

Well well well, looks like I can answer myself this time !

Thanks to @dorodnic and his examples I managed to make it work. I have to admit that the threads gave me a lot of trouble, as I'm a total beginner. But I learned a lot!

I'm pretty happy with the result, as the code is much cleaner than before and is, of course, working (!). It may not be perfect, so feel free to suggest improvements.

For anyone interested, the following code will capture and save a picture for each connected D415 simultaneously. (One can easily replace the picture capture with a pointcloud capture, for example; see the sketch at the end of this post.) I'll close this issue in a few days to keep it visible for a bit.

Thank you @MartyG-RealSense @TAREK-ELOUARET and of course @dorodnic for your help !

How I checked the simultaneity of the (2) captures: screenshots from Camera 1 (image_806312060683) and Camera 2 (image_816312060660).

The code :

#include <librealsense2/rs.hpp>
#include <C:/Program Files (x86)/Intel RealSense SDK 2.0/include/librealsense2/rs_advanced_mode.hpp>
#include <iostream>
#include <fstream>
#include <map>
#include <thread>
#include <string>
#include <vector>
#include <sstream> // for std::stringstream
#include <cstring> // for strcmp

// 3rd party header for writing png files
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

using namespace rs2;
using namespace std;

//---------------------------------------------------------------------------------------------------------------------------------------------------------------------
// Capture & save function
//---------------------------------------------------------------------------------------------------------------------------------------------------------------------

void capture(rs2::pipeline p, string serial)
{
    cout << "\nStarting capture with the camera : " << serial << endl;
    frameset fs = p.wait_for_frames();
    for (auto j = 0; j < 30; ++j) fs = p.wait_for_frames(); // Wait a bit for auto exposure to settle
    rs2::pointcloud pc;
    auto color = fs.get_color_frame();
    pc.map_to(color);
    std::stringstream png_file;
    png_file << "Image_" << serial << ".png";
    stbi_write_png(png_file.str().c_str(), color.get_width(), color.get_height(), color.get_bytes_per_pixel(), color.get_data(), color.get_stride_in_bytes());
    cout << "Saved " << png_file.str() << endl;
}

//---------------------------------------------------------------------------------------------------------------------------------------------------------------------
// This function will go through every connected device (as long as it is a D415), apply a custom ".json", and then capture & save a ".png" for each device simultaneously.
//---------------------------------------------------------------------------------------------------------------------------------------------------------------------

int main() try
{
    context ctx;
    vector<thread> threads;
    map<string, pipeline> pipes;

    for (auto dev : ctx.query_devices()) // For each device do :
    {
        if (strcmp(dev.get_info(RS2_CAMERA_INFO_NAME), "Intel RealSense D415") == 0) // Check for compatibility
        {
            pipeline p;
            config cfg;
            string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
            string json_file_name = "9.json";
            cout << "Configuring camera : " << serial << endl;

            // Add desired stream to configuration
            cfg.enable_stream(RS2_STREAM_COLOR, 1920, 1080, RS2_FORMAT_RGB8, 30);

            auto advanced_mode_dev = dev.as<rs400::advanced_mode>();

            // Check if advanced-mode is enabled to pass the custom config
            if (!advanced_mode_dev.is_enabled())
            {
                // If not, enable advanced-mode
                advanced_mode_dev.toggle_advanced_mode(true);
                cout << "Advanced mode enabled. " << endl;
            }

            // Select the custom configuration file
            std::ifstream t(json_file_name);
            std::string preset_json((std::istreambuf_iterator<char>(t)), std::istreambuf_iterator<char>());
            advanced_mode_dev.load_json(preset_json);
            cfg.enable_device(serial);

            // Start the pipeline
            p.start(cfg);
            cout << "Starting pipeline for current camera " << serial << endl;
            pipes[serial] = p;
        }
    }

    // Call the Capture & save function to start simultaneous threads
    for (map<string, pipeline>::iterator it = pipes.begin(); it != pipes.end(); ++it)
    {
        threads.push_back(std::thread(capture, it->second, it->first));
    }

    // Joining the threads
    for (auto& t : threads) 
    {
        t.join();
    }

    return EXIT_SUCCESS;
}

catch (const rs2::error & e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    system("pause");
    return EXIT_FAILURE;
}

catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    system("pause");
    return EXIT_FAILURE;
}
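For anyone who wants the pointcloud variant mentioned above, here is a sketch of a capture_pointcloud() function that could replace capture(). It is untested; note that main() would also need to enable a depth stream in the config (e.g. cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30)), since the code above only enables the color stream.

void capture_pointcloud(rs2::pipeline p, string serial)
{
    cout << "\nStarting pointcloud capture with the camera : " << serial << endl;
    frameset fs = p.wait_for_frames();
    for (auto j = 0; j < 30; ++j) fs = p.wait_for_frames(); // Wait a bit for auto exposure to settle

    rs2::pointcloud pc;
    auto depth = fs.get_depth_frame();
    auto color = fs.get_color_frame();

    pc.map_to(color);                         // Texture the cloud with the color frame
    rs2::points points = pc.calculate(depth); // Compute the vertices from the depth frame

    string ply_file = "Cloud_" + serial + ".ply";
    cout << "Saving 3D model to " << ply_file << endl;
    points.export_to_ply(ply_file, color);
}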