Hi @YANSU7 If you are using the 2.49.0 version of the RealSense SDK then I would first recommend downgrading your camera's firmware to 5.12.15.50, as the newer 5.14.0.0 firmware is not fully compatible with SDK versions as old as 2.49.0.
In regard to using wi-fi with RealSense, it would be worth looking at a RealSense GStreamer C++ plugin that can handle depth and color channels, as GStreamer can be used with a wi-fi connection.
https://github.com/WKDSMRT/realsense-gstreamer
It is also possible to use a wi-fi connection in ROS with RealSense, though it can be complicated to set up - see https://github.com/IntelRealSense/realsense-ros/issues/1673 as an example of this.
Hello, MartyG! I tried the method in the link you sent me, and found that there was a lot I still needed to do, so I have learnt a great deal. However, I still have a few questions. After trying to build the GStreamer code you pointed me to, VS tells me that PACKAGE_VERSION, GST_LICENSE, GST_PACKAGE_NAME and GST_PACKAGE_ORIGIN are not defined. What is the reason for this? I am using Windows 10, while your tutorials use Ubuntu 18.04 - is Windows 10 not supported? The project I created with VS is as follows:
Also, I noticed that the source code has been removed from this URL. I was going to use it - could you please provide a copy of the C++ version? (https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/net_viewer.py)
If I select this option in the RealSense Viewer on my computer, it pops up an error message like this: "To enable RealSense device over network, please build the SDK with CMake flag -DBUILD_NETWORK_DEVICE=ON. This binary distribution was built with network features disabled." I found your official tutorial for compiling the SDK on Windows 10, which is very helpful for building VS projects but doesn't seem to affect the RealSense Viewer. How can I solve this problem?
Similarly, when I select "Add Network Device" on the Raspberry Pi, I get the error "RealSense Network Devices are unsupported at this time". How can I solve this problem?
What I can guarantee is that I can successfully open the camera with the RealSense Viewer on both the PC and the Raspberry Pi.
Have you installed GStreamer on your computer? The GStreamer plugin that I linked to depends on it. If you do not have it installed, the link below is the official guide to installing it on Windows.
https://gstreamer.freedesktop.org/documentation/installing/on-windows.html?gi-language=c
The network device tool has been removed from the new SDK 2.54.1 but remains available in the previous 2.53.1. However, one of the two computers in the network setup (the 'remote' computer with the camera attached) would have to be an Ubuntu machine because rs-server can only be compiled on Linux. The 'host' machine that the RealSense Viewer is on (the PC without a camera attached) could be a Windows PC.
RealSense users who have used the networking tool have said that the SDK should be built with the -DBUILD_NETWORK_DEVICE=ON flag included on both the host and remote computers.
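As a rough sketch, the CMake configuration step on each machine might look something like the following (the generator, paths and the optional BUILD_EXAMPLES flag will vary with your setup):

    cmake .. -DBUILD_NETWORK_DEVICE=ON -DBUILD_EXAMPLES=true
    cmake --build . --config Release

The -DBUILD_NETWORK_DEVICE=ON flag is the one named in the Viewer's error message; on the remote (Linux) machine it also causes the rs-server tool to be built.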
Thank you very much for your patience and guidance. I have changed the version of the SDK and was able to successfully add the Raspberry Pi's IP address, which makes me really happy. However, there is still a problem: SDK 2.49.0 does not compile for me with VS2019. My CMake GUI version is 3.20. The VS project currently reports errors saying that usb.lib, compressionFactory.cpp, realsense-compression.lib and realsense2-net.lib are missing. I have tried to download these libraries from the web but have not found them. Do you have any ideas for a solution? I am compiling this source code: https://github.com/IntelRealSense/librealsense/releases/tag/v2.49.0
There have been a couple of past reports from RealSense users of problems with the networking on 2.49.0. They found that the earlier 2.47.0 version worked for them.
Thank you very much for your kind guidance. I have got the Network Device functionality working. What I ultimately want to do is add the Network Device functionality to my own VS project, so I have been looking for the source code for wi-fi communication in RealSense. Could you please split the source code into two parts: one for the Raspberry Pi and the other for the PC (Windows 10)?
Finally, I would like to share my environment for implementing the Network Device functionality: PC: Windows 10 version 2004, realsense-viewer 2.47.0; Raspberry Pi 4B: realsense-viewer 2.49.0. The mismatched realsense-viewer versions on the Raspberry Pi and the PC do not seem to affect the functionality. If you want to transfer depth and colour images in high quality, you may need a more capable router.
Development of the network-device system ended a while ago and it has now been removed from the current latest librealsense version 2.54.1 in preparation for introducing a new networking system in the SDK release after that.
There was some work done on a possible follow-up to the system, called LRS-Net 2.0, that would have had support for networks with high packet loss such as wi-fi, but the project was not integrated into the official RealSense SDK. Its source code is available below if you wish to study it.
https://github.com/IntelRealSense/librealsense/pull/8343
https://github.com/IntelRealSenseArchive/lrs-net/tree/apuzhevi_lrsnet
Whilst the documentation for the network-device system states that it can be used with wi-fi instead of wired ethernet, there is no information about how to accomplish that. The First Boot section of the documentation linked to below suggests to me that it may just be a matter of logging into the Pi with ssh over a wi-fi connection instead of wired ethernet.
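For example (the user name and address below are hypothetical), the connection might be as simple as:

    ssh pi@192.168.1.50

and then launching rs-server on the Pi as usual once logged in.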
Thank you very much for your prompt reply. I have just finished compiling the code in lrs-net (the apuzhevi_lrsnet branch) and running a series of tests, and they run fine! rs-server does a good job of uploading depth and RGB information to my router. But how should I retrieve the uploaded data over wi-fi?
I tried using the udpSocket and tcpSocket functions, and obviously I failed. I know it must be a very short piece of code, but I have been looking for it for two hours - after the last problem, I thought I would be able to solve this one on my own.
Data sent to a host computer by rs-server can be accessed and viewed in the RealSense Viewer by setting up the camera as a Network Device using the Add Source option at the top of the Viewer's options side-panel.
Instead of using the Viewer, the data can also be accessed with C++ scripting via the rs2::net_device SDK instruction, like in the code at https://github.com/IntelRealSense/librealsense/issues/6376
Hello, MartyG!
I modified the code of rs-capture according to your suggestion, to test whether my current project can get RGB and depth information from the network, but I still get the following error even though I have included the header file #include <librealsense2-net/rs_net.hpp>:

Unresolved external symbol rs2_create_net_device
Here is some simple code I have written:
#include <librealsense2/rs.hpp>          // Include RealSense Cross Platform API
#include "example.hpp"                   // Include short list of convenience functions for rendering
#include <librealsense2-net/rs_net.hpp>  // Network-device extension
#include <iostream>                      // For std::cerr in the catch blocks

int main(int argc, char* argv[]) try
{
    window app(1280, 720, "RealSense Capture Example");
    rs2::colorizer color_map;               // Colorizes depth frames for display
    rs2::rates_printer printer;             // Prints each enabled stream's frame rate
    rs2::net_device dev("192.168.1.112");   // Connect to the remote rs-server by IP address
    rs2::context ctx;
    dev.add_to(ctx);                        // Register the network camera with the context
    rs2::pipeline pipe(ctx);
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
    pipe.start(cfg);

    while (app) // Application still alive?
    {
        rs2::frameset data = pipe.wait_for_frames().  // Wait for next set of frames from the camera
                             apply_filter(printer).   // Print each enabled stream frame rate
                             apply_filter(color_map); // Find and colorize the depth data
        app.show(data);
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function()
              << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}
The compiler I am using is VS2019, running on Windows 10. Currently there is no problem getting RGB and depth information over the network with realsense-viewer.
Yes, "Unresolved external symbol" indicates that it is a Visual Studio linker error.
Did you copy the code of rs-capture into your own project? If you did, it would not work, because of the special way in which the RealSense SDK examples are built: the CMakeLists.txt file of your own project would be missing the instructions that enable it to find the librealsense library files when the project is compiled.
The C++ Getting Started project at the link below is a standalone one where the CMakeLists.txt file contains the commands necessary for Visual Studio to find the librealsense library files.
https://github.com/zivsha/librealsense/tree/getting_started_example/examples/getting-started
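For reference, a minimal standalone CMakeLists.txt along those lines might look like this (the project name and main.cpp are placeholders, and the realsense2-net line is only needed if you use the networking extension):

    cmake_minimum_required(VERSION 3.1)
    project(rs_net_capture)

    # Locate the installed librealsense SDK
    find_package(realsense2 REQUIRED)

    add_executable(rs_net_capture main.cpp)
    target_link_libraries(rs_net_capture ${realsense2_LIBRARY} realsense2-net)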
Alternatively, you can use the three .props property sheet files provided by the SDK to set up a new RealSense project in VS. A RealSense user created a guide to setting up a VS project on Windows with these .props files.
https://github.com/EduardoWang/visual-studio-real-sense-record-and-playback-project
The .props files are located in the root directory of the RealSense SDK folder at this Windows location after the full SDK has been installed:
C: > Program Files (x86) > Intel RealSense SDK 2.0
I have actually already completed the above steps and recompiled the source code via the CMake GUI to create my new project.
But I have just now solved this problem, which was in fact a very minor one, and I will share the solution here:
The solution applies to Windows users; Linux users do not have this problem. Unlike on Linux, on Windows the include paths and library files have to be added manually when configuring a VS project, which makes it very easy to get things wrong, especially if you are not familiar with the project code. Adding realsense2.lib is not enough: realsense2-gl.lib and realsense2-net.lib are also important library files, and they do not ship with the SDK installer, so to obtain them you have to build them yourself with the CMake GUI.
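As an alternative to typing the library names into the VS project properties, MSVC can also be asked to link them directly from source with #pragma comment (a sketch; the library search path in the project properties still has to point at the folder containing the .lib files):

    // MSVC-only: request the RealSense libraries directly from source,
    // so a missing entry in the project properties cannot cause
    // "unresolved external symbol" errors for these libraries.
    #pragma comment(lib, "realsense2.lib")
    #pragma comment(lib, "realsense2-gl.lib")
    #pragma comment(lib, "realsense2-net.lib")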
One last query: I am getting a low FPS for depth and RGB information in my own project, but it works fine (about 30 FPS) when using realsense-viewer. In both cases the resolution is 640x480.
My test code is as follows:
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
#include "example.hpp"          // Include short list of convenience functions for rendering
#include <librealsense2-net/rs_net.hpp>

int main(int argc, char* argv[]) try
{
    window app(1280, 720, "RealSense Capture Example");
    rs2::colorizer color_map;
    rs2::rates_printer printer;
    rs2::net_device dev("192.168.1.112");
    rs2::context ctx;
    dev.add_to(ctx);
    rs2::pipeline pipe(ctx);
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30);
    pipe.start(cfg);

    while (app) // Application still alive?
    {
        rs2::frameset data = pipe.wait_for_frames().  // Wait for next set of frames from the camera
                             apply_filter(printer).   // Print each enabled stream frame rate
                             apply_filter(color_map); // Find and colorize the depth data
        app.show(data);
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error& e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function()
              << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception& e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}
Thanks so much for sharing your solution with the RealSense community!
I cannot see anything wrong in your code. You could try commenting out the two apply_filter instructions one at a time to see if either of them is causing slowdown.
Thanks for the advice. After trying it, I found that the depth image contains rather a lot of data and my router doesn't seem to be able to handle that much load, so if I replace it with a better router I should be fine.
Thanks very much for the update. Please do update again if changing the router makes a positive difference. Good luck!
Hi @YANSU7 Do you require further assistance with this case, please? Thanks!
Hi, MartyG! Sorry, I forgot to reply. After some experimentation following the router replacement, I found that it was actually the upload speed of the Raspberry Pi itself that limited the wireless transmission speed. The Raspberry Pi generates a lot of heat during operation, and once the temperature rises, the upload throughput of its wireless card drops dramatically.
My test conditions were relatively simple: RGB8 640x480 30 FPS and Z16 640x480 30 FPS.
If you want to keep the device running consistently under these conditions, it is best that both wireless NICs can sustain download and upload speeds of at least 70 Mbps.
I hope better applications will be made for transferring depth images; they are currently roughly 16 times the file size of their colour counterparts, which is a serious problem for wireless transfers.
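For a rough sense of where that factor comes from (the ~20:1 JPEG ratio below is an assumption, not a measured value):

    raw RGB8 frame at 640x480: 640 x 480 x 3 bytes = ~922 KB
    raw Z16 depth frame:       640 x 480 x 2 bytes = ~614 KB
    RGB8 after ~20:1 JPEG:     ~46 KB
    614 KB / 46 KB             = ~13x

The gap exists mainly because colour compresses well with lossy codecs, while 16-bit depth normally has to be sent raw or losslessly compressed to preserve its values.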
I don't think I have any further questions at the moment and we should be able to close this topic.
Thanks so much for the detailed feedback for the benefit of other Pi users on a wireless connection!
A new networking interface is planned for the SDK version after 2.54.1, but no details about it or its features are available at the time of writing.
As you suggest, I will close the topic. Thanks again!
Issue Description
Hi, MartyG! I am sorry to bother you again, but I have run into some trouble. I have managed to send the video captured by the Raspberry Pi to my PC over UDP. I can transfer RGB images using compression and decompression, but there doesn't seem to be a corresponding encoder for depth images. In the meantime, I've noticed some methods in this article: https://github.com/IntelRealSense/librealsense/issues/6465
It is really too difficult for a beginner like me. A few years have passed - is there an easier way to implement wireless transmission now? If not, could you please write a dedicated tutorial on the steps and environment needed to implement wireless transmission?
If you provide source code, C++ would be better than Python. Python is easier to use, but the device has some performance requirements during subsequent deployment. Currently I am using VS2019 on my PC and Qt5 on my Raspberry Pi.
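For reference, one approach that is sometimes used for this (a sketch under the assumption that OpenCV is available on both ends; it is not something confirmed in this thread) is to encode each 16-bit depth frame as a lossless PNG before sending it, since PNG supports 16-bit single-channel images while JPEG does not:

    #include <librealsense2/rs.hpp>
    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch: losslessly encode a Z16 depth frame as a 16-bit PNG so it can
    // be sent over UDP/TCP without losing depth precision. Typical size
    // reduction is around 2-4x, depending on scene content.
    std::vector<uchar> encode_depth(const rs2::depth_frame& frame)
    {
        // Wrap the frame data in a cv::Mat without copying it
        cv::Mat depth(cv::Size(frame.get_width(), frame.get_height()),
                      CV_16UC1, (void*)frame.get_data(), cv::Mat::AUTO_STEP);
        std::vector<uchar> buffer;
        cv::imencode(".png", depth, buffer);   // lossless 16-bit PNG
        return buffer;
    }

    // On the receiving side, the bytes decode back to the original 16-bit image
    cv::Mat decode_depth(const std::vector<uchar>& buffer)
    {
        return cv::imdecode(buffer, cv::IMREAD_UNCHANGED);   // CV_16UC1
    }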