Closed: bluebear78 closed this issue 6 years ago
Hi @bluebear78,
This is what you are supposed to get.
The server encodes depth using the three channels to have higher precision, and the segmentation label in the red channel as a number from 0 to 12, so it looks black at first sight. See Cameras and Sensors.
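For reference, the three-channel decoding can be done by hand with NumPy. A minimal sketch, assuming the far plane is 1000 m and the channels are read in R, G, B order (the encoding formula is the one given in the Cameras and Sensors documentation; the function name here is my own):

```python
import numpy as np

def decode_depth(rgb):
    """Decode CARLA's 24-bit depth encoding into meters.

    `rgb` is assumed to be an (H, W, 3) uint8 array with the red channel
    first; the actual channel order depends on how the image was saved.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # Recombine the three bytes into one 24-bit value, normalized to [0, 1].
    normalized = (r + g * 256.0 + b * 256.0 ** 2) / (256.0 ** 3 - 1)
    return normalized * 1000.0  # far plane assumed at 1000 m
```

An all-zero pixel decodes to 0 m and an all-255 pixel to the far plane, which is why the raw image looks almost black up close.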
In Python you can convert these images using carla.image_converter.
But if you have to convert a bulk of images, the best way is to compile and use the app at "Util/ImageConverter", as it is much faster at converting lots of images. Just store all the images in a folder as you do now and run the image converter:
$ cd Util/ImageConverter
$ make
$ ./bin/image_converter -c semseg -i path/to/image/folder -o output/folder
Use carla.image_converter like this:
depth_map_array = image_converter.depth_to_logarithmic_grayscale(depth_map_image)
Then you can write the image to disk using:
scipy.misc.imsave(address, depth_map_array)
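Note that scipy.misc.imsave was removed in newer SciPy releases (1.2+). A hedged sketch of an equivalent save using Pillow; the function name and the min-max rescaling choice here are my own, not part of CARLA:

```python
import numpy as np
from PIL import Image

def save_grayscale(path, array):
    """Rescale a 2-D float array to 0-255 and save it as an 8-bit grayscale PNG."""
    a = np.asarray(array, dtype=np.float64)
    lo, hi = float(a.min()), float(a.max())
    # Guard against a constant image, which would divide by zero.
    scaled = np.zeros_like(a) if hi == lo else (a - lo) / (hi - lo)
    Image.fromarray(np.uint8(scaled * 255), mode="L").save(path)
```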
~/main/carla-master/Util/ImageConverter$ make
g++ -Wall -Wextra -std=c++14 -fopenmp -O3 -DNDEBUG -o bin/image_converter main.cpp -lboost_system -lboost_filesystem -lboost_program_options -lpng -ljpeg -ltiff
In file included from image_converter.h:10:0,
from main.cpp:16:
image_io.h: In instantiation of ‘class image_converter::image_file<image_converter::jpeg_io, image_converter::jpeg_io, boost::gil::image<boost::gil::pixel<unsigned char, boost::gil::layout<boost::mpl::vector3<boost::gil::red_t, boost::gil::green_t, boost::gil::blue_t> > >, false, std::allocator
@bluebear78 I don't know about the steps that @nsubiron mentioned, as I have never tried that myself, but the code I mentioned works. Just integrate the image conversion and saving code I mentioned and it should work just fine. Also, this way you do not need to post-process your generated images after they are saved to disk from the client. Note that I have only tried this with the precompiled version of CARLA, so if you have built CARLA yourself I can't say anything about it.
Hi @bluebear78, have you installed the dependencies mentioned in the ImageConverter's README?
This image is the raw depth format. You can also output gray and logarithmic depth maps, but this will affect the quality and resolution of the image (int8 vs int24). I wrote about this on the CarlaSimBlog - https://carlasimblog.wordpress.com/2023/09/16/sensors/
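For context, the logarithmic grayscale conversion mentioned above can be sketched in NumPy. This is my reading of what carla.image_converter.depth_to_logarithmic_grayscale does in the 0.8.x client, assuming the depth is already normalized to [0, 1]; the constant 5.70378 is taken from that converter:

```python
import numpy as np

def to_logarithmic_grayscale(normalized_depth):
    """Map normalized depth in [0, 1] to a log-scaled 0-255 grayscale array."""
    with np.errstate(divide="ignore"):  # log(0) -> -inf, clipped away below
        logdepth = 1.0 + np.log(normalized_depth) / 5.70378
    return np.clip(logdepth, 0.0, 1.0) * 255.0
```

The log scaling spends most of the 8-bit range on nearby depths, which is why the converted image is readable while the raw int24 encoding looks black.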
I want to get a depth map image and a semantic segmentation image, so I attached two cameras in 'client_example.py', but very strange 'depth' and 'semantic' images come out. I don't understand why these images look like this. How can I get a normal 'depth image' and 'semantic segmentation image'? The normal camera is clean, and the semantic segmentation image is just black. This image is the depth map image that I got in the CARLA sim.