Closed theo2021 closed 3 years ago

Thank you for the great project; the rain simulation will be very useful in my research on domain adaptation. I was wondering whether precomputed rainy images, or their corresponding depth/particle files, are available for the Cityscapes validation set.
Hello theo2021,
We're glad it's useful for your research. We are not sharing the precomputed depth for the Cityscapes val set, but we do share the particle files. In detail, since Cityscapes does not provide GPS data (required to compute the camera speed), we instead randomly pick a speed in the [0, 50] interval, so the particle files here are the same for any Cityscapes set. For the depth, please refer to this issue for more info on how to output it.
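For illustration, the speed sampling boils down to something like this minimal sketch (the uniform [0, 50] draw is as described above; the km/h unit and the per-sequence granularity are assumptions on my part):

import numpy as np

# Cityscapes has no GPS, so draw a random camera speed instead
# (interval as stated above; unit assumed to be km/h)
speed = np.random.uniform(0, 50)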
Best.
Thanks, Raoul, for the information; some things aren't completely clear yet. For example, for the MonoDepth setup, was the model pretrained on Cityscapes used, at resolution 512x256? Moreover, can one use super-resolution to align the depth using this repository? It would be great to know the exact methodology, since every little detail can cause a domain shift. Lastly, why didn't you use the disparity maps provided by Cityscapes?
Hello Theo. The process is standard; I'll try to add more details so you can mimic our depth. 1) We generate Cityscapes disparities with the MonoDepth repo, obviously using the Cityscapes model at 512x256. 2) The disparities are upsampled and converted to depth with this snippet, inspired by this and this:
import numpy as np
import cv2

disparity_file = np.load('cityscapes/disparities.npy')
for disp in disparity_file:
    # upsample the 512x256 MonoDepth output to 1024x512
    disp = cv2.resize(disp, (1024, 512), interpolation=cv2.INTER_AREA)
    focal_length, baseline = 2262, 0.22  # Cityscapes focal (px) and baseline (m)
    # MonoDepth disparities are normalized by image width, hence the 2048 factor
    depth = (baseline * focal_length) / (disp * 2048)
    depth_enc = np.minimum(depth * 256., 2 ** 16 - 1).astype(np.uint16)
    # write depth_enc to a 16-bit PNG, e.g. cv2.imwrite(out_path, depth_enc)
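A note on the encoding: multiplying the metric depth by 256 before casting to uint16 follows the common KITTI-style 16-bit depth PNG convention, i.e. stored_value / 256 yields meters, giving a 1/256 m resolution and a maximum representable depth of roughly 256 m.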
3) All depths are refined with this Fast Bilateral Solver, guiding the depth refinement on RGB (aka reference in the code) with default params. It seems (not 100% sure, since it was an argument of our code) we used a confidence image (cf. the Bilateral Solver code), which is a simple image-gradient map:
def confidence(im_gray):
    # edge map: Laplacian edges, thickened then smoothed
    e = cv2.Laplacian(im_gray, cv2.CV_8U)
    e = cv2.dilate(e, np.ones((5, 5), np.uint8))
    e = cv2.blur(e, (5, 5))
    # invert so flat regions get high confidence, edges low
    e = 2 ** 16 - (e.astype(np.uint16) + 1) ** 2
    return e
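In other words, the map is close to 2^16 on flat regions and drops near strong gradients, so the solver trusts the input depth less around edges, where monocular estimates are typically least reliable.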
Be aware you need to manipulate the depth data (aka target in the code) and the confidence data when calling the solver. Altogether this looks something like:
reference = image_rgb  # uint8 RGB image
target = image_depth  # uint16 encoded depth (see the snippet above)
conf = confidence(image_gray)  # you should try with *or* without confidence
grid = BilateralGrid(reference, **grid_params)
t = target.reshape(-1, 1).astype(np.double) / (pow(2, 16) - 1)
c = conf.reshape(-1, 1).astype(np.double) / (pow(2, 16) - 1)
output_solver = BilateralSolver(grid, bs_params).solve(t, c).reshape(im_shape)  # im_shape: (H, W) of reference
depth_filtersolver = np.uint16(output_solver * (pow(2, 16) - 1))
depth_filtersolver_m = depth_filtersolver / 256.  # as meters
cv2.imwrite(..., (depth_filtersolver_m * 256.).astype(np.uint16))  # writes the refined depth
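For completeness, grid_params and bs_params refer to the parameter dictionaries of the Fast Bilateral Solver reference implementation; its public defaults are sketched below (I cannot confirm these are the exact values we ran with):

grid_params = {
    'sigma_luma': 8,     # brightness bandwidth
    'sigma_chroma': 8,   # color bandwidth
    'sigma_spatial': 8,  # spatial bandwidth
}
bs_params = {
    'lam': 128,          # strength of the smoothness term
    'A_diag_min': 1e-5,  # clamp on the preconditioner diagonal
    'cg_tol': 1e-5,      # convergence tolerance of the CG solver
    'cg_maxiter': 25,    # max CG iterations
}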
I cannot guarantee that the above snippet is the last one we used, but apart from enabling/disabling the confidence I don't think we changed anything.
Note that using MonoDepth2 could probably yield better results, and that depth generation differs between Cityscapes and Kitti, since the Kitti depth uses our more precise SPADE method, which benefits from RGB+LiDAR.
Hope this helps
Thanks for the info, Raoul, it sure helps!
I am attaching a link where you can download the rain masks of the validation set; follow the repository's instructions to produce the actual images. I don't post the actual images due to the Cityscapes license. https://drive.google.com/file/d/15twpJybNGLBeeCf7TmBWI6c1F8XalZVm/view?usp=sharing
Many thanks, Theo, for your initiative. However, the mask isn't enough for me to produce the actual image.
Could you either share the actual rain differential images, rain_diff = uint16(original - rainy + 2^8 - 1),
or share privately the modified images, which I will 'differentiate' myself to respect the Cityscapes license?
I'd be glad to share that on our page, mentioning your contribution.
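For clarity, here is a minimal sketch of how such a differential could be encoded and decoded (original and rainy are assumed to be uint8 arrays; the variable names are mine):

import numpy as np

# encode: the 2^8 - 1 = 255 offset keeps negative differences representable
rain_diff = (original.astype(np.int32) - rainy.astype(np.int32) + 2 ** 8 - 1).astype(np.uint16)

# decode: recover the rainy image from the original and the differential
rainy_rec = (original.astype(np.int32) - rain_diff.astype(np.int32) + 2 ** 8 - 1).astype(np.uint8)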
Yes, it's the rain_diff that is needed. I will create the images and post the correct file.
To generate the validation rainy images, first download the dev-kit and edit the file cityscapes.py so that it accepts the validation sequence:
sequences = ['leftImg8bit/train', 'leftImg8bit/val']
Then download the rain_diff from here and extract it.
With the following command, you can generate the rainy images:
python weather_generate.py cityscapes --weather rain --sequence leftImg8bit/val --cityscapes_root your_cityscapes_folder_location --output_dir the_location_of_rain_diff
your_cityscapes_folder_location: should be the parent of leftImg8bit
the_location_of_rain_diff: should be the parent of weather_datasets
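For reference, a sketch of the expected layout (assuming the repository's weather_datasets convention):

your_cityscapes_folder_location/
    leftImg8bit/
        train/
        val/
the_location_of_rain_diff/
    weather_datasets/
        weather_cityscapes/
            leftImg8bit/
                val/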
The generated images can be found in weather_datasets/weather_cityscapes/leftImg8bit/val/
Many thanks, Theo, for your great contribution. I will definitely include this on the official project page as soon as I can.
Hello @theo2021, do you by any chance still have the depth files you used for the Cityscapes val set? I would generate the fog too if you share them, as some users have requested it. Best.