nerdogram opened this issue 4 years ago
Same problem. I tried changing some arguments, but the final result is blurry and low quality. I tried it on Colab. Any ideas?
I ended up setting the output video resolution to match the input:
# frac = config['longer_side_len'] / max(config['output_h'], config['output_w'])
# config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac)
frac = 1
config['original_h'], config['original_w'] = config['output_h'], config['output_w']
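In sketch form, the change amounts to the following (a minimal sketch, assuming config starts out with output_h/output_w set to the input image's dimensions, as in the repo; the helper name is mine):

```python
# Minimal sketch of the main.py change: keep the output resolution equal
# to the input instead of shrinking it to longer_side_len.
def keep_native_resolution(config):
    # Original code scaled the output so its longer side equals
    # config['longer_side_len']:
    #   frac = config['longer_side_len'] / max(config['output_h'], config['output_w'])
    frac = 1  # force a 1:1 mapping between input and output resolution
    config['output_h'] = int(config['output_h'] * frac)
    config['output_w'] = int(config['output_w'] * frac)
    config['original_h'], config['original_w'] = config['output_h'], config['output_w']
    return config

cfg = keep_native_resolution({'longer_side_len': 960, 'output_h': 1080, 'output_w': 1920})
print(cfg['output_w'], cfg['original_w'])  # 1920 1920
```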
I was going to look at the output video bitrate next, but any help on increasing the sharpness of the output video to more closely approximate the input image would be great.
Trying libx264 + a higher bitrate on the output (2+ Mbps):
clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')
Also going to explore skipping the Gaussian blur (it may be required by the algorithm):
# img = cv2.GaussianBlur(img,(int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)
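For reference, the `init_factor//2 * 2 + 1` expression in that line only exists to force an odd kernel size, which cv2.GaussianBlur requires; a quick standalone check (pure Python, no OpenCV needed):

```python
def odd_kernel_size(init_factor):
    # Floor init_factor to an even number, then add 1: the result is
    # always odd, which cv2.GaussianBlur requires for its kernel size.
    return int(init_factor // 2 * 2 + 1)

for f in (3, 4, 5, 10):
    print(f, '->', odd_kernel_size(f))  # 3->3, 4->5, 5->5, 10->11
```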
Ended up passing the original image width into run_depth and using it instead of the hardcoded 640. Testing it now to ensure the output resolution matches the input.
After that I'll circle back to the output video quality
In main.py:
image_width = image.shape[1]
run_depth(device,[sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)
In run.py's run_depth:
scale = target_w / max(img.shape[0], img.shape[1])
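To see why passing the native width matters: with the hardcoded 640, depth is estimated on a downscaled copy of the image, while target_w = image width gives a scale of 1. A toy sketch (depth_input_scale is my own name for the ratio, not the repo's):

```python
def depth_input_scale(img_h, img_w, target_w=640):
    # With the default of 640, depth is estimated at roughly a third of
    # a 1920px-wide input; passing the image's own width gives scale == 1
    # for landscape inputs, i.e. native-resolution depth.
    return target_w / max(img_h, img_w)

print(depth_input_scale(1080, 1920))        # 0.333... with the 640 default
print(depth_input_scale(1080, 1920, 1920))  # 1.0 at native width
```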
That did it: I'm getting a large, high-quality video now. It took 14 min locally vs. 2-3 min before (both CPU), and it also needed more RAM, up to 22-24 GB.
This still seems to limit my output to a width of 960px. Did you make any other changes?
I've also changed the frac variable but now main.py won't complete:
running on device 0
0% 0/1 [00:00<?, ?it/s]Current Source ==> 2mars
Running depth extraction at 1595581689.5904121
initialize
device: cpu
start processing
processing image/2mars.jpg (1/1)
torch.Size([1, 3, 352, 384])
finished
Start Running 3D_Photo ...
Loading edge model at 1595581910.257932
Loading depth model at 1595581919.6054657
Loading rgb model at 1595581925.925136
Writing depth ply (and basically doing everything) at 1595581932.251731
^C
@victusfate I tried your way and the results are better! For others to follow, these are the steps I took:
- I hardcoded frac in main.py to 1 (frac = 1). This makes sure the input and output sizes are the same.
- In main.py, passed the original image width into run_depth:
image_width = image.shape[1]
run_depth(device, [sample['ref_img_fi']], config['src_folder'], config['depth_folder'], config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)
- In run.py, run_depth is changed to: scale = target_w / max(img.shape[0], img.shape[1])
- In the mesh code I commented out the following: # img = cv2.GaussianBlur(img, (int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)
- In the mesh code I also modified the output video settings: clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')
- Commented out the resize code in main.py:
image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)
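Taken together, the steps above just thread the input's native size through the pipeline; a condensed sketch of the combined effect (the helper below is illustrative, not the repo's actual code):

```python
def native_resolution_settings(image_h, image_w):
    # Steps 1 and 6: skip the longer_side_len downscale and the
    # cv2.resize, so the output dimensions equal the input's.
    frac = 1
    output_h, output_w = int(image_h * frac), int(image_w * frac)
    # Steps 2 and 3: run_depth then receives target_w = image_w, so
    # scale = target_w / max(h, w) == 1 for landscape inputs.
    scale = output_w / max(image_h, image_w)
    return output_h, output_w, scale

print(native_resolution_settings(1080, 1920))  # (1080, 1920, 1.0)
```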
PS: if I set the longer_side_len argument to a much larger resolution (4500), the code takes forever (I have one GPU). RAM shoots up to over 64 GB and then the process exits, presumably out of memory. Is there any way to work with high-resolution images on 64 GB of RAM, or do I need more? If so, how much? This seems like a memory leak for large files!
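Part of that RAM growth is expected even without a leak: the mesh and inpainting buffers grow with the pixel count, i.e. roughly quadratically in the longer side. A rough back-of-envelope (the quadratic scaling is an assumption from pixel count, not a measured profile):

```python
def relative_memory(longer_side, baseline=960):
    # Pixel count (and hence mesh size) grows with the square of the
    # longer side, so memory scales roughly as the squared ratio.
    return (longer_side / baseline) ** 2

print(relative_memory(1920))  # 4.0x the 960px baseline
print(relative_memory(4500))  # ~22x
```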
Another issue: if you output the video at the same size as the input, some pixels have no depth info when you zoom in or circle around the object. This does not happen when you resize the input image. However, if you output the same size as the input by following the steps above, the output video has "gray blocks" with no depth info. I don't know why; any ideas?
Hi, I tried your code, but I get the following error. Have you encountered this problem?
Traceback (most recent call last):
File "main.py", line 89, in
#image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)
Ensure the following code has not been commented out in main.py.
image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)
This should fix your issue.
Hi - amazing work here! I was experimenting with higher-resolution images by increasing the longer_side_len argument to 1920. However, the video quality is a little blurry and does not match the high-res input images. Is this because you downsample and then upsample them in the process, or am I missing some other argument that needs to be updated?