deeplearningais / curfil

CUDA Random Forest implementation for Image Labeling tasks

NYU Depth V2 labeled Mat Conversion #20

Closed · ghost closed this issue 6 years ago

ghost commented 6 years ago

I am getting the error below; kindly guide me.

795 training images
654 test images
using filled depth images
reading ./dataset/nyu_depth_v2_labeled.mat
processing images
Traceback (most recent call last):
  File "../curfil/scripts/NYU/convert.py", line 180, in <module>
    Parallel(num_threads, 5)(delayed(convert_image)(i, scenes[i], depth[i, :, :].T, images[i, :, :].T, labels[i, :, :].T) for i in range(len(images)))
  File "/usr/local/lib/python2.7/dist-packages/joblib/parallel.py", line 514, in __init__
    % (backend, sorted(BACKENDS.keys())))
ValueError: Invalid backend: 5, expected one of ['multiprocessing', 'sequential', 'threading']

ghost commented 6 years ago

If I put 'sequential' there instead, I get this error:

Traceback (most recent call last):
  File "../curfil/scripts/NYU/convert.py", line 183, in <module>
    sequential
NameError: name 'sequential' is not defined

temporaer commented 6 years ago

This may be due to a newer version of joblib.

Can you try removing the number 5 in "../curfil/scripts/NYU/convert.py", line 180?
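For reference, a minimal standalone sketch of that change. It assumes the positional 5 was meant as joblib's verbosity level in the older API; the square helper is just a stand-in to keep the snippet runnable, and the commented call at the bottom reuses the names from the traceback above:

```python
from joblib import Parallel, delayed

def square(x):
    return x * x

# Newer joblib treats the second positional argument of Parallel() as
# `backend`, so Parallel(num_threads, 5) fails with "Invalid backend: 5".
# Passing the arguments by keyword avoids the clash:
results = Parallel(n_jobs=2, verbose=5)(delayed(square)(i) for i in range(10))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

# Applied to line 180 of convert.py, the call would become something like:
# Parallel(n_jobs=num_threads, verbose=5)(
#     delayed(convert_image)(i, scenes[i], depth[i, :, :].T,
#                            images[i, :, :].T, labels[i, :, :].T)
#     for i in range(len(images)))
```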


ghost commented 6 years ago

Now I am getting this error:

/Documents/Training_Again/curfil/scripts/NYU/convert.py in convert_image(i=0, scene=u'kitchen',
    img_depth=memmap([[ 2.75201321, 2.75206947, 2.75221062, ... 2.0814631 , 2.08134627]], dtype=float32),
    image=array([[[255, 255, 255], [255, 255, 255]...55, 255], [255, 255, 255]]], dtype=uint8),
    label=array([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0,...], [0, 0, 0, ..., 0, 0, 0]], dtype=uint16))
    105
    106     folder = "%s/%s/%s" % (out_folder, train_test, scene)
    107     if not os.path.exists(folder):
    108         os.makedirs(folder)
    109
--> 110     img_depth *= 1000.0
        img_depth = memmap([[ 2.75201321, 2.75206947, 2.75221062, ... 2.0814631 , 2.08134627]], dtype=float32)
    111
    112     png.from_array(img_depth, 'L;16').save("%s/%05d_depth.png" % (folder, i))
    113
    114     depth_visualization = visualize_depth_image(img_depth)

ValueError: output array is read-only

temporaer commented 6 years ago

Replace img_depth *= 1000.0 with img_depth = img_depth * 1000.0; the in-place multiply tries to write into the read-only array, while the second form allocates a new, writable one.
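A minimal standalone sketch of the difference, using a plain read-only NumPy array in place of the memmap slice that convert.py receives:

```python
import numpy as np

# Stand-in for the read-only depth slice that convert.py passes around.
img_depth = np.array([[2.752, 2.081], [2.753, 2.082]], dtype=np.float32)
img_depth.setflags(write=False)

try:
    img_depth *= 1000.0            # in-place: writes into the read-only buffer
except ValueError as e:
    print(e)                       # output array is read-only

img_depth = img_depth * 1000.0     # allocates a new, writable array instead
print(img_depth.dtype, img_depth.max())
```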

ghost commented 6 years ago

I am getting this warning:

/usr/local/lib/python2.7/dist-packages/skimage/exposure/exposure.py:63: UserWarning: This might be a color image. The histogram will be computed on the flattened image. You can instead apply this function to each color channel.
  warn("This might be a color image. The histogram will be "
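That warning is informational: a histogram-based skimage.exposure call is being applied to the full RGB image rather than per channel. If it matters for your output, the workaround is the one the message suggests; here is a minimal sketch, using equalize_hist purely as an example since I don't know which exposure call convert.py actually triggers:

```python
import numpy as np
from skimage import exposure

def equalize_per_channel(rgb):
    """Apply histogram equalization channel by channel, instead of on the
    flattened color image (which is what the UserWarning complains about)."""
    out = np.empty(rgb.shape, dtype=np.float64)
    for c in range(rgb.shape[-1]):
        out[..., c] = exposure.equalize_hist(rgb[..., c])
    return out
```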

ghost commented 6 years ago

Why does curfil_train stop while loading the images?

2017-Dec-14 18:47:52.840469 INFO release version 1cea71a6276a1bd5f5a6a8990520ad375242bfc7
2017-Dec-14 18:47:52.840623 INFO acceleration mode: gpu
2017-Dec-14 18:47:52.840668 INFO CIELab: 1
2017-Dec-14 18:47:52.840700 INFO DepthFilling: 0
2017-Dec-14 18:47:52.840734 INFO profiling disabled
2017-Dec-14 18:47:52.895228 INFO going to load 1428 images from ./test/training/
2017-Dec-14 18:47:57.323308 INFO loaded 50/1428 images
2017-Dec-14 18:48:00.347333 INFO loaded 100/1428 images
2017-Dec-14 18:48:03.402639 INFO loaded 150/1428 images
2017-Dec-14 18:48:06.496860 INFO loaded 200/1428 images
2017-Dec-14 18:48:09.633849 INFO loaded 250/1428 images
2017-Dec-14 18:48:12.598033 INFO loaded 300/1428 images
2017-Dec-14 18:48:15.677561 INFO loaded 350/1428 images
2017-Dec-14 18:48:18.771816 INFO loaded 400/1428 images
2017-Dec-14 18:48:22.020610 INFO loaded 450/1428 images
2017-Dec-14 18:48:25.184301 INFO loaded 500/1428 images
2017-Dec-14 18:48:28.258863 INFO loaded 550/1428 images
2017-Dec-14 18:48:31.400602 INFO loaded 600/1428 images
2017-Dec-14 18:48:34.518195 INFO loaded 650/1428 images
2017-Dec-14 18:48:37.587901 INFO loaded 700/1428 images
2017-Dec-14 18:48:40.888876 INFO loaded 750/1428 images
2017-Dec-14 18:48:43.890352 INFO loaded 800/1428 images
2017-Dec-14 18:48:47.031816 INFO loaded 850/1428 images
2017-Dec-14 18:48:50.091903 INFO loaded 900/1428 images
2017-Dec-14 18:48:53.277594 INFO loaded 950/1428 images
2017-Dec-14 18:48:56.356920 INFO loaded 1000/1428 images
2017-Dec-14 18:48:59.519733 INFO loaded 1050/1428 images
2017-Dec-14 18:49:02.635210 INFO loaded 1100/1428 images
2017-Dec-14 18:49:09.205677 INFO loaded 1150/1428 images
2017-Dec-14 18:49:16.005824 INFO loaded 1200/1428 images
@pc:~/Documents/Training$

ghost commented 6 years ago

It just stops without any error. Am I doing something wrong?

fks commented 6 years ago

Any news on this topic? I am facing the same issues.

temporaer commented 6 years ago

@selectee4all @fks I just tried this with a newly compiled version (after applying #21) and did not see the issue with training stopping. I'm going to close this issue, since I fixed some of the earlier problems mentioned in it. If it still doesn't work for you after merging in #21, please open a new issue to discuss.