alexgkendall / caffe-segnet

Implementation of SegNet: A Deep Convolutional Encoder-Decoder Architecture for Semantic Pixel-Wise Labelling
http://mi.eng.cam.ac.uk/projects/segnet/

Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0) CUBLAS_STATUS_MAPPING_ERROR #44

Closed · Maxfashko closed this issue 8 years ago

Maxfashko commented 8 years ago

Hello! The example described here is excellent and trains easily: http://mi.eng.cam.ac.uk/projects/segnet/tutorial.html

My task requires segmentation of two classes: person and background. However, I have a problem preparing the annotated label images for my own database. As I understand it, the network only reads grayscale values. I create the labels in Adobe Photoshop like this:

1. Load the original JPEG image.
2. Overlay the background-free PNG version of the same image on top of the original, creating a selection mask from the PNG layer.
3. Convert the image to indexed color (3 colors: void 0 0 0, person 192 128 128, plus the transparent alpha channel). This prevents new colors or blur from appearing while painting.
4. Fill the selected mask areas with those colors (192 128 128 for the subject, 0 0 0 for the background).
5. Convert the image back to RGB PNG, 8 bits/channel.

If I feed these annotated label images to the network, it prints:

F0606 19:20:16.349822 2152 math_functions.cu:123] Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0) CUBLAS_STATUS_MAPPING_ERROR

I then ran my images through the following script, which should solve the problem:

```python
#!/usr/bin/env python

import os

import numpy as np
from itertools import izip
from argparse import ArgumentParser
from collections import OrderedDict
from skimage.io import ImageCollection, imsave
from skimage.transform import resize

camvid_colors = OrderedDict([
    ("Animal", np.array([64, 128, 64], dtype=np.uint8)),
    ("Archway", np.array([192, 0, 128], dtype=np.uint8)),
    ("Bicyclist", np.array([0, 128, 192], dtype=np.uint8)),
    ("Bridge", np.array([0, 128, 64], dtype=np.uint8)),
    ("Building", np.array([128, 0, 0], dtype=np.uint8)),
    ("Car", np.array([64, 0, 128], dtype=np.uint8)),
    ("CartLuggagePram", np.array([64, 0, 192], dtype=np.uint8)),
    ("Child", np.array([192, 128, 64], dtype=np.uint8)),
    ("Column_Pole", np.array([192, 192, 128], dtype=np.uint8)),
    ("Fence", np.array([64, 64, 128], dtype=np.uint8)),
    ("LaneMkgsDriv", np.array([128, 0, 192], dtype=np.uint8)),
    ("LaneMkgsNonDriv", np.array([192, 0, 64], dtype=np.uint8)),
    ("Misc_Text", np.array([128, 128, 64], dtype=np.uint8)),
    ("MotorcycleScooter", np.array([192, 0, 192], dtype=np.uint8)),
    ("OtherMoving", np.array([128, 64, 64], dtype=np.uint8)),
    ("ParkingBlock", np.array([64, 192, 128], dtype=np.uint8)),
    ("Pedestrian", np.array([64, 64, 0], dtype=np.uint8)),
    ("Road", np.array([128, 64, 128], dtype=np.uint8)),
    ("RoadShoulder", np.array([128, 128, 192], dtype=np.uint8)),
    ("Sidewalk", np.array([0, 0, 192], dtype=np.uint8)),
    ("SignSymbol", np.array([192, 128, 128], dtype=np.uint8)),
    ("Sky", np.array([128, 128, 128], dtype=np.uint8)),
    ("SUVPickupTruck", np.array([64, 128, 192], dtype=np.uint8)),
    ("TrafficCone", np.array([0, 0, 64], dtype=np.uint8)),
    ("TrafficLight", np.array([0, 64, 64], dtype=np.uint8)),
    ("Train", np.array([192, 64, 128], dtype=np.uint8)),
    ("Tree", np.array([128, 128, 0], dtype=np.uint8)),
    ("Truck_Bus", np.array([192, 128, 192], dtype=np.uint8)),
    ("Tunnel", np.array([64, 0, 64], dtype=np.uint8)),
    ("VegetationMisc", np.array([192, 192, 0], dtype=np.uint8)),
    ("Wall", np.array([64, 192, 0], dtype=np.uint8)),
    ("Void", np.array([0, 0, 0], dtype=np.uint8)),
])


def convert_label_to_grayscale(im):
    out = (np.ones(im.shape[:2]) * 255).astype(np.uint8)
    for gray_val, (label, rgb) in enumerate(camvid_colors.items()):
        match_pxls = np.where((im == np.asarray(rgb)).sum(-1) == 3)
        out[match_pxls] = gray_val
    assert (out != 255).all(), "rounding errors or missing classes in camvid_colors"
    return out.astype(np.uint8)


def make_parser():
    parser = ArgumentParser()
    parser.add_argument(
        'label_dir',
        help="Directory containing all RGB camvid label images as PNGs"
    )
    parser.add_argument(
        'out_dir',
        help="""Directory to save grayscale label images.
        Output images have same basename as inputs so be careful
        not to overwrite original RGB labels""")
    return parser


if __name__ == '__main__':
    parser = make_parser()
    args = parser.parse_args()
    labs = ImageCollection(os.path.join(args.label_dir, "*"))
    os.makedirs(args.out_dir)
    for i, (inpath, im) in enumerate(izip(labs.files, labs)):
        print i + 1, "of", len(labs)
        # resize to caffe-segnet input size and preserve label values
        resized_im = (resize(im, (360, 480), order=0) * 255).astype(np.uint8)
        out = convert_label_to_grayscale(resized_im)
        outpath = os.path.join(args.out_dir, os.path.basename(inpath))
        imsave(outpath, out)
```
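A quick sanity check on the converted labels can catch the usual cause of CUBLAS_STATUS_MAPPING_ERROR: a pixel value outside `[0, num_classes)`. The helper below is hypothetical (not part of the script above); it just inspects a label array, however it was loaded:

```python
import numpy as np

def check_label_image(label, num_classes):
    """Return the set of pixel values that fall outside [0, num_classes)."""
    values = np.unique(label)
    return set(values[(values < 0) | (values >= num_classes)].tolist())

# synthetic 2-class label: every value is a valid class index
ok = np.array([[0, 1], [1, 0]], dtype=np.uint8)
assert check_label_image(ok, 2) == set()

# a label containing 255 (the sentinel used for unmatched pixels above) is flagged
bad = np.array([[0, 255]], dtype=np.uint8)
assert check_label_image(bad, 2) == {255}
```

Running this over every converted file before training would reveal whether any stray pixel values survived the conversion.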

Then I tried to start training again with the converted images, and got the same message: Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0). Remarkably, images annotated with this tool are accepted by the network and train fine: https://github.com/kyamagu/js-segment-annotator But that is not what I need: I do not want to annotate the images manually, since I already have the background-free PNG cut-outs. Please help me understand how to convert the colors so that SegNet accepts them. Maybe HSV? Here is a link to my data: https://github.com/Maxfashko/CamVid?files=1

alexgkendall commented 8 years ago

You want your label images to be single-channel grayscale PNGs with the pixel values as the labels. So for the 2-class case each pixel should be either 0 or 1, depending on its class. See the CamVid data as an example.
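For the two-class case described in this thread, that conversion can be sketched without the full CamVid palette. `rgb_mask_to_labels` below is a hypothetical helper; the colours are the ones used earlier in the thread (void 0 0 0, person 192 128 128):

```python
import numpy as np

# colours from the question above: background is (0, 0, 0),
# the person class is (192, 128, 128)
PERSON = np.array([192, 128, 128], dtype=np.uint8)

def rgb_mask_to_labels(rgb):
    """Map an HxWx3 RGB mask to an HxW uint8 label image (0 = void, 1 = person)."""
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    labels[(rgb == PERSON).all(axis=-1)] = 1
    return labels

# tiny synthetic mask: one person pixel, the rest background
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = PERSON
labels = rgb_mask_to_labels(rgb)
# labels[0, 0] is 1; all other pixels are 0
```

Saving `labels` as a single-channel PNG (rather than the indexed-colour RGB export from Photoshop) gives the format the data layer expects.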

Maxfashko commented 8 years ago

The CamVid database uses a variety of colors. Have you seen my examples in the repository? I use only 2 colors there. Or am I misunderstanding something?

Maxfashko commented 8 years ago

Should the script I used to convert the images in my repository have solved my problem, or not?

alexgkendall commented 8 years ago

The colours are only used for visualisation. The raw label data are single-channel grayscale images whose pixel values correspond to zero-indexed class labels. For example: https://github.com/alexgkendall/SegNet-Tutorial/blob/master/CamVid/testannot/0001TP_008550.png
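A quick way to confirm an annotation file actually has this format is to open it and inspect its mode and pixel values. A sketch using Pillow (`label_stats` is a hypothetical helper; here it is demonstrated on an in-memory PNG built the same way such annotations are stored):

```python
import io

import numpy as np
from PIL import Image

def label_stats(png_bytes):
    """Report the Pillow mode and the unique pixel values of a label PNG."""
    im = Image.open(io.BytesIO(png_bytes))
    return im.mode, sorted(np.unique(np.asarray(im)).tolist())

# build a tiny single-channel label image: mode "L", one byte per pixel,
# pixel values = zero-indexed class labels
arr = np.array([[0, 1], [1, 0]], dtype=np.uint8)
buf = io.BytesIO()
Image.fromarray(arr, mode="L").save(buf, format="PNG")

mode, values = label_stats(buf.getvalue())
# mode == "L", values == [0, 1]
```

Run against one of your own annotation files, a mode other than "L", or values outside the expected class range, points at the label preparation as the culprit.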

isn4 commented 7 years ago

Hello, I ran into the "Check failed: status == CUBLAS_STATUS_SUCCESS (11 vs. 0) CUBLAS_STATUS_MAPPING_ERROR" error as well. I've looked around at other submitted issues and can only find the advice to set num_output in my conv1_1_D layer to 2 and to make sure my SoftmaxWithLoss layer has only 2 class weightings. I've followed all of this advice but am still running into the issue. My net is unchanged from the original SegNet version, save for the conv1_1_D and SoftmaxWithLoss layers:

```
layer {
  bottom: "conv1_2_D"
  top: "conv1_1_D"
  name: "conv1_1_D"
  type: "Convolution"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    weight_filler { type: "msra" }
    bias_filler { type: "constant" }
    num_output: 2
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "conv1_1_D"
  bottom: "label"
  top: "loss"
  softmax_param { engine: CAFFE }
  loss_param {
    weight_by_label_freqs: true
    class_weighting: 0
    class_weighting: 1
  }
}
```

My images are the same size (1, 3, 360, 480) and the same count as the original SegNet data (367). In addition, I am using 3-channel RGB images and 1-channel grayscale labels. I would really appreciate some help!
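As an aside on the class_weighting values in loss_param: the SegNet tutorial computes them with median-frequency balancing. A sketch of that scheme, assuming the labels are available as uint8 arrays (`median_freq_weights` is a hypothetical helper, not part of the repo):

```python
import numpy as np

def median_freq_weights(label_images, num_classes):
    """Median-frequency class balancing: weight(c) = median_freq / freq(c),
    where freq(c) is the pixel count of class c divided by the total pixel
    count of the images in which class c appears."""
    pixel_counts = np.zeros(num_classes)
    image_pixels = np.zeros(num_classes)  # pixels of images containing class c
    for label in label_images:
        counts = np.bincount(label.ravel(), minlength=num_classes)
        present = counts > 0
        pixel_counts += counts
        image_pixels[present] += label.size
    freq = pixel_counts / image_pixels
    return np.median(freq) / freq

# two tiny 2-class label images, heavily skewed toward class 0
labels = [np.zeros((4, 4), dtype=np.uint8) for _ in range(2)]
labels[0][0, 0] = 1
labels[1][0, :2] = 1
weights = median_freq_weights(labels, 2)
# the rare class (1) gets a weight > 1, the common class (0) a weight < 1
```

Note also that a class_weighting of 0 means class 0 contributes nothing to the loss, which may or may not be intended here.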

Jorisfournel commented 7 years ago

Exact same problem for me

isn4 commented 7 years ago

Additionally, I initially thought the issue was that I cannot truly force PNG files to have only a single channel, since the alpha channel still exists even when it is turned off. So I converted my PNG files into grayscale JPG files and tried the net with those, but because JPGs use lossy compression, the pixel values always contain extra classes (i.e., more variation in the pixel values than just the 0 and 255 I want, and which the PNG files had).
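One way to get a genuinely single-channel PNG, avoiding both the RGBA export and JPG compression, is to let Pillow write the file in mode "L". A sketch (assumes Pillow; note that `convert("L")` applies a luma transform, so for class-index labels it is safer to build the "L" image directly from the index array, as here):

```python
import io

import numpy as np
from PIL import Image

# an RGBA image like the ones Photoshop exports: the alpha channel
# is present even when every pixel is fully opaque
arr = np.zeros((2, 2, 4), dtype=np.uint8)
arr[..., 3] = 255  # opaque alpha
rgba = Image.fromarray(arr, mode="RGBA")
assert rgba.mode == "RGBA"

# build the label image directly from the index array in mode "L":
# one channel, no alpha, and PNG keeps it lossless
indices = np.zeros((2, 2), dtype=np.uint8)  # class indices, not colours
buf = io.BytesIO()
Image.fromarray(indices, mode="L").save(buf, format="PNG")

reloaded = Image.open(io.BytesIO(buf.getvalue()))
# reloaded.mode == "L": single channel, no alpha channel at all
```

This sidesteps the "fake grayscale" problem: the saved file really is one channel, rather than sRGB with an unused alpha plane.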

@alexgkendall How were you able to get the PNG files in the original SegNet to pass through with their implicit alpha channels? Any conversion to grayscale I make only "fakes" grayscale and gives me sRGB files whose "turned off" alpha channels still exist.

I would really appreciate the help! Thank you!

p-kleczek commented 7 years ago

I had a similar problem after messing with the class numbers. After I deleted all snapshot files for that model and re-ran the solver, everything worked fine again.

isn4 commented 7 years ago

Thanks for the advice! I'm back to using PNGs (because of the class-number issues with JPGs), and I've just re-run the net after deleting any previously existing snapshots. My current model actually did not have any snapshots, so I copied all snapshots from my other SegNet models to my local machine and deleted the ones on the machine I'm using to run SegNet. Still no luck; I am still getting the same cuBLAS error.

isn4 commented 7 years ago

Additionally, I've opened a new issue, since in digging through the issues section of this repo I have not yet seen a solution for a problem like mine: #101

surifans commented 6 years ago

@isn4 Have you solved this problem yet?