MiguelMonteiro / permutohedral_lattice

Permutohedral Lattice C++/CUDA implementation + TensorFlow Op (CPU/GPU)

SPATIAL_DIMS, INPUT_CHANNELS and REFERENCE_CHANNELS setting #12

Closed. XYZ-916 closed this issue 5 years ago.

XYZ-916 commented 5 years ago

Thanks for sharing your source code. I'm doing medical image segmentation, and I'd like to add a CRF-RNN layer to the end of a U-Net model. My input data is 3D MRA images and the ground truth contains only one label. Is it correct to set SPATIAL_DIMS=3, INPUT_CHANNELS=3 and REFERENCE_CHANNELS=3?

Looking forward to your reply. Thanks!

MiguelMonteiro commented 5 years ago

It should be:

SPATIAL_DIMS=3, INPUT_CHANNELS=NUM_CLASSES and REFERENCE_CHANNELS=3

NUM_CLASSES should be 2 if using a softmax in the classification layer or 1 if using a sigmoid.
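
For a concrete picture of how those settings map onto tensor shapes, here is a minimal sketch of the 3D case, assuming the op is loaded through the repository's `lattice_filter_op_loader` and exposed as `lattice_filter` with `bilateral`/`theta` keyword arguments, as in the README; the batch size, spatial extents and theta values are illustrative placeholders, not recommendations.

```python
import tensorflow as tf
import lattice_filter_op_loader

module = lattice_filter_op_loader.module

NUM_CLASSES = 2  # softmax over {background, foreground}

# Unaries from the network: batch x depth x height x width x NUM_CLASSES,
# so INPUT_CHANNELS = NUM_CLASSES = 2.
unaries = tf.placeholder(tf.float32, [1, 64, 64, 64, NUM_CLASSES])

# Reference volume guiding the bilateral kernel: 3 channels gives
# REFERENCE_CHANNELS = 3; the 3 spatial axes give SPATIAL_DIMS = 3.
reference = tf.placeholder(tf.float32, [1, 64, 64, 64, 3])

output = module.lattice_filter(unaries, reference,
                               bilateral=True,
                               theta_alpha=8.0,   # spatial standard deviation (example value)
                               theta_beta=0.125)  # reference-channel standard deviation (example value)
```

Note that SPATIAL_DIMS, INPUT_CHANNELS and REFERENCE_CHANNELS are fixed at compile time, so the shapes above have to match whatever the op was built with.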

XYZ-916 commented 5 years ago

Got it. Thanks!

zxpeter commented 5 years ago

Hi, I set SPATIAL_DIMS=2 (2D images), INPUT_CHANNELS=2 (softmax) and REFERENCE_CHANNELS=3 (3-channel images) in my scripts, but I still get errors like the ones below:


2019-05-15 00:34:17.253496: E tensorflow/stream_executor/cuda/cuda_dnn.cc:332] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2019-05-15 00:34:17.584412: E tensorflow/stream_executor/cuda/cuda_driver.cc:903] failed to allocate 3.85G (4132822528 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-05-15 00:34:17.584468: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_3_bfc) ran out of memory trying to allocate 3.63GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
*** Received signal 11 ***
*** BEGIN MANGLED STACK TRACE ***
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x692cbb)[0x7fc08f934cbb]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7fc0cbb60390]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow14LaunchConv2DOpIN5Eigen9GpuDeviceEfEclEPNS_15OpKernelContextEbbRKNS_6TensorES8_iiiiRKNS_7PaddingEPS6_NS_12TensorFormatE+0x13e6)[0x7fc094c23656]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so(_ZN10tensorflow8Conv2DOpIN5Eigen9GpuDeviceEfE7ComputeEPNS_15OpKernelContextE+0x3ec)[0x7fc094c28d5c]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN10tensorflow13BaseGPUDevice13ComputeHelperEPNS_8OpKernelEPNS_15OpKernelContextE+0x37d)[0x7fc08f8623dd]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN10tensorflow13BaseGPUDevice7ComputeEPNS_8OpKernelEPNS_15OpKernelContextE+0x8d)[0x7fc08f8628fd]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x60d08d)[0x7fc08f8af08d]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(+0x60d89a)[0x7fc08f8af89a]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZN5Eigen26NonBlockingThreadPoolTemplIN10tensorflow6thread16EigenEnvironmentEE10WorkerLoopEi+0x21a)[0x7fc08f90de2a]
/home/guan/anaconda3/lib/python3.6/site-packages/tensorflow/python/../libtensorflow_framework.so(_ZNSt17_Function_handlerIFvvEZN10tensorflow6thread16EigenEnvironment12CreateThreadESt8functionIS0_EEUlvE_E9_M_invokeERKSt9_Any_data+0x32)[0x7fc08f90ced2]
/home/guan/anaconda3/bin/../lib/libstdc++.so.6(+0xafc5c)[0x7fc0b9395c5c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7fc0cbb566ba]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fc0cb88c41d]
*** END MANGLED STACK TRACE ***

*** Begin stack trace ***
    tensorflow::CurrentStackTrace()

    tensorflow::LaunchConv2DOp<Eigen::GpuDevice, float>::operator()(tensorflow::OpKernelContext*, bool, bool, tensorflow::Tensor const&, tensorflow::Tensor const&, int, int, int, int, tensorflow::Padding const&, tensorflow::Tensor*, tensorflow::TensorFormat)
    tensorflow::Conv2DOp<Eigen::GpuDevice, float>::Compute(tensorflow::OpKernelContext*)
    tensorflow::BaseGPUDevice::ComputeHelper(tensorflow::OpKernel*, tensorflow::OpKernelContext*)
    tensorflow::BaseGPUDevice::Compute(tensorflow::OpKernel*, tensorflow::OpKernelContext*)

    Eigen::NonBlockingThreadPoolTempl<tensorflow::thread::EigenEnvironment>::WorkerLoop(int)
    std::_Function_handler<void (), tensorflow::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&)

    clone
*** End stack trace ***
Aborted (core dumped)

Any help would be appreciated. Thanks for your time.
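
On the crash itself: CUDNN_STATUS_INTERNAL_ERROR appearing together with CUDA_ERROR_OUT_OF_MEMORY usually points to the GPU running out of memory rather than to the lattice settings. A minimal sketch of one common TensorFlow 1.x mitigation, letting the allocator grow on demand instead of reserving all GPU memory up front (session setup only; the model graph is assumed to exist elsewhere):

```python
import tensorflow as tf

# Let the BFC allocator grow on demand instead of grabbing all GPU
# memory at startup; this often avoids the cuDNN handle failure.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Optionally cap this process's share of GPU memory (0.9 is an example).
config.gpu_options.per_process_gpu_memory_fraction = 0.9

with tf.Session(config=config) as sess:
    pass  # build and run the model here
```

If the out-of-memory error persists even with memory growth enabled, reducing the batch size or the spatial crop size is the usual next step.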