Closed: Madgeeno closed this issue 7 years ago.
Hello, Have you looked at CIFAR-10 and ImageNet examples? For instance, take a look at 01_Conv.cntk sample here. Refer to readme.txt in that folder to get more information and instructions. Hope this helps.
Note that Convolution currently relies on the cuDNN library's cudnnConvolutionForward() function, i.e. it only supports tensor shapes of 2D planes with multiple channels (such as 2D color images, or speech spectrograms with derivatives): convolving over the first 2 dimensions, reducing/removing the third dimension, and creating a new third dimension (feature map).
Tensor support in CNTK evolved in stages, starting from no tensor support (vectors only) via special-cased 3D tensors (2D-images with color planes) to general tensor support (TensorShape). Convolution was implemented for the special-cased 3D tensor version, and has not yet been fully adapted to arbitrary TensorShapes.
I'd love to see the next step taken towards allowing at least all TensorShapes that are supported by cuDNN, including convolving over 3 dimensions plus reducing over 1.
@frankseide thanks for your answer, I'd already looked into the Cifar10, but there the 3rd dimension is to support the color channel. I see that in ConvolutionEngine.h there is a ConvolutionTensor4D, which is I think what you mentioned already. Is there already a plan for releasing support of convolution on 5D (cudnn supported) tensor shapes?
Yes (…since yesterday, triggered by your feedback--thanks!). We will generalize how users define the Convolution operation, and initially implement all constellations that are available in cuDNN, while deprioritizing other configurations unless we have a use case (like sparse convolution for NLP). This will cover most use cases, including yours. It will be a few days, though, until we get to it.
Wow awesome, thanks again for the quick response and the re-prioritizing! I'll stay tuned.
Hi @frankseide, is there any news about ND Convolution & Pooling? Is there a dev branch that I could check out in the meanwhile? Thanks again!
Hello, I've started working on it but there is no branch that you can look at yet. If all goes well then I will have something by the end of this week. It won't be the fastest/prettiest code, of course, but it should give you an idea on how to use it/provide feedback. Thanks.
Thanks a lot @Alexey-Kamenev!
@Alexey-Kamenev @frankseide
Hi, I am wondering if 3D convolution is supported by CNTK now? For example, if I have a 3D input with the size of 10 by 10 by 10, can I indicate that the kernel size should be 5 by 5 by 5 (the kernel slides along x, y and z direction during the convolution)?
Thank you!
Hello, Yes, 3D (and ND in general) convolutions are supported. You should be able to use N-dimensional tensors in your BrainScript code when specifying convolution parameters. Note that in the case of 1D, 2D, and 3D convolutions CNTK will use the cuDNN implementation (fast); in all other cases it will use the reference engine (slow). You can verify which one is used by looking at the logs.
@Alexey-Kamenev
I tried the 3D convolution, but it failed with "EXCEPTION occurred: cuDNN could not find suitable algorithm for the current convolution configuration."
I pasted the output below. Should I call the reference engine myself? If so, how can I do it? Thank you very much!
##############################################################################
##############################################################################
CNTKCommandTrainBegin: train
LockDevice: Locked GPU 0 to test availability.
LockDevice: Unlocked GPU 0 after testing.
LockDevice: Locked GPU 0 for exclusive use.
NDLBuilder Using GPU 0
Creating virgin network.
Post-processing network...
6 roots:
    deconv1.act = RectifiedLinear()
    deconv1.b = LearnableParameter()
    deconv1.isd = LearnableParameter()
    deconv1.m = LearnableParameter()
    deconv1.sc = LearnableParameter()
    mse = SquareError()
Validating network. 20 nodes to process in pass 1.
Validating --> conv1.W = LearnableParameter() : -> [16 x 125]
Validating --> featScale = LearnableParameter() : -> [1 x 1]
Validating --> features = InputValue() : -> [32 x 36 x 11 x *]
Validating --> featScaled = ElementTimes (featScale, features) : [1 x 1], [32 x 36 x 11 x *] -> [32 x 36 x 11 x *]
Validating --> conv1.c = Convolution (conv1.W, featScaled) : [16 x 125], [32 x 36 x 11 x *] -> [32 x 36 x 176 x *]
Validating --> conv1.sc = LearnableParameter() : -> [16 x 1]
Validating --> conv1.b = LearnableParameter() : -> [16 x 1]
Validating --> conv1.m = LearnableParameter() : -> [16 x 1]
Validating --> conv1.isd = LearnableParameter() : -> [16 x 1]
Validating --> conv1.BN = BatchNormalization (conv1.c, conv1.sc, conv1.b, conv1.m, conv1.isd) : [32 x 36 x 176 x *], [16 x 1], [16 x 1], [16 x 1], [16 x 1] -> [32 x 36 x 176 x *]
Validating --> conv1.y = RectifiedLinear (conv1.BN) : [32 x 36 x 176 x *] -> [32 x 36 x 176 x *]
Validating --> deconv1.act = RectifiedLinear (conv1.y) : [32 x 36 x 176 x *] -> [32 x 36 x 176 x *]
Validating --> deconv1.b = LearnableParameter() : -> [16 x 1]
Validating --> deconv1.isd = LearnableParameter() : -> [16 x 1]
Validating --> deconv1.m = LearnableParameter() : -> [16 x 1]
Validating --> deconv1.sc = LearnableParameter() : -> [16 x 1]
Validating --> labels = InputValue() : -> [32 x 36 x 11 x *]
Validating --> deconv1.w.W = LearnableParameter() : -> [16 x 125]
Validating --> deconv1.out = Convolution (deconv1.w.W, conv1.y) : [16 x 125], [32 x 36 x 176 x *] -> [32 x 36 x 11 x *]
Validating --> mse = SquareError (labels, deconv1.out) : [32 x 36 x 11 x *], [32 x 36 x 11 x *] -> [1]
Validating network. 7 nodes to process in pass 2.
Validating network, final pass.
conv1.c: using cuDNN convolution engine for geometry: Input: 32 x 36 x 11, Output: 32 x 36 x 176, Kernel: 5 x 5 x 5, Map: 16, Stride: 1 x 1 x 1, Sharing: (1, 1, 1), AutoPad: (1), LowerPad: 0, UpperPad: 0.
Using CNTK batch normalization engine.
deconv1.out: using cuDNN convolution engine for geometry: Input: 32 x 36 x 11, Output: 32 x 36 x 176, Kernel: 5 x 5 x 5, Map: 16, Stride: 1 x 1 x 1, Sharing: (1, 1, 1), AutoPad: (1), LowerPad: 0, UpperPad: 0.
12 out of 20 nodes do not share the minibatch layout with the input data.
Post-processing network complete.
Created model with 20 nodes on GPU 0.
Training criterion node(s): mse = SquareError
Allocating matrices for forward and/or backward propagation.
Memory Sharing Structure:
0000000000000000: {[conv1.isd Gradient[16 x 1]] [conv1.m Gradient[16 x 1]] [deconv1.act Gradient[32 x 36 x 176 x *]] [deconv1.act Value[32 x 36 x 176 x *]] [deconv1.b Gradient[16 x 1]] [deconv1.isd Gradient[16 x 1]] [deconv1.m Gradient[16 x 1]] [deconv1.sc Gradient[16 x 1]] [featScale Gradient[1 x 1]] [featScaled Gradient[32 x 36 x 11 x *]] [features Gradient[32 x 36 x 11 x *]] [labels Gradient[32 x 36 x 11 x *]] }
000000001C18F150: {[labels Value[32 x 36 x 11 x *]] }
000000001C18F290: {[featScale Value[1 x 1]] }
000000001C18F470: {[conv1.W Value[16 x 125]] }
000000001C18F5B0: {[conv1.b Value[16 x 1]] }
000000001C18F6F0: {[conv1.sc Value[16 x 1]] }
000000001C18F830: {[conv1.m Value[16 x 1]] }
000000001C18F970: {[conv1.isd Value[16 x 1]] }
000000001C18FC90: {[deconv1.b Value[16 x 1]] }
000000001C18FDD0: {[deconv1.sc Value[16 x 1]] }
000000001C18FF10: {[deconv1.m Value[16 x 1]] }
000000001C190050: {[deconv1.isd Value[16 x 1]] }
000000001C190190: {[deconv1.w.W Value[16 x 125]] }
000000001E7839A0: {[features Value[32 x 36 x 11 x *]] }
000000002E30CE80: {[mse Value[1]] }
000000002E30CF20: {[featScaled Value[32 x 36 x 11 x *]] }
000000002E30CFC0: {[conv1.c Value[32 x 36 x 176 x *]] }
000000002E30D100: {[conv1.BN Value[32 x 36 x 176 x *]] }
000000002E30D2E0: {[conv1.c Gradient[32 x 36 x 176 x *]] [conv1.y Value[32 x 36 x 176 x *]] }
000000002E30D380: {[deconv1.out Value[32 x 36 x 11 x *]] }
000000002E30D420: {[conv1.BN Gradient[32 x 36 x 176 x *]] }
000000002E30D560: {[mse Gradient[1]] }
000000002E30D600: {[conv1.W Gradient[16 x 125]] [deconv1.out Gradient[32 x 36 x 11 x *]] }
000000002E30D6A0: {[deconv1.w.W Gradient[16 x 125]] }
000000002E30D740: {[conv1.sc Gradient[16 x 1]] [conv1.y Gradient[32 x 36 x 176 x *]] }
000000002E30D7E0: {[conv1.b Gradient[16 x 1]] }
No PreCompute nodes found, skipping PreCompute step.
Starting Epoch 1: learning rate per sample = 0.001000  effective momentum = 0.900000  momentum as time constant = 9.5 samples
BlockRandomizer::StartEpoch: epoch 0: frames [0..1] (first sequence at sample 0), data subset 0 of 1
Starting minibatch loop.
[CALL STACK]
Microsoft::MSR::CNTK::CuDnnConvolutionEngine::ForwardCore
- Microsoft::MSR::CNTK::ConvolutionNode::ForwardProp
- Microsoft::MSR::CNTK::ComputationNetwork::PARTraversalFlowControlNode::ForwardProp
- Microsoft::MSR::CNTK::ComputationNetwork::ForwardProp
- Microsoft::MSR::CNTK::SGD::TrainOneEpoch
- Microsoft::MSR::CNTK::SGD::TrainOrAdaptModel
- Microsoft::MSR::CNTK::SGD::Train
- DoTrain<Microsoft::MSR::CNTK::ConfigParameters,float>
- DoCommands
- wmainOldCNTKConfig
- wmain1
- wmain
- __tmainCRTStartup
- BaseThreadInitThunk
- RtlUserThreadStart
EXCEPTION occurred: cuDNN could not find suitable algorithm for the current convolution configuration.
cuDNN does not support all combinations of tensor dimensions and layouts. Unfortunately, the error message does not tell you which node is the problematic one.
Are you compiling yourself from source? Then you could try to set a breakpoint into this error message. By going up the call stack, we should be able to find the name of the node. We can then look up its specific dimensions in the Validation output to understand what the problem is.
The solution would then be to either try to reinterpret the problem to fit into cuDNN's constraints, or to switch to the reference engine, but for this specific node only (keep using cuDNN where it works).
On our side, we should add some code to catch more of the lower-level exceptions and augment them with information on which node they occurred.
Actually, there are only two, should have checked first:
conv1.c: using cuDNN convolution engine for geometry: Input: 32 x 36 x 11, Output: 32 x 36 x 176, Kernel: 5 x 5 x 5, Map: 16, Stride: 1 x 1 x 1, Sharing: (1, 1, 1), AutoPad: (1), LowerPad: 0, UpperPad: 0.
deconv1.out: using cuDNN convolution engine for geometry: Input: 32 x 36 x 11, Output: 32 x 36 x 176, Kernel: 5 x 5 x 5, Map: 16, Stride: 1 x 1 x 1, Sharing: (1, 1, 1), AutoPad: (1), LowerPad: 0, UpperPad: 0.
Let me loop in @Alexey-Kamenev. Alexey, can you see what combination is not supported?
AFAIR, you need to use 4D tensors if you want to use 3D convolutions; otherwise cuDNN tensor objects will be created in a format that is not supported by cuDNN. Basically, the problem with cuDNN is that it requires that the last (e.g. "input map") dimension of the input be equal to the corresponding dimension of the kernel. For example, for 2D convolutions, if the input is 5x5x3 (WHC notation), then the last dimension of the kernel must be equal to 3. The same holds for 3D convolutions. Try experimenting first with just one layer to understand whether this works.
Edit: for 4D tensors, just set the last dimension to 1 to make it work with cuDNN.
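The constraint described here can be sketched as plain numpy shape checks (illustrative only; these arrays and names are not CNTK code, just bookkeeping for the rule that the last "input map" dimension of input and kernel must match):

```python
import numpy as np

# 2D convolution over a color image: input is W x H x C, and the kernel's
# last dimension must equal C.
inp_2d = np.zeros((5, 5, 3))        # 5x5 image with 3 channels
ker_2d = np.zeros((2, 2, 3))        # last dimension must equal C = 3
assert inp_2d.shape[-1] == ker_2d.shape[-1]

# For a 3D convolution, use a 4D tensor and set the trailing dimension to 1,
# so that input and kernel again agree in their last dimension.
inp_3d = np.zeros((32, 36, 11, 1))  # W x H x D x 1
ker_3d = np.zeros((5, 5, 5, 1))     # 5x5x5 kernel, trailing dimension 1
assert inp_3d.shape[-1] == ker_3d.shape[-1]
```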
@Alexey-Kamenev @frankseide
Thank you for your quick reply. I tried to use 4D tensors and a 3D kernel of size 5 by 5 by 5. I used the Convolution function as below (kW = kH = kD = 5, hStride = vStride = dStride = 1):
c = Convolution (W, inp, {kW, kH, kD, 1}, mapCount=outMap, stride={hStride,vStride,dStride,1}, sharing={true,true,true,true}, autoPadding={true,true,true,true}, lowerPad={0, 0, 0, 0}, upperPad={0, 0, 0, 0}, imageLayout="cudnn")
The output shows: "Convolution operation currently only supports 1D or 2D convolution on 3D tensors." I pasted the output below. The version I compiled was cloned in July.
<<<<<<<<<<<<<<<<<<<< PROCESSED CONFIG WITH ALL VARIABLES RESOLVED <<<<<<<<<<<<<<<<<<<<
Commands: train output test
Precision = "float"
CNTKModelPath: ../Output/Models/04_DeConv
CNTKCommandTrainInfo: train : 100
CNTKCommandTrainInfo: CNTKNoMoreCommands_Total : 100
##############################################################################
##############################################################################
CNTKCommandTrainBegin: train
LockDevice: Locked GPU 0 to test availability.
LockDevice: Unlocked GPU 0 after testing.
LockDevice: Locked GPU 0 for exclusive use.
NDLBuilder Using GPU 0
Creating virgin network.
Post-processing network...
2 roots:
    labels = InputValue()
    mse = SquareError()
Validating network. 13 nodes to process in pass 1.
Validating --> labels = InputValue() : -> [32 x 36 x 11 x 1 x *]
Validating --> conv1.W = LearnableParameter() : -> [16 x 125]
Validating --> featScale = LearnableParameter() : -> [1 x 1]
Validating --> features = InputValue() : -> [32 x 36 x 11 x 1 x *]
Validating --> featScaled = ElementTimes (featScale, features) : [1 x 1], [32 x 36 x 11 x 1 x *] -> [32 x 36 x 11 x 1 x *]
Validating --> conv1.c = Convolution (conv1.W, featScaled) : [16 x 125], [32 x 36 x 11 x 1 x *] -> [] FAILED
[CALL STACK]
Microsoft::MSR::CNTK::ComputationNetwork::ValidateNode
- Microsoft::MSR::CNTK::ComputationNetwork::ValidateNodes
- Microsoft::MSR::CNTK::ComputationNetwork::ValidateNetwork
- Microsoft::MSR::CNTK::ComputationNetwork::CompileNetwork
- Microsoft::MSR::CNTK::NDLBuilder::LoadFromConfig
- Microsoft::MSR::CNTK::NDLBuilder::LoadNetworkFromConfig
- Microsoft::MSR::CNTK::NDLBuilder::BuildNetworkFromDescription::operator()
- std::_Callable_obj<lambda_dc2d47120f78fb62d722aabb9c05d39f,0>::_ApplyX
- std::_Func_impl<std::_Callable_obj<lambda_dc2d47120f78fb62d722aabb9c05d39f,0>>::_Do_call
- std::_Func_class<std::shared_ptr<Microsoft::MSR::CNTK::ComputationNetwork>>::operator()
- Microsoft::MSR::CNTK::SGD::Train
- DoTrain<Microsoft::MSR::CNTK::ConfigParameters,float>
- DoCommands
- wmainOldCNTKConfig
- wmain1
EXCEPTION occurred: Convolution operation currently only supports 1D or 2D convolution on 3D tensors.
Can anybody help with this?
Hi there, I used the trained weights of a CNN in the same network structure in MATLAB, but the result is different from what CNTK eval shows. The Matlab code is OK, because I trained the same network with an eval error near that of my CNTK program. Are there any tips for migrating learned weights from CNTK to MATLAB?
Hi again, I found more details. For just one convolution neuron, the output of the convolution (i.e. conv1.c.c) is the same as what I calculate by hand. But when I use 3 convolution neurons, the result is not what I calculate.
Here are my values, with zero-padding convolution:
Input value: 1 0.5 2 -0.5
Weights of the 3 neurons (conv1.w.W): 0.0007 -0.0018 0.0003 0.0015 0.0010 0.0024
Convolution result of CNTK (i.e. conv1.c.c): -0.0010 0.0021 -0.0043 0.0028 0.0027 0.0044 0.0008
But if you calculate the convolution of [1 0.5 2 -0.5] and [0.0007 -0.0018] with zero padding, you will see a result like: -0.0014 0.0006 -0.0039
Does CNTK calculate differently? I've read the PDF file but I could not find my answer.
I'm really sorry for pinging you, @frankseide; I was wondering if you could help me, please?
Is it possible that the weights are arranged differently? CNTK currently does something confusing:

The weight dimensions are specified as a matrix:

Validating --> conv1.W = LearnableParameter() : -> [16 x 125]

but the tensor dimensions are actually different. Say the convolution has spatial extent (3:5), an input feature-map depth of 7, and an output feature-map depth of 16. Then the weight would be specified as [16 x 105] (105 = 3 * 5 * 7), but the actual layout of the kernel parameters is [3 x 5 x 7 x 16]. Notice that the 16 is at the other end.

This confusing configuration had to do with the transition from our older implementation of convolution to the new one based on NVidia's design and memory layout.
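For readers mapping this to code, the layout mismatch can be sketched in numpy (illustrative only; the spatial extent 3:5, depth 7, and 16 output maps are the example numbers above, and order='F' models the column-major storage described, not any CNTK API):

```python
import numpy as np

# A buffer of 16 * 105 kernel parameters, reported by CNTK as a [16 x 105]
# matrix, but actually laid out as a column-major [3 x 5 x 7 x 16] tensor
# (105 = 3 * 5 * 7).
flat = np.arange(16 * 105, dtype=np.float32)

# order='F' is numpy's column-major (Fortran) interpretation: the first
# axis (spatial width 3) varies fastest, the output-map axis 16 slowest.
kernel = flat.reshape((3, 5, 7, 16), order='F')

assert kernel.shape == (3, 5, 7, 16)
assert kernel[1, 0, 0, 0] == flat[1]          # first axis is fastest-changing
assert kernel[0, 0, 0, 1] == flat[3 * 5 * 7]  # the 16-axis is slowest
```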
@pinzhang10, I have forwarded your Issue internally. I hope there will be a resolution soon.
Yes @frankseide, with imageLayout="cudnn" I could not find any ordering of W that matches the convolution result in CNTK. But when I changed imageLayout="HWC" and rotated the input, I could reach the same results as CNTK in a different order, so I can reorder the results now. Thank you very much.
Just to help other users, I'll explain what I did: I am running CNTK in CPU mode, but I used the cuDNN type of imageLayout. Maybe the problem was my configuration.
Great. Reasonably recent versions of CNTK are able to do cudnn-type convolution on the CPU. This should not be a problem if you, e.g., work with version 1.6.
Let me add one thing: our cudnn tensor dimension is [w x h x C x K], which is called KCHW by NVidia. The order might be confusing: CNTK uses column-major storage, so w is the fastest-changing index.

If I find time, I will add an output format for exporting model parameters, so that tensors with rank > 2 can be loaded easily and unambiguously into Matlab and numpy.
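As a sanity check of the naming, a small numpy sketch (illustrative, not CNTK code) shows that a column-major [w x h x C x K] tensor and NVidia's row-major KCHW view describe the same memory:

```python
import numpy as np

w, h, C, K = 3, 3, 3, 16
buf = np.arange(w * h * C * K, dtype=np.float32)

# CNTK's view: column-major [w x h x C x K]; w is the fastest-changing index.
whck = buf.reshape((w, h, C, K), order='F')
# NVidia's view of the same buffer: row-major KCHW; w is again fastest.
kchw = buf.reshape((K, C, h, w), order='C')

# The same element is addressed with the index order reversed:
assert whck[2, 1, 0, 5] == kchw[5, 0, 1, 2]
```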
Thank you very much.
With the recent 1.7 release, something like this should work:
input = Input {(640:480:256:3)} # a 3D image of 640 x 480 x 256 pixels, 3 color planes
convolved = ConvolutionalLayer {64, (2:4:5), pad=true} (input)
I.e., given an input of shape [640 x 480 x 256 x 3], this will give you a 3D convolution of width 2, height 4, and depth 5, which will reduce over the 3 color channels (inferred from the input) and produce a [640 x 480 x 256 x 64] output (assuming padding; a little less if you leave out pad=true, which I added here only so that I don't have to do the math).

The learned filter parameter will have the tensor shape [2 x 4 x 5 x 3 x 64].
I have not tested it, but it is supposed to work. If not, please let us know and we will make it so.
To load it in Matlab, I presume you are using the dumpNodes action?
Thanks @frankseide. Yes, I am using dumpNodes; I will try it soon and tell you about the result. Thank you again for your reply.
OK, looking at this a little more.
Below is an example. It says the weights are [16 x 27], but that is wrong. From the preceding Convolution node, you can see that the true memory layout is that of a [3 x 3 x 3 x 16] column-major rank-4 tensor. The incorrect reporting of the dimensions is a left-over of our transition from matrices to ND-tensors. My colleague @cha-zhang is looking to fix it these days. To interpret this in Matlab, follow these steps:

1. Read the [16 x 27] matrix; that is, 16 text rows of 27 values each.
2. Reinterpret it as a [3 x 3 x 3 x 16] tensor from column-major format, using the appropriate Matlab function.

Looking at http://www.mathworks.com/help/matlab/ref/reshape.html, it looks like this could do it:
mat = ... read the 16 x 27 matrix
kernel = reshape (mat, [3, 3, 3, 16])
Could you let me know if this works? If it does, I will turn it into a How Do I... article.
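For anyone doing the same in numpy instead of Matlab, here is an equivalent sketch with synthetic data (Matlab's reshape consumes elements column by column, which corresponds to numpy's order='F'; the values are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for the dumped [16 x 27] weight matrix.
mat = np.arange(16 * 27, dtype=np.float32).reshape(16, 27)

# Matlab: kernel = reshape(mat, [3, 3, 3, 16])
# Matlab reshapes in column-major element order, i.e. numpy's order='F'.
kernel = np.reshape(mat, (3, 3, 3, 16), order='F')

assert kernel.shape == (3, 3, 3, 16)
assert kernel[1, 0, 0, 0] == mat[1, 0]  # elements are consumed column-first
```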
Example output from dump function (ImageHandsOn tutorial):
command = DumpWeights
DumpWeights = {
action = "dumpNodes"
printValues = true
outputFile = "params"
}
z.conv1._.c.c=Convolution ( z.conv1._.c.W , features ) Geometry: Input: 32 x 32 x 3, Output: 32 x 32 x 16, Kernel: 3 x 3 x 3, Map: 16, Stride: 1 x 1 x 3, Sharing: (1, 1, 1), AutoPad: (1, 1, 0), LowerPad: 0 x 0 x 0, UpperPad: 0 x 0 x 0
PoolKind: 0
z.conv1._.c.W=LearnableParameter [16,27] learningRateMultiplier=1.000000 NeedsGradient=true
0.0157501176 -0.860115051 -0.420461059 0.184413835 0.165769458 -0.0704387128 0.143727541 0.663995028 0.0985739678 -0.145760402 0.0414609201 0.00811105408 -0.0574883558 -0.72851783 -0.461660236 -0.145530581 -0.119359665 0.0955614224 0.319168836 -0.675411463 -0.0161762517 -0.186958075 0.398761719 -0.178651974 -0.242337719 -0.228404745 -0.0718713999
0.026802497 -0.396901131 0.118051335 -1.1126709 0.0839792117 0.40464741 -0.164676726 0.168620661 -0.329212427 -0.36933887 -0.249475464 -0.136266276 0.121189788 -0.40939182 0.0925182775 0.0592569932 0.0813576803 -0.321192086 1.32024837 -0.0293861255 0.88045615 -0.0545007586 0.325988054 -0.0201885384 -0.182608709 0.784401536 -0.54084444
0.117089391 -0.0967067331 0.162152007 0.716205001 0.0885873288 0.296000093 0.0208391938 0.0289477594 0.256956011 -0.228554174 0.025042627 0.0203923658 -0.0413902178 -0.262424737 0.355471909 -0.116440095 -0.281883061 -0.287363499 0.618243515 0.125785679 0.585641801 -0.0174166448 -0.0468123928 0.184540167 0.356243342 -0.404688746 -0.261396796
0.115876533 0.947685897 -0.550110042 -0.374442875 0.165412277 0.629746675 -0.465700686 0.189849705 0.174356565 -0.00399743579 0.0310268458 -0.0191452783 0.0842853114 -0.349275589 0.118302323 0.00360818696 -0.40440312 -0.381308109 -0.166637585 0.193098664 0.400664806 -0.22834155 0.258306444 -0.11852771 -0.378991485 -0.33854416 0.0241906475
-0.0914328545 0.602705121 -0.125772551 0.574328303 0.242300019 -0.330276459 0.342241853 -0.177458778 -0.308691323 -0.873078346 0.0927017331 -0.203330636 0.504462421 -0.278811097 0.481920928 0.259308636 -0.101258315 0.106481798 -0.261603922 0.600840926 0.193352491 -0.417758763 0.378738463 0.33364746 -0.132617772 0.452952951 -0.0298827421
1.09360182 0.410583526 -0.0638091341 0.202729002 0.0299155787 -1.00142181 -0.0932875127 0.0198311117 -0.128946215 -0.43900758 -0.0228358731 -0.210377052 0.196559921 0.0235394705 1.04615176 0.0679060146 0.183692619 -0.11196889 -0.318541557 -0.119220048 0.691305399 -0.223955527 -0.3129327 0.785258412 0.58898133 -0.25170365 -0.127307996
0.17044273 0.411963284 -0.124229938 0.0344633609 -0.0223480072 -0.079411447 0.33680138 0.0832752958 -0.21050556 -0.175260857 -0.000189467799 -0.0759015158 -0.142501935 0.348410696 0.202317521 -0.0226598624 0.478670537 0.0534718856 -0.00416547619 -0.515380621 0.149697378 -0.160944149 -0.0642008036 0.287780493 -0.766812801 -0.0401799195 -0.0357917659
0.6915856 1.18763411 0.300425678 0.197164014 0.0807010606 -0.258339137 1.89866185 0.0764647201 0.0594451614 -0.30393824 0.0365866758 -0.246141851 -0.422685444 0.222216204 0.0855859444 0.056895148 0.34908691 0.290904015 -0.744310021 1.27318692 0.780592918 -0.377020657 0.0322119445 0.0241557583 0.0651954263 -0.0198381115 -0.213266298
0.903982103 -0.188329503 -2.05247378 0.169380128 -0.142315894 -1.0847764 0.733656883 -0.05437636 0.00911689084 -0.0445301794 0.0768989101 -0.28116554 -0.223907351 -0.0794062987 0.277647465 0.0165914427 -0.201382771 1.0775696 -0.293075442 -0.246304929 1.36910415 -0.0931488574 -0.293941081 0.576172888 1.37042367 -0.565919101 0.072236225
-0.00776619371 -0.267675728 -0.0717607364 -0.00102484087 -0.115932293 -0.175513357 0.0860847607 0.572124958 -0.151333362 0.114214942 0.200314105 -0.11283002 -0.140601322 -0.0978299081 -0.109699361 -0.135254592 -0.139024302 0.471078008 0.223655015 -0.383629024 0.629811943 -0.542057157 -0.288001001 0.0791091248 -0.343449175 -0.131568491 0.218239129
-0.978553414 -0.296215564 -0.053718172 0.12860474 -0.201244205 -0.055716794 1.10462546 0.219417304 0.305518121 0.321450531 0.0215901751 -0.272698015 0.0603749715 -0.0112145972 -0.426112711 -0.180666521 0.183922157 -0.546367884 0.250270456 -0.0471501164 0.417789787 -1.1236459 -0.413230002 -0.411594242 -0.123818755 0.0997927561 -0.25899604
-0.847378373 0.0993302539 0.807956278 0.265067995 -0.0423706733 -0.526727498 0.251213044 -0.119113728 -0.340824127 0.0177993476 -0.137095794 -0.184635743 -0.113358736 0.0365326405 -0.217905283 -0.0959962308 -0.391200989 -0.872437119 0.111155339 0.121859431 0.763428569 -0.54044652 -0.0639858022 -0.279422045 0.686879516 -0.143091425 0.274183482
-0.300504625 0.561541259 0.385584056 -0.091737017 -0.0460590944 -0.501631856 -0.0225009881 0.0384473056 -0.256063133 0.62254262 -0.0798995495 -0.207561404 0.0290087145 0.221440911 -0.169314265 0.0518547595 -0.50748992 -0.643916249 -0.100791514 -1.02676785 0.31934768 -0.151606217 0.0484443344 0.276802778 -0.0284918007 -0.0573294498 0.397558689
-2.00445533 -0.0962456465 -0.465574652 -0.0747030824 -0.0178339668 -0.226695821 0.135969654 -0.193481177 0.0877846852 0.0232499298 0.0685779378 -0.0675867647 0.537427008 -0.112598971 -0.150455922 0.401509851 0.00385506894 0.0216568485 -0.420701295 -0.766865253 -0.196949273 -0.661984682 -0.170419291 -0.693730533 -0.255214274 0.0229086857 0.189044684
-0.322442591 0.941789031 0.677446306 0.105484404 -0.0731944814 -0.625114799 -0.0192175973 -0.157218918 0.0105529996 -0.236226723 -0.195975333 -0.30669266 0.211468175 -0.267766088 0.401424676 0.278463125 0.10634876 -0.726338863 0.53498131 0.235353008 0.0665663928 -0.00595653662 -0.0979649201 -0.135171458 0.122505687 -0.27630505 0.148772791
-0.0359792076 -0.509114981 0.278324038 0.0766433701 -0.129511923 -0.682911634 0.0833535418 -0.100735508 0.0157835186 0.15600574 -0.142763719 -0.185660794 -0.187388286 -0.398646384 -0.023907328 -0.183947563 0.684022725 -0.140375137 -0.194015756 -1.21623385 0.066768229 0.0446765423 0.0650166124 0.799849987 -0.57625407 -0.202898026 0.0323979706
I tested it and it works. I just must mention that rot90(rot90(W)) is needed at the convolution step in Matlab. This is the convolution code in Matlab: conv2(input, rot90(rot90(w(:,:,1))))
Thank you @frankseide very very much. I appreciate you.
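To illustrate why the double rot90 is needed (a generic sketch, not CNTK code): what CNN frameworks compute as "convolution" is mathematically cross-correlation, whereas Matlab's conv2 performs true convolution with a flipped kernel; rotating the kernel by 180 degrees converts one into the other:

```python
import numpy as np

def corr2_valid(x, k):
    """Cross-correlation ('valid' mode): what CNN 'convolution' computes."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2_valid(x, k):
    """True convolution ('valid' mode), like Matlab's conv2: kernel flipped 180 degrees."""
    return corr2_valid(x, np.rot90(k, 2))

x = np.arange(16.0).reshape(4, 4)
k = np.array([[0.1, 0.2], [0.3, 0.4]])

# Feeding conv2 the 180-degree-rotated kernel reproduces cross-correlation:
assert np.allclose(conv2_valid(x, np.rot90(k, 2)), corr2_valid(x, k))
```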
Sorry again, but the method you mentioned works only for the first Convolution layer. For the second Convolution layer I ran into a problem. For example, I have 2 neurons in the first Convolution layer; the output of the first neuron is: 0.0415 0.0138 0.00499 -0.0470
For the second layer I have only one Convolution neuron, with filter size 3. The output of DumpNode for W is a 1x6 vector, as below: -0.0138 0.0755 -0.0078 0.0554 0.0236 -0.0463
The Write command output for the Convolution action is below, and I cannot match these values. They must come from the 2 convolution results: 0.0028 0.0036 0.0026 0.0055
I tried many ways, like: reshape(w,[2,3,1]); reshape(w,[3,2,1]); rot90(rot90(reshape(w,[2,3,1]))); rot90(rot90(reshape(w,[3,2,1]))); and others.
@cha-zhang Please take this into account.
@frankseide I have tested the method you mentioned on previous networks, like 02_Convolution in https://github.com/Microsoft/CNTK/tree/master/Examples/Image/MNIST with 2 240 convolution layer, and it works fine; it's exactly what you said. In https://github.com/Microsoft/CNTK/wiki/Hands-On-Labs-Image-Recognition I ran into many problems, which I explained above. Another problem is that with the same structure as 02_Convolution, memory usage is about 16G without any progress in the epoch. So I think there is a problem, maybe a bug.
Thank you very much. You helped me so much.
I think the original issue and the several others raised in this discussion have been resolved. Closing this one; as usual, if this or similar problems arise in the most recent bits, please open a new issue.
Hi all, thanks again for releasing this very powerful toolkit. I read the docs and ran the MNIST example. In this example, "mnist_convert.py" reshapes the 2D digit images into feature vectors of length 28x28. How can I implement something similar for 3D datasets? Would it work out of the box if I transform my data into feature vectors of length NX x NY x NZ? Is 3D convolution already supported?