nimning opened this issue 6 years ago (status: open)
AdamUpdate does not support sparse data on CPU yet. Can you try training on a GPU?
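For reference, a minimal sketch of forcing GPU execution in CNTK (this assumes a CUDA-capable GPU and a GPU build of the package):

```python
import cntk as C

# Route all computation to the first GPU; the sparse Adam update is
# implemented there but not on CPU. Returns False (without raising)
# if the device cannot be claimed.
C.device.try_set_default_device(C.device.gpu(0))
```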
I would really appreciate having this work on CPU too. I'm hosting a machine learning course, many of my students only have CPUs, and none of the advanced network types work for them. RNNs, LSTMs, and 1D convnets all throw this exception on CPU.
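For what it's worth, here is a minimal sketch of the pattern that appears to trigger this on CPU (all names and dimensions are illustrative, not from our course code): a sparse one-hot sequence input feeding an embedding and LSTM, trained with the Adam learner.

```python
import cntk as C
import numpy as np

C.device.try_set_default_device(C.device.cpu())

vocab, emb_dim, hidden_dim, n_classes = 1000, 50, 25, 2

# A one-hot text input is sparse, so the embedding's gradient is sparse
# too, which routes Adam into the unimplemented CPU code path.
x = C.sequence.input_variable(vocab, is_sparse=True)
y = C.input_variable(n_classes)

model = C.layers.Sequential([
    C.layers.Embedding(emb_dim),
    C.layers.Recurrence(C.layers.LSTM(hidden_dim)),
    C.sequence.last,
    C.layers.Dense(n_classes),
])(x)

loss = C.cross_entropy_with_softmax(model, y)
lr = C.learning_rate_schedule(0.001, C.UnitType.minibatch)
learner = C.adam(model.parameters, lr=lr, momentum=C.momentum_schedule(0.9))
trainer = C.Trainer(model, (loss, C.classification_error(model, y)), [learner])

# One one-hot minibatch should be enough to hit the sparse Adam path.
xs = C.Value.one_hot([[1, 5, 9]], vocab)
trainer.train_minibatch({x: xs, y: np.array([[1, 0]], dtype=np.float32)})
```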
I am not using any batch normalization during training, but I still get the following error. The feature map is set up within the scope of the code. It works with the GPU version but not with the CPU version.
```
      8 dbg = (gain_ir / c.reduce_max(gain_ir)).eval().reshape(6)
      9 for i in range(0, 1000): #LS_C07 = 500, LS_C00 = 1000, SS_C07 = 1000, SS_C00 = 2000
---> 10     trainer.train_minibatch(mbs.next_minibatch(int(1000 * 10 * (0.0008 i)), input_map=feature_map))
     11     if i % 100 == 0:
     12         dbg = np.vstack((dbg, (gain_ir / c.reduce_max(gain_ir)).eval().reshape(6)))

C:\Users\ninm\AppData\Local\Continuum\Anaconda2\lib\site-packages\cntk\train\trainer.pyc in train_minibatch(self, arguments, outputs, device, is_sweep_end)
    179         if contains_minibatch_data:
    180             updated = super(Trainer, self).train_minibatch_overload_for_minibatchdata(
--> 181                 arguments, device)
    182         else:
    183             updated = super(Trainer, self).train_minibatch(arguments, is_sweep_end,

C:\Users\ninm\AppData\Local\Continuum\Anaconda2\lib\site-packages\cntk\cntk_py.pyc in train_minibatch_overload_for_minibatchdata(self, args)
   3022
   3023     def train_minibatch_overload_for_minibatchdata(self, args):
-> 3024         return _cntk_py.Trainer_train_minibatch_overload_for_minibatchdata(self, args)
   3025
   3026     def train_minibatch(self, args):

RuntimeError: Inside File: Matrix.cpp  Line: 1859  Function: Microsoft::MSR::CNTK::Matrix::AdamUpdate -> Feature Not Implemented.
[CALL STACK]
```
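Until the CPU sparse path lands, a common workaround is to sidestep `Matrix::AdamUpdate` by switching learners. Below is a minimal sketch under the assumption that `model`, `loss`, and `metric` are your existing graph nodes (plain SGD has no Adam-specific sparse kernel, though whether its own sparse CPU update is implemented depends on your build):

```python
import cntk as C

# `model`, `loss`, and `metric` are placeholders for your existing nodes.
lr = C.learning_rate_schedule(0.001, C.UnitType.minibatch)

# Plain SGD avoids the Matrix::AdamUpdate code path that raises
# "Feature Not Implemented" on CPU.
learner = C.sgd(model.parameters, lr=lr)
trainer = C.Trainer(model, (loss, metric), [learner])
```

Alternatively, declaring the input with `is_sparse=False` keeps the gradients dense and avoids sparse learner kernels altogether, at a memory and speed cost.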