Bartzi / see

Code for the AAAI 2018 publication "SEE: Towards Semi-Supervised End-to-End Scene Text Recognition"
GNU General Public License v3.0

TypeError: Argument 'x' has incorrect type #100

Closed wdon021 closed 3 years ago

wdon021 commented 3 years ago

Hi Bartzi, thank you for sharing the code. I am currently trying to get this model working in Google Colab, and I have encountered an error when running train_fsns.py:

```
/usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
    374                     f.write('Traceback (most recent call last):\n')
    375                     traceback.print_tb(sys.exc_info()[2])
--> 376                     six.reraise(*exc_info)
    377         finally:
    378             for _, entry in extensions:

/usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
    701             if value.__traceback__ is not tb:
    702                 raise value.with_traceback(tb)
--> 703             raise value
    704         finally:
    705             value = None

/usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
    341                 self.observation = {}
    342                 with reporter.scope(self.observation):
--> 343                     update()
    344                     for name, entry in extensions:
    345                         if entry.trigger(self):

/usr/local/lib/python3.6/dist-packages/chainer/training/updaters/standard_updater.py in update(self)
    238 
    239         """
--> 240         self.update_core()
    241         self.iteration += 1
    242 

/usr/local/lib/python3.6/dist-packages/chainer/training/updaters/multiprocess_parallel_updater.py in update_core(self)
    238         batch = self.converter(batch, self._devices[0])
    239 
--> 240         loss = _calc_loss(self._master, batch)
    241 
    242         self._master.cleargrads()

/usr/local/lib/python3.6/dist-packages/chainer/training/updaters/multiprocess_parallel_updater.py in _calc_loss(model, in_arrays)
    274 def _calc_loss(model, in_arrays):
    275     if isinstance(in_arrays, tuple):
--> 276         return model(*in_arrays)
    277     elif isinstance(in_arrays, dict):
    278         return model(**in_arrays)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/multi_accuracy_classifier.ipynb in __call__(self, *args)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/loss_metrics.ipynb in calc_loss(self, x, t)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/loss_metrics.ipynb in calc_aspect_ratio_loss(self, width, height, label_lengths)

/usr/local/lib/python3.6/dist-packages/cupy/creation/basic.py in ones_like(a, dtype, order, subok, shape)
    180 
    181     order, strides, memptr = _new_like_order_and_strides(a, dtype, order,
--> 182                                                          shape)
    183     shape = shape if shape else a.shape
    184     a = cupy.ndarray(shape, dtype, memptr, strides, order)

/usr/local/lib/python3.6/dist-packages/cupy/creation/basic.py in _new_like_order_and_strides(a, dtype, order, shape)
     40         return 'C', None, None
     41 
---> 42     order = chr(_update_order_char(a, ord(order)))
     43 
     44     if order == 'K':

TypeError: Argument 'x' has incorrect type (expected cupy.core.core.ndarray, got Variable)
```

I had a look into loss_metrics.py and printed out the grid and labels:

```python
loss_weights = [1, 1.25, 2, 1.25]

for i, (predictions, grid, labels) in enumerate(zip(batch_predictions, F.separate(grids, axis=0), F.separate(t, axis=1)), start=1):
    with cuda.get_device_from_array(getattr(predictions, 'data', predictions[0].data)):
        # adapt ctc weight depending on current prediction position and labels
        # if all labels are blank, we want this weight to be full weight!
        overall_loss_weight = loss_weights[i - 1]

        print(predictions)
        print(grid)
        print(labels)
```

They are all `Variable` objects. My guess is that this is where the error occurs.
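
For illustration (not code from the repo), this is how the wrapped arrays can be inspected; `F.separate` returns `chainer.Variable` objects, and the raw cupy/numpy array sits underneath as `.data`:

```python
import numpy as np
import chainer
import chainer.functions as F

# toy stand-in for the real `t` tensor, just to show the types involved
t = chainer.Variable(np.zeros((2, 4, 3), dtype=np.float32))
labels = F.separate(t, axis=1)[0]

print(type(labels))       # <class 'chainer.variable.Variable'>
print(type(labels.data))  # <class 'numpy.ndarray'> (cupy.core.core.ndarray on GPU)
```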

I am not sure how to fix the problem, as I am new to Chainer and this type of model.

Could you please help me with this problem? Thank you in advance.

Bartzi commented 3 years ago

As you can see in the stack trace, the error occurs in this function: `/content/drive/My Drive/Colab Notebooks/COMP421/Project/loss_metrics.ipynb` in `calc_aspect_ratio_loss(self, width, height, label_lengths)`.

Somewhere in there we call `xp.ones_like`. You have to make sure that you pass the array that is wrapped in the Variable. You can get access to the array by calling `.data` on the Variable. That should be it, I guess.
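
Something like this (the names are just placeholders, not the actual code in `calc_aspect_ratio_loss`):

```python
import numpy as np
import chainer

xp = np  # on the GPU this would be cupy

# `width` comes in as a chainer.Variable (placeholder value here)
width = chainer.Variable(xp.full((4, 1), 0.5, dtype=xp.float32))

# cupy's ones_like only accepts raw ndarrays, so passing the Variable itself
# triggers the TypeError above; unwrap it with .data first:
ones = xp.ones_like(width.data)
```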

wdon021 commented 3 years ago

Hi Bartzi, you are correct; I had to add `.data` to every `xp.ones_like` and `xp.zeros_like` call. Thank you for helping me out, really appreciated!

I ran into another problem when training the model:

```
TypeError                                 Traceback (most recent call last)
<ipython-input-…> in <module>()
    323 # )
    324 
--> 325 trainer.run()

12 frames
/usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
    374                     f.write('Traceback (most recent call last):\n')
    375                     traceback.print_tb(sys.exc_info()[2])
--> 376                     six.reraise(*exc_info)
    377         finally:
    378             for _, entry in extensions:

/usr/local/lib/python3.6/dist-packages/six.py in reraise(tp, value, tb)
    701             if value.__traceback__ is not tb:
    702                 raise value.with_traceback(tb)
--> 703             raise value
    704         finally:
    705             value = None

/usr/local/lib/python3.6/dist-packages/chainer/training/trainer.py in run(self, show_loop_exception_msg)
    344                 for name, entry in extensions:
    345                     if entry.trigger(self):
--> 346                         entry.extension(self)
    347         except Exception as e:
    348             if show_loop_exception_msg:

/usr/local/lib/python3.6/dist-packages/chainer/training/extensions/evaluator.py in __call__(self, trainer)
    178         with reporter:
    179             with configuration.using_config('train', False):
--> 180                 result = self.evaluate()
    181 
    182         reporter_module.report(result)

/usr/local/lib/python3.6/dist-packages/chainer/training/extensions/evaluator.py in evaluate(self)
    239             with function.no_backprop_mode():
    240                 if isinstance(in_arrays, tuple):
--> 241                     eval_func(*in_arrays)
    242                 elif isinstance(in_arrays, dict):
    243                     eval_func(**in_arrays)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/multi_accuracy_classifier.ipynb in __call__(self, *args)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/fsns.ipynb in __call__(self, images, label)

/content/drive/My Drive/Colab Notebooks/COMP421/Project/fsns.ipynb in __call__(self, images)

/usr/local/lib/python3.6/dist-packages/chainer/link.py in __call__(self, *args, **kwargs)
    285         # forward is implemented in the child classes
    286         forward = self.forward  # type: ignore
--> 287         out = forward(*args, **kwargs)
    288 
    289         # Call forward_postprocess hook

/usr/local/lib/python3.6/dist-packages/chainer/links/connection/convolution_2d.py in forward(self, x)
    249         return convolution_2d.convolution_2d(
    250             x, self.W, self.b, self.stride, self.pad, dilate=self.dilate,
--> 251             groups=self.groups, cudnn_fast=self.cudnn_fast)
    252 
    253 

/usr/local/lib/python3.6/dist-packages/chainer/functions/connection/convolution_2d.py in convolution_2d(x, W, b, stride, pad, cover_all, **kwargs)
    656     else:
    657         args = x, W, b
--> 658     y, = fnode.apply(args)
    659     return y

/usr/local/lib/python3.6/dist-packages/chainer/function_node.py in apply(self, inputs)
    267         is_chainerx, in_data = _extract_apply_in_data(inputs)
    268 
--> 269         utils._check_arrays_forward_compatible(in_data, self.label)
    270 
    271         if is_chainerx:

/usr/local/lib/python3.6/dist-packages/chainer/utils/__init__.py in _check_arrays_forward_compatible(arrays, label)
     91                 'Actual: {}'.format(
     92                     ' ({})'.format(label) if label is not None else '',
---> 93                     ', '.join(str(type(a)) for a in arrays)))
     94 
     95 

TypeError: incompatible array types are mixed in the forward input (Convolution2DFunction). Actual: , ,
```

Do you know what causes this error? I looked it up online; some suggested adding `.to_gpu()` at the end of `L.Convolution2D(None, 32, 3, pad=1)`, but that gives rise to another error:

> AttributeError: 'NoneType' object has no attribute 'shape'

This would mean it produced None as output. Thank you again.
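
For context, this error usually means the convolution sees a mix of numpy and cupy arrays, i.e. the model parameters and the input batch live on different devices. A minimal, hypothetical setup (placeholder model and data, not the actual train_fsns.py script) that keeps both on the same GPU would look roughly like this:

```python
import numpy as np
import chainer
import chainer.links as L
from chainer import iterators, optimizers, training
from chainer.datasets import TupleDataset
from chainer.training import extensions

gpu_id = 0

# toy model and data, only to illustrate device handling
model = L.Classifier(L.Linear(None, 10))
model.to_gpu(gpu_id)  # move *all* parameters at once, not a single Convolution2D link

data = TupleDataset(np.random.rand(32, 8).astype(np.float32),
                    np.random.randint(0, 10, size=32).astype(np.int32))
train_iter = iterators.SerialIterator(data, 8)
val_iter = iterators.SerialIterator(data, 8, repeat=False, shuffle=False)

optimizer = optimizers.Adam()
optimizer.setup(model)

# passing `device=gpu_id` makes the updater and the evaluator convert every
# batch to cupy before the forward pass, so parameters and inputs match
updater = training.updaters.StandardUpdater(train_iter, optimizer, device=gpu_id)
trainer = training.Trainer(updater, (1, 'epoch'))
trainer.extend(extensions.Evaluator(val_iter, model, device=gpu_id))
trainer.run()
```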