ryankiros / skip-thoughts

Sent2Vec encoder and training code from the paper "Skip-Thought Vectors"

Value Error: Sequence is shorter then the required number of steps #57

Open perceptronnn opened 7 years ago

perceptronnn commented 7 years ago

When I encoded a list of 20 sentences it was successful, but now I am trying to encode a list of 283007 sentences, named data1, and I'm getting a ValueError while running vec_d1 = encoder.encode(data1), as follows:

0
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-27-f84532db8b01> in <module>()
----> 1 vec_d1 = encoder.encode(data1)

C:\Users\Anurag\Documents\skip-thoughts-master\skipthoughts.pyc in encode(self, X, use_norm, verbose, batch_size, use_eos)
    100       Encode sentences in the list X. Each entry will return a vector
    101       """
--> 102       return encode(self._model, X, use_norm, verbose, batch_size, use_eos)
    103 
    104 

C:\Users\Anurag\Documents\skip-thoughts-master\skipthoughts.pyc in encode(model, X, use_norm, verbose, batch_size, use_eos)
    153                 bff = model['f_w2v2'](bembedding, numpy.ones((len(caption)+1,len(caps)), dtype='float32'))
    154             else:
--> 155                 uff = model['f_w2v'](uembedding, numpy.ones((len(caption),len(caps)), dtype='float32'))
    156                 bff = model['f_w2v2'](bembedding, numpy.ones((len(caption),len(caps)), dtype='float32'))
    157             if use_norm:

C:\ProgramData\Anaconda2\lib\site-packages\theano\compile\function_module.pyc in __call__(self, *args, **kwargs)
    896                     node=self.fn.nodes[self.fn.position_of_error],
    897                     thunk=thunk,
--> 898                     storage_map=getattr(self.fn, 'storage_map', None))
    899             else:
    900                 # old-style linkers raise their own exceptions

C:\ProgramData\Anaconda2\lib\site-packages\theano\gof\link.pyc in raise_with_op(node, thunk, exc_info, storage_map)
    323         # extra long error message in that case.
    324         pass
--> 325     reraise(exc_type, exc_value, exc_trace)
    326 
    327 

C:\ProgramData\Anaconda2\lib\site-packages\theano\compile\function_module.pyc in __call__(self, *args, **kwargs)
    882         try:
    883             outputs =\
--> 884                 self.fn() if output_subset is None else\
    885                 self.fn(output_subset=output_subset)
    886         except Exception:

C:\ProgramData\Anaconda2\lib\site-packages\theano\scan_module\scan_op.pyc in rval(p, i, o, n, allow_gc)
    987         def rval(p=p, i=node_input_storage, o=node_output_storage, n=node,
    988                  allow_gc=allow_gc):
--> 989             r = p(n, [x[0] for x in i], o)
    990             for o in node.outputs:
    991                 compute_map[o][0] = True

C:\ProgramData\Anaconda2\lib\site-packages\theano\scan_module\scan_op.pyc in p(node, args, outs)
    976                                                 args,
    977                                                 outs,
--> 978                                                 self, node)
    979         except (ImportError, theano.gof.cmodule.MissingGXX):
    980             p = self.execute

theano/scan_module/scan_perform.pyx in theano.scan_module.scan_perform.perform (C:\Users\Anurag\AppData\Local\Theano\compiledir_Windows-10-10.0.10586-Intel64_Family_6_Model_142_Stepping_9_GenuineIntel-2.7.13-64\scan_perform\mod.cpp:2737)()

ValueError: ('Sequence is shorter then the required number of steps : (n_steps, seq, seq.shape):', 1, array([], shape=(0L, 4L, 1L), dtype=float32), (0L, 4L, 1L))
Apply node that caused the error: forall_inplace,cpu,encoder__layers}(Elemwise{Maximum}[(0, 0)].0, InplaceDimShuffle{0,1,x}.0, Elemwise{sub,no_inplace}.0, Subtensor{int64:int64:int8}.0, Subtensor{int64:int64:int8}.0, IncSubtensor{InplaceSet;:int64:}.0, encoder_U, encoder_Ux, ScalarFromTensor.0, ScalarFromTensor.0)
Toposort index: 50
Inputs types: [TensorType(int64, scalar), TensorType(float32, (False, False, True)), TensorType(float32, (False, False, True)), TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, 3D), TensorType(float32, matrix), TensorType(float32, matrix), Scalar(int64), Scalar(int64)]
Inputs shapes: [(), (0L, 4L, 1L), (0L, 4L, 1L), (0L, 4L, 4800L), (0L, 4L, 2400L), (3L, 4L, 2400L), (2400L, 4800L), (2400L, 2400L), (), ()]
Inputs strides: [(), (16L, 4L, 4L), (16L, 4L, 4L), (76800L, 19200L, 4L), (38400L, 9600L, 4L), (38400L, 9600L, 4L), (19200L, 4L), (9600L, 4L), (), ()]
Inputs values: [array(1L, dtype=int64), array([], shape=(0L, 4L, 1L), dtype=float32), array([], shape=(0L, 4L, 1L), dtype=float32), array([], shape=(0L, 4L, 4800L), dtype=float32), array([], shape=(0L, 4L, 2400L), dtype=float32), 'not shown', 'not shown', 'not shown', 2400, 4800]
Inputs type_num: [9, 11, 11, 11, 11, 11, 11, 11, 9, 9]
Outputs clients: [[Subtensor{int64}(forall_inplace,cpu,encoder__layers}.0, ScalarFromTensor.0)]]

Debugprint of the apply node: 
forall_inplace,cpu,encoder__layers} [id A] <TensorType(float32, 3D)> ''   
 |Elemwise{Maximum}[(0, 0)] [id B] <TensorType(int64, scalar)> ''   
 | |Elemwise{Composite{minimum(((i0 + i1) - i2), i3)}} [id C] <TensorType(int64, scalar)> ''   
 | | |Elemwise{Composite{Switch(LT(i0, i1), (i0 + i2), i0)}} [id D] <TensorType(int64, scalar)> ''   
 | | | |Elemwise{Composite{Switch(LT((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i4 - i0), Switch(GE((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), (i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2))), (i5 + i0), Switch(LE((i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i5 + i0), i0)))}} [id E] <TensorType(int64, scalar)> ''   
 | | | | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | | | | | |embedding [id G] <TensorType(float32, 3D)>
 | | | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | | | |Elemwise{add,no_inplace} [id I] <TensorType(int64, scalar)> ''   
 | | | | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | | | | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | | | | |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 | | | | |TensorConstant{-2} [id K] <TensorType(int64, scalar)>
 | | | | |TensorConstant{2} [id L] <TensorType(int64, scalar)>
 | | | |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 | | | |Elemwise{add,no_inplace} [id I] <TensorType(int64, scalar)> ''   
 | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | |TensorConstant{1} [id M] <TensorType(int8, scalar)>
 | | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 |InplaceDimShuffle{0,1,x} [id N] <TensorType(float32, (False, False, True))> ''   
 | |Subtensor{int64:int64:int8} [id O] <TensorType(float32, matrix)> ''   
 |   |x_mask [id P] <TensorType(float32, matrix)>
 |   |ScalarFromTensor [id Q] <int64> ''   
 |   | |Elemwise{switch,no_inplace} [id R] <TensorType(int64, scalar)> ''   
 |   |   |Elemwise{le,no_inplace} [id S] <TensorType(bool, scalar)> ''   
 |   |   | |Elemwise{Composite{Switch(LT(i0, i1), i0, i1)}} [id T] <TensorType(int64, scalar)> ''   
 |   |   | | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 |   |   | | |Shape_i{0} [id U] <TensorType(int64, scalar)> ''   
 |   |   | |   |x_mask [id P] <TensorType(float32, matrix)>
 |   |   | |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 |   |   |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 |   |   |TensorConstant{0} [id V] <TensorType(int64, scalar)>
 |   |ScalarFromTensor [id W] <int64> ''   
 |   | |Elemwise{Composite{Switch(i0, i1, minimum(i2, i3))}}[(0, 2)] [id X] <TensorType(int64, scalar)> ''   
 |   |   |Elemwise{le,no_inplace} [id S] <TensorType(bool, scalar)> ''   
 |   |   |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 |   |   |Elemwise{Composite{Switch(LT(i0, i1), i0, i1)}} [id T] <TensorType(int64, scalar)> ''   
 |   |   |Shape_i{0} [id U] <TensorType(int64, scalar)> ''   
 |   |Constant{1} [id Y] <int8>
 |Elemwise{sub,no_inplace} [id Z] <TensorType(float32, (False, False, True))> ''   
 | |TensorConstant{(1L, 1L, 1L) of 1.0} [id BA] <TensorType(float32, (True, True, True))>
 | |InplaceDimShuffle{0,1,x} [id N] <TensorType(float32, (False, False, True))> ''   
 |Subtensor{int64:int64:int8} [id BB] <TensorType(float32, 3D)> ''   
 | |Elemwise{Add}[(0, 0)] [id BC] <TensorType(float32, 3D)> ''   
 | | |Reshape{3} [id BD] <TensorType(float32, 3D)> ''   
 | | | |Dot22 [id BE] <TensorType(float32, matrix)> ''   
 | | | | |Reshape{2} [id BF] <TensorType(float32, matrix)> ''   
 | | | | | |embedding [id G] <TensorType(float32, 3D)>
 | | | | | |MakeVector{dtype='int64'} [id BG] <TensorType(int64, vector)> ''   
 | | | | |   |Elemwise{Mul}[(0, 1)] [id BH] <TensorType(int64, scalar)> ''   
 | | | | |   | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | | | | |   | |Shape_i{1} [id BI] <TensorType(int64, scalar)> ''   
 | | | | |   |   |embedding [id G] <TensorType(float32, 3D)>
 | | | | |   |Shape_i{2} [id BJ] <TensorType(int64, scalar)> ''   
 | | | | |     |embedding [id G] <TensorType(float32, 3D)>
 | | | | |encoder_W [id BK] <TensorType(float32, matrix)>
 | | | |MakeVector{dtype='int64'} [id BL] <TensorType(int64, vector)> ''   
 | | |   |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | | |   |Shape_i{1} [id BI] <TensorType(int64, scalar)> ''   
 | | |   |Shape_i{1} [id BM] <TensorType(int64, scalar)> ''   
 | | |     |encoder_W [id BK] <TensorType(float32, matrix)>
 | | |InplaceDimShuffle{x,x,0} [id BN] <TensorType(float32, (True, True, False))> ''   
 | |   |encoder_b [id BO] <TensorType(float32, vector)>
 | |ScalarFromTensor [id BP] <int64> ''   
 | | |Elemwise{Composite{Switch(LE(i0, i1), i1, i2)}}[(0, 0)] [id BQ] <TensorType(int64, scalar)> ''   
 | |   |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | |   |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 | |   |TensorConstant{0} [id V] <TensorType(int64, scalar)>
 | |ScalarFromTensor [id BR] <int64> ''   
 | | |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | |Constant{1} [id Y] <int8>
 |Subtensor{int64:int64:int8} [id BS] <TensorType(float32, 3D)> ''   
 | |Elemwise{Add}[(0, 0)] [id BT] <TensorType(float32, 3D)> ''   
 | | |Reshape{3} [id BU] <TensorType(float32, 3D)> ''   
 | | | |Dot22 [id BV] <TensorType(float32, matrix)> ''   
 | | | | |Reshape{2} [id BF] <TensorType(float32, matrix)> ''   
 | | | | |encoder_Wx [id BW] <TensorType(float32, matrix)>
 | | | |MakeVector{dtype='int64'} [id BX] <TensorType(int64, vector)> ''   
 | | |   |Shape_i{0} [id F] <TensorType(int64, scalar)> ''   
 | | |   |Shape_i{1} [id BI] <TensorType(int64, scalar)> ''   
 | | |   |Shape_i{1} [id BY] <TensorType(int64, scalar)> ''   
 | | |     |encoder_Wx [id BW] <TensorType(float32, matrix)>
 | | |InplaceDimShuffle{x,x,0} [id BZ] <TensorType(float32, (True, True, False))> ''   
 | |   |encoder_bx [id CA] <TensorType(float32, vector)>
 | |ScalarFromTensor [id BP] <int64> ''   
 | |ScalarFromTensor [id BR] <int64> ''   
 | |Constant{1} [id Y] <int8>
 |IncSubtensor{InplaceSet;:int64:} [id CB] <TensorType(float32, 3D)> ''   
 | |AllocEmpty{dtype='float32'} [id CC] <TensorType(float32, 3D)> ''   
 | | |Elemwise{Composite{(Switch(LT(maximum(i0, i1), i2), (maximum(i0, i1) + i3), (maximum(i0, i1) - i2)) + i4)}}[(0, 0)] [id CD] <TensorType(int64, scalar)> ''   
 | | | |Elemwise{Composite{((maximum(i0, i1) - Switch(LT(i2, i3), (i2 + i4), i2)) + i1)}}[(0, 2)] [id CE] <TensorType(int64, scalar)> ''   
 | | | | |Elemwise{Composite{minimum(((i0 + i1) - i2), i3)}} [id C] <TensorType(int64, scalar)> ''   
 | | | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | | | |Elemwise{Composite{Switch(LT((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i4 - i0), Switch(GE((i0 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), (i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2))), (i5 + i0), Switch(LE((i2 - Composite{Switch(LT(i0, i1), i0, i1)}(i1, i2)), i3), (i5 + i0), i0)))}} [id E] <TensorType(int64, scalar)> ''   
 | | | | |TensorConstant{0} [id J] <TensorType(int8, scalar)>
 | | | | |Elemwise{add,no_inplace} [id I] <TensorType(int64, scalar)> ''   
 | | | |TensorConstant{2} [id L] <TensorType(int64, scalar)>
 | | | |TensorConstant{1} [id M] <TensorType(int8, scalar)>
 | | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | | |TensorConstant{1} [id H] <TensorType(int64, scalar)>
 | | |Shape_i{1} [id BI] <TensorType(int64, scalar)> ''   
 | | |Shape_i{1} [id CF] <TensorType(int64, scalar)> ''   
 | |   |encoder_Ux [id CG] <TensorType(float32, matrix)>
 | |Rebroadcast{0} [id CH] <TensorType(float32, 3D)> ''   
 | | |Alloc [id CI] <TensorType(float32, (True, False, False))> ''   
 | |   |TensorConstant{0.0} [id CJ] <TensorType(float32, scalar)>
 | |   |TensorConstant{1} [id M] <TensorType(int8, scalar)>
 | |   |Shape_i{1} [id BI] <TensorType(int64, scalar)> ''   
 | |   |Shape_i{1} [id CF] <TensorType(int64, scalar)> ''   
 | |Constant{1} [id CK] <int64>
 |encoder_U [id CL] <TensorType(float32, matrix)>
 |encoder_Ux [id CG] <TensorType(float32, matrix)>
 |ScalarFromTensor [id CM] <int64> ''   
 | |Shape_i{1} [id CF] <TensorType(int64, scalar)> ''   
 |ScalarFromTensor [id CN] <int64> ''   
   |Elemwise{Mul}[(0, 1)] [id CO] <TensorType(int64, scalar)> ''   
     |TensorConstant{2} [id L] <TensorType(int64, scalar)>
     |Shape_i{1} [id CF] <TensorType(int64, scalar)> ''   

Inner graphs of the scan ops:

forall_inplace,cpu,encoder__layers} [id A] <TensorType(float32, 3D)> ''   
 >Elemwise{Composite{((i0 * ((scalar_sigmoid(i1) * i2) + ((i3 - scalar_sigmoid(i1)) * tanh(((i4 * scalar_sigmoid(i5)) + i6))))) + (i7 * i2))}} [id CP] <TensorType(float32, matrix)> ''   
 > |<TensorType(float32, col)> [id CQ] <TensorType(float32, col)> -> [id N]
 > |Subtensor{::, int64:int64:} [id CR] <TensorType(float32, matrix)> ''   
 > | |Gemm{no_inplace} [id CS] <TensorType(float32, matrix)> ''   
 > | | |<TensorType(float32, matrix)> [id CT] <TensorType(float32, matrix)> -> [id BB]
 > | | |TensorConstant{1.0} [id CU] <TensorType(float32, scalar)>
 > | | |<TensorType(float32, matrix)> [id CV] <TensorType(float32, matrix)> -> [id CB]
 > | | |encoder_U_copy [id CW] <TensorType(float32, matrix)> -> [id CL]
 > | | |TensorConstant{1.0} [id CU] <TensorType(float32, scalar)>
 > | |<int64> [id CX] <int64> -> [id CM]
 > | |<int64> [id CY] <int64> -> [id CN]
 > |<TensorType(float32, matrix)> [id CV] <TensorType(float32, matrix)> -> [id CB]
 > |TensorConstant{(1L, 1L) of 1.0} [id CZ] <TensorType(float32, (True, True))>
 > |Dot22 [id DA] <TensorType(float32, matrix)> ''   
 > | |<TensorType(float32, matrix)> [id CV] <TensorType(float32, matrix)> -> [id CB]
 > | |encoder_Ux_copy [id DB] <TensorType(float32, matrix)> -> [id CG]
 > |Subtensor{::, int64:int64:} [id DC] <TensorType(float32, matrix)> ''   
 > | |Gemm{no_inplace} [id CS] <TensorType(float32, matrix)> ''   
 > | |Constant{0} [id DD] <int64>
 > | |<int64> [id CX] <int64> -> [id CM]
 > |<TensorType(float32, matrix)> [id DE] <TensorType(float32, matrix)> -> [id BS]
 > |<TensorType(float32, col)> [id DF] <TensorType(float32, col)> -> [id Z]

Storage map footprint:
 - encoder_U, Shared Input, Shape: (2400L, 4800L), ElemSize: 4 Byte(s), TotalSize: 46080000 Byte(s)
 - encoder_Ux, Shared Input, Shape: (2400L, 2400L), ElemSize: 4 Byte(s), TotalSize: 23040000 Byte(s)
 - encoder_W, Shared Input, Shape: (620L, 4800L), ElemSize: 4 Byte(s), TotalSize: 11904000 Byte(s)
 - encoder_Wx, Shared Input, Shape: (620L, 2400L), ElemSize: 4 Byte(s), TotalSize: 5952000 Byte(s)
 - IncSubtensor{InplaceSet;:int64:}.0, Shape: (3L, 4L, 2400L), ElemSize: 4 Byte(s), TotalSize: 115200 Byte(s)
 - encoder_b, Shared Input, Shape: (4800L,), ElemSize: 4 Byte(s), TotalSize: 19200 Byte(s)
 - encoder_bx, Shared Input, Shape: (2400L,), ElemSize: 4 Byte(s), TotalSize: 9600 Byte(s)
 - TensorConstant{2}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Elemwise{Maximum}[(0, 0)].0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Constant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - ScalarFromTensor.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - Elemwise{Composite{(((i0 - maximum(i1, i2)) - i3) + maximum(i4, i5))}}[(0, 0)].0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - ScalarFromTensor.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{-2}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s)
 - TensorConstant{(1L, 1L, 1L) of 1.0}, Shape: (1L, 1L, 1L), ElemSize: 4 Byte(s), TotalSize: 4 Byte(s)
 - TensorConstant{0.0}, Shape: (), ElemSize: 4 Byte(s), TotalSize: 4.0 Byte(s)
 - TensorConstant{2}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - TensorConstant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - TensorConstant{0}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - Constant{1}, Shape: (), ElemSize: 1 Byte(s), TotalSize: 1.0 Byte(s)
 - Elemwise{sub,no_inplace}.0, Shape: (0L, 4L, 1L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 - embedding, Input, Shape: (0L, 4L, 620L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 - InplaceDimShuffle{0,1,x}.0, Shape: (0L, 4L, 1L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 - Subtensor{int64:int64:int8}.0, Shape: (0L, 4L, 2400L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 - x_mask, Input, Shape: (0L, 4L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 - Subtensor{int64:int64:int8}.0, Shape: (0L, 4L, 4800L), ElemSize: 4 Byte(s), TotalSize: 0 Byte(s)
 TotalSize: 87120084.0 Byte(s) 0.081 GB
 TotalSize inputs: 87004852.0 Byte(s) 0.081 GB

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.

Could anyone please help me out? Thanks.
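The shapes like (0L, 4L, 1L) in the traceback show that one sentence in the batch produced a zero-length token sequence, i.e. an empty or whitespace-only entry somewhere in the 283007 sentences. A minimal sketch (the helper name is hypothetical, not part of skipthoughts.py) to locate such entries before encoding:

```python
def find_empty_sentences(sentences):
    """Return (index, sentence) pairs whose tokenization is empty.

    A sentence with no tokens becomes a 0-step sequence inside the
    encoder's scan op, which triggers the ValueError above.
    """
    bad = []
    for i, s in enumerate(sentences):
        if len(s.split()) == 0:  # no usable tokens
            bad.append((i, s))
    return bad

data1 = ["a good sentence", "", "   ", "another one"]
print(find_empty_sentences(data1))  # -> [(1, ''), (2, '   ')]
```

Once the offending indices are known, the entries can be dropped or replaced before calling encoder.encode.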

TitusTom commented 7 years ago

I'm having the same issue. Any luck fixing this?

Edit: I managed to solve this. A blank line in my file was the cause of this.
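Building on that observation, a minimal sketch (function name hypothetical, not from the repo) of loading a sentence file while skipping the blank lines that cause this error:

```python
import tempfile

def load_sentences(path):
    """Read one sentence per line, dropping blank/whitespace-only lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Demo with a throwaway file that contains a blank line and a
# whitespace-only line:
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("first sentence\n\nsecond sentence\n   \n")
    path = f.name

print(load_sentences(path))  # -> ['first sentence', 'second sentence']
```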

Pratyusha1796 commented 6 years ago

I'm having the same issue. Can you help me with how to fix it?

Building f_log_probs... Done
Building f_cost... Done
Done
Building f_grad...
Building optimizers...
Optimization
Epoch 0

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 import train
----> 2 train.trainer(X)

/home/pratyusha/Documents/skip-thoughts/training/train.py in trainer(X, dim_word, dim, encoder, decoder, max_epochs, dispFreq, decay_c, grad_clip, n_words, maxlen_w, optimizer, batch_size, saveto, dictionary, saveFreq, reload_)
    155         x, x_mask, y, y_mask, z, z_mask = homogeneous_data.prepare_data(x, y, z, worddict, maxlen=maxlen_w, n_words=n_words)
    156
--> 157         if x == None:
    158             print 'Minibatch with zero sample under length ', maxlen_w
    159             uidx -= 1

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Could anyone help me? Thank you.
afrozas commented 6 years ago

@Pratyusha1796 Change x == None in training/train.py to x is None. Comparing a numpy array with == is elementwise, which is exactly what makes its truth value ambiguous; the identity check also still works when prepare_data returns None.

abhilasha23 commented 5 years ago

> I'm having the same issue. Any luck fixing this?
> Edit: I managed to solve this. A blank line in my file was the cause of this.

Hi @TitusTom: Can you please tell me which file you fixed to resolve this issue?

AkshayVaghani commented 5 years ago

I am getting the following error; can anyone please help?

vectors = encoder.encode(['love is good'])


TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 vectors = encoder.encode(['love is good'])

~/Downloads/skip-thoughts-master/skipthoughts.py in encode(self, X, use_norm, verbose, batch_size, use_eos)
    123       Encode sentences in the list X. Each entry will return a vector
    124       """
--> 125       return encode(self._model, X, use_norm, verbose, batch_size, use_eos)
    126
    127

~/Downloads/skip-thoughts-master/skipthoughts.py in encode(model, X, use_norm, verbose, batch_size, use_eos)
    151                 print(k)
    152                 numbatches = len(ds[k]) / batch_size + 1
--> 153                 for minibatch in range(numbatches):
    154                     caps = ds[k][minibatch::numbatches]
    155

TypeError: 'float' object cannot be interpreted as an integer
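This one is a Python 2 vs 3 issue rather than a data issue: in Python 3, / is true division, so numbatches becomes a float and range() rejects it. A likely fix (a sketch with stand-in data, not the repo's code) is floor division:

```python
ds_k = ["sentence %d" % i for i in range(10)]  # stand-in for ds[k]
batch_size = 4

# was: numbatches = len(ds[k]) / batch_size + 1   (float under Python 3)
numbatches = len(ds_k) // batch_size + 1          # int: floor division

for minibatch in range(numbatches):               # range() now accepts it
    caps = ds_k[minibatch::numbatches]

print(numbatches)  # -> 3
```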