pquochuy / xsleepnet


About the code running in 'attention' #7

Open DaozeZhang opened 11 months ago

DaozeZhang commented 11 months ago

Hello! Thank you for this amazing work! I'm trying to run xsleepnet1 on my machine. I have set up the environment (TensorFlow 1.13, CUDA 10.0, and cuDNN 7), but I ran into a problem that I cannot solve. In the file sleepedf-78/tensorflow_nets/xsleepnet1.py, lines 194~196 contain:

        with tf.variable_scope("seq_frame_attention_layer"):
            self.seq_attention_out1, _ = attention(seq_rnn_out1, self.config.seq_attention_size1)
            print(self.seq_attention_out1.get_shape())

When execution reached line 195:

            self.seq_attention_out1, _ = attention(seq_rnn_out1, self.config.seq_attention_size1)

I got an error: TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn. I figured I should enable eager execution, so I added tf.enable_eager_execution() at the top of train_xsleepnet1.py, but unfortunately the error still occurred... I also tried using tf.map_fn to work around it, like this (the input arguments of attention() were modified accordingly):

            x = (seq_rnn_out1, self.config.seq_attention_size1)
            self.seq_attention_out1, _ = tf.map_fn(lambda x: attention(x), x)

But another error occurred: ValueError: slice index 0 of dimension 0 out of bounds. for 'seq_frame_attention_layer/map_1/TensorArrayUnstack_1/strided_slice' (op: 'StridedSlice') with input shapes: [0], [1], [1], [1] and with computed input tensors: input[1] = <0>, input[2] = <1>, input[3] = <1>.
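The ValueError likely comes from putting the Python int self.config.seq_attention_size1 inside the elems argument: tf.map_fn unstacks every element of elems along dimension 0, and a scalar has no dimension 0 (hence the strided_slice failure on an input of shape [0]). A minimal sketch of the usual workaround, assuming attention(inputs, attention_size) returns (output, alphas) as in the original call at line 195, is to map over the tensor only and capture the attention size in the closure:

        with tf.variable_scope("seq_frame_attention_layer"):
            # map over seq_rnn_out1 only; the attention size is a plain Python int
            # captured by the lambda instead of being passed through elems
            self.seq_attention_out1 = tf.map_fn(
                lambda x: attention(x, self.config.seq_attention_size1)[0],
                seq_rnn_out1)

This only addresses the elems error; creating variables inside tf.map_fn can itself need adjustments (e.g. callable initializers), as discussed in the follow-up comments below.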

Could you please tell me how to solve this problem? Thank you very much!

jasoncoinhe commented 8 months ago

Hi, I'm facing the same problem.

I tried tf.map_fn to fix the TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn issue. My updated call is:

        with tf.variable_scope("seq_frame_attention_layer"):
            self.seq_attention_out1 = tf.map_fn(lambda x: attention(x, self.config.seq_attention_size1), seq_rnn_out1)
            print(self.seq_attention_out1.get_shape())

But it throws another error:

(?, 29, 128)
Traceback (most recent call last):
  File "train_xsleepnet2.py", line 356, in <module>
    net = XSleepNet(config=config)
  File "D:\Smartbed\xsleepnet\sleepedf-78\tensorflow_nets\xsleepnet2\xsleepnet.py", line 26, in __init__
    self.construct_seqsleepnet()
  File "D:\Smartbed\xsleepnet\sleepedf-78\tensorflow_nets\xsleepnet2\xsleepnet.py", line 195, in construct_seqsleepnet
    self.seq_attention_out1 = tf.map_fn(lambda x: attention(x, self.config.seq_attention_size1), seq_rnn_out1)
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\functional_ops.py", line 497, in map_fn
    maximum_iterations=n)
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3556, in while_loop
    return_same_structure)
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3087, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3022, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3525, in <lambda>
    body = lambda i, lv: (i + 1, orig_body(*lv))
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\ops\functional_ops.py", line 486, in compute
    packed_fn_values = fn(packed_values)
  File "D:\Smartbed\xsleepnet\sleepedf-78\tensorflow_nets\xsleepnet2\xsleepnet.py", line 195, in <lambda>
    self.seq_attention_out1 = tf.map_fn(lambda x: attention(x, self.config.seq_attention_size1), seq_rnn_out1)
  File "D:\Smartbed\xsleepnet\sleepedf-78\tensorflow_nets\xsleepnet2\nn_basic_layers.py", line 151, in attention
    hidden_size = inputs_shape[2].value  # hidden size of the RNN layer
  File "C:\Users\DELL\.conda\envs\xsleep\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 788, in __getitem__
    return self._dims[key]
IndexError: list index out of range
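The (?, 29, 128) print and the IndexError seem to point at the same cause: tf.map_fn strips the leading batch dimension, so inside the mapped function inputs has shape (29, 128) and inputs.shape[2] does not exist. A tiny sketch (hypothetical shapes, matching the (?, 29, 128) printed above) showing the rank change:

        import tensorflow as tf  # TF 1.x, graph mode

        seq_rnn_out1 = tf.placeholder(tf.float32, [None, 29, 128])
        print(seq_rnn_out1.shape)      # (?, 29, 128) -- batch dimension present

        def peek(x):
            print(x.shape)             # (29, 128) -- batch dimension stripped by map_fn
            return x

        _ = tf.map_fn(peek, seq_rnn_out1)

So any indexing written for a (batch, time, hidden) tensor has to shift down by one inside tf.map_fn.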

@pquochuy could you please share your working environment, TensorFlow and Cuda version?

jasoncoinhe commented 8 months ago

I had to update the attention function in nn_basic_layers.py as follows:

def attention(inputs, attention_size, time_major=False):
    if isinstance(inputs, tuple):
        # In case of Bi-RNN, concatenate the forward and the backward RNN outputs.
        inputs = tf.concat(inputs, 2)

    if time_major:
        # (T,B,D) => (B,T,D)
        inputs = tf.transpose(inputs, [1, 0, 2])
    inputs_shape = inputs.shape
    sequence_length = inputs_shape[0].value  # the length of sequences processed in the antecedent RNN layer
    hidden_size = inputs_shape[1].value  # hidden size of the RNN layer

    # Attention mechanism
    W_omega = tf.Variable(lambda: tf.random_normal([hidden_size, attention_size], stddev=0.1))
    b_omega = tf.Variable(lambda: tf.random_normal([attention_size], stddev=0.1))
    u_omega = tf.Variable(lambda: tf.random_normal([attention_size], stddev=0.1))

But I am not sure whether these indices are correct. I changed sequence_length = inputs_shape[1].value to sequence_length = inputs_shape[0].value, and hidden_size = inputs_shape[2].value to hidden_size = inputs_shape[1].value.
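One way to sidestep the index question (a sketch, not the repo's code) is to read the last two dimensions instead of fixed positions, so the same lines work both on the full (batch, time, hidden) tensor and on the (time, hidden) slices that tf.map_fn feeds in:

        inputs_shape = inputs.shape
        sequence_length = inputs_shape[-2].value  # number of time steps
        hidden_size = inputs_shape[-1].value      # RNN hidden size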

andylws commented 4 months ago

(quoting @jasoncoinhe's modified attention function from the comment above)

Hello, did this modification work well? Was there any error during execution?