tensorflow / java

Java bindings for TensorFlow
Apache License 2.0

How to get the Operand[T <: TNumber] index or indices like tensorflow python tensor #457

Closed mullerhai closed 2 years ago

mullerhai commented 2 years ago

Hi: In TensorFlow Python, if we create a tensor such as

x = tf.random.normal([4, 32, 32, 3])

we can get the data along any dimension: x[0] has shape (32, 32, 3), x[0, :, :, :] also has shape (32, 32, 3), and we can index further, e.g. x[0, 1] has shape (32, 3). But in tensorflow-java, for an Operand[T] object such as Operand[TFloat32], I don't know how to access the data along arbitrary dimensions. Could you show me an example of how to do this?
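For reference, NumPy follows the same basic indexing semantics the question describes, so the shapes involved can be sketched without TensorFlow (sizes are the ones from the example above):

```python
import numpy as np

# Stand-in for tf.random.normal([4, 32, 32, 3]); NumPy basic indexing
# follows the same rules as indexing a TensorFlow tensor in Python.
x = np.random.normal(size=(4, 32, 32, 3))

print(x[0].shape)           # (32, 32, 3): integer index drops the first dimension
print(x[0, :, :, :].shape)  # (32, 32, 3): trailing full slices change nothing
print(x[0, 1].shape)        # (32, 3): two integer indices drop two dimensions
```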

Here is my code:

---python
    def encoder_lstm(self, args):
        # self.x = tf.placeholder(tf.float32, [args.num_shifts + 1, None, args.input_dim])        
        input_h = tf.squeeze(self.x[0, :, :])
        input_c = tf.squeeze(self.x[0, :, :])
        widthes = self.width
        num_lstm = len(widthes) - 1
        h, c = [], []
        for i in np.arange(num_lstm):
            input_h = tf.nn.tanh(tf.matmul(input_h, self.weights['WEH%d' % (i + 1)]))
            h.append(input_h)
            input_c = tf.nn.tanh(tf.matmul(input_c, self.weights['WEC%d' % (i + 1)]))
            c.append(input_c)
        output = []
        for i in np.arange(self.time_step):
            x = tf.squeeze(self.x[i, :, :])
            # multi-LSTM
            h, c = self.lstm_one_shift(x, h, c, num_lstm)
            output.append(h[-1])
        return output
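A note on the squeeze calls above: self.x[0, :, :] already drops the leading dimension by indexing, so tf.squeeze only has an effect if a size-1 dimension remains. A minimal NumPy sketch of that shape behaviour (sizes are illustrative, not from the source):

```python
import numpy as np

# Shape [num_shifts + 1, batch, input_dim] with illustrative sizes.
x = np.zeros((5, 8, 3))

h = x[0, :, :]               # integer indexing drops the leading dimension
print(h.shape)               # (8, 3)
print(np.squeeze(h).shape)   # (8, 3): no size-1 dims left, squeeze is a no-op
```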

--- scala

  def encoderLSTM(input: Operand[T], num_lstm: Int)(implicit tf: Ops): Operand[T] = {
    //     # self.x = tf.placeholder(tf.float32, [args.num_shifts + 1, None, args.input_dim])
    // input_c = tf.squeeze(self.x[0, :, :])
    // input_h = tf.squeeze(self.x[0, :, :])
    val input_h = tf.squeeze(input, Squeeze.axis(0)) // want input[0, :, :]; this definition is wrong
    val input_c = tf.squeeze(input, Squeeze.axis(0)) // want input[0, :, :]; this definition is wrong
    val h_prev_list = new util.ArrayList[Operand[T]]()
    val cs_prev_list = new util.ArrayList[Operand[T]]()
    val output = new util.ArrayList[Operand[T]]()
    var input_h_tmp = input_h.asOutput()
    var input_c_tmp = input_c.asOutput()
    for (i <- 1 until this.width.length) {
      input_h_tmp = tf.math.tanh(tf.linalg.matMul(input_h_tmp, this.weh_list.get(i.toInt))).asOutput()
      h_prev_list.add(input_h_tmp)
      input_c_tmp = tf.math.tanh(tf.linalg.matMul(input_c_tmp, this.wec_list.get(i.toInt))).asOutput()
      cs_prev_list.add(input_c_tmp)
    }
    for (k <- 1 to this.timeStep.toInt) {
      val temp_X = tf.squeeze(input, Squeeze.axis(0)) // want input[k, :, :]; this definition is wrong
      val operand = this.lstmOneShift(temp_X, h_prev_list, cs_prev_list, num_lstm)
      val last_hidden_state = operand._1.get(operand._1.size() - 1)
      output.add(last_hidden_state)
    }
    tf.concat(output, tf.constant(0))
  }

It would be great if we could do something like the Breeze cheat sheet: https://github.com/scalanlp/breeze/wiki/Linear-Algebra-Cheat-Sheet. Thanks!

karllessard commented 2 years ago

You can access any specific sub-element of your Tensor/NdArray by calling x.get(0, 1), which mimics x[0][1] in Python.

For more fancy indexing, like x[:,0,:,2], you should slice your data using NdArray indexers, like this:

import static org.tensorflow.ndarray.index.Indices.*;

// given x a Tensor/NdArray of shape [4,32,32,3]
x.slice(all(), at(0), all(), at(2));

Look at NdArray's Indices for more options on how you can slice your data.
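For readers more familiar with Python, the NumPy equivalent of that slice (an illustration of the same semantics, not tensorflow-java code) looks like this:

```python
import numpy as np

# Shape [4, 32, 32, 3], as in the example above.
x = np.random.normal(size=(4, 32, 32, 3))

# Counterpart of x.slice(all(), at(0), all(), at(2)) in tensorflow-java:
# at(k) fixes a coordinate (removing that dimension), all() keeps it whole.
y = x[:, 0, :, 2]
print(y.shape)  # (4, 32)
```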

mullerhai commented 2 years ago

> You can access any specific sub-element of your Tensor/NdArray by calling x.get(0, 1), which mimics x[0][1] in Python.
>
> For more fancy indexing, like x[:,0,:,2], you should slice your data using NdArray indexers, like this:
>
> import static org.tensorflow.ndarray.index.Indices.*;
>
> // given x a Tensor/NdArray of shape [4,32,32,3]
> x.slice(all(), at(0), all(), at(2));
>
> Look at NdArray's Indices for more options on how you can slice your data.

Thanks, that's great! I also found another way, but it is not as good as the one you mention.