talolard / MarketVectors

Implementations for my blog post [here](https://medium.com/@TalPerry/deep-learning-the-stock-market-df853d139e02#.flflpo3xf)

MarketVectors Error after import from IPython to Python, error not related... #5

Open wanfuse123 opened 7 years ago

wanfuse123 commented 7 years ago

ERROR

    ('self.logits = ', <tf.Tensor 'ff/fully_connected_2/BiasAdd:0' shape=(?, 11) dtype=float32>)
    ('self.target_data', <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=int32>)
    Traceback (most recent call last):
      File "./preparedata-manual-upgraded.py", line 204, in <module>
        model = Model()
      File "./preparedata-manual-upgraded.py", line 187, in __init__
        self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.logits, logits=self.target_data)
      File "/home/steven/Practical-DataScience/DataScience/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1686, in sparse_softmax_cross_entropy_with_logits
        (labels_static_shape.ndims, logits.get_shape().ndims))
    ValueError: Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 1)
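Reading the message against the printed tensors: `labels` received the rank-2 tensor shaped `(?, 11)` and `logits` received the rank-1 tensor shaped `(?,)`, which matches the keyword arguments in the failing call being swapped. Per the TensorFlow 1.x docs, `logits` should be `[batch, num_classes]` and `labels` should be `[batch]`; a minimal sketch with the shapes from the printout (the placeholder names here are illustrative):

    import tensorflow as tf  # TF 1.x, matching the traceback above

    logits = tf.placeholder(tf.float32, shape=[None, 11])  # rank 2: [batch, num_classes]
    labels = tf.placeholder(tf.int32, shape=[None])        # rank 1: [batch]

    # rank(labels) == rank(logits) - 1 must hold, so this argument order type-checks
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)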

CODE IN QUESTION

    # Excerpt from preparedata-manual-upgraded.py; num_features, hidden_1_size,
    # hidden_2_size, num_classes and lr are defined earlier in the script.
    class Model():
        def __init__(self):
            global_step = tf.contrib.framework.get_or_create_global_step()
            self.input_data = tf.placeholder(dtype=tf.float32, shape=[None, num_features])
            self.target_data = tf.placeholder(dtype=tf.int32, shape=[None])
            self.dropout_prob = tf.placeholder(dtype=tf.float32, shape=[])

            with tf.variable_scope("ff"):
                droped_input = tf.nn.dropout(self.input_data, keep_prob=self.dropout_prob)
                layer_1 = tf.contrib.layers.fully_connected(
                    num_outputs=hidden_1_size,
                    inputs=droped_input,
                )
                layer_2 = tf.contrib.layers.fully_connected(
                    num_outputs=hidden_2_size,
                    inputs=layer_1,
                )
                self.logits = tf.contrib.layers.fully_connected(
                    num_outputs=num_classes,
                    activation_fn=None,
                    inputs=layer_2,
                )

            with tf.variable_scope("loss"):
                print("self.logits = ", self.logits)
                print("self.target_data", self.target_data)
                exit()  # debugging stop: halts after printing the two tensors above

                # Line 187 in the traceback: labels/logits appear swapped here
                self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.logits, logits=self.target_data)
                mask = (1 - tf.sign(1 - self.target_data))  # Don't give credit for flat days
                mask = tf.cast(mask, tf.float32)
                self.loss = tf.reduce_sum(self.losses)

            with tf.name_scope("train"):
                opt = tf.train.AdamOptimizer(lr)
                gvs = opt.compute_gradients(self.loss)
                self.train_op = opt.apply_gradients(gvs, global_step=global_step)

            with tf.name_scope("predictions"):
                self.probs = tf.nn.softmax(self.logits)
                self.predictions = tf.argmax(self.probs, 1)
                correct_pred = tf.cast(tf.equal(self.predictions, tf.cast(self.target_data, tf.int64)), tf.float64)
                self.accuracy = tf.reduce_mean(correct_pred)
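For reference, a hedged sketch of what the failing line presumably should look like, guessing the intent from the tensor shapes (not a confirmed fix from the repo author):

    # Presumed intent: integer class ids go to `labels`, network outputs to `logits`
    self.losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=self.target_data,  # int32, shape (?,)
        logits=self.logits,       # float32, shape (?, 11)
    )
    # Side note: `mask` is computed but never applied; if flat days really should
    # not count, the reduction would presumably be tf.reduce_sum(self.losses * mask)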

PRINTED OUTPUT OF VARIABLES BEFORE ENTERING THE FUNCTION

    [[ 2  3 11  6  1  7  7  3  3  4  5]
     [ 2  3  8  7  8  7  6  2  2  2  3]
     [ 1  4  9  5  2 13  5 11  5  3  2]
     [ 1  6  7  8  5 15  6  1  7  4  2]
     [ 1  3  6  2  3  9 10  5  7  4  0]
     [ 0  5 11  3  3  6  6  4  5  6  2]
     [ 1  3 15  3  3 12 12  1  4  2  4]
     [ 0  4  8  3  3  8 12  2 10  3  0]
     [ 0  2 16  6  3  9 12  2  3  0  1]
     [ 0  7 11  5  2  7 10  4  4  4  2]
     [ 3  3 10  6  6  9  6  4  1  4  0]]

    [[91 37 75]
     [76 35 89]
     [92 30 75]]


RELEVANT TENSORFLOW DOC

https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits
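A self-contained check of the documented argument order (TF 1.x; the batch values are made up):

    import numpy as np
    import tensorflow as tf

    logits = tf.constant(np.random.randn(4, 11), dtype=tf.float32)  # [batch, num_classes]
    labels = tf.constant([3, 0, 10, 7], dtype=tf.int32)             # [batch]

    # Correct order builds fine and yields one loss per example
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

    # Swapped order reproduces the reported error at graph-construction time
    try:
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=logits, logits=labels)
    except ValueError as e:
        print(e)  # Rank mismatch: Rank of labels (received 2) should equal ...

    with tf.Session() as sess:
        print(sess.run(losses))  # shape (4,): one cross-entropy value per example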