pkmital / CADL

ARCHIVED: Contains historical course materials/Homework materials for the FREE MOOC course on "Creative Applications of Deep Learning w/ Tensorflow" #CADL
https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Apache License 2.0

Cost function minimization #65

Closed · GeorgeMadlis closed this 7 years ago

GeorgeMadlis commented 7 years ago

I think the cost function minimization example in lecture 2 (input 4) has a bug. Following the initialization values given in the example, one should reach the local minimum within just a few steps. Below is the notebook code:

```python
%matplotlib inline
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
plt.style.use('ggplot')


def myMin(values):
    # Return the index and value of the smallest element.
    import operator
    min_index, min_value = min(enumerate(values), key=operator.itemgetter(1))
    return min_index, min_value


# Define placeholders
hz = tf.placeholder(tf.float32, name='hz')
# hz = 10
ksize = tf.placeholder(tf.int32, name='ksize')
# ksize = 200
lr = tf.placeholder(tf.float32, name='learning_rate')
# learning_rate = 1.0
init_p = tf.placeholder(tf.int32, name='init_p')
# init_p = 120
x = tf.linspace(-1.0, 1.0, ksize)

# Define model
x_cropped = x[init_p:init_p + 2]
cost = tf.multiply(tf.sin(hz * x_cropped), tf.exp(-x_cropped))
costFull = tf.multiply(tf.sin(hz * x), tf.exp(-x))
grad = cost[1:] - cost[:-1]  # finite-difference approximation of the gradient
x_out = tf.multiply(lr, grad)

# Initialize and run
init_op = tf.global_variables_initializer()
init_ps = 120  # int(120/2)
nr_iterations = 15
with tf.Session() as sess:
    sess.run(init_op)
    xs, cost_s = sess.run([x, costFull], {hz: 10, ksize: 200})
    x_in = xs[init_ps]
    x_ser = []
    cost_ser = []
    for i in range(nr_iterations):
        x_ser.append(xs[init_ps])
        x2, cost2 = sess.run([x_out, cost], {hz: 10, init_p: init_ps, x: xs, lr: 1})
        x2 = xs[init_ps] - x2
        dx = np.abs(xs - x2)
        init_ps, _ = myMin(dx)
        cost_ser.append(cost2[0].flatten())

# Prepare for plotting
x_ser = np.array(x_ser).flatten()
cost_ser = np.array(cost_ser).flatten()
cmap = plt.get_cmap('coolwarm')
c_norm = colors.Normalize(vmin=0, vmax=nr_iterations)
scalar_map = cmx.ScalarMappable(norm=c_norm, cmap=cmap)

# Plot results
fig, axF = plt.subplots(2, figsize=(10, 8))
ax = axF[0]
ax.plot(xs, cost_s)
for i in range(nr_iterations):
    ax.plot(x_ser[i], cost_ser[i], 'ro', color=scalar_map.to_rgba(i))
ax.set_ylabel('Cost')
ax.set_xlabel('x')

ax = axF[1]
for i in range(nr_iterations - 1):
    ax.plot(i, x_ser[i + 1] - x_ser[i], 'o', color=scalar_map.to_rgba(i))
ax.set_xlabel('Iteration')
```

lecture2_in4.ipynb.tar.gz
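For reference, here is a minimal sketch (not from the lecture notebook) of the expected behaviour: the same cost, sin(10·x)·exp(−x), minimized with TensorFlow's symbolic gradients instead of the finite difference above. The variable names and the 0.01 step size are illustrative assumptions.

```python
import tensorflow as tf

# Sketch only (assumed names and step size, not the lecture code):
# minimize sin(10*x) * exp(-x), starting near x = 0.2, i.e. roughly xs[120]
# on the [-1, 1] grid used in the notebook above.
x_var = tf.Variable(0.2, dtype=tf.float32)
cost_var = tf.sin(10.0 * x_var) * tf.exp(-x_var)
grad_var = tf.gradients(cost_var, [x_var])[0]   # exact symbolic gradient
step = tf.assign_sub(x_var, 0.01 * grad_var)    # plain gradient descent update

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(15):
        sess.run(step)
        xv, c = sess.run([x_var, cost_var])
        print(i, xv, c)
```

With these assumed settings the iterate settles at the local minimum near x ≈ 0.46 within a handful of steps, which is the behaviour the lecture example should reproduce.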

pkmital commented 7 years ago

I'm sorry, it's hard to understand what is going on in this code and where the bug is. Could you try formatting the code a bit using Markdown: https://guides.github.com/features/mastering-markdown/ - and also point out exactly what the expected outcome and the actual outcome are?
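For example, wrapping the Python in a triple-backtick fence (the snippet below is just an illustration, not taken from the notebook) renders it as a code block:

````markdown
```python
cost = tf.multiply(tf.sin(hz * x_cropped), tf.exp(-x_cropped))
```
````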

pkmital commented 7 years ago

Closing due to inactivity. Please feel free to reopen!