ilame opened this issue 7 years ago
Hi @ilame, your suggestion to add k-fold cross validation is a good one, and I should add it. I've put this under 'enhancements' and will spend some time in the next few weeks adding it to the multiclass SVM.
If you wanted, you could manually split the dataset into k random splits and run the model k times (see the sketch below). I'll work on incorporating scikit-learn's k-fold cross-validation preprocessing into it.
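For reference, a minimal sketch of that manual approach in plain NumPy. Here train_and_score is a hypothetical stand-in for fitting and evaluating the model on one split, and x_vals / y_vals are assumed to be shaped as in the script later in this thread (features (n, d), one-vs-all targets (3, n)):

import numpy as np

def manual_kfold_indices(n_samples, k, seed=0):
    # Shuffle the sample indices once, then cut them into k nearly equal folds
    rng = np.random.RandomState(seed)
    return np.array_split(rng.permutation(n_samples), k)

folds = manual_kfold_indices(len(x_vals), k=5)
scores = []
for i, test_idx in enumerate(folds):
    # Train on all folds except the i-th, evaluate on the i-th
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    scores.append(train_and_score(x_vals[train_idx], y_vals[:, train_idx],
                                  x_vals[test_idx], y_vals[:, test_idx]))
print('mean CV accuracy:', np.mean(scores))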
These past two months have been really busy for me and I won't get any more free time until early July.
Thanks!
Hi @nfmcclure, thank you for your reply. I have tried to modify the code to incorporate scikit-learn's KFold. The y_vals and batch_size dimensions need to be adapted to the split dimensions, but I still can't figure out how.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
from sklearn.model_selection import KFold
ops.reset_default_graph()
# Create graph
sess = tf.Session()
# Load the data
# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([[x[0], x[3]] for x in iris.data])
y_vals1 = np.array([1 if y==0 else -1 for y in iris.target])
y_vals2 = np.array([1 if y==1 else -1 for y in iris.target])
y_vals3 = np.array([1 if y==2 else -1 for y in iris.target])
y_vals = np.array([y_vals1, y_vals2, y_vals3])
# Declare batch size
batch_size = 50
# Initialize placeholders
x_data = tf.placeholder(shape=[None, 2], dtype=tf.float32)
y_target = tf.placeholder(shape=[3, None], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, 2], dtype=tf.float32)
# Extra placeholder for the labels of the prediction_grid points, so that
# validation accuracy can be scored against the held-out fold's labels
grid_target = tf.placeholder(shape=[3, None], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[3,batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
dist = tf.reduce_sum(tf.square(x_data), 1)
dist = tf.reshape(dist, [-1,1])
# Pairwise squared distances: ||xi||^2 - 2*xi.xj + ||xj||^2
sq_dists = tf.add(tf.subtract(dist, tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))), tf.transpose(dist))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Declare function to do reshape/batch multiplication
def reshape_matmul(mat):
    v1 = tf.expand_dims(mat, 1)
    v2 = tf.reshape(v1, [3, batch_size, 1])
    return tf.matmul(v2, v1)
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = reshape_matmul(y_target)
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)),[1,2])
loss = tf.reduce_sum(tf.negative(tf.subtract(first_term, second_term)))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1),[-1,1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1),[-1,1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))),
tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(y_target,b), pred_kernel)
prediction = tf.argmax(prediction_output - tf.expand_dims(tf.reduce_mean(prediction_output, 1), 1), 0)
accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, tf.argmax(grid_target, 0)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop: train and evaluate one model per fold
# Note: iris.target is sorted by class, so shuffle=True is needed or each
# fold would hold out exactly one entire class
kf = KFold(n_splits=3, shuffle=True, random_state=0)
loss_vec = []
train_accuracy = []
valid_accuracy = []
for train_index, test_index in kf.split(x_vals):
    X_train, X_test = x_vals[train_index], x_vals[test_index]
    # y_vals has shape (3, 150), so the folds are selected along axis 1
    y_train, y_test = y_vals[:, train_index], y_vals[:, test_index]
    sess.run(init)  # start each fold from freshly initialized variables
    for i in range(100):
        rand_index = np.random.choice(len(X_train), size=batch_size)
        rand_x = X_train[rand_index]
        rand_y = y_train[:, rand_index]
        sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
        temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
        loss_vec.append(temp_loss)
        # b has shape [3, batch_size], so x_data/y_target must always carry
        # exactly batch_size points; only prediction_grid may vary in size
        train_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
                                                       y_target: rand_y,
                                                       prediction_grid: rand_x,
                                                       grid_target: rand_y})
        train_accuracy.append(train_acc_temp)
        valid_acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
                                                       y_target: rand_y,
                                                       prediction_grid: X_test,
                                                       grid_target: y_test})
        valid_accuracy.append(valid_acc_temp)
        if (i+1) % 25 == 0:
            print('Step #' + str(i+1))
            print('Loss = ' + str(temp_loss))
# Plot train/validation accuracies
plt.plot(train_accuracy, 'k-', label='Training Accuracy')
plt.plot(valid_accuracy, 'r--', label='Validation Accuracy')
plt.title('Train and Validation Set Accuracies')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
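To turn the curves above into a single cross-validated estimate, the per-step validation accuracies can be split back into folds. A small sketch, assuming the loop above appended 3 folds x 100 steps in order:

# valid_accuracy holds 100 per-step values for each of the 3 folds, in order
per_fold = np.reshape(valid_accuracy, (3, 100))
final_scores = per_fold[:, -1]  # last-step validation accuracy of each fold
print('CV accuracy: %.3f +/- %.3f' % (final_scores.mean(), final_scores.std()))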
Hi @nfmcclure, any news about the enhancement? Kind regards
Hi @ilame, sorry for the delay. I'm currently wrapped up in rewriting the book for a version 2. I'll get around to the enhancements (like this one) at the start of October.
Hi Nick,
Thank you for remembering the issue. I am still interested in testing the algorithm with TensorFlow using cross validation. I would be grateful if you let me know when it is done.
Kind regards, Ilham
Hi, I have adapted the multiclass SVM code to my dataset and it is working well; however, I don't know how to use k-fold cross validation in the training loop (see the sketch below). Any help or guidance would be much appreciated. Thanks
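A minimal skeleton of the usual pattern, reusing the placeholder and op names from the script earlier in this thread (sess, init, train_step, accuracy, x_data, y_target, prediction_grid, grid_target, batch_size are all assumed to be defined there): re-initialize the variables at the top of every fold, draw training batches from the training indices only, and score once per fold on the held-out split.

from sklearn.model_selection import KFold
import numpy as np

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_index, test_index in kf.split(x_vals):
    X_train, X_test = x_vals[train_index], x_vals[test_index]
    y_train, y_test = y_vals[:, train_index], y_vals[:, test_index]
    sess.run(init)  # fresh weights for every fold
    for step in range(100):
        batch = np.random.choice(len(X_train), size=batch_size)
        rand_x, rand_y = X_train[batch], y_train[:, batch]
        sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    # Score once per fold on the held-out split (the last training batch
    # serves as the kernel's support points, since b is [3, batch_size])
    fold_scores.append(sess.run(accuracy, feed_dict={x_data: rand_x,
                                                     y_target: rand_y,
                                                     prediction_grid: X_test,
                                                     grid_target: y_test}))
print('CV accuracy: %.3f +/- %.3f' % (np.mean(fold_scores), np.std(fold_scores)))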