charlesmartin14 / emf-rbm

Extended Mean Field Restricted Boltzmann Machine
16 stars 11 forks source link

omniglot test does not use rbm features #5

Open charlesmartin14 opened 7 years ago

charlesmartin14 commented 7 years ago

We use the RBM to generate features for the SVM or logistic regression, so we need to transform X_train into features (F_train):

Example:

```python
rbm = BernoulliRBM()
rbm = rbm.fit(X_train)
F_train = rbm.transform(X_train)
F_test = rbm.transform(X_test)
```

```python
classifier = LinearSVC()
classifier.fit(F_train, train_t)
Y_test_rbm_pred = classifier.predict(F_test)
emf_accuracy = accuracy_score(y_pred=Y_test_rbm_pred, y_true=test_t)
```


For the EMF RBM, we need to implement a transform method based on sig_means():

```python
from sklearn.utils.fixes import expit
from sklearn.utils.extmath import safe_sparse_dot

def sig_means(x, b, W):
    a = safe_sparse_dot(x, W.T) + b
    return expit(a, out=a)
```
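A minimal sketch of such a transform method, wrapping sig_means() in the estimator. The attribute names `W` (hidden x visible weights) and `b` (hidden biases) are assumptions for illustration, and `scipy.special.expit` stands in for `sklearn.utils.fixes.expit`, which was removed in later scikit-learn releases:

```python
import numpy as np
from scipy.special import expit  # sklearn.utils.fixes.expit in older releases
from sklearn.utils.extmath import safe_sparse_dot

def sig_means(x, b, W):
    """Sigmoid hidden-unit means: expit(x W^T + b)."""
    a = safe_sparse_dot(x, W.T) + b
    return expit(a, out=a)

class EMF_RBM:
    # fit() etc. elided; W and b are hypothetical attribute names
    def transform(self, X):
        """Map visible units X to hidden-unit feature means."""
        return sig_means(X, self.b, self.W)
```

With this in place, F_train = emf_rbm.transform(X_train) mirrors the BernoulliRBM usage above.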

charlesmartin14 commented 7 years ago

See

RBM_Baseline_Ominglot.ipynb

basaks commented 7 years ago

```python
rbm = BernoulliRBM()
rbm = rbm.fit(X_train)
F_train = rbm.transform(X_train)
F_test = rbm.transform(X_test)
```

```python
classifier = LinearSVC()
classifier.fit(F_train, train_t)
```

I do this following the sklearn example, using a Pipeline.

The statement

```python
classifier = Pipeline(steps=[('rbm', B_rbm), ('logistic', logistic)])
```

does the same.

Followed by

```python
classifier.fit(X_train, Y_train)
Y_test_emf_pred = classifier.predict(X_test)
```

Why do we need the sig_means transform only for the EMF RBM and not for the BernoulliRBM?
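For reference, scikit-learn's BernoulliRBM.transform already returns the sigmoid hidden-unit means, so the sigmoid step is built in there; a quick check of that equivalence:

```python
import numpy as np
from scipy.special import expit
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(10, 6)).astype(float)

rbm = BernoulliRBM(n_components=4, n_iter=5, random_state=0)
rbm.fit(X)

# transform() computes the hidden-unit activation probabilities,
# i.e. expit(X W^T + b) with the fitted weights and hidden biases
manual = expit(X @ rbm.components_.T + rbm.intercept_hidden_)
assert np.allclose(rbm.transform(X), manual)
```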