shimingyoung / mtl

multitask learning
2 stars · 1 fork

TODOs #1

Open shimingyoung opened 6 years ago

shimingyoung commented 6 years ago
  1. check the gradient_log_loss (Rajat)
  2. check the L (Rajat, Shiming)
  3. finalize the dirty_model_logistic function (Shiming, Rajat)
  4. Write a main script to run the dirty model (Shiming)
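For item 1, here is a minimal sketch of what a `gradient_log_loss` for binary logistic regression typically computes, useful as a reference while checking; the function name, the {0, 1} label convention, and the mean scaling are assumptions, not necessarily what the repo uses:

```python
import numpy as np

def gradient_log_loss(X, y, w):
    """Gradient of the mean logistic log loss, labels y in {0, 1}.

    Illustrative reference implementation only; the repo's version may
    differ in label convention or scaling.
    """
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid predictions
    return X.T @ (p - y) / len(y)  # d/dw of the mean log loss
```

Comparing the repo's gradient against a plain formula like this (or a finite-difference check) is a quick way to close out item 1.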
rajathpatel23 commented 6 years ago

Dear Dr. Yang, I have worked on the dirty model logistic function and uploaded my version of the code. Please check it. I will soon upload your version of the dirty model logistic code as well and update this issue.

Thanks, Rajat Patel

rajathpatel23 commented 6 years ago

@shimingyoung Hello Dr. Yang, I have written the function for L as given in the FISTA algorithm paper. L is the Lipschitz constant of the gradient, i.e. L = 2 × the largest eigenvalue of A^T A. The function is as follows:

```python
import numpy as np

def Lipchitz_function(input_matrix):
    # L = 2 * largest eigenvalue of A^T A, the Lipschitz constant of the
    # gradient of ||A w - b||^2. A^T A is symmetric, so eigvalsh is used:
    # it returns real eigenvalues, unlike eig, which may return complex ones.
    product = input_matrix.T @ input_matrix
    eigenvalues = np.linalg.eigvalsh(product)
    return 2.0 * eigenvalues.max()
```
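For context on why the factor of 2 is there: for f(w) = ||A w - b||^2 the gradient is 2 A^T (A w - b), so the gradient difference is 2 A^T A (x - y) and its norm is bounded by 2 λ_max(A^T A) ||x - y||. A self-contained numerical check of that bound (not code from the repo, just a sketch):

```python
import numpy as np

# For f(w) = ||A w - b||^2, grad f(w) = 2 A^T (A w - b), so
# ||grad f(x) - grad f(y)|| = ||2 A^T A (x - y)|| <= 2 lambda_max(A^T A) ||x - y||.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
L = 2.0 * np.linalg.eigvalsh(A.T @ A).max()

x, y = rng.standard_normal(3), rng.standard_normal(3)
grad_diff = np.linalg.norm(2.0 * A.T @ A @ (x - y))
assert grad_diff <= L * np.linalg.norm(x - y) + 1e-12
```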

```python
def adaptive_Lipchitz(old_L, eta):
    # Backtracking update: grow the Lipschitz estimate by a factor eta > 1.
    return eta * old_L
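For reference, the eta * L update above is the backtracking rule from the FISTA paper: L is grown until the quadratic model at y upper-bounds f at the prox point. A hedged sketch of that loop for the least-squares-plus-l1 case; `soft_threshold`, `backtracking_step`, and `lam` are illustrative names, not the repo's:

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def backtracking_step(A, b, y, L, lam, eta=2.0):
    f = lambda w: np.sum((A @ w - b) ** 2)
    grad = 2.0 * A.T @ (A @ y - b)
    while True:
        p = soft_threshold(y - grad / L, lam / L)        # prox step, step size 1/L
        diff = p - y
        Q = f(y) + grad @ diff + 0.5 * L * diff @ diff   # quadratic model at y
        if f(p) <= Q:
            return p, L
        L = eta * L                                      # same update as adaptive_Lipchitz
```

With this rule L never needs to be known in advance: start small and the loop inflates it only as far as the data requires.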