njs03332 / ml_study


2022/05/04~2022/05/11 #28

Open danbi5228 opened 2 years ago

danbi5228 commented 2 years ago
givitallugot commented 2 years ago

4.5.2 Lasso Regression

```python
from sklearn.linear_model import Lasso

lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
```
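As a runnable sketch of the snippet above — assuming simple linear data like the book's running example (the `X`, `y` here are hypothetical stand-ins) — and noting that an `SGDRegressor` with an `l1` penalty behaves much like `Lasso`:

```python
import numpy as np
from sklearn.linear_model import Lasso, SGDRegressor

# hypothetical linear data, similar to the book's running example
np.random.seed(42)
X = 2 * np.random.rand(100, 1)
y = (4 + 3 * X + np.random.randn(100, 1)).ravel()

# Lasso adds an l1 penalty, which tends to zero out the least important weights
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)

# stochastic gradient descent with penalty="l1" gives a similar result
sgd_reg = SGDRegressor(penalty="l1", alpha=0.1, random_state=42)
sgd_reg.fit(X, y)

print(lasso_reg.predict([[1.5]]))
print(sgd_reg.predict([[1.5]]))
```

Both predictions land near the noiseless value 4 + 3 × 1.5 = 8.5, with the penalty shrinking the slope slightly.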

njs03332 commented 2 years ago

4.5.4 Early Stopping

Preparing the data

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

poly_scaler = Pipeline([
    ("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
    ("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
```

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

sgd_reg = SGDRegressor(max_iter=1, tol=-np.inf, warm_start=True, penalty=None,
                       learning_rate="constant", eta0=0.0005)
```

With warm_start=True, each call to fit() continues training from the previous model parameters instead of restarting from scratch

```python
from copy import deepcopy
from sklearn.metrics import mean_squared_error

minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
    sgd_reg.fit(X_train_poly_scaled, y_train)  # continues training where it left off
    y_val_predict = sgd_reg.predict(X_val_poly_scaled)
    val_error = mean_squared_error(y_val, y_val_predict)
    if val_error < minimum_val_error:
        minimum_val_error = val_error
        best_epoch = epoch
        best_model = deepcopy(sgd_reg)  # deepcopy keeps the fitted weights; clone() would return an unfitted copy
```
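Putting the pieces together — a minimal end-to-end sketch, assuming quadratic data like the book's running example (the data generation and `random_state` values here are assumptions, not from the thread):

```python
import numpy as np
from copy import deepcopy
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# hypothetical quadratic data, similar to the book's running example
np.random.seed(42)
X = 6 * np.random.rand(100, 1) - 3
y = (0.5 * X ** 2 + X + 2 + np.random.randn(100, 1)).ravel()
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.5, random_state=10)

poly_scaler = Pipeline([
    ("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
    ("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)

# max_iter=1 + warm_start=True -> each fit() call runs exactly one epoch
sgd_reg = SGDRegressor(max_iter=1, tol=None, warm_start=True, penalty=None,
                       learning_rate="constant", eta0=0.0005, random_state=42)

minimum_val_error = float("inf")
best_epoch, best_model = None, None
for epoch in range(500):
    sgd_reg.fit(X_train_poly_scaled, y_train)  # continues from the previous weights
    val_error = mean_squared_error(y_val, sgd_reg.predict(X_val_poly_scaled))
    if val_error < minimum_val_error:
        minimum_val_error = val_error
        best_epoch = epoch
        best_model = deepcopy(sgd_reg)  # snapshot of the best fitted model so far

print(best_epoch, minimum_val_error)
```

The validation error falls, bottoms out, then creeps back up as the high-degree model starts to overfit; `best_model` holds the snapshot from the minimum.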

danbi5228 commented 2 years ago

4.5.3 Elastic Net
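Elastic Net mixes the Ridge (l2) and Lasso (l1) penalties, with l1_ratio controlling the mix (0 is pure Ridge, 1 is pure Lasso). A minimal sketch with scikit-learn's ElasticNet, assuming the same hypothetical linear data as above:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# hypothetical linear data, similar to the book's running example
np.random.seed(42)
X = 2 * np.random.rand(100, 1)
y = (4 + 3 * X + np.random.randn(100, 1)).ravel()

# l1_ratio=0.5 weights the l1 and l2 penalties equally
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5)
elastic_net.fit(X, y)
print(elastic_net.predict([[1.5]]))
```

The prediction lands near 8.5, close to the Lasso result, since the combined penalty only mildly shrinks the weights at this alpha.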