Hmm... can you share a few lines of code to reproduce the problem?
Sure! Let’s see it with only one reg_lambda, first just like in the README, except setting a single reg_lambda:
import numpy as np
import scipy.sparse as sps
from sklearn.preprocessing import StandardScaler
from pyglmnet import GLM
# create an instance of the Generalized Linear Model
model = GLM(distr='poisson', verbose=True, alpha=0.05, reg_lambda=0.1)
n_samples, n_features = 10000, 100
# coefficients
beta0 = np.random.normal(0.0, 1.0, 1)
beta = sps.rand(n_features, 1, 0.1)
beta = np.array(beta.todense())
# training data
Xr = np.random.normal(0.0, 1.0, [n_samples, n_features])
yr = model.simulate(beta0, beta, Xr)
# testing data
Xt = np.random.normal(0.0, 1.0, [n_samples, n_features])
yt = model.simulate(beta0, beta, Xt)
# fit Generalized Linear Model
scaler = StandardScaler().fit(Xr)
Now fit:
model.fit(scaler.transform(Xr), yr)
Here’s the output; it looks good except for that extra space ;):
Now instantiate another model (or re-instantiate the same one):
model_another = GLM(distr='poisson', verbose=True, alpha=0.05, reg_lambda=0.1)
Now fit either model:
model.fit(scaler.transform(Xr), yr)
Here’s the output; it repeats everything twice:
It looks like the number of repetitions equals the number of models instantiated so far.
Also :) remove that extra spacing after dL/L:...
If I instantiate the same model again, the verbose output adds one more repetition:
It adds one more every time I instantiate a model; it doesn’t even have to be the same model again.