ZeonlungPun opened 8 months ago
This is because of a wrong condition I used in a previous version of lassonet. I used to check `loss == loss + 1` to detect infinite numbers instead of `torch.isfinite(loss)`.
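To illustrate the difference (a standalone sketch, not the package's actual code): in float32, a large but still finite loss like 3.2e9 already satisfies `loss == loss + 1` because of rounding, while NaN never does, so the old check could fire on big finite losses and still miss NaN; `torch.isfinite` handles both cases.

```python
import torch

# Sketch: compare the old check (loss == loss + 1) with torch.isfinite.
for value in [1.0, 3.2e9, float("inf"), float("nan")]:
    loss = torch.tensor(value)  # float32 by default
    old_flag = bool(loss == loss + 1)          # also True for large finite values
    new_flag = not bool(torch.isfinite(loss))  # True only for inf and NaN
    print(f"loss={value!r}: old_flag={old_flag}, new_flag={new_flag}")
```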
Could you test with the latest version I just uploaded to PyPI?
That being said, your loss still looks very large. Did you actually normalize inputs?
Of course I have normalized the inputs, and I am using this code:

```python
from lassonet import LassoNetRegressorCV

model = LassoNetRegressorCV()  # LassoNetRegressorCV
path = model.fit(X_train, y_train)
print("Best model scored", model.score(X_test, y_test))
print("Lambda =", model.best_lambda_)
```

However, my input's shape is (20000, 30000).
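For reference, a minimal sketch of the kind of normalization meant here (assuming scikit-learn standardization, fit on the training split only):

```python
from sklearn.preprocessing import StandardScaler

# Standardize features to zero mean and unit variance
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```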
The number of samples is irrelevant, as the MSE has `reduction="mean"`.
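A quick sketch of that point: with `reduction="mean"` the squared errors are averaged over samples, so repeating the same data many times leaves the loss unchanged.

```python
import torch
from torch.nn.functional import mse_loss

pred, target = torch.randn(100, 1), torch.randn(100, 1)
small = mse_loss(pred, target, reduction="mean")
# 200x more samples (the same rows repeated) -> identical mean loss
large = mse_loss(pred.repeat(200, 1), target.repeat(200, 1), reduction="mean")
print(small.item(), large.item())
```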
Did you test with the latest version?
Yes, I have tried the latest version. At the beginning the loss is normal, but when the new fit begins, the loss explodes:

```
……
epoch: 850 loss: 0.017978345975279808
epoch: 851 loss: 0.017944464460015297
epoch: 852 loss: 0.0179106704890728
epoch: 853 loss: 0.017876965925097466
epoch: 854 loss: 0.017843332141637802
epoch: 855 loss: 0.017809787765145302
epoch: 0 loss: 0.017776312306523323
epoch: 1 loss: 5.919191360473633
epoch: 2 loss: 245.20724487304688
epoch: 3 loss: 37423.44140625
epoch: 4 loss: 10632257.0
Loss is 3204740096.0
Did you normalize input?
Loss: 3204740096.0
l2_regularization: 0.3105020225048065
l2_regularization_skip: 575.5364379882812
```
I think you are using an older version, because the `epoch:` and `loss:` lines were removed in the previous version on PyPI. I just added some additional logging for the automatically selected value of lambda_start. Could you test again with:
pip install git+https://github.com/lasso-net/lassonet
and use `verbose=2` as a parameter?
I have followed your tips, but the same error happened.
Could you try to manually set lambda_start to some larger value, like 100?
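Something like this, for example (a sketch; 100 is only a trial value to see whether the explosion goes away, not a recommendation):

```python
from lassonet import LassoNetRegressorCV

model = LassoNetRegressorCV(lambda_start=100, verbose=2)
path = model.fit(X_train, y_train)
```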
The same error happened …… I think it may be something related to the huge shape of the dataset; I have tested that when the shape is (2000, 3000), everything is normal.
Can you post the logging output?
Hey @louisabraham, what else was changed in 0.0.15? After 0.0.15, LassoNetRegressor keeps returning None for the model's state_dict, even though with the exact same settings 0.0.14 returns the model fine. What were the updates between 0.0.14 and 0.0.15, in addition to the auto logging, that could have caused this?
```
Loss is 15310032732160.0
Did you normalize input?
Traceback (most recent call last):
  File "D:\anaconda\envs\newtorch\lib\site-packages\IPython\core\interactiveshell.py", line 3397, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "
```
I am having the same problem. My dataset shape is (74, 201376). I also tried the suggestions you gave above (install the latest version, set lambda_start larger, verbose=2), but I still get the error.
```python
model = LassoNetRegressorCV(lambda_start=500, verbose=2)
path = model.fit(x_train, y_train)
```

```
Choosing lambda with cross-validation:   0%|          | 0/5 [00:00<?, ?it/s]
Initialized dense model
100 epochs, val_objective 9.32e+01, val_loss 9.32e+01, regularization 3.11e+02, l2_regularization 1.83e-01
Loss is nan
Did you normalize input?
Loss: inf
l2_regularization: tensor(inf)
l2_regularization_skip: tensor(4.4145e+17)
Choosing lambda with cross-validation:   0%|          | 0/5 [29:49<?, ?it/s]

AssertionError                            Traceback (most recent call last)
Cell In[8], line 5
      2 x_train, x_test, y_train, y_test = train_test_split(x_sd, y_sd, train_size=0.8, random_state=1)
      4 model = LassoNetRegressorCV(lambda_start=500, verbose=2)
----> 5 path = model.fit(x_train, y_train)

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\lassonet\interfaces.py:935, in BaseLassoNetCV.fit(self, X, y)
    926 def fit(
    927     self,
    928     X,
    929     y,
    930 ):
    931     """Train the model.
    932     Note that if lambda_ is not given, the trained model
    933     will most likely not use any feature.
    934     """
--> 935     self.path(X, y, return_state_dicts=False)
    936     return self

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\lassonet\interfaces.py:870, in BaseLassoNetCV.path(self, X, y, return_state_dicts)
    867     split_lambdas.append(hist[-1].lambda_)
    868     split_scores.append(model.score(X[test_index], y[test_index]))
--> 870     path = super().path(
    871         X[train_index],
    872         y[train_index],
    873         return_state_dicts=False,  # avoid memory cost
    874         callback=callback,
    875     )
    876     self.raw_paths_.append(path)
    878 # build final path

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\lassonet\interfaces.py:471, in BaseLassoNet.path(self, X, y, X_val, y_val, lambda_seq, lambda_max, return_state_dicts, callback, disable_lambda_warning)
    469 if self.model.selected_count() == 0:
    470     break
--> 471 last = self._train(
    472     X_train,
    473     y_train,
    474     X_val,
    475     y_val,
    476     batch_size=self.batch_size,
    477     lambda_=current_lambda,
    478     epochs=self.n_iters_path,
    479     optimizer=optimizer,
    480     patience=self.patience_path,
    481     return_state_dict=return_state_dicts,
    482 )
    483 if is_dense and self.model.selected_count() < X_train.shape[1]:
    484     is_dense = False

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\lassonet\interfaces.py:317, in BaseLassoNet._train(self, X_train, y_train, X_val, y_val, batch_size, epochs, lambda_, optimizer, return_state_dict, patience)
    314     loss += ans.item() * batch_size / n_train
    315     return ans
--> 317 optimizer.step(closure)
    318 model.prox(
    319     lambda_=lambda_ * optimizer.param_groups[0]["lr"],
    320     M=self.M,
    321 )
    323 if epoch == 0:
    324     # fallback to running loss of first epoch

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\torch\optim\optimizer.py:484, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\torch\optim\optimizer.py:89, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\torch\optim\sgd.py:112, in SGD.step(self, closure)
    110 if closure is not None:
    111     with torch.enable_grad():
--> 112         loss = closure()
    114 for group in self.param_groups:
    115     params: List[Tensor] = []

File D:\Anaconda\envs\lassonet8262\Lib\site-packages\lassonet\interfaces.py:312, in BaseLassoNet._train.<locals>.closure()

AssertionError:
```
Are you able to share this dataset? Or reproduce on a public dataset?
Also, just a hunch, but is your data in float64?
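For reference, checking (and, purely as an experiment, changing) the dtype would look like this; whether float32 vs. float64 matters here is just the hunch above, not an established fix:

```python
import numpy as np

print(x_train.dtype)  # e.g. float64 if the data came from pandas or svmlight loading
x_train32 = x_train.astype(np.float32)  # cast and re-run the fit as a test
```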
I am honored to receive your reply. This dataset is pyrim from the LIBSVM website; referring to some of the literature, I used polynomial features (degree=5) to expand the data dimensions. I am learning feature selection on ultra-high-dimensional datasets, so I recently tried the LassoNet method to select features.
pyrim dataset's website: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
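For anyone trying to reproduce, a sketch of that preprocessing (the local file name is an assumption; pyrim has 74 samples and 27 features, and a degree-5 polynomial expansion with the bias column gives exactly C(32, 5) = 201376 columns, matching the shape reported above):

```python
from sklearn.datasets import load_svmlight_file
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# pyrim downloaded from the LIBSVM regression datasets page
X, y = load_svmlight_file("pyrim")      # file name is an assumption
X = X.toarray()                         # (74, 27)

X_poly = PolynomialFeatures(degree=5).fit_transform(X)  # (74, 201376)

x_sd = StandardScaler().fit_transform(X_poly)
y_sd = StandardScaler().fit_transform(y.reshape(-1, 1)).ravel()
print(x_sd.shape)  # (74, 201376)
```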
Thank you very much. Do you maybe have an MRE (minimal reproducible example)?
How do you get 201376 features?
My feature number is 30000, and I get an error:

```
Loss is 511581280.0
Did you normalize input?
Choosing lambda with cross-validation:   0%|          | 0/5 [01:12<?, ?it/s]
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 3, in
    path = model.fit( x, y)
  File "/opt/conda/lib/python3.10/site-packages/lassonet/interfaces.py", line 744, in fit
    self.path(X, y, return_state_dicts=False)
  File "/opt/conda/lib/python3.10/site-packages/lassonet/interfaces.py", line 679, in path
    path = super().path(
  File "/opt/conda/lib/python3.10/site-packages/lassonet/interfaces.py", line 472, in path
    last = self._train(
  File "/opt/conda/lib/python3.10/site-packages/lassonet/interfaces.py", line 331, in _train
    optimizer.step(closure)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 373, in wrapper
    out = func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/optim/sgd.py", line 66, in step
    loss = closure()
  File "/opt/conda/lib/python3.10/site-packages/lassonet/interfaces.py", line 326, in closure
    assert False
AssertionError
```
However, when the feature number is 1000, this error does not occur.
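To help pin this down, here is a hedged attempt at an MRE using random, already standard-normal data. The sample count (100) and the use of default settings are assumptions; only the feature counts mirror the reports above.

```python
import numpy as np
from lassonet import LassoNetRegressorCV

rng = np.random.default_rng(0)

def run(n_samples, n_features):
    # Standard-normal data is already zero mean / unit variance
    X = rng.standard_normal((n_samples, n_features), dtype=np.float32)
    y = rng.standard_normal(n_samples).astype(np.float32)
    model = LassoNetRegressorCV(verbose=2)
    model.fit(X, y)

run(100, 1000)    # reported to work fine
run(100, 30000)   # reported to hit "Loss is ..." / AssertionError
```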