(luan) Atlas:LAMDA-SSL wainer$ python Examples/FixMatch_BreastCancer.py
Traceback (most recent call last):
  File "/Users/wainer/Dropbox/alunos/luan/LAMDA-SSL/Examples/FixMatch_BreastCancer.py", line 64, in <module>
    model.fit(X=labeled_X,y=labeled_y,unlabeled_X=unlabeled_X)
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/DeepModelMixin.py", line 326, in fit
    self.init_train_dataloader()
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/DeepModelMixin.py", line 243, in init_train_dataloader
    self._labeled_dataloader, self._unlabeled_dataloader = self._train_dataloader.init_dataloader(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataloader/TrainDataloader.py", line 344, in init_dataloader
    self.labeled_dataloader = self.labeled_dataloader.init_dataloader(dataset=self.labeled_dataset,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataloader/LabeledDataloader.py", line 86, in init_dataloader
    self.dataloader= DataLoader(dataset=self.dataset,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 245, in __init__
    raise ValueError('prefetch_factor option could only be specified in multiprocessing.'
ValueError: prefetch_factor option could only be specified in multiprocessing.let num_workers > 0 to enable multiprocessing, otherwise set prefetch_factor to None.
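For context, torch 2.x validates these two DataLoader arguments together: when num_workers=0 (single-process loading), prefetch_factor must be left as None. A minimal pure-Python sketch of that check (the real logic lives in torch.utils.data.DataLoader.__init__; this only mimics it to show which combinations are accepted):

```python
def check_loader_args(num_workers=0, prefetch_factor=None):
    """Simplified mimic of the torch 2.x DataLoader argument check."""
    if num_workers == 0 and prefetch_factor is not None:
        # Prefetching only makes sense when worker processes do the loading.
        raise ValueError(
            "prefetch_factor option could only be specified in multiprocessing. "
            "let num_workers > 0 to enable multiprocessing, "
            "otherwise set prefetch_factor to None.")
    return True

# Combinations torch 2.x accepts:
check_loader_args(num_workers=0, prefetch_factor=None)  # single-process loading
check_loader_args(num_workers=2, prefetch_factor=2)     # multi-process loading
```

So any fix on the LAMDA-SSL side presumably has to pass prefetch_factor=None whenever num_workers=0, rather than forwarding torch 1.x's old default of prefetch_factor=2.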
I have been altering the obvious things, such as the default prefetch_factor and num_workers, but after two hours of this I still get a problem somewhere. Below is my last attempt, which creates the Dataloaders with what I believe are the appropriate num_workers and prefetch_factor for the FixMatch_BreastCancer.py code, but I am not sure my modifications are correct. Someone more competent could probably make these changes properly...
(luan) Atlas:progs wainer$ python a2.py
Traceback (most recent call last):
  File "/Users/wainer/Dropbox/alunos/luan/progs/a2.py", line 82, in <module>
    model.fit(X=labeled_X,y=labeled_y,unlabeled_X=unlabeled_X)
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/DeepModelMixin.py", line 335, in fit
    self.fit_epoch_loop(valid_X,valid_y)
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/DeepModelMixin.py", line 311, in fit_epoch_loop
    self.fit_batch_loop(valid_X,valid_y)
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/DeepModelMixin.py", line 280, in fit_batch_loop
    for (lb_idx, lb_X, lb_y), (ulb_idx, ulb_X, _) in zip(self._labeled_dataloader, self._unlabeled_dataloader):
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
    ^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index) # may raise StopIteration
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
    ~~~~~~~~~~~~^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataset/LabeledDataset.py", line 217, in __getitem__
    Xi, yi = self.apply_transform(Xi, yi)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataset/LabeledDataset.py", line 185, in apply_transform
    _X = self._transform(X, item)
    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataset/LabeledDataset.py", line 130, in _transform
    X=self._transform(X,item)
    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Dataset/LabeledDataset.py", line 132, in _transform
    X = transform(X)
    ^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/Transformer.py", line 18, in __call__
    return self.fit_transform(X,y,fit_params=fit_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/sklearn/utils/_set_output.py", line 140, in wrapped
    data_to_wrap = f(self, X, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Base/Transformer.py", line 30, in fit_transform
    return self.fit(X=X,y=y,fit_params=fit_params).transform(X)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/sklearn/utils/_set_output.py", line 140, in wrapped
    data_to_wrap = f(self, X, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wainer/miniconda3/envs/luan/lib/python3.11/site-packages/LAMDA_SSL/Transform/ToTensor.py", line 51, in transform
    X=torch.Tensor(X)
    ^^^^^^^^^^^^^^^
TypeError: new(): data must be a sequence (got Image)
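This second failure looks independent of the dataloader settings: torch.Tensor() accepts sequences and ndarrays but not PIL Image objects, so whatever LAMDA-SSL's ToTensor receives here has to be converted first (with numpy available, np.asarray(X) before torch.Tensor(X) would be the usual fix). A dependency-free sketch of such a guard, using crude duck-typing for a PIL-like image (the helper name and the FakeImage stand-in are mine, purely for illustration):

```python
def coerce_to_sequence(X):
    """Turn a PIL-Image-like object into a nested list of pixel rows before
    tensor construction; anything else is passed through unchanged."""
    if hasattr(X, "getdata") and hasattr(X, "size"):  # crude PIL duck-typing
        w, h = X.size
        pixels = list(X.getdata())                    # row-major pixel values
        X = [pixels[r * w:(r + 1) * w] for r in range(h)]
    return X

# Minimal stand-in for a 2x2 grayscale PIL Image (assumption: real code uses PIL)
class FakeImage:
    size = (2, 2)                   # (width, height)
    def getdata(self):
        return [10, 20, 30, 40]

print(coerce_to_sequence(FakeImage()))  # → [[10, 20], [30, 40]]
```

If this diagnosis is right, the change belongs inside LAMDA_SSL/Transform/ToTensor.py, just before the torch.Tensor(X) call.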
I would guess that the problem is with the torch 2.x version, but I am not sure.
I just installed LAMDA-SSL from GitHub. It installed the newest versions of all packages, including torch==2.0.1 (pip freeze below).
I cannot reproduce the Example that uses deep learning; Assemble and the other non-deep algorithms work fine, but the deep-learning one fails as shown above.
pip freeze: