Hi @WeGlove, thank you for your interest in the Adapt library and for reporting this strange bug. Your "debugging" explanation seems interesting, but I am not sure the problem is there, because sklearn's RandomForestClassifier has a "sample_weight" argument in its "fit" method, which avoids the need for bootstrapping.
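For reference, here is a small, self-contained illustration (synthetic data, unrelated to your setup) of passing sample weights directly to RandomForestClassifier.fit instead of resampling the training set:

```python
# Passing sample weights directly to RandomForestClassifier.fit:
# no resampling of the training set is needed, so no class can be dropped.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
y = rng.randint(0, 5, size=200)          # 5 classes
sample_weight = rng.rand(200)            # e.g. instance weights from boosting

clf = RandomForestClassifier(random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
print(clf.classes_)                      # all 5 classes are present
```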
I tried to reproduce the bug with an unbalanced class dataset, but I did not manage to.
Can you please tell me which versions of adapt and sklearn you are using? How are your classes encoded, as integers or strings? Do you use numpy arrays or pandas DataFrames for the X, y inputs?
Can you please share a little example where the bug happens (with simulated data and a fixed random seed)?
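Something along these lines would be ideal (a sketch only, assuming the adapt.instance_based.TrAdaBoost signature with estimator, Xt, yt, n_estimators and random_state; please adapt it to your actual data shapes and versions):

```python
# Sketch of a minimal reproducible example: simulated, imbalanced 5-class data
# with a fixed seed, fed to TrAdaBoost wrapped around a RandomForestClassifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from adapt.instance_based import TrAdaBoost

rng = np.random.RandomState(42)

def make_data(n):
    X = rng.randn(n, 10)
    # imbalanced classes: class 4 is rare
    y = rng.choice(5, size=n, p=[0.3, 0.3, 0.2, 0.15, 0.05])
    return X, y

X, y = make_data(500)      # labeled source data
Xt, yt = make_data(100)    # labeled target data

model = TrAdaBoost(RandomForestClassifier(random_state=0),
                   Xt=Xt, yt=yt, n_estimators=10, random_state=0)
model.fit(X, y)
```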
Best,
Hello everyone,
I have run into the following problem. I have labeled source data X, y and labeled target data Xt, yt, and would like to use TrAdaBoost in combination with a sklearn RandomForest.
Now, this works most of the time; however, TrAdaBoost sometimes crashes. See this stack trace:
I have done some digging and was able to pin it down to the base.py module, specifically the fit_estimator function of BaseAdaptEstimator. This section seems to be the problem:
What seems to happen is that np.random.choice sometimes draws a resample that does not contain any data for one of the classes. Thus, the estimator is fitted on only 4 of my 5 classes, leading to the problems above.
The data isn't completely balanced, if that makes a difference; however, there is definitely data for each class.
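For illustration, here is a simplified, hypothetical sketch of that effect (not the actual adapt code): when training indices are redrawn with np.random.choice according to the instance weights, a rare class can receive no draws at all.

```python
# Simplified sketch of the suspected mechanism (not the actual adapt code):
# weighted resampling with np.random.choice can miss a rare class entirely.
import numpy as np

rng = np.random.RandomState(7)
y = np.array([0]*40 + [1]*40 + [2]*40 + [3]*40 + [4]*3)   # class 4 is rare
weights = np.ones(len(y)) / len(y)                        # uniform instance weights

resampled = rng.choice(len(y), size=len(y), replace=True, p=weights)
print(np.unique(y[resampled]))   # class 4 may be absent from the resample
```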
Am I doing something wrong or is this something that can just happen?