Closed jimthompson5802 closed 6 years ago
The current work-around is to specify `test_prediction_method='all_data_model'` when creating the `ModelTrainer` object:
```python
this_model = ModelTrainer(
    ModelClass=ThisModel,                            # model algorithm
    model_params=dict(n_estimators=200, n_jobs=-1),  # hyper-parameters
    test_prediction_method='all_data_model',
    model_id='L0XTC1',                               # model identifier
    feature_set='KFS02'                              # feature set to use
)
```
Looks like this might be related to this Python issue on macOS. That seems to be the case because I can save models whose resulting size on disk is just under 2GB; however, once the model size grows past 2GB, `pickle.dump()` fails.
Fix implemented based on this discussion.
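For reference, the usual work-around from that kind of discussion is to avoid any single `write()`/`read()` call larger than 2GB by splitting the pickle byte stream into smaller chunks. A minimal sketch is below; the helper names `chunked_pickle_dump` / `chunked_pickle_load` are hypothetical and are not the actual code in `model_stacking.py`:

```python
import os
import pickle

# Assumption: on affected macOS/Python builds, a single write() or read()
# above roughly 2GB fails, so we cap each I/O call below that limit.
MAX_BYTES = 2**31 - 1

def chunked_pickle_dump(obj, path):
    """Serialize obj in memory, then write the bytes in sub-2GB chunks."""
    data = pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL)
    with open(path, "wb") as f:
        for i in range(0, len(data), MAX_BYTES):
            f.write(data[i:i + MAX_BYTES])

def chunked_pickle_load(path):
    """Read the file back in sub-2GB chunks and unpickle the result."""
    size = os.path.getsize(path)
    buf = bytearray()
    with open(path, "rb") as f:
        while len(buf) < size:
            buf.extend(f.read(MAX_BYTES))
    return pickle.loads(bytes(buf))
```

Note this sketch still builds the full byte string in memory before writing; it only avoids oversized individual I/O calls, which is the failure mode described above.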
For this model specification, I receive this error message when calling `this_model.trainModel()`. The code fragment in question is in `model_stacking.py`.