mci-s opened this issue 6 years ago (status: Open)
@mci-s Hi, do you have any solution for this? I have the same problem and I think it's related to RAM. If you have found a solution, please share.
@Nurka11 Yes, it was a memory issue. I ultimately went with a solution that took significantly less memory. You can set `max_leaf_nodes` to limit the size to some degree, but the `.mlmodel` size will still be very large. I don't believe there is a solution that doesn't involve changing your approach.
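A minimal sketch of the workaround mentioned above (not the original poster's code). `max_leaf_nodes` is a real `DecisionTreeClassifier` parameter that caps the number of leaves, which bounds the size of the exported tree; the data shape here is an illustrative stand-in.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; shapes are illustrative only.
X, y = make_classification(n_samples=5000, n_features=15,
                           n_informative=10, random_state=0)

# max_leaf_nodes grows the tree best-first and stops at 64 leaves,
# so the serialized model stays correspondingly small.
clf = DecisionTreeClassifier(max_depth=7, max_leaf_nodes=64, random_state=0)
clf.fit(X, y)

print(clf.get_n_leaves())  # guaranteed to be at most 64
```

Fewer leaves means fewer branch nodes in the converted `.mlmodel`, which is why this reduces (but does not eliminate) the size problem.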
This is still an issue with coremltools 4.1, macOS 11.3 Beta and scikit-learn 0.19.2. If you're not using Python 2.7, you need to change `string.lowercase` to `string.ascii_lowercase`.
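For anyone hitting the same porting snag: `string.lowercase` existed only in Python 2, while `string.ascii_lowercase` works in both Python 2 and 3.

```python
import string

# string.lowercase was removed in Python 3; the portable name is
# string.ascii_lowercase, which exists in Python 2 as well.
print(string.ascii_lowercase)  # abcdefghijklmnopqrstuvwxyz

# On Python 3 the old attribute is simply gone.
assert not hasattr(string, "lowercase")
```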
I'm not sure this is an issue worth looking into. The synthetic data generated by the example has 1,000 different labels, and I don't think a decision tree that predicts 1,000 different classes is a practical case.
I've trained a model using scikit-learn's `DecisionTreeClassifier` on a dataset with 1,600,000 rows, 15 features, and `max_depth=7`. When I try to convert it with coremltools and the resulting model exceeds about 2 MB, I get the following error:
```
malloc: *** error for object 0x7fb096a0f738: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
```
I might be doing this part wrong, but when I try to log and debug, I get:

```
Segmentation fault: 11
```
To recreate:
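(The original reproduction snippet did not survive extraction. Below is a hypothetical, scaled-down sketch of the kind of setup described above: a multi-class tree converted with coremltools' scikit-learn converter. The data, label count, and file name are assumptions, not the poster's actual code; the conversion step is guarded because it requires coremltools on macOS.)

```python
# Hypothetical reproduction sketch, NOT the original poster's code.
# Scaled down from the reported 1,600,000 rows x 15 features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((10_000, 15))            # report used 1,600,000 rows
y = rng.integers(0, 100, size=10_000)   # many classes, as in the report

clf = DecisionTreeClassifier(max_depth=7, random_state=0)
clf.fit(X, y)

try:
    import coremltools
    # Converting a fitted scikit-learn tree to .mlmodel is the step
    # that reportedly crashes once the model grows past ~2 MB.
    mlmodel = coremltools.converters.sklearn.convert(clf)
    mlmodel.save("tree.mlmodel")
except ImportError:
    print("coremltools not installed; skipping conversion")
```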