I am using TPOT for AutoML and am unable to convert the resulting model to PyTorch; I get the following error:
Unable to find converter for model type <class 'tpot.builtins.stacking_estimator.StackingEstimator'>.
It usually means the pipeline being converted contains a
transformer or a predictor with no corresponding converter implemented.
Please fill an issue at https://github.com/microsoft/hummingbird.
Traceback (most recent call last):
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/NumtraBackendHB-0.3-py3.6.egg/automl/ModelPrediction.py", line 85, in getPrediction
model_torch = convert(sklearn_model, 'pytorch', extra_config={"n_features":col_len})
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 431, in convert
return _convert_common(model, backend, test_input, device, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 392, in _convert_common
return _convert_sklearn(model, backend, test_input, device, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/convert.py", line 97, in _convert_sklearn
topology = parse_sklearn_api_model(model, extra_config)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 60, in parse_sklearn_api_model
outputs = _parse_sklearn_api(scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 232, in _parse_sklearn_api
outputs = sklearn_api_parsers_map[tmodel](scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 278, in _parse_sklearn_pipeline
inputs = _parse_sklearn_api(scope, step[1], inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 234, in _parse_sklearn_api
outputs = _parse_sklearn_single_model(scope, model, inputs)
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 254, in _parse_sklearn_single_model
alias = get_sklearn_api_operator_name(type(model))
File "/anaconda/envs/numtra_env/lib/python3.6/site-packages/hummingbird/ml/supported.py", line 385, in get_sklearn_api_operator_name
raise MissingConverter("Unable to find converter for model type {}.".format(model_type))
hummingbird.ml.exceptions.MissingConverter: Unable to find converter for model type <class 'tpot.builtins.stacking_estimator.StackingEstimator'>.
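For context, the pipeline is produced by a standard TPOT run, roughly like this (a simplified sketch; the dataset, split, and TPOT settings here are placeholders rather than my exact configuration):

from tpot import TPOTClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Placeholder data, only to illustrate the workflow
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20, random_state=42, verbosity=2)
tpot.fit(X_train, y_train)

# The fitted scikit-learn pipeline that I later try to convert with Hummingbird
sklearn_model = tpot.fitted_pipeline_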
The complete pipeline that TPOT returns is:
Pipeline(steps=[('stackingestimator',
StackingEstimator(estimator=DecisionTreeClassifier(max_depth=9,
min_samples_leaf=16,
min_samples_split=16))),
('gaussiannb', GaussianNB())])
I am converting it like this:
model_torch = convert(sklearn_model, 'pytorch', extra_config={"n_features":col_len})
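For completeness, the code around the failing call looks roughly like this (a sketch; sklearn_model is the fitted pipeline from TPOT as above, and col_len here is just the number of input feature columns):

from hummingbird.ml import convert

# sklearn_model is tpot.fitted_pipeline_ from the TPOT run above
# col_len is the number of feature columns in the training data
col_len = X_train.shape[1]

# This is the call that raises MissingConverter for the StackingEstimator step
model_torch = convert(sklearn_model, 'pytorch', extra_config={"n_features": col_len})

Is there a way to convert a TPOT pipeline containing StackingEstimator with Hummingbird, or a recommended workaround?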