yzhao062 / pyod

A Python Library for Outlier and Anomaly Detection, Integrating Classical and Deep Learning Techniques
http://pyod.readthedocs.io

Can we use Pyspark dataframe as input #461

Open rupesh15203 opened 1 year ago

rupesh15203 commented 1 year ago

I am exploring this library with a PySpark DataFrame. The following is the code I use in my experiment; I used the joblibspark library so that Spark serves as my backend during processing:

from sklearn.utils import parallel_backend
from joblibspark import register_spark
from pyod.models import mcd

# Register Spark as a joblib parallel backend.
register_spark()

mcd_model = mcd.MCD(random_state=42)
with parallel_backend('spark', n_jobs=-1):
    mcd_model.fit(df)  # df is a pyspark.sql.DataFrame

I tried to run this code on a dummy dataset and ran into the following problem:

ValueError: Expected 2D array, got scalar array instead:
array=DataFrame[length: double, width: double, height: double, engine-size: int, price: int].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.

It seems only the schema information of the DataFrame is being considered.
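
For context: PyOD validates inputs with scikit-learn-style array checks, and NumPy treats the pyspark.sql.DataFrame object itself as a single scalar value (which is why the schema repr shows up in the error), so the estimator never sees the distributed rows. A minimal sketch of the input contract fit actually expects:

import numpy as np
from pyod.models.mcd import MCD

# PyOD estimators want an in-memory, array-like 2D input of shape
# (n_samples, n_features): a NumPy array or pandas DataFrame, not a
# pyspark.sql.DataFrame handle.
X = np.random.rand(1000, 5)  # dummy stand-in for real features
clf = MCD(random_state=42)
clf.fit(X)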

If I pass df.collect() while fitting the model, I am able to run the code successfully, but that will fail for large datasets. Can somebody guide me on how to use a PySpark DataFrame directly to train the model?
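
For data that still fits in driver memory, a common workaround (a sketch, assuming df is the Spark DataFrame from the snippet above) is toPandas() rather than collect(), since fit expects a 2D array-like:

# Workaround sketch: materialize the Spark DataFrame on the driver first.
# Only viable while the whole dataset fits in driver memory.
pandas_df = df.toPandas()
mcd_model = mcd.MCD(random_state=42)
mcd_model.fit(pandas_df.values)  # 2D array of shape (n_samples, n_features)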

Oukaishen commented 1 year ago

Hi there, I have the same problem. My data is roughly 10 billion rows by ~100 features, which is obviously not suitable for a single machine. I hope there will be some guidance on Spark or PySpark support.
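
At that scale, one pattern worth trying (a sketch, not an official PyOD integration; the input path, sampling fraction, and outlier_score column below are hypothetical) is to fit the detector on a driver-sized sample, broadcast the fitted model, and score the full dataset partition-by-partition with Spark's mapInPandas:

from typing import Iterator

import pandas as pd
from pyod.models.mcd import MCD
from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType, StructField

spark = SparkSession.builder.getOrCreate()

# Hypothetical input: a wide numeric feature table too large to collect().
df = spark.read.parquet("features.parquet")
feature_cols = df.columns

# 1) Fit the detector on a sample small enough for the driver.
sample = df.sample(fraction=0.001, seed=42).toPandas()
detector = MCD(random_state=42)
detector.fit(sample[feature_cols].values)

# 2) Broadcast the fitted model and score each partition in parallel.
model_bc = spark.sparkContext.broadcast(detector)

def score_partition(batches: Iterator[pd.DataFrame]) -> Iterator[pd.DataFrame]:
    model = model_bc.value
    for pdf in batches:
        pdf = pdf.copy()
        pdf["outlier_score"] = model.decision_function(pdf[feature_cols].values)
        yield pdf

out_schema = df.schema.add(StructField("outlier_score", DoubleType()))
scored = df.mapInPandas(score_partition, schema=out_schema)

Whether a sample supports MCD's covariance estimate well enough depends on the data, but this keeps training on one machine while Spark parallelizes the scoring pass.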