SeldonIO / seldon-core

An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
https://www.seldon.io/tech/products/core/

Can't use multiprocessing.Pool in predict function #5489

Open RyanZgmzpn opened 7 months ago

RyanZgmzpn commented 7 months ago

```python
import logging
import sys
from multiprocessing import Pool

import requests

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))

logger = logging.getLogger("router-model.log")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)


def get_data(url_data):
    r = requests.post(url_data[0], json=url_data[1], headers={'Content-Type': 'application/json'})
    return r.json()


class RouterModel(object):
    """
    Model template. You can load your model parameters in __init__ from a location
    accessible at runtime.
    """

    # there may be multiple urls to access
    URLS = ['http://device-risk-20-ato-default-a.seldon.svc.cluster.local:9000/api/v0.1/predictions']

    def __init__(self):
        """
        Add any initialization parameters. These will be passed at runtime from the
        graph definition parameters defined in your seldondeployment kubernetes
        resource manifest.
        """
        logger.info("Initializing")
        self.pool = Pool(3)

    def predict(self, X, features_names=None):
        data = {"data": {"ndarray": X.tolist(), "names": features_names}}
        # data will be split for different urls
        resps = self.pool.map(get_data, [(url, data) for url in RouterModel.URLS])
        return [r for r in resps]
```

I was trying to use multiprocessing.Pool to make requests to several URLs in parallel, but it seems that self.pool.map never returns.
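
Editor's note: since the fan-out here is purely I/O-bound HTTP calls, one workaround worth sketching (an assumption on my part, not a confirmed fix from the Seldon maintainers) is to swap the process pool for a thread pool, so no child processes need to be created inside the serving worker:

```python
# Sketch of a thread-based variant, assuming the goal is only parallel HTTP fan-out.
# ThreadPoolExecutor starts its worker threads lazily on first submit, so nothing is
# spawned until the serving process actually handles a request.
import requests
from concurrent.futures import ThreadPoolExecutor


def get_data(url_data):
    # same helper as in the snippet above: POST the payload to one URL, return the JSON body
    r = requests.post(url_data[0], json=url_data[1], headers={'Content-Type': 'application/json'})
    return r.json()


class RouterModel(object):
    URLS = ['http://device-risk-20-ato-default-a.seldon.svc.cluster.local:9000/api/v0.1/predictions']

    def __init__(self):
        self.executor = ThreadPoolExecutor(max_workers=3)

    def predict(self, X, features_names=None):
        data = {"data": {"ndarray": X.tolist(), "names": features_names}}
        futures = [self.executor.submit(get_data, (url, data)) for url in RouterModel.URLS]
        return [f.result() for f in futures]
```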

archwolf118 commented 4 months ago

Same question: I can't use multiprocessing methods inside the model class's functions. Could anyone help solve this problem? If I use Flask directly, there is no problem using multiprocessing in functions. Thank you!
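
Editor's note: another workaround sometimes tried for this kind of hang (untested here, and it assumes the pool is being created before the serving process forks) is to defer Pool creation until the first predict call, so the pool is built inside the process that actually handles requests:

```python
# Untested sketch: create the Pool lazily in predict rather than in __init__.
from multiprocessing import Pool

import requests


def get_data(url_data):
    r = requests.post(url_data[0], json=url_data[1], headers={'Content-Type': 'application/json'})
    return r.json()


class RouterModel(object):
    URLS = ['http://device-risk-20-ato-default-a.seldon.svc.cluster.local:9000/api/v0.1/predictions']

    def __init__(self):
        self.pool = None  # no Pool yet; a pool created here may belong to a different process

    def predict(self, X, features_names=None):
        if self.pool is None:
            self.pool = Pool(3)  # created lazily in the serving process
        data = {"data": {"ndarray": X.tolist(), "names": features_names}}
        return self.pool.map(get_data, [(url, data) for url in RouterModel.URLS])
```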