pushpendrapratap closed this issue 3 years ago
@pushpendrapratap would you please try to set the util file path in the `initialize()` method as `utils = os.path.join(model_dir, "utils.py")`, then import `utils` where it's needed. Please let me know if that helps.
@pushpendrapratap:
The model archive is extracted into a temporary directory (`model_dir`) and added to the PYTHONPATH, but it is not a package, hence the relative import statement fails.
Use the following import statement
from utils import set_seed
instead of
from .utils import set_seed
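A minimal sketch of the suggested fix (the `Handler` class and the context object here are illustrative stand-ins; a real handler would subclass `ts.torch_handler.base_handler.BaseHandler`):

```python
import sys


class Handler:
    """Illustrative handler skeleton, not a full TorchServe handler."""

    def initialize(self, context):
        # TorchServe unpacks the .mar contents (including --extra-files)
        # into model_dir and makes it importable for the worker.
        model_dir = context.system_properties.get("model_dir")
        if model_dir not in sys.path:
            sys.path.insert(0, model_dir)
        # model_dir is a plain directory, not a package, so use an
        # absolute import of the extra file:
        from utils import set_seed      # works
        # from .utils import set_seed   # fails: no known parent package
        set_seed(0)
        self.initialized = True
```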
@HamidShojanazeri thanks for the suggestion, but in my case it will not help. Here is the reason: let's say I have a script `utils.py` which in turn imports other Python scripts (e.g., `video_transforms.py`), and all of those imports are relative, not absolute. In this scenario, your suggested approach will not work.
@harshbafna absolute import works. My bad, I don't think I clarified it well: I was actually looking for a better way to achieve the same thing. The reason is that, just to deploy my model using torchserve, I have to create lots of duplicate scripts (`--extra-files`) and change relative imports to absolute imports in all of them.
Can you suggest any resources for torchserve best practices?
Thanks
@pushpendrapratap, there are a couple of approaches to supplying multiple Python dependency files while creating the mar file:

- Supply them through `--extra-files`; they are extracted into the `model-dir`, which you can read from the context while initializing the handler.
- If it is a Python project with a `setup.py`, you can create a binary, supply it with `--extra-files`, and add a `requirements.txt` file with an entry for your project binary.

For more details on model-specific `requirements.txt`, refer to the documentation.
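A sketch of the second approach (the package name `my_utils` and version are hypothetical; `--requirements-file` and the `install_py_dep_per_model` setting are the relevant TorchServe options, assuming a TorchServe version that supports per-model requirements):

```shell
# Build a wheel from the project's setup.py (package name is hypothetical)
python setup.py bdist_wheel

# requirements.txt bundled into the .mar references the wheel by file name
echo "my_utils-0.1.0-py3-none-any.whl" > requirements.txt

# Archive the model with the wheel and the requirements file
torch-model-archiver --model-name r2plus1d --version 1.0 \
    --serialized-file ./models/r2plus1d_8_kinetics_100_epochs.pt \
    --extra-files ./dist/my_utils-0.1.0-py3-none-any.whl \
    --requirements-file requirements.txt \
    --handler ./src/model_handler.py --export-path ./models/ -f

# TorchServe only installs per-model requirements when this is enabled
echo "install_py_dep_per_model=true" >> config.properties
```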
@harshbafna Thanks for your response. Yes, I think for the time being I'll have to go with the above approach. But I really wish the same codebase could be used to serve the inference requests (as with Flask or Starlette, where all I have to do is add an `app.py` file and I'm good to go).
Closing this issue, as absolute import fixes the above issue.
In case you came here looking for a way to define your handler in nbdev (which uses relative imports), this is what worked for me:

```python
...
import sys  # needed for the sys.path manipulation below
from fastai.vision.all import *  # just including this here for the Path import

## Standalone boilerplate before relative imports
## Allows the nbdev relative imports to work with torchserve
if not __package__ and '__file__' in locals():
    DIR = Path(__file__).resolve().parent
    sys.path.insert(0, str(DIR.parent))
    __package__ = DIR.name

from my_nbdev_package.my_module import *
...
```
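To see why this boilerplate works, here is a self-contained, hypothetical reproduction (the names `mypkg`, `handler.py`, and `helpers.py` are invented for the demo): it fabricates a tiny package on disk whose module does a relative import, then runs that module as a plain top-level script, which is exactly the situation a TorchServe worker puts handler files in.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Fabricate a package: mypkg/{__init__.py, helpers.py, handler.py}
root = Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "helpers.py").write_text("VALUE = 42\n")
(pkg / "handler.py").write_text(
    "import sys\n"
    "from pathlib import Path\n"
    "\n"
    "# Same boilerplate as above: patch __package__ when run as a script\n"
    "if not __package__ and '__file__' in locals():\n"
    "    DIR = Path(__file__).resolve().parent\n"
    "    sys.path.insert(0, str(DIR.parent))\n"
    "    __package__ = DIR.name\n"
    "\n"
    "from .helpers import VALUE  # relative import now resolves\n"
    "print(VALUE)\n"
)

# Without the boilerplate this would die with "attempted relative import
# with no known parent package"; with it, the relative import resolves.
out = subprocess.run([sys.executable, str(pkg / "handler.py")],
                     capture_output=True, text=True)
```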
Context

- torchserve version: 0.2.0
- torch-model-archiver version: 0.2.0
- torch version: 1.6.0
- torchvision version: 0.7.0
- java version: openjdk 11.0.8 2020-07-14
- Operating System and version: Ubuntu 20.04
- python: /home/pushpendra/Documents/personal-src/ocr/venv/bin/python3

Your Environment

- Installed using source? no
- Deploying using docker container? no
- CPU or GPU environment? CPU (but it doesn't matter)
- Using a default/custom handler? custom handler
- What kind of model is it (e.g., vision, text, audio)? vision
- Local models from model-store or public url? local models
- config.properties: using default one
Expected Behavior

`torchserve --start --ncs --model-store ./models/ --models r2plus1d=r2plus1d.mar` should successfully start the server.

Current Behavior

`ImportError: attempted relative import with no known parent package`, and the backend worker process died.

Possible Solution

I can supply every dependent script separately (via `--extra-files`) to the torch-model-archiver, but that will be a huge change. So I just want to know: is there any workaround or a better way to achieve the same?

Steps to Reproduce
Directory layout: `models/` (serialized model), `src/` (handler and utility scripts).

torch-model-archiver --model-name r2plus1d --version 1.0 --serialized-file ./models/r2plus1d_8_kinetics_100_epochs.pt --extra-files ./src/utils.py --handler ./src/model_handler.py --export-path ./models/ -f
torchserve --start --ncs --model-store ./models/ --models r2plus1d=r2plus1d.mar

Relevant code snippets and files
model_handler.py (truncated):

```python
import logging

import torch
import numpy as np
from PIL import Image
import torch.nn as nn
from torchvision import transforms as torchTF
from torchvision.transforms import Compose
from decord import VideoReader, cpu
from smart_open import open as sm_open
from ts.torch_handler.base_handler import BaseHandler

from .utils import set_seed  # <-- the failing relative import

set_seed(0)
logger = logging.getLogger(__name__)

DEFAULT_MEAN = (0.43216, 0.394666, 0.37645)
DEFAULT_STD = (0.22803, 0.22145, 0.216989)


class ModelHandler:
    def __init__(self):
        self.model = None
        self.context = None
        self.manifest = None
        self.initialized = False
        self.device, self.map_location = None, "cpu"
        self.batch_size, self.sample_length, self.width, self.height = 8, 50, 132, 132
        self.transform = Compose(
            [
                torchTF.Lambda(lambda fms: [torchTF.Resize(128)(fm) for fm in fms]),
                torchTF.Lambda(lambda fms: [torchTF.CenterCrop(112)(fm) for fm in fms]),
                # ... snippet truncated; includes a [T, H, W, C] -> [T, C, H, W] step
            ]
        )

    # ... remaining handler methods truncated ...


_service = ModelHandler()


def handle(data, context):
    try:
        if not _service.initialized:
            _service.initialize(context)
        if data is None:
            return None
        return _service.handle(data, context)
    except Exception as err:
        logger.error("error_type: {}, error: {}".format(type(err), err))
        raise err
```
utils.py:

```python
import os
import random

import torch
import numpy as np


def set_seed(seed: int):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
```
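As a usage sketch of the seeding idea, here is a standard-library-only reduction (`seed_stdlib` is a hypothetical cut-down variant; the torch and numpy calls from `set_seed` are omitted so the sketch runs without those packages installed):

```python
import os
import random


def seed_stdlib(seed: int):
    # Standard-library subset of set_seed: reseed the PRNG and pin hashing
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)


seed_stdlib(0)
first = [random.random() for _ in range(3)]
seed_stdlib(0)
again = [random.random() for _ in range(3)]
assert first == again  # reseeding reproduces the identical sequence
```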