njgerner opened this issue 4 years ago
I am also experiencing this issue
@njgerner, thanks for bringing this to our attention. We'll look into this.
This does not work as expected on my local machine, but Local Mode is necessary for us for automated testing in CI.
Just to make sure I understand your use case: you're using Local Mode just for the purpose of automated testing?
Correct
I'm trying to test my pipeline in local mode and I'm running into the same issue. This should be labeled as a bug instead of a feature request as local mode is clearly not working properly. Local mode is a crucial feature because having to wait for instances to spin up remotely is just not an option during development and testing.
import sagemaker
from sagemaker.pytorch.model import PyTorchModel
from sagemaker.pipeline import PipelineModel
from sagemaker import get_execution_role

role = get_execution_role()

session = sagemaker.LocalSession()
session.config = {"local": {"local_code": True}}

pytorch_model = PyTorchModel(
    model_data="./model.tar.gz",
    framework_version="1.5.0",
    code_location="s3://whatever/sagemaker-code",
    sagemaker_session=session,
    py_version="py3",
    role=role,
    entry_point="inference.py",
)

postproc_model = PyTorchModel(
    model_data="./model.tar.gz",
    framework_version="1.5.0",
    code_location="s3://whatever/sagemaker-code",
    sagemaker_session=session,
    py_version="py3",
    role=role,
    entry_point="postproc.py",
)

pipeline_model = PipelineModel(
    name="whatever",
    role=role,
    sagemaker_session=session,
    models=[pytorch_model, postproc_model],
)

predictor = pipeline_model.deploy(
    instance_type="local",
    initial_instance_count=1,
    wait=False,
    serializer=sagemaker.serializers.JSONSerializer(),
    deserializer=sagemaker.deserializers.JSONDeserializer(),
)
Error:
INFO:sagemaker:Creating model with name: whatever
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-1d360bc14bee> in <module>
4 wait=False,
5 serializer=sagemaker.serializers.JSONSerializer(),
----> 6 deserializer=sagemaker.deserializers.JSONDeserializer()
7 )
~/anaconda3/envs/onmt/lib/python3.6/site-packages/sagemaker/pipeline.py in deploy(self, initial_instance_count, instance_type, serializer, deserializer, endpoint_name, tags, wait, update_endpoint, data_capture_config)
150 self.name = self.name or name_from_image(containers[0]["Image"])
151 self.sagemaker_session.create_model(
--> 152 self.name, self.role, containers, vpc_config=self.vpc_config
153 )
154
~/anaconda3/envs/onmt/lib/python3.6/site-packages/sagemaker/session.py in create_model(self, name, role, container_defs, vpc_config, enable_network_isolation, primary_container, tags)
2142
2143 try:
-> 2144 self.sagemaker_client.create_model(**create_model_request)
2145 except ClientError as e:
2146 error_code = e.response["Error"]["Code"]
TypeError: create_model() missing 1 required positional argument: 'PrimaryContainer'
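The traceback above can be reproduced in miniature. Below is a minimal sketch of the failure mode, using a simplified, hypothetical stand-in for the local-mode client's create_model signature (not the real API): PipelineModel builds a request containing a Containers list, while the local-mode method only declares a PrimaryContainer parameter, so the Containers keyword is swallowed by **kwargs and PrimaryContainer is never supplied.

```python
# Hypothetical, simplified stand-in for the local-mode client's
# create_model; illustrates the signature mismatch, not the real code.
def create_model(ModelName, PrimaryContainer, *args, **kwargs):
    """Accepts only a single PrimaryContainer, as local mode does."""
    return {"ModelName": ModelName, "PrimaryContainer": PrimaryContainer}

# PipelineModel sends a Containers list instead of a PrimaryContainer.
request = {
    "ModelName": "whatever",
    "Containers": [{"Image": "pytorch-img"}, {"Image": "postproc-img"}],
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/dummy",
}

try:
    create_model(**request)
except TypeError as e:
    # Containers and ExecutionRoleArn are absorbed by **kwargs, but
    # PrimaryContainer is missing, producing the error in the traceback.
    print(e)
    # -> create_model() missing 1 required positional argument: 'PrimaryContainer'
```

This is why the error surfaces only for PipelineModel: a single Model produces a PrimaryContainer request that the local client accepts, whereas multiple containers take the Containers code path.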
I am also having this issue! Is there any update?
This is a critical problem with Local Mode. Without fixing this, we can't test a pipeline locally.
Does the issue still exist?
Describe the bug
I am unable to run a SparkMLModel via a PipelineModel in local mode, because the LocalSession method create_model does not support the Containers parameter.

To reproduce
Will give the error result below.

Expected behavior
LocalSession should support the Containers parameter.

Screenshots or logs

System information
A description of your system. Please provide:
- SageMaker Python SDK version: 2.4.0
- Framework name: PySpark
- Framework version: 2.4.6
- Python version: 3.7.0
- CPU or GPU: CPU
- Custom Docker image (Y/N): N (uses SparkML Serving: https://sagemaker.readthedocs.io/en/stable/frameworks/sparkml/sagemaker.sparkml.html)

Additional context
This does not work as expected on my local machine, but Local Mode is necessary for us for automated testing in CI.