aws/sagemaker-pytorch-inference-toolkit

Toolkit for allowing inference and serving with PyTorch on SageMaker. Dockerfiles used for building SageMaker PyTorch Containers are at https://github.com/aws/deep-learning-containers.

Apache License 2.0 · 134 stars · 72 forks
Issues
| # | Title | Author | Status | Comments |
|---|-------|--------|--------|----------|
| #167 | Disabling token auth and enabling model API by default | udaij12 | closed 1 month ago | 1 |
| #166 | Ignore zombie processes when detecting TorchServe status | namannandan | closed 6 months ago | 7 |
| #165 | Zombie process exception | 5agado | opened 6 months ago | 4 |
| #164 | Improve error logging when invoking custom handler methods | namannandan | closed 8 months ago | 0 |
| #163 | Improve debuggability during model load and inference failures | namannandan | closed 8 months ago | 0 |
| #162 | Fix dependency versions | sachanub | closed 11 months ago | 0 |
| #161 | Unpin dependency versions | sachanub | closed 11 months ago | 0 |
| #160 | Include 2.1.0 container for integration tests | sachanub | closed 11 months ago | 0 |
| #159 | Upgrade psutil version | sachanub | opened 1 year ago | 0 |
| #158 | Cleanup to remove EIA integration tests and other unused files/code | sachanub | closed 1 year ago | 0 |
| #157 | Migrate base inference toolkit scripts and unit tests | sachanub | closed 1 year ago | 0 |
| #156 | ModuleNotFoundError: Sagemaker only copies entry_point file to /opt/ml/code/ instead of the holy-cloned source code | celsofranssa | opened 1 year ago | 0 |
| #155 | Add new unit and integration tests | sachanub | closed 1 year ago | 4 |
| #154 | Fix integration tests and update Python versions | sachanub | closed 1 year ago | 0 |
| #153 | Dummy PR 2 - Include base inference toolkit | sachanub | closed 1 year ago | 0 |
| #152 | Dummy PR to understand PyTorch inference toolkit build process | sachanub | closed 1 year ago | 1 |
| #151 | remove unused file | lxning | closed 1 year ago | 0 |
| #150 | reuse sagemaker-inference's requirements.txt installation logic | humanzz | closed 1 year ago | 0 |
| #149 | Reuse the requirements.txt installation logic from sagemaker-inference-toolkit | humanzz | closed 1 year ago | 0 |
| #148 | Enable vmargs argument for torchserve | carljeske | opened 1 year ago | 0 |
| #147 | deprecation: deleting outdated docker file for pytorch-eia-1.3.1 | pravali96 | opened 1 year ago | 0 |
| #146 | Enable telemetry. | chen3933 | closed 1 year ago | 0 |
| #145 | fix: Remove stale docker files | ivan-khvostishkov | opened 1 year ago | 0 |
| #143 | Add support for SAGEMAKER_MAX_PAYLOAD_IN_MB | namannandan | closed 1 year ago | 2 |
| #142 | Documentation for inference.py `transform_fn` | david-waterworth | opened 1 year ago | 0 |
| #141 | Incorrect reporting of memory utilisation | david-waterworth | opened 1 year ago | 0 |
| #140 | Modify log4j2.xml to remove dependency on javascript. | chen3933 | closed 1 year ago | 0 |
| #139 | fix_build | chen3933 | closed 1 year ago | 1 |
| #138 | fix_build | chen3933 | closed 1 year ago | 0 |
| #137 | Prepend `code_dir` to `sys.path` rather than `append` | davidthomas426 | opened 1 year ago | 0 |
| #136 | add vmargs=-XX:-UseContainerSupport in config | lxning | closed 1 year ago | 3 |
| #135 | Document TorchServe version compatibility | davidthomas426 | opened 1 year ago | 2 |
| #134 | Specify batch size for MME | austinmw | opened 2 years ago | 0 |
| #133 | [Question] Using model.mar with built-in handler script | austinmw | opened 2 years ago | 0 |
| #132 | configure min torchserve version dependency | rohithkrn | closed 1 year ago | 0 |
| #131 | Update PyTorch Inference toolkit to log telemetry metrics | sachanub | closed 2 years ago | 0 |
| #130 | Fix: Don't load default model in MME mode | nikhil-sk | closed 2 years ago | 10 |
| #129 | MMS mode in inference does not support in GPU instance | holopekochan | closed 2 years ago | 0 |
| #128 | Is this Dockerfile compatible with sagemaker elastic inference | fymaterials98 | opened 2 years ago | 0 |
| #127 | Multi gpu support | waytrue17 | opened 2 years ago | 16 |
| #126 | how to use gpu in sagemaker instance | haiderasad | opened 2 years ago | 1 |
| #125 | using cuda enabled pytorch image | haiderasad | opened 2 years ago | 0 |
| #124 | Document how to locally run the container | ManuelVs | opened 2 years ago | 2 |
| #123 | add environment variable "OMP_NUM_THREADS" | lxning | opened 2 years ago | 0 |
| #122 | Use an overriden transform function to support batch inference in pytorch | nikhil-sk | closed 2 years ago | 5 |
| #121 | Batch Inference does not work when using the default handler | nikhil-sk | opened 2 years ago | 0 |
| #120 | Test config change | mseth10 | opened 2 years ago | 2 |
| #119 | Update CI to use PyTorch 1.10 | mseth10 | closed 2 years ago | 16 |
| #118 | pass model directory as input to torchserve | mseth10 | closed 2 years ago | 80 |
| #117 | Launch TorchServe without repackaging model contents | fm1ch4 | opened 2 years ago | 5 |