aws / sagemaker-inference-toolkit
Serve machine learning models within a 🐳 Docker container using 🧠 Amazon SageMaker.
Apache License 2.0 · 372 stars · 82 forks
Issues
| # | Title | Author | State | Age | Comments |
|---|-------|--------|-------|-----|----------|
| #137 | Is streaming supported? | inf3rnus | open | 4 months ago | 0 |
| #136 | Add support for downloading inference script and code (entry_point) from s3 | urirosenberg | open | 6 months ago | 0 |
| #135 | (WIP) Feature: fix `ENABLE_MULTI_MODEL` setting | taylorfturner | open | 8 months ago | 1 |
| #134 | Triton container documentation implies py38 | david-waterworth | open | 8 months ago | 0 |
| #133 | Fix zombie process exception | sachanub | closed | 9 months ago | 5 |
| #132 | psutil 5.9.6 seems to be throwing ZombieProcess when retrieving the mms process | charlietruong-wk | open | 9 months ago | 5 |
| #131 | Enhancing Multi-Model Server Logging Support - JSON Format Integration | ViliamSerecun | open | 1 year ago | 0 |
| #130 | feat: support codeartifact for installing requirements.txt packages | humanzz | closed | 1 year ago | 7 |
| #129 | Add support for SAGEMAKER_MODEL_SERVER_TIMEOUT_SECONDS variable | davidthomas426 | closed | 1 year ago | 1 |
| #128 | Tensorflow Inference toolkit | vincentvic | open | 1 year ago | 0 |
| #127 | Support for parquet encoder and decoder | lorenzwalthert | open | 1 year ago | 0 |
| #126 | chore: translate readme to turkish | CaglarTaha | open | 1 year ago | 0 |
| #125 | fix: transform function to support proper batch inference | taepd | open | 1 year ago | 3 |
| #124 | OOM errors creating an endpoint for LLMs | jsleight | closed | 1 year ago | 1 |
| #123 | Support user-defined batch inference logic | dgcnz | open | 1 year ago | 0 |
| #121 | handle max_request_size param set by SM platofrm | rohithkrn | closed | 1 year ago | 1 |
| #120 | feature: Add support for py38, py39 and py310 | maaquib | closed | 1 year ago | 2 |
| #119 | relax retrying dependency | jakob-keller | closed | 1 year ago | 7 |
| #118 | Add environment variable VMARGS | nikhil-sk | closed | 1 year ago | 8 |
| #117 | Document default_pre_model_fn and default_model_warmup_fn | l3ku | open | 1 year ago | 0 |
| #116 | stop_server function | Duncan-Haywood | open | 1 year ago | 0 |
| #115 | feature: Write handler to config file and serve model directly from o… | davidthomas426 | closed | 1 year ago | 3 |
| #114 | feature: added model_name param model_server.start_model_server method | andre-marcos-perez | closed | 8 months ago | 0 |
| #113 | Long Model Loading times in Multimodel Server | AlexRaschl | open | 1 year ago | 0 |
| #112 | fix: fix loading user custom script | SuperBo | open | 1 year ago | 3 |
| #111 | [DRAFT] Support CustomAttributes / request context in handler function overrides | athewsey | open | 1 year ago | 1 |
| #110 | Support CustomAttributes in script mode (for PyTorch, HuggingFace, etc) | athewsey | open | 1 year ago | 0 |
| #109 | Pass context from handler service to handler function | waytrue17 | closed | 1 year ago | 25 |
| #108 | Modify transform function to support batch inference | nikhil-sk | closed | 1 year ago | 11 |
| #107 | Add (and test) Support for Python 3.8 | ddluke | open | 2 years ago | 0 |
| #106 | Include "Requires-Python" in source distributions | ddluke | open | 2 years ago | 0 |
| #105 | fix: inference handler issue | Qingzi-Lan | closed | 2 years ago | 5 |
| #104 | feature: preModel and warmup function support | Qingzi-Lan | closed | 2 years ago | 10 |
| #102 | Launch MMS without repackaging model contents | fm1ch4 | closed | 1 year ago | 0 |
| #100 | Issue attaching eia device for huggingface transformers roBERTa model | kjhenner | open | 2 years ago | 0 |
| #99 | Server Timeout Unit is Minutes in MMS, but docstring says Seconds | kastman | closed | 1 month ago | 2 |
| #98 | fix: Add configurable startup timeout | davidthomas426 | closed | 2 years ago | 9 |
| #97 | fix: log4j migration from 1 to 2. Moving properties file to xml | maaquib | closed | 2 years ago | 3 |
| #96 | Dummy pull request to check CI build | davidthomas426 | closed | 2 years ago | 1 |
| #95 | fix: Add formatter to logger with a timestamp. | davidthomas426 | closed | 2 years ago | 2 |
| #94 | Model Server fails to start with multi-model-server version 1.1.5 | joseproura | closed | 2 years ago | 1 |
| #93 | fix: Re-enable output capturing | nikhil-sk | closed | 2 years ago | 3 |
| #92 | fix: Increase timeout for starting model server to 10 minutes | nikhil-sk | closed | 2 years ago | 11 |
| #91 | fix: Increase timeout for starting model server to 10 minutes | davidthomas426 | closed | 2 years ago | 2 |
| #90 | feature: support configuration of extra mms parameters via env | henryhu666 | closed | 2 years ago | 3 |
| #89 | fix: fix whl installation error from inference requirements | yifeim | open | 2 years ago | 4 |
| #88 | fix: Add NOTICE to MANIFEST | BastianZim | closed | 2 years ago | 12 |
| #87 | Add VMARGS to disable the container support. | vdantu | closed | 3 years ago | 2 |
| #86 | Custom `model_fn` function not found when extending the PyTorch inference container | e13h | open | 3 years ago | 0 |
| #85 | Support CodeArtifact repositories for installing Python packages | setu4993 | closed | 1 year ago | 4 |
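Several of the issues above (#86, #109, #111, #117) concern the handler functions that the toolkit lets a user's inference script override: `model_fn`, `input_fn`, `predict_fn`, and `output_fn`. As a minimal sketch of what such a script looks like, here is a stub `inference.py` with dummy bodies; the function names and signatures follow the SageMaker inference-script convention, but the logic inside each stub is illustrative only, not the toolkit's defaults.

```python
# inference.py -- stub of the four override points a custom inference
# script can define for the SageMaker inference toolkit.
import json


def model_fn(model_dir):
    # Load the model artifact from model_dir. Stubbed here: a real
    # script would deserialize framework-specific weights.
    return {"model_dir": model_dir}


def input_fn(request_body, content_type):
    # Deserialize the incoming request payload into model input.
    if content_type == "application/json":
        return json.loads(request_body)
    raise ValueError("Unsupported content type: " + content_type)


def predict_fn(input_data, model):
    # Run inference. Stubbed here: echo the input back.
    return {"echo": input_data, "model_dir": model["model_dir"]}


def output_fn(prediction, accept):
    # Serialize the prediction into the response body.
    if accept == "application/json":
        return json.dumps(prediction)
    raise ValueError("Unsupported accept type: " + accept)
```

At serving time the toolkit's handler service chains these four functions for each request; issues #109 and #111 track passing the request context through to them as well.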