awslabs/multi-model-server
Multi Model Server is a tool for serving neural net models for inference
Apache License 2.0 · 998 stars · 230 forks
Issues
#778 · Batch settings in archive · erandagan · closed 5 years ago · 8 comments
#777 · Updates for initialize() description · ddavydenko · closed 5 years ago · 0 comments
#776 · Model Archive should allow excluding dirs/files · mikeobr · opened 5 years ago · 0 comments
#775 · Containers / Continers typo · NegatioN · closed 5 years ago · 1 comment
#774 · Removing checks from backend. Backend is not standalone and frontend … · vdantu · closed 5 years ago · 0 comments
#773 · Model archiver changes · vdantu · closed 5 years ago · 7 comments
#772 · Update documents to use latest-gpu tag for docker image. · frankfliu · closed 5 years ago · 0 comments
#771 · max_request_size not working · johncolby · closed 5 years ago · 2 comments
#770 · Batch Configuration In Model Archive · erandagan · opened 5 years ago · 14 comments
#769 · Update the beta version · vdantu · closed 5 years ago · 0 comments
#768 · How to enable CORS on mms · MaxTran96 · closed 5 years ago · 5 comments
#767 · How to represent any size in the signature.json file · wangce888 · opened 5 years ago · 7 comments
#766 · Nightly base containers · vdantu · closed 5 years ago · 0 comments
#765 · Configure maximum allowed request size · vdantu · closed 5 years ago · 1 comment
#764 · Base docker files for python2.7 and python3.6 · vdantu · closed 5 years ago · 0 comments
#763 · Configurable response buffer size · erandagan · closed 5 years ago · 2 comments
#762 · Exception thrown on large responses · erandagan · closed 5 years ago · 6 comments
#761 · Handle default invocations model_name · vdantu · closed 5 years ago · 0 comments
#760 · Fix bug #759, reset previous error in handle function. · frankfliu · closed 5 years ago · 1 comment
#759 · A bug occurred while processing an error request · wangce888 · closed 5 years ago · 6 comments
#758 · An error occurred while using requests and postman requests · wangce888 · closed 5 years ago · 2 comments
#757 · Custom serving for non-MXNet or ONNX models -- error handling (with status codes, response body)? · andremoeller · closed 5 years ago · 5 comments
#756 · MMS in single model mode · vdantu · closed 5 years ago · 1 comment
#755 · Fixes WorkerThread race condition bug. · frankfliu · closed 5 years ago · 0 comments
#754 · how to choose gpu in mms. · JustinhoCHN · closed 5 years ago · 10 comments
#753 · Added batching example and documentations · vdantu · closed 5 years ago · 0 comments
#752 · Added handling for accept headers · vdantu · closed 5 years ago · 0 comments
#751 · Configuration max_workers ignored · ThomasDelteil · closed 5 years ago · 3 comments
#750 · getAvailableGpu() inaccurate on Jetson · ThomasDelteil · opened 5 years ago · 3 comments
#749 · Updated docker docs · vdantu · closed 5 years ago · 0 comments
#748 · Fix lstm input json file. · frankfliu · closed 5 years ago · 0 comments
#747 · Added back shufflenet to model zoo · vrakesh · closed 5 years ago · 0 comments
#746 · Revert "Update benchmarkAI script." · frankfliu · closed 5 years ago · 0 comments
#745 · Add missing import in template. · frankfliu · closed 5 years ago · 0 comments
#744 · Move benchmark scripts to deeplearning-benchmark repo. · frankfliu · closed 5 years ago · 0 comments
#743 · Bump up version to 1.0.2 · frankfliu · closed 5 years ago · 0 comments
#742 · How to handle high frequency query? · JustinhoCHN · closed 5 years ago · 1 comment
#741 · Update mms version. · frankfliu · closed 5 years ago · 0 comments
#740 · Update docker files. · frankfliu · closed 5 years ago · 0 comments
#739 · Fix metrics in sample service code. · frankfliu · closed 5 years ago · 0 comments
#738 · [Question] How does one adjust payload size limits for POST (e.g maxFileSize / maxRequestSize)? · fpertl · closed 5 years ago · 5 comments
#737 · Update benchmarkAI script. · frankfliu · closed 5 years ago · 0 comments
#736 · Add example code for machine translation using Sockeye · jamesewoo · closed 5 years ago · 1 comment
#735 · Fix buildspec.yml for missing artifacts files. · frankfliu · closed 5 years ago · 0 comments
#734 · testing code build PR failure case · frankfliu · closed 5 years ago · 0 comments
#733 · Consume Enginename as a string · vdantu · closed 5 years ago · 0 comments
#732 · Batching · miguelvr · closed 5 years ago · 31 comments
#731 · Configuarable response timeout · vdantu · closed 5 years ago · 0 comments
#730 · CORS configuration does not work in docker · OElesin · closed 5 years ago · 4 comments
#729 · pre-fork initialization capabilities. · vdantu · closed 5 years ago · 1 comment