# NVIDIA / tensorrt-laboratory

Explore the Capabilities of the TensorRT Platform

- Website: https://developer.nvidia.com/tensorrt
- License: BSD 3-Clause "New" or "Revised" License
- 261 stars · 50 forks
## Issues (sorted newest first)
| #   | Title | Author | Status | Comments |
| --- | --- | --- | --- | --- |
| #44 | Bump jupyterlab from 0.35.4 to 1.2.21 | dependabot[bot] | opened 3 years ago | 0 |
| #43 | Bump urllib3 from 1.24.2 to 1.26.5 | dependabot[bot] | opened 3 years ago | 0 |
| #42 | Bump py from 1.7.0 to 1.10.0 | dependabot[bot] | opened 3 years ago | 0 |
| #41 | Bump pygments from 2.3.1 to 2.7.4 | dependabot[bot] | opened 3 years ago | 0 |
| #40 | Bump jinja2 from 2.10.1 to 2.11.3 | dependabot[bot] | opened 3 years ago | 0 |
| #39 | when i have multi place use the TRT in different thread, i hit this crash and caught signal: 11 SIGSEGV | bigBuffers | opened 3 years ago | 0 |
| #38 | Bump notebook from 5.7.8 to 6.1.5 | dependabot[bot] | opened 3 years ago | 0 |
| #37 | make install does not behave as expected | ryanolson | opened 3 years ago | 0 |
| #36 | TensorRT-7.0 support | idealboy | closed 3 years ago | 1 |
| #35 | How to write a custom allocator for IGpuAllocator | ray-lee-94 | closed 3 years ago | 1 |
| #34 | run multiple models at one time on xavier | jeansely | closed 4 years ago | 1 |
| #33 | Bump bleach from 3.1.0 to 3.1.1 | dependabot[bot] | closed 4 years ago | 1 |
| #32 | bazel cpuaff | ryanolson | opened 4 years ago | 0 |
| #31 | Bump pillow from 5.3.0 to 6.2.0 | dependabot[bot] | closed 4 years ago | 0 |
| #30 | Build problem | twmht | closed 5 years ago | 1 |
| #29 | Update notebooks | ryanolson | closed 5 years ago | 0 |
| #28 | Creating engines for PyTorch or onnx | mlcoop | closed 5 years ago | 8 |
| #27 | fixes GIL failure on PyInferRunner::Infer | ryanolson | closed 5 years ago | 0 |
| #26 | Release tags? | brianthelion | opened 5 years ago | 1 |
| #25 | One of your dependencies may have a security vulnerability (Jinja2 < 2.10.1) | zeroepoch | closed 5 years ago | 1 |
| #24 | I found that using tensorrt for inference takes more time than using tensorflow directly on GPU | jlygit | closed 5 years ago | 1 |
| #23 | Refactor Memory; Add better support for DLPack and Numpy | ryanolson | closed 3 years ago | 1 |
| #22 | Model chaining example | SlipknotTN | opened 5 years ago | 5 |
| #21 | Enable gRPC deadlines in Client/Server | ryanolson | opened 5 years ago | 1 |
| #20 | Provide K8s Horizontal Pod AutoScaler Example | ryanolson | opened 5 years ago | 0 |
| #19 | Back pressure for thread pool task queue | mrjackbo | closed 5 years ago | 3 |
| #18 | Runtime refactor | ryanolson | closed 5 years ago | 0 |
| #17 | Extract the TensorRT build as part of the version in the FindTensorRT.cmake | ryanolson | closed 5 years ago | 1 |
| #16 | Add TensorRT Inference Server Configurations for GTC Demo | ryanolson | opened 5 years ago | 0 |
| #15 | Post GTC Sides | ryanolson | closed 5 years ago | 0 |
| #14 | Add IServer Interface | ryanolson | opened 5 years ago | 0 |
| #13 | TensorRT::Model::GetBinding should be overloaded to accept input tensor names | ryanolson | closed 5 years ago | 0 |
| #12 | DeserializeEngine should accept raw bytes | ryanolson | closed 5 years ago | 0 |
| #11 | pybind11 | ryanolson | closed 5 years ago | 1 |
| #10 | Unary Forwarder | ryanolson | closed 5 years ago | 0 |
| #9  | 03-batching | mrmeswani | closed 5 years ago | 1 |
| #8  | NVIDIA Inference Server | ryanolson | closed 5 years ago | 0 |
| #7  | Improve Metrics and Dashboard | ryanolson | opened 6 years ago | 0 |
| #6  | Batching Service | ryanolson | closed 6 years ago | 0 |
| #5  | Task Graphs | ryanolson | closed 5 years ago | 0 |
| #4  | Improved network testing | ryanolson | closed 6 years ago | 0 |
| #3  | Updated K8s example | ryanolson | closed 6 years ago | 0 |
| #2  | Prometheus Metrics | ryanolson | closed 6 years ago | 0 |
| #1  | BYO-Memory + Enhancements | ryanolson | closed 6 years ago | 0 |