triton-inference-server / fil_backend
FIL backend for the Triton Inference Server
Apache License 2.0 · 68 stars · 35 forks
Issues
| # | Title | Author | State | Last updated | Comments |
|---|-------|--------|-------|--------------|----------|
| #103 | Test CPU-only inference without visible GPU | wphicks | closed | 3 years ago | 0 |
| #102 | Provide instructions for pulling Triton container | wphicks | closed | 3 years ago | 0 |
| #101 | [Tracker] Reduce installed size of FIL backend | wphicks | closed | 2 years ago | 0 |
| #100 | Ensure move_deps.sh does not pick up any unnecessary libraries | wphicks | closed | 3 years ago | 0 |
| #99 | Merge 21.06.1 hotfix into main | wphicks | closed | 3 years ago | 0 |
| #98 | Correct bad cp command in Dockerfile | wphicks | closed | 3 years ago | 0 |
| #97 | Add release checklist to docs | wphicks | closed | 3 years ago | 2 |
| #96 | Tests failing on "custom" end-to-end builds | wphicks | closed | 3 years ago | 0 |
| #95 | Skip cudaStreamSynchronize if no GPU is visible | hcho3 | closed | 3 years ago | 0 |
| #94 | Use RMM for memory allocation | wphicks | closed | 3 years ago | 0 |
| #93 | Add benchmarking scripts | wphicks | closed | 2 years ago | 0 |
| #92 | Remove guidance in docs on limit for large input arrays | wphicks | closed | 3 years ago | 2 |
| #91 | Set random seed in tests | wphicks | closed | 3 years ago | 0 |
| #90 | Include environment details in CI output | wphicks | closed | 3 years ago | 3 |
| #89 | Fix PEP8 errors and add flake8 to test env | wphicks | closed | 3 years ago | 1 |
| #88 | Generate example models only once and reuse for GPU/CPU tests | wphicks | closed | 3 years ago | 0 |
| #87 | Test end-to-end prediction pipeline with CPU-only machine | hcho3 | closed | 3 years ago | 0 |
| #86 | Add ability to build triton_fil Docker image with cuML nightly | hcho3 | closed | 3 years ago | 2 |
| #85 | Synchronize stream during TritonTensor sync | wphicks | closed | 3 years ago | 0 |
| #84 | Set numpy random state in tests | wphicks | closed | 3 years ago | 0 |
| #83 | PR Pipelines generally triggered twice in CI | wphicks | closed | 3 years ago | 2 |
| #82 | Add ability to perform prediction on CPU | hcho3 | closed | 3 years ago | 3 |
| #81 | Add script for printing environment details in CI | wphicks | closed | 3 years ago | 0 |
| #80 | Tests failing on NVIDIA Tesla T4, AWS G4 instance | hcho3 | closed | 3 years ago | 2 |
| #79 | Update to C++17 | wphicks | closed | 3 years ago | 0 |
| #78 | Segfault on model load when built in Debug mode | wphicks | closed | 3 years ago | 1 |
| #77 | Intermittent failure in cuML RF test with batch size 1 | wphicks | closed | 3 years ago | 2 |
| #76 | Switch to C++17 | wphicks | closed | 3 years ago | 0 |
| #75 | Move all associated repos into organization namespaces | wphicks | closed | 3 years ago | 0 |
| #74 | Add documentation for generating example models | wphicks | closed | 3 years ago | 0 |
| #73 | Update to stable cuML 21.06 | wphicks | closed | 3 years ago | 0 |
| #72 | Condense regression and classification model generation in tests | wphicks | open | 3 years ago | 0 |
| #71 | Update tests to load FIL models directly from TL checkpoints | wphicks | open | 3 years ago | 0 |
| #70 | Add tests for Treelite binary checkpoint files | wphicks | closed | 3 years ago | 0 |
| #69 | Use std::optional instead of raw pointer to optionally store raft handle | hcho3 | closed | 3 years ago | 1 |
| #68 | Add note on layer names | wphicks | closed | 3 years ago | 1 |
| #67 | Use 1 as output dimension in testing non-proba classifiers | wphicks | closed | 3 years ago | 0 |
| #66 | Use BuildKit cache mounts to reduce conda install time in CI | wphicks | closed | 2 years ago | 0 |
| #65 | Create missing directory in Triton base image | wphicks | closed | 3 years ago | 0 |
| #64 | Temporary fix for upstream cuML issues | wphicks | closed | 3 years ago | 3 |
| #63 | Add ability to perform prediction on CPU | hcho3 | closed | 3 years ago | 4 |
| #62 | Add script for linting with clang_format | wphicks | closed | 3 years ago | 1 |
| #61 | Review variables passed from Triton CI and use as required | wphicks | closed | 3 years ago | 1 |
| #60 | Add make target to update rpath based on CMake configuration | wphicks | closed | 2 years ago | 0 |
| #59 | Adjust build options for compatibility with Triton's build.py | wphicks | closed | 3 years ago | 1 |
| #58 | Automate CI provisioning | wphicks | closed | 3 years ago | 1 |
| #57 | Add linting to CI | wphicks | closed | 3 years ago | 1 |
| #56 | DO NOT MERGE: Test Failing CI | wphicks | closed | 3 years ago | 0 |
| #55 | CI test PR | dantegd | closed | 3 years ago | 1 |
| #54 | Add E2E test for Treelite checkpoint model | wphicks | closed | 3 years ago | 0 |