Closed: ZiniuYu closed this pull request 1 year ago.
Please rebuild the image and reply to the community.
Merging #882 (a3d8796) into main (0b293ec) will decrease coverage by 11.27%. The diff coverage is 100.00%.
@@            Coverage Diff             @@
##             main     #882       +/-   ##
===========================================
- Coverage   83.06%   71.78%    -11.28%
===========================================
  Files          22       22
  Lines        1529     1531        +2
===========================================
- Hits         1270     1099      -171
- Misses        259      432      +173
Flag | Coverage Δ |
---|---|
cas | 71.78% <100.00%> (-11.28%) :arrow_down: |
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ |
---|---|
server/clip_server/model/model.py | 75.37% <100.00%> (+0.18%) :arrow_up: |
server/clip_server/executors/clip_tensorrt.py | 0.00% <0.00%> (-94.60%) :arrow_down: |
server/clip_server/model/clip_trt.py | 0.00% <0.00%> (-85.72%) :arrow_down: |
server/clip_server/model/trt_utils.py | 0.00% <0.00%> (-83.52%) :arrow_down: |
server/clip_server/executors/clip_onnx.py | 87.91% <0.00%> (+1.09%) :arrow_up: |
server/clip_server/model/clip_onnx.py | 87.30% <0.00%> (+22.22%) :arrow_up: |
@ZiniuYu Does the hub executor also suffer from this issue?
I just tested both the Torch and ONNX executors; they are both fine.
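For context, such a check can be run as a quick smoke test against a running server. A minimal sketch, assuming a `clip_server` instance was started separately and listens on `grpc://0.0.0.0:51000` (the address and port are assumptions, not part of this PR):

```python
# Sketch of a smoke test against a running clip_server instance.
# Assumption: the server (Torch or ONNX flow) is already up at this address.
from clip_client import Client

client = Client('grpc://0.0.0.0:51000')

# Encode two text prompts; a (2, embedding_dim) array comes back
# if the executor is healthy.
embeddings = client.encode(['a photo of a cat', 'a photo of a dog'])
print(embeddings.shape)
```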
Then I wonder whether this issue comes from CUDA. I would not recommend pinning the CUDA version; our work should run on all stable CUDA versions. So please try to make it work with cuda-11.6.0 rather than freezing the CUDA version.
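As an illustration of that suggestion (not part of this PR): rather than asserting one pinned CUDA version, the server could simply report whichever CUDA runtime the image ships. A minimal sketch, assuming PyTorch is installed in the image:

```python
# Minimal sketch: report the CUDA toolchain the environment actually provides,
# instead of hard-pinning a single version. Assumes PyTorch is installed.
import torch

def report_cuda_environment() -> None:
    """Log the CUDA version PyTorch was built with and whether a GPU is usable."""
    print(f'PyTorch built against CUDA: {torch.version.cuda}')
    print(f'CUDA available at runtime: {torch.cuda.is_available()}')
    if torch.cuda.is_available():
        print(f'Detected device: {torch.cuda.get_device_name(0)}')

if __name__ == '__main__':
    report_cuda_environment()
```

Logging the detected version keeps the image usable across stable CUDA releases while still making version mismatches easy to spot.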
This PR fixes the error in the linked issue.