-
System config:
- CPU arch: x86_64
- GPU: H200
- TensorRT-LLM: v0.14.0
- OS: Ubuntu 22.04
- runtime env: Docker container built from source via the official [build script](https://techcommunity.microsoft.c…
-
### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-rc0
### Custom code
No
### OS pl…
-
The experimental data coming from the camera and the detector have a resolution that is captured by the `float16` format.
This means that the model does not need to be more precise than that and can b…
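As a rough illustration of the point, here is a minimal sketch (assuming NumPy, which is not mentioned above) that inspects the resolution `float16` can represent and shows the round-trip error when a hypothetical measurement is stored in that format:

```python
import numpy as np

# float16 carries roughly 3 decimal digits of precision (eps ~ 1e-3),
# which is taken here to be on the order of the camera/detector resolution.
print(np.finfo(np.float16))

# Hypothetical raw measurements, stored at float16 resolution.
raw = np.array([0.12345678, 1.2345678, 12.345678], dtype=np.float64)
as_fp16 = raw.astype(np.float16)

# The round-trip error is bounded by the float16 resolution, so a model that is
# accurate to this level is as good as the data allows.
print(as_fp16.astype(np.float64) - raw)
```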
-
CPU: x86_64
GPU: NVIDIA H20
CUDA version: 12.4
TensorRT-LLM version: 0.14.0
I followed https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/qwen/README.md to run the Qwen2 0.5B model. The results I ob…
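A minimal sketch of a reference run one might compare the engine output against, assuming the Hugging Face `transformers` package and the `Qwen/Qwen2-0.5B-Instruct` checkpoint (both are assumptions, not taken from the report); the TensorRT-LLM engine itself is built and run per the linked README and is not reproduced here:

```python
# Hypothetical FP16 reference generation with Hugging Face transformers,
# used as a baseline for comparing against the TensorRT-LLM engine output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

prompt = "What is the capital of France?"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding so the comparison with the engine output is deterministic.
output_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```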
-
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_glu_cuda_float16&suite=TestInduct…
-
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_nn_functional_conv3d_cpu_float16&suite=TestInducto…
-
## 🐞Describing the bug
The built-in Mish activation function in `coremltools` introduces significant numerical errors in Core ML models when using 16-bit floating point precision (FLOAT16) on configu…
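A minimal repro sketch along these lines, assuming a PyTorch module that uses `nn.Mish` and conversion through `coremltools` with FP16 compute precision; the model, input shapes, and tolerances from the report are not shown, so the ones below are placeholders:

```python
import numpy as np
import torch
import coremltools as ct

class MishNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.act = torch.nn.Mish()

    def forward(self, x):
        return self.act(x)

x = torch.randn(1, 64)
traced = torch.jit.trace(MishNet().eval(), x)

# Convert once with FP32 and once with FP16 compute precision to expose the gap.
mlmodel_fp32 = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=x.shape)],
                          convert_to="mlprogram",
                          compute_precision=ct.precision.FLOAT32)
mlmodel_fp16 = ct.convert(traced, inputs=[ct.TensorType(name="x", shape=x.shape)],
                          convert_to="mlprogram",
                          compute_precision=ct.precision.FLOAT16)

# Compare both Core ML outputs against the traced PyTorch reference.
ref = traced(x).detach().numpy()
out32 = list(mlmodel_fp32.predict({"x": x.numpy()}).values())[0]
out16 = list(mlmodel_fp16.predict({"x": x.numpy()}).values())[0]
print("fp32 max abs err:", np.abs(out32 - ref).max())
print("fp16 max abs err:", np.abs(out16 - ref).max())
```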
-
```sql
-- float16 embedding column
create virtual table vec_movies using vec0(
  synopsis_embedding float16[768]
);

-- bfloat16 variant (given a distinct table name so both statements can run)
create virtual table vec_movies_bf16 using vec0(
  synopsis_embedding bfloat16[768]
);
```
Also `vec_quan…
-
Float mismatch error after float16 quantization: Data in initializer 'onnx::Add_2877' has element type tensor(float16) but usage of initializer in graph expects tensor(float)
`model_fp16 = float16.…
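A minimal sketch of the kind of conversion the snippet appears to be doing, assuming the `float16` helper comes from `onnxconverter_common` (not confirmed by the report) and using a placeholder model path; `keep_io_types=True` is a commonly used option when converted initializers and their consumers end up with mismatched float/float16 element types:

```python
import onnx
from onnxconverter_common import float16

# Load the FP32 model (placeholder path) and convert weights and ops to float16.
model = onnx.load("model_fp32.onnx")

# keep_io_types=True leaves graph inputs/outputs as float32 and inserts casts,
# which avoids some float vs. float16 mismatches after conversion.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

onnx.checker.check_model(model_fp16)
onnx.save(model_fp16, "model_fp16.onnx")
```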
-
The code below defines a custom TIR function that computes the atan of each element of a buffer of shape (20,) and then calls it from a Relax function. When trying to build the module using relax.bui…
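The original module is not shown in full, so the following is only a guessed-at sketch of a module with that shape, assuming the TVMScript API (`tvm.script.ir/tir/relax`): a TIR `atan` kernel over a (20,) buffer called from a Relax function via `call_tir`, followed by the `relax.build` step the report says fails.

```python
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R, tir as T

@I.ir_module
class Module:
    @T.prim_func
    def tir_atan(x: T.Buffer((20,), "float32"), y: T.Buffer((20,), "float32")):
        # Element-wise atan over the 20-element buffer.
        for i in range(20):
            with T.block("atan"):
                vi = T.axis.spatial(20, i)
                y[vi] = T.atan(x[vi])

    @R.function
    def main(x: R.Tensor((20,), "float32")) -> R.Tensor((20,), "float32"):
        cls = Module
        y = R.call_tir(cls.tir_atan, (x,), out_sinfo=R.Tensor((20,), "float32"))
        return y

# The report describes the failure happening at this build step.
ex = relax.build(Module, target="llvm")
```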