-
Package: libtorch[core,cuda,dist,fftw3,leveldb,llvm,opencv,vulkan,xnnpack,zstd]:x64-windows@2.1.2#1
**Host Environment**
- Host: x64-windows
- Compiler: MSVC 19.39.33521.0
- vcpkg-tool vers…
-
Hi
I'm working inside the NGC container image ``nvcr.io/nvidia/pytorch:23.04-py3`` with an RTX 3090 GPU.
- These are my steps:
- Clone the YOLOv5 repository:
```
git clone https://githu…
```
-
When I ran the project code, I found that DLA0 and GPU resources were being used, but DLA1 resources were not being used.
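For comparison, which DLA core a workload runs on is selected explicitly; nothing schedules work across both cores automatically. A hedged sketch with `trtexec` (the model path and precision flags are placeholders, not taken from this project):

```shell
# Build/run the same network on each DLA core; --useDLACore picks DLA0 vs DLA1.
# --allowGPUFallback lets layers DLA cannot run fall back to the GPU.
trtexec --onnx=model.onnx --int8 --useDLACore=0 --allowGPUFallback
trtexec --onnx=model.onnx --int8 --useDLACore=1 --allowGPUFallback
```

Seeing DLA1 idle usually just means no loadable was ever bound to core 1.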
-
Thanks for sharing this open-source project.
The README mentions differences in inference performance across batch sizes, but in the inference code I only noticed the provision of multiple batch variables…
ou525 updated
6 months ago
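I don't have the repo's inference code at hand, but a fixed-batch dispatch loop of the kind alluded to above usually groups inputs like this (a minimal sketch; `make_batches` and the padding policy are my own illustration, not the project's code):

```python
def make_batches(frames, batch_size):
    """Group frames into fixed-size batches; pad the last batch by
    repeating its final frame so the engine's batch dimension stays
    constant (engines built with a static batch dim require this)."""
    batches = []
    for i in range(0, len(frames), batch_size):
        batch = list(frames[i:i + batch_size])
        while len(batch) < batch_size:
            batch.append(batch[-1])  # pad with a copy of the last frame
        batches.append(batch)
    return batches

frames = [f"frame{i}" for i in range(5)]
print(make_batches(frames, batch_size=2))
# [['frame0', 'frame1'], ['frame2', 'frame3'], ['frame4', 'frame4']]
```

With a static-batch engine, larger batches amortize per-launch overhead, which is typically where the README's performance differences come from.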
-
When I run the command `make clean & make run` (note that `&` backgrounds `make clean`; `&&` is the sequential form), an error occurs:
```
/usr/local/cuda/bin/nvcc -I /usr/local/cuda/include -I ./src/matx_reformat/ -I /usr/include/opencv4/ -I /usr/incl…
```
-
## Description
1. I cloned the repo https://github.com/NVIDIA-AI-IOT/cuDLA-samples, then used `trtexec` to run inference with the engine file. The file `yolov5.int8.int8hwc4in.fp16chw16out.standalone.bi…
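For context, running an already-serialized engine with `trtexec` generally goes through `--loadEngine`; a hedged sketch (the path is a placeholder, and note that a DLA *standalone* loadable built for the cuDLA runtime may not be consumable by `trtexec` at all):

```shell
# Illustrative only: run an existing serialized engine on DLA core 0.
trtexec --loadEngine=data/loadable/yolov5.engine --useDLACore=0
```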
-
1. Following the export `README.md`, Option 1 (QAT -> PTQ), I tried to serialize the ONNX model to generate an engine file.
```
(py310) orin@orin-root:~/workspace/cuDLA-samples$ bash data/model/build_dla_sta…
```
-
* Convert a new engine, using DLA core 1:
```
echo "Build DLA loadable for fp16 and int8"
mkdir -p data/loadable
TRTEXEC=/usr/src/tensorrt/bin/trtexec
${TRTEXEC} --onnx=data/model/yolov5s_trimmed_…
```
-
* Env
```
(base) orin@orin-root:~/workspace/cuDLA-samples$ sudo jetson_release
[sudo] password for orin:
Software part of jetson-stats 4.2.4 - (c) 2024, Raffaello Bonghi
Model: Jetson AGX Orin De…
```
-
Apologies for the strange artifacts in the terminal output; the error that causes the build to fail is:
I ran `./jetson-containers build ros:noetic-desktop`.
I also tried manually setting L4T_VERSION…
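If it helps anyone else, overriding the autodetected L4T version for jetson-containers is usually done through the environment; a hedged sketch (the version number is a placeholder, not one verified for this board):

```shell
# Placeholder version; jetson-containers otherwise autodetects L4T_VERSION.
L4T_VERSION=35.4.1 ./jetson-containers build ros:noetic-desktop
```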
bl33m updated
4 months ago