-
Hi, I have tried the inference code on the NYU dataset. However, I can't achieve the "real-time" performance mentioned in your paper.
For batch size = 1: frame rate is 12 FPS
For batch size = 3: frame rate is 17 FPS…
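For reference, throughput numbers like these are usually obtained by timing a fixed number of batches and dividing total images by wall-clock time. A minimal sketch of that measurement, where `run_inference` is a hypothetical stand-in for the actual model's forward pass:

```python
import time

def run_inference(batch):
    """Hypothetical stand-in for the real model call; replace with the
    actual depth-estimation forward pass."""
    time.sleep(0.001 * len(batch))  # simulate per-image work
    return [None] * len(batch)

def measure_fps(batch_size, n_batches=50):
    """Return frames per second: total images processed / elapsed time."""
    batch = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(n_batches):
        run_inference(batch)
    elapsed = time.perf_counter() - start
    return (batch_size * n_batches) / elapsed

for bs in (1, 3):
    print(f"batch size {bs}: {measure_fps(bs):.1f} FPS")
```

Larger batches typically raise FPS because fixed per-call overhead (data transfer, kernel launch) is amortized over more images, which matches the trend in the numbers above.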
-
(tensorflow1) C:\Tensorflow1\models\research\object_detection>python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/faster_rcnn_inception_v2_pets.config --trai…
-
Hey folks,
I trained and tested a bunch of models with this implementation. Everything works smoothly; I'm just wondering if I'm doing something wrong, since the results I'm getting are not in line w…
-
### Version
main
### On which installation method(s) does this occur?
Source
### Describe the issue
I've followed the example provided [here](https://nvidia.github.io/earth2mip/examples/01_ensemb…
-
!!! Exception during processing !!!
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_out…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC …
-
I ran benchmark_flash_attention.py on an RTX 4080 and the code ran and produced results as expected. I'm curious about the variable **bs_seqlen_vals**; in the code it was set to a number …
-
### Describe the issue
When trying to quantize a Yolov8 model (exported with `yolo export model=yolov8x.pt format=onnx`) with `onnxruntime`, I get the following error:
```
$ python quantize.py yo…
-
**Problem**
I need to create a lot of small JSONs with an LLM. To do so, I started with [Jsonformer](https://github.com/1rgs/jsonformer). However, since this is not maintained anymore and my colleagu…
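For context, the core idea behind Jsonformer-style libraries is that the program walks the JSON schema itself and asks the model only for the primitive values, so the braces, keys, and commas are emitted by code and the output is structurally valid by construction. A minimal sketch of that idea with a stubbed generator (`fake_generate` is hypothetical; a real implementation would constrain the model's decoding at each slot):

```python
import json

def fake_generate(prompt, value_type):
    """Hypothetical stub for the LLM call; returns a fixed value per type."""
    return {"string": "example", "number": 0, "boolean": True}[value_type]

def fill_schema(schema, generate=fake_generate, prompt=""):
    """Walk a (simplified) JSON schema; only primitive value slots are
    delegated to the generator, so the structure is always valid."""
    t = schema["type"]
    if t == "object":
        return {k: fill_schema(v, generate, prompt + f"{k}: ")
                for k, v in schema["properties"].items()}
    if t == "array":
        return [fill_schema(schema["items"], generate, prompt)]
    return generate(prompt, t)

schema = {"type": "object", "properties": {
    "name": {"type": "string"},
    "age": {"type": "number"}}}
print(json.dumps(fill_schema(schema)))
```

This sketch supports only objects, arrays, and a few primitive types; production libraries additionally handle enums, string length limits, and token-level constrained decoding.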
-
**Description**
When CUDA shared memory is used with the HTTP/GRPC protocol, the client is expected to allocate CUDA memory on one of the devices and copy the data into it.
On systems with mult…