-
Hi, I am trying to replicate your experiments on ActivityNet using the VideoLLaMA2-7B model on a single A100 GPU. Here is the command I run:
```
python videollama2/eval/inference_video_oqa_activitynet.p…
-
Thanks for open-sourcing this. I ran vos_inference.py on my own video dataset on a machine with four V100 GPUs (32 GB memory each), but I noticed the program only runs on one card.…
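In case it helps clarify what I'm after, here is the kind of manual workaround I'm considering: one worker process per GPU, pinned via `CUDA_VISIBLE_DEVICES`, with the video list sharded by hand. The `shard`/`worker` helpers below are placeholders of mine, not part of vos_inference.py.

```python
# Hypothetical workaround sketch, not the repo's own API: shard the video
# list across GPUs and run one worker process per card.
import os
import multiprocessing as mp

def shard(items, num_shards, shard_id):
    """Return the round-robin slice of items assigned to one worker."""
    return items[shard_id::num_shards]

def worker(gpu_id, videos):
    # Pin this process to a single GPU *before* any CUDA library initializes.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    for video in videos:
        pass  # placeholder for running single-video inference here

if __name__ == "__main__":
    all_videos = [f"video_{i:03d}" for i in range(20)]  # dummy names
    num_gpus = 4
    procs = []
    for gpu in range(num_gpus):
        p = mp.Process(target=worker, args=(gpu, shard(all_videos, num_gpus, gpu)))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
```

Pinning the device through the environment variable before CUDA initialization is the only assumption here; is there a supported multi-GPU path instead?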
-
Hi Teams,
I'm trying to evaluate VideoLLaMA2 on MVBench. When I run inference_video_mcqa_mvbench.py, the following traceback occurs:
```
Traceback (most recent call last):
File "/***/Video…
-
Hi, I noticed that new IDs cannot be added during inference on videos:
https://github.com/facebookresearch/segment-anything-2/blob/6186d1529a9c26f7b6e658f3e704d4bee386d9ba/sam2/sam2_video_predictor.p…
-
I have been trying to figure out how to use this for **inference** and to evaluate other datasets without finetuning.
The scripts explain how to use the model with the extracted features, but I h…
-
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provide…
-
### Search before asking
- [X] I have searched the Inference [issues](https://github.com/roboflow/inference/issues) and found no similar bug report.
### Bug
## Set Up
I use a Basler Camera acA1…
-
Hi, thanks for your great work. I have found something strange: if I set the seed inside the for loop, the generated video turns into noise.
Also, why don't you set a random seed during inference?
https://github.com/PKU-Yua…
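To illustrate what I mean, here is a minimal stdlib sketch (the same logic applies to `torch.manual_seed` in the pipeline): re-seeding inside the generation loop makes every iteration start from identical noise, instead of seeding once before the loop.

```python
# Conceptual sketch using Python's stdlib `random`; in the actual pipeline
# the analogous call would be torch.manual_seed before sampling noise.
import random

def fresh_noise(n=4):
    """Stand-in for sampling a noise tensor for one frame."""
    return [random.random() for _ in range(n)]

random.seed(42)                                 # seed once, before the loop
frames_seed_once = [fresh_noise() for _ in range(3)]   # frames get distinct noise

frames_seed_in_loop = []
for _ in range(3):
    random.seed(42)                             # re-seeding every iteration ...
    frames_seed_in_loop.append(fresh_noise())   # ... repeats the exact same noise
```

So seeding once per run keeps results reproducible without collapsing every frame onto the same noise sample.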
-
I couldn't find the code for DDIM inversion, only DDIM sampling directly from noise. Why is that?
```
@torch.no_grad()
def __call__(
    self,
    prompt: Union[str, List[str]],
    mot…
```
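For context, this is my understanding of a single DDIM inversion step (the deterministic reverse of the DDIM sampling update). It is the textbook formula written on scalars for clarity, not code from this repository; in practice `x_t` would be a latent tensor and `eps` the UNet's noise prediction.

```python
# Hedged sketch of one DDIM inversion step, assuming a noise-prediction
# parameterization; a_t and a_next are cumulative alpha-bar values with
# a_next < a_t (moving toward higher noise).
import math

def ddim_invert_step(x_t, a_t, a_next, eps):
    # Predict the clean sample implied by the current noisy sample.
    x0 = (x_t - math.sqrt(1.0 - a_t) * eps) / math.sqrt(a_t)
    # Re-noise deterministically toward the higher noise level a_next.
    return math.sqrt(a_next) * x0 + math.sqrt(1.0 - a_next) * eps
```

Applying the same update with `a_t` and `a_next` swapped (and the same `eps`) exactly undoes the step, which is what makes the inversion deterministic.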
-
With an input resolution of 1024, roughly how many frames can it handle at a time, considering GPU memory usage and speed?
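For context, here is my rough back-of-envelope counting only raw pixels; model weights and intermediate activations, which usually dominate GPU usage, are not included, so this is only a lower bound.

```python
# Back-of-envelope sketch: memory for raw float32 RGB frames at 1024x1024.
# Activations and model state are deliberately ignored here.
def frame_megabytes(height=1024, width=1024, channels=3, bytes_per_elem=4):
    """Memory for one float32 RGB frame, in MiB."""
    return height * width * channels * bytes_per_elem / 2**20

# One 1024x1024 float32 RGB frame is 12 MiB of raw pixels, so e.g. 1000
# frames would already need ~12 GiB before any model memory is counted.
```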