-
### Short description of current behavior
I have a Hugging Face sentiment analyzer, and the sentiment explanation comes as text in the following format:
```
{'Neutral': 0.6581241488456726, 'Bullish': 0.3416257798671722…
```
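This string is a Python dict literal rather than JSON (note the single quotes), so `ast.literal_eval` is the safe way to parse it. A minimal sketch, using just the two label/score pairs visible in the snippet above:

```python
import ast

def parse_sentiment(text: str) -> dict:
    """Parse a dict-literal string like "{'Neutral': 0.65, ...}" into a dict."""
    # ast.literal_eval evaluates Python literals safely, unlike eval().
    scores = ast.literal_eval(text)
    return {label: float(score) for label, score in scores.items()}

text = "{'Neutral': 0.6581241488456726, 'Bullish': 0.3416257798671722}"
scores = parse_sentiment(text)
print(max(scores, key=scores.get))  # -> 'Neutral'
```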
-
Hello,
I am a bit confused and have some simple questions: does the training set extract all frames of the video, does it choose the first frame, or does it randomly extract a frame?
Then after the train…
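For context on the question above: a common pattern in video-training pipelines is to sample either the first frame or one random frame per video per epoch, rather than extracting every frame. A minimal sketch of that choice (names here are illustrative, not from any particular codebase):

```python
import random

def sample_frame_index(num_frames: int, strategy: str = "random") -> int:
    """Pick one frame index from a video with num_frames frames."""
    if strategy == "first":
        return 0                             # deterministic: always frame 0
    if strategy == "random":
        return random.randrange(num_frames)  # a fresh random frame each call
    raise ValueError(f"unknown strategy: {strategy!r}")

# Re-sampling each epoch means the model eventually sees many frames per video.
print(sample_frame_index(300))
```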
-
Hi,
According to the paper, AutoAugment is only used for image classification (with the cross-attention head), but in the [code](https://github.com/facebookresearch/jepa/blob/main/evals/video_c…
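For reference, this is roughly what AutoAugment looks like in a standard image-classification transform stack; a generic torchvision sketch, not the JEPA eval code itself:

```python
from torchvision import transforms
from torchvision.transforms import AutoAugment, AutoAugmentPolicy

# A typical image-classification augmentation stack using AutoAugment.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    AutoAugment(policy=AutoAugmentPolicy.IMAGENET),  # learned policy for ImageNet
    transforms.ToTensor(),
])
```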
-
Currently, VBench can evaluate on the following list of dimensions:
['subject_consistency', 'background_consistency', 'temporal_flickering', 'motion_smoothness', 'dynamic_degree', 'aesthetic_quality', 'imaging_qu…
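For anyone reproducing this, the VBench README exposes an `evaluate` entry point that takes a `dimension_list`; the sketch below follows that pattern, but treat the exact constructor and argument names as assumptions to verify against the repo:

```python
import torch
from vbench import VBench  # import path assumed from the VBench README

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Paths and names below are placeholders for illustration.
bench = VBench(device, "VBench_full_info.json", "evaluation_results/")
bench.evaluate(
    videos_path="sampled_videos/",
    name="my_model",
    dimension_list=["subject_consistency", "motion_smoothness"],
)
```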
-
## ❓ General Questions
I tried to compile TVM and MLC-LLM on a Jetson Orin AGX (JetPack 6, CUDA 12.2) in order to run inference with Phi-3.5-V. However, I discovered that Phi-3 processes images much more slowly than the Hugging Face …
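When comparing two runtimes like this, it helps to time the image path in isolation; a minimal sketch, where `process_image` is a stand-in for whichever vision-encoder call each runtime exposes (not an MLC-LLM or Transformers API):

```python
import time

def benchmark(process_image, image, warmup: int = 3, iters: int = 20) -> float:
    """Return mean seconds per call after a few warmup runs."""
    for _ in range(warmup):
        process_image(image)          # discard warmup (JIT/compile, cache fill)
    start = time.perf_counter()
    for _ in range(iters):
        process_image(image)
    return (time.perf_counter() - start) / iters

# e.g., benchmark(mlc_vision_encode, img) vs. benchmark(hf_vision_encode, img)
```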
-
### Model description
[MeMViT, CVPR 2022](https://arxiv.org/abs/2201.08383) is one of the most efficient transformer-based video understanding models, released by Meta AI. Its efficient online attentio…
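The gist of MeMViT's online attention is that each clip attends over cached keys/values carried over from earlier clips; a simplified single-head PyTorch sketch of that idea, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

class MemoryAttention(torch.nn.Module):
    """Single-head attention that also attends over cached K/V from past clips."""

    def __init__(self, dim: int, max_mem: int = 256):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.max_mem = max_mem
        self.mem_k = None  # cached keys from previous clips (detached)
        self.mem_v = None  # cached values from previous clips (detached)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if self.mem_k is not None:
            # Prepend memory so current tokens attend to past context too.
            k = torch.cat([self.mem_k, k], dim=0)
            v = torch.cat([self.mem_v, v], dim=0)
        attn = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ v
        # Update the cache with the newest keys/values, truncated to max_mem.
        self.mem_k = k.detach()[-self.max_mem:]
        self.mem_v = v.detach()[-self.max_mem:]
        return out
```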
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
No
### OS Platform and Distribution
iOS 16.4 and iOS 16.6
### MediaPipe Tasks SDK version
0.10.15…
-
## 🐛 Bugs / Unexpected behaviours
Not all clips are loaded in the training/testing step when I use `UniformClipSampler`.
I followed `video_classification_example` to load data and train my network, …
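One common reason clips go "missing" with a uniform sampler is the tail of the video: the expected clip count is floor((duration - clip_len) / stride) + 1, and a leftover tail shorter than a full clip is dropped unless the sampler back-pads it. A quick arithmetic check (the `backpad_last` knob mirrors pytorchvideo's option, but verify against the library):

```python
import math

def expected_clips(duration, clip_len, stride=None, backpad_last=False):
    """How many clips a uniform clip sampler should yield from one video."""
    stride = stride if stride is not None else clip_len
    n = int(math.floor((duration - clip_len) / stride)) + 1
    # A leftover tail shorter than clip_len is dropped unless back-padded.
    if backpad_last and (duration - clip_len) % stride > 0:
        n += 1
    return n

print(expected_clips(duration=10.0, clip_len=2.0))  # 5 clips, no tail
print(expected_clips(duration=10.5, clip_len=2.0))  # still 5; the 0.5 s tail is dropped
```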
-
Hello, I encountered a problem when using the official video classification example. I want to ask whether you have successfully trained a video classification model. My error is as follows. Thank you so …