-
Hi,
I just ran the code provided, and I am not able to reproduce the results. I am getting the following:
| name | bleu_1 | bleu_2 | bleu_3 | bleu_4 | meteor | rouge_l | cider |
|------|--------|--------|--------|--------|--------|---------|-------|
| baseline (i3d_rgb… |
-
#### Description
I noticed that `y_frames` in `librosa.core.spectrum.stft()` is computed by
```python
y_frames = util.frame(y, frame_length=n_fft, hop_length=hop_length)
```
when `win_length` is …
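For context, here is a minimal NumPy sketch of the framing step that `util.frame` performs in this call: slicing the signal into overlapping windows of `n_fft` samples spaced `hop_length` apart. The `frame` helper below is an illustrative stand-in, not librosa's actual implementation; it assumes the classic `(frame_length, n_frames)` output layout.

```python
import numpy as np

def frame(y, frame_length, hop_length):
    # Slice y into overlapping frames of frame_length samples,
    # advancing hop_length samples per frame. Returns an array of
    # shape (frame_length, n_frames), frames stacked as columns.
    n_frames = 1 + (len(y) - frame_length) // hop_length
    return np.stack(
        [y[i * hop_length : i * hop_length + frame_length]
         for i in range(n_frames)],
        axis=-1,
    )

y = np.arange(10)
frames = frame(y, frame_length=4, hop_length=2)
# frames.shape == (4, 4); columns are [0..3], [2..5], [4..7], [6..9]
```

Note that the frame length here is `n_fft`, which is why the value of `win_length` relative to `n_fft` matters in the snippet above.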
-
Hi,
I am trying to compile Essentia with Gaia support. On Arch Linux, `qt4` is no longer supported, so I built `gaia` from the qt5 branch, although I do not know whether that branch has conflicts with `…
-
While I am super thankful for the creation of a video feature extractor, it has been so difficult to use that I am giving up and starting to implement it by hand. I am using a Kaggle Kernel, and my goa…
-
We have the VGGish model in PyTorch, using a fork of https://github.com/tcvrick/audioset-vggish-tensorflow-to-pytorch.
Now we need to write a wrapper around this model to integrate it with the GOGG…
-
Thank you for your code. I wonder whether the VGGISH_WEIGHTS file at the path in the code is purely an adaptation of Google's checkpoint, or your retrained result?
-
I want to use MediaPipe to develop an audio model, but I cannot find one in the examples.
-
@DTaoo The original checkpoint file is provided [here](https://storage.googleapis.com/audioset/vggish_model.ckpt). How did you generate an HDF5 file without retraining the model?
-
Hi,
I'm trying to extract the audio and video features from an mp4 video using the YouTube-8M MediaPipe extractor. I was able to generate the output.tfrecord, but when I try to parse the tfrecord I g…
-
Hi there,
thanks for your great work. I found that you provided the scores with 6 references in issue #5. Even though I can reproduce the scores with a single reference, the scores with 6 referen…