-
```
├── real_source
│   ├── aaa.png
│   └── bbb.jpg
├── real_target
│   ├── ccc.png
│   └── ddd.jpg
├── fake
│   ├── ccc_fake.png
│   └── ddd_fake.jpg
├── main.py
├── inception_score.py
…
-
Dear Joao Paulo,
Many thanks for this amazing work! I have found a problem when computing `_bresenham_pairs` for a 3 x 9 matrix. I simply used these values:
```
p = np.array([[80.0644976552576…
-
Hi,
Thank you very much for this great effort, which the community would really profit from!
While trying to set up this benchmark, I faced a few issues that make it hard to use this benchmark…
MUCDK updated 9 months ago
-
How can we objectively evaluate our model?
Some random thoughts below:
1. **Intrusive vs. non-intrusive metrics**
In speech generation, we generally have two kinds of metrics: intrusive and non-…
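As one concrete example of an intrusive (reference-based) metric, here is a minimal SI-SDR sketch. The formula is the standard scale-invariant SDR; the signal lengths and variable names are my own assumption, not taken from any particular toolkit:

```python
import numpy as np

def si_sdr(est, ref):
    """Scale-invariant SDR in dB (higher is better); requires a clean reference."""
    # Zero-mean both signals, then project the estimate onto the reference.
    ref = ref - ref.mean()
    est = est - est.mean()
    s_target = (est @ ref) / (ref @ ref) * ref
    e_noise = est - s_target
    return 10.0 * np.log10((s_target @ s_target) / (e_noise @ e_noise))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                 # 1 s of "reference" audio at 16 kHz
noisy = clean + 0.01 * rng.standard_normal(16000)  # lightly corrupted estimate
print(si_sdr(noisy, clean))                        # roughly 40 dB
```

A non-intrusive metric, by contrast, would score `noisy` alone, with no `clean` argument.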
-
**Describe the bug**
Running the FID computation on two distributions which are **exactly the same** leads to non-zero values. For example, if I use the 10,000 examples of **CIFAR-10 test set** as o…
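For reference, FID is the Fréchet distance between Gaussians fitted to the two feature sets. A minimal numpy/scipy sketch (variable names are mine) shows why two identical distributions can still score slightly above zero: `sqrtm` on the covariance product introduces floating-point round-off, even though the formula is exactly zero analytically:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    # Frechet distance between N(mu1, S1) and N(mu2, S2):
    #   ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 S2)^(1/2))
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # sqrtm can return tiny spurious imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 8))  # stand-in for Inception features
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(fid(mu, sigma, mu, sigma))        # tiny but non-zero: sqrtm round-off
```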
-
This script calculates the FID score for some random images:
```python
predictor = InceptionPredictor(output_dim=64)
true_images = torch.rand(32, 3, 299, 299)
art_images = torch.…
-
The shape of `sample` is (batch, num_frames, channel, height, width), so `sample.shape[2]` should be the number of channels.
But here you set `num_frames=sample.shape[2]`; is there a problem here?
…
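A minimal sketch of the indexing in question (the tensor sizes are hypothetical): with the layout above, frames live in dim 1 and channels in dim 2:

```python
import torch

# Hypothetical video batch: (batch, num_frames, channel, height, width)
sample = torch.rand(2, 16, 3, 64, 64)

num_frames = sample.shape[1]  # 16 frames, dim 1
channels = sample.shape[2]    # 3 channels, dim 2
print(num_frames, channels)   # 16 3
```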
-
Hi,
This is nice work, but I have some questions about the experiments on diffusion.
1. In Table 8, do you compare your results with full-data training (shown as "original", 7.83)? But in E.2 VISUALIZATI…
-
Hi guys,
I want to implement in my trainer a measure of similarity between my predicted trajectory and the GT trajectory. Here is an example:
![imagen](https://user-images.githubusercontent.com…
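A common pair of metrics for this is average and final displacement error (ADE/FDE). A minimal numpy sketch, assuming trajectories are (T, 2) arrays of (x, y) positions (shapes and names are my assumption):

```python
import numpy as np

def ade_fde(pred, gt):
    # pred, gt: (T, 2) arrays of (x, y) positions per timestep
    dists = np.linalg.norm(pred - gt, axis=1)  # per-step Euclidean error
    return dists.mean(), dists[-1]             # Average / Final Displacement Error

pred = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
gt   = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
ade, fde = ade_fde(pred, gt)
print(ade, fde)  # 1.0 2.0
```

ADE averages the error over the whole horizon, while FDE only looks at the endpoint, so reporting both is typical.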
-
In your article, you wrote:

> For the evaluation on real-world datasets without ground truth, we employ the widely-used non-reference perceptual metrics: FID and NIQE.

Is FID a non-reference indicator…