-
It would be nice to have 'tangent distance' as a possible metric in nearest-neighbors models.
It is not a [new concept][1], but it is [widely cited][2]. It is also relatively standard; the [Elements of…
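For intuition, here is a minimal NumPy sketch of the *one-sided* tangent distance (the simplest variant): the distance from a point to the tangent plane around a prototype, found by least squares. The variable names and the toy data are illustrative, not from any existing library API.

```python
import numpy as np

def tangent_distance(x, p, T):
    """One-sided tangent distance from point x to the tangent plane
    around prototype p spanned by the rows of T: minimizes
    ||x - (p + T.T @ a)|| over the coefficients a via least squares."""
    a, *_ = np.linalg.lstsq(T.T, x - p, rcond=None)
    return np.linalg.norm(x - (p + T.T @ a))

# Toy example: one tangent direction encoding invariance along the x-axis.
p = np.array([0.0, 0.0])
T = np.array([[1.0, 0.0]])
x = np.array([3.0, 4.0])
# Sliding p along the x-axis, the closest point to x is (3, 0),
# so the tangent distance is 4 (vs. Euclidean distance 5).
print(tangent_distance(x, p, T))  # 4.0
```

In the full two-sided formulation both points carry tangent vectors, but the least-squares structure is the same.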
-
Hi,
I read through the tutorial for metric learning:
https://umap-learn.readthedocs.io/en/latest/supervised.html
I have two questions (they may be related, I'm not sure):
1. Is there an explana…
-
I've noticed that you are using OpenCV to detect every hero, but it seems slow and buggy.
Have you ever tried another way to detect them, such as machine learning or hash-based distance calculation?
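By "hash distance", something like a perceptual average hash may be meant. Here is a minimal NumPy sketch, assuming grayscale image arrays; the function names are illustrative, not part of the project's code:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downscale a grayscale image to hash_size x hash_size by
    block-averaging, then threshold at the mean to get a bit vector."""
    h, w = img.shape
    small = img[:h - h % hash_size, :w - w % hash_size]
    small = small.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a, b):
    """Number of differing bits; small values mean similar images."""
    return int(np.count_nonzero(a != b))

# A synthetic gradient image and a slightly brightened copy hash identically.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
print(hamming(average_hash(img), average_hash(img + 0.5)))  # 0
```

Matching a hero icon then becomes a nearest-hash lookup over precomputed hashes, which is typically much faster than template matching.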
-
I expect this suggestion to be frowned upon by expert drivers! But nobody is born an expert, and even an expert might stumble upon a route they are unfamiliar with...
While learning a yet unknown (t…
-
Hi,
I used the SpanBERT-large model with the default parameters in the config file, and I get an Avg. F1 of 78.27, lower than the Avg. F1 of 79.9 in the paper.
The config is as follows:
num_docs = 2802
bert_learning_rate = 1e…
-
https://arxiv.org/pdf/1701.07875.pdf
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, g…
leo-p updated 7 years ago
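The core of the quoted abstract is the critic objective: maximize E[f(real)] − E[f(fake)] while keeping the critic (roughly) 1-Lipschitz via weight clipping. A toy NumPy sketch with a linear critic, not the paper's full training loop (hyperparameters here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear critic f(x) = w @ x. WGAN trains the critic to maximize
# E[f(real)] - E[f(fake)], clipping weights to [-c, c] after each step.
w = rng.normal(size=4)
c, lr = 0.01, 0.05

for _ in range(100):
    real = rng.normal(loc=1.0, size=(64, 4))
    fake = rng.normal(loc=-1.0, size=(64, 4))
    # Gradient of E[f(real)] - E[f(fake)] with respect to w
    grad = real.mean(axis=0) - fake.mean(axis=0)
    w += lr * grad            # ascend the critic objective
    w = np.clip(w, -c, c)     # weight clipping from the paper

print(w)  # every entry lies in [-0.01, 0.01]
```

The clipped critic's value difference between real and fake batches is what approximates the Wasserstein-1 distance; later work replaces clipping with a gradient penalty.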
-
t-SNE is inherently randomized, but not to this extent. This implementation produces consistently different (much worse) results compared to scikit-learn's Barnes-Hut implementation.
Example on the IRIS dataset:
Sciki…
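To separate inherent randomness from implementation differences, one can pin the seed: with a fixed `random_state`, scikit-learn's t-SNE is reproducible run-to-run. A small sketch, assuming scikit-learn is installed (the subset size and perplexity are arbitrary, chosen for speed):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X = load_iris().data[:50]  # small subset for speed

def embed(seed):
    return TSNE(n_components=2, perplexity=10,
                random_state=seed, init="random").fit_transform(X)

# Same seed -> identical embedding; different seeds give different
# coordinates but should preserve comparable cluster structure.
assert np.allclose(embed(0), embed(0))
```

If two implementations disagree by much more than re-seeded runs of the same implementation, the gap is not explained by randomness alone.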
-
### Is your feature request related to a problem? Please describe.
I have been working with unsupervised learning for a while and one part of my research was investigating different distance measur…
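For context on how much the choice of distance measure matters, here is a small SciPy sketch comparing a few common metrics on the same point pairs (the data is illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = np.array([[2.0, 3.0]])

# The same pairs of points under different metrics:
print(cdist(X, Y, metric="euclidean"))  # [[3.606], [2.236]] (approx.)
print(cdist(X, Y, metric="cityblock"))  # [[5.], [3.]]
print(cdist(X, Y, metric="chebyshev"))  # [[3.], [2.]]
```

`cdist` accepts many more named metrics (and a custom callable), which makes it convenient for this kind of comparison in unsupervised pipelines.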
-
When understanding something like mean squared error as a loss for the neurons in a machine learning model, the formula computes the difference between two values. When these two values repr…
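Concretely, mean squared error averages the squared differences between predictions and targets; a minimal NumPy sketch:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average of the squared differences
    between predicted and target values."""
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

# Differences are (1, -1, 2); squares are (1, 1, 4); mean is 2.
print(mse([1.0, 2.0, 3.0], [2.0, 1.0, 5.0]))  # 2.0
```

Squaring makes the loss sign-independent and penalizes large errors more heavily than small ones.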
-
@yaringal
Thank you for your example; it helps a lot in understanding the paper. I am currently using the proposed formula (exp(-log_var) * loss + log_var) in self-supervised learning with uncertainty…
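One useful sanity check on that formula: for a fixed task loss L, the expression exp(-s)·L + s (with s = log_var) is minimized at s = log(L), so the learned log-variance should track the loss scale rather than diverge. A small NumPy sketch verifying this numerically:

```python
import numpy as np

def weighted_loss(loss, log_var):
    """Uncertainty-weighted loss from the thread:
    exp(-log_var) * loss + log_var. The additive log_var term stops
    the model from driving log_var to infinity to zero out the loss."""
    return np.exp(-log_var) * loss + log_var

# For a fixed task loss L, the minimum is at log_var = log(L)
# (set the derivative -exp(-s) * L + 1 to zero).
L = 4.0
s = np.linspace(-3, 3, 2001)
best = s[np.argmin(weighted_loss(L, s))]
print(best, np.log(L))  # both approximately 1.386
```

If the learned log_var drifts far from the log of the typical loss value, that can indicate an optimization or scaling issue rather than genuine uncertainty.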