-
starting with Hedvig’s patron set
https://docs.google.com/spreadsheets/d/1k_6BuQbOYOTURIfcS5WGk4YeppyjzPbfHXtrXqZ-O5k/edit?usp=sharing
-
This should probably go into a separate module so we can keep the old ClearNLP
around for a while longer.
Here is the release announcement for ClearNLP 3:
---
Folks,
After a long delay, the ClearN…
-
I am hesitant to bring this up after the long discussion at UniversalDependencies/UD_English-EWT#170, but I just noticed that the guidelines stipulate that quantifiers *many* and *few* should be tagge…
-
Hi! I have several questions/requests regarding value learning https://github.com/deepmind/rlax/blob/master/rlax/_src/value_learning.py
1. If I want to use the `_quantile_regression_loss` without …
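For reference, the quantile regression loss in question is the pairwise quantile (Huber) loss from QR-DQN. A minimal NumPy re-implementation is sketched below; it is illustrative only and may differ in detail from the rlax version (e.g. in how the Huber term is normalized), so treat the function name and exact signature as assumptions:

```python
import numpy as np

def quantile_regression_loss(dist_src, tau_src, dist_target, huber_param=0.0):
    """Illustrative QR-DQN-style quantile regression loss (NumPy sketch).

    dist_src:    predicted quantile values, shape [num_src]
    tau_src:     quantile midpoints for the predictions, shape [num_src]
    dist_target: target sample values, shape [num_target]
    """
    # Pairwise TD errors: u[i, j] = target_j - source_i
    u = dist_target[None, :] - dist_src[:, None]
    # Huber penalty on u (plain absolute error when huber_param == 0)
    if huber_param > 0:
        abs_u = np.abs(u)
        penalty = np.where(abs_u <= huber_param,
                           0.5 * u ** 2,
                           huber_param * (abs_u - 0.5 * huber_param))
    else:
        penalty = np.abs(u)
    # Asymmetric quantile weighting |tau - 1{u < 0}|
    weight = np.abs(tau_src[:, None] - (u < 0.0).astype(np.float64))
    # Mean over target samples, sum over source quantiles
    return float(np.sum(np.mean(weight * penalty, axis=1)))
```

With `tau = 0.9`, over-predicting is penalized much less than under-predicting, which is the asymmetry that makes the estimator converge to the 0.9 quantile.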
-
# Generalized Distributional TV
## Overview
This is a proposal for a new TV type that encompasses probabilistic,
fuzzy, distributional TV types and more. In short it is a
distributional TV that may w…
-
# Overview
A general discussion space for embedding-related topics.
Let's also use it to discuss our future investigation plans.
-
Hi,
Perhaps this is a naive question, but do we need to retain non-lexicon words in the KenLM file? I haven't read the KenLM paper, but do backoff and smoothing require retaining other non-lexicon word…
-
So if we follow through with #5, we'll need to find a way to meaningfully compute and visualize content similarity between subreddits.
There exist fancy black box models like doc2vec and probably …
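Before reaching for doc2vec, a plain bag-of-words cosine similarity over each subreddit's concatenated text makes a cheap, transparent baseline. A minimal sketch (the function name and tokenization are placeholder assumptions, not anything from the repo):

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two texts."""
    # Naive whitespace tokenization; real use would want proper preprocessing
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Dot product over the shared vocabulary
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Swapping raw counts for TF-IDF weights would be the obvious next step before anything embedding-based, and the resulting pairwise similarity matrix is easy to visualize as a heatmap.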
-
Post questions here for this week's fundamental readings:
Jurafsky, Daniel and James H. Martin. 2015. Speech and Language Processing. Chapters 15-16 (“[Vector Semantics](https://web.stanford.edu/~…