Weighting by recency is a relatively simple tactic based on our observation that more recent tweets are more relevant, but there are other kinds of relevance. For example, we have an RT-or-not feature prefix, and multiple ways of looking at custom dictionaries. Examine more sophisticated methods of establishing relevance:
- attention model over the tokens of a single tweet
- attention model over the tweets in an account
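As a concrete starting point, both variants reduce to the same pooling mechanism: score each element (token vectors within a tweet, or tweet vectors within an account) against a learned context vector, softmax the scores, and take the weighted average. A minimal numpy sketch — `attention_pool` and its parameter names are illustrative, not existing t4fnet code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(vectors, context):
    """Attention-weighted pooling.

    vectors: (n, d) array -- token embeddings for one tweet, or
             tweet embeddings for one account.
    context: (d,)  array  -- learned relevance query vector.
    Returns the pooled (d,) vector and the (n,) attention weights.
    """
    scores = vectors @ context           # (n,) relevance scores
    weights = softmax(scores)            # (n,), non-negative, sums to 1
    return weights @ vectors, weights    # weighted average of the inputs

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))         # 5 tokens, 8-dim embeddings
ctx = rng.normal(size=8)
pooled, w = attention_pool(tokens, ctx)
```

In a trained model `context` would be a learned parameter (per task, or produced from account-level features); stacking the two levels — token attention inside each tweet, then tweet attention across the account — gives a hierarchical attention network. The attention weights themselves are a useful diagnostic, since they show which tweets the model considers relevant, replacing the fixed recency weighting with a learned one.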
NB: the neural net architecture developed so far lies in t4fnet, and relies on data that is on the lab servers rather than in the repository. The NN code can be brought into this repo if that makes things easier.