-
To Do:
* Use Stanford Contextualized Word Similarity
* Convert to tf_records using the existing dictionary so that the IDs match
* Split into validation and eval
Later:
* Use ELMo eval m…
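A minimal sketch of the ID-matching step above: reuse the existing word-to-ID dictionary when preparing examples, so that the IDs in the new records line up with the embedding table already in use. The dictionary contents and the UNK handling here are assumptions for illustration; the actual TFRecord writing would wrap these IDs in `tf.train.Example` int64 features and serialize them with `tf.io.TFRecordWriter`.

```python
def tokens_to_ids(tokens, word_to_id, unk_id=0):
    """Map tokens to IDs using the pre-existing vocabulary; unknowns -> unk_id."""
    return [word_to_id.get(tok, unk_id) for tok in tokens]

# Hypothetical existing dictionary (in practice, loaded from the model's vocab
# file so the IDs match the trained embeddings).
word_to_id = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3}

ids = tokens_to_ids("the cat sat on the mat".split(), word_to_id)
# "on" and "mat" are out of vocabulary and map to the UNK id.
```

The validation/eval split can then be done deterministically (e.g. by hashing an example key) so the same example always lands in the same split.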
-
Post questions here for this week's exemplary readings: 2. W. Guo, A. Caliskan. 2020. “Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Bia…
-
One way of addressing unknown words in the input would be to predict the missing embedding from the context. This is effectively the same as saying that for these words we simply run a language model,…
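As a toy illustration of that idea, the sketch below predicts the missing word's embedding from its context. Here the "language model" is just the mean of the surrounding word vectors within a window, a deliberately crude stand-in for a real contextual predictor; the dimensions and window size are arbitrary.

```python
import numpy as np

def predict_embedding(context_vectors):
    """Predict the missing word's embedding as the mean of its context vectors."""
    return np.mean(context_vectors, axis=0)

rng = np.random.default_rng(0)
context = rng.normal(size=(4, 8))   # four context words, 8-dim embeddings
predicted = predict_embedding(context)  # one 8-dim vector for the unknown word
```

A real predictor would replace the mean with a trained model (e.g. a biLSTM or Transformer over the context), but the interface is the same: context vectors in, one embedding out.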
-
According to [[Peters et al., 2018](https://arxiv.org/pdf/1802.05365.pdf)], ELMo is a **task-specific** combination of the intermediate layer representations in the biLM.
The computation of ELMo em…
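A minimal numpy sketch of that combination, per Peters et al. (2018): a softmax-normalized task-specific weight per biLM layer, and a global scale γ. The layer count, sequence length, and dimensions below are arbitrary.

```python
import numpy as np

def elmo_combine(layer_reps, s_logits, gamma=1.0):
    """Weighted sum of biLM layer representations.

    layer_reps: array of shape (num_layers, seq_len, dim)
    s_logits:   unnormalized task-specific layer weights, shape (num_layers,)
    gamma:      task-specific scalar scaling the whole vector
    """
    s = np.exp(s_logits - s_logits.max())
    s = s / s.sum()                           # softmax over layers
    # sum_j s[j] * layer_reps[j], scaled by gamma
    return gamma * np.tensordot(s, layer_reps, axes=(0, 0))

layers = np.random.default_rng(1).normal(size=(3, 5, 4))  # 3 layers, 5 tokens, 4 dims
emb = elmo_combine(layers, np.zeros(3), gamma=0.5)        # uniform weights here
```

In the actual model, `s_logits` and `gamma` are learned jointly with the downstream task while the biLM weights stay frozen.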
-
## Project Roadmap: Domain-Specific Knowledge Mesh
**1. Project Goals:**
* **Unified Data Management:** Create a system that ingests and manages data from various sources, including files, data…
-
Thank you for the nice work. I read your paper and saw a few embedding visualizations that look very nice, but I can't seem to find the code for generating those in the repo here.
I'm also trying t…
-
### Title
Autoregressive Search Engines: Generating Substrings as Document Identifiers
### Team Name
Autoregressive Seekers
### Email
nisargganatra13@gmail.com
### Team Member 1 Name
Nisarg Gan…
-
Good morning,
Thank you for sharing the paper, code, and pre-trained model for NLP text data. Your research results are impressive. Because I am developing embedding solutions for genes and pr…
-
I am new to BERT. The main strength of BERT is that it captures different contextual meanings of a word. In my case, however, I mainly need to capture synonyms. So I ask myself whether BERT is better suited f…
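A minimal sketch of the synonym check this question is about: whether two words' embedding vectors are close under cosine similarity. The vectors below are toy values I made up; in practice they would come from static embeddings such as word2vec, or from BERT outputs averaged over many contexts.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy vectors, not real model outputs.
couch = np.array([0.9, 0.1, 0.3])
sofa  = np.array([0.85, 0.15, 0.35])   # near-synonym: points the same way
table = np.array([0.1, 0.9, 0.2])      # unrelated word

sim_syn   = cosine_similarity(couch, sofa)
sim_other = cosine_similarity(couch, table)
# for synonyms we expect sim_syn > sim_other
```

Whether BERT or a static-embedding model gives you the better synonym signal is exactly the empirical question; this metric is the same in either case.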
-
I'm not sure if any of the available AI coding assistants do this (if you know, please tell me), but this is the main feature I'm missing:
Ideally it should crawl the docs of dependencies of the proj…