-
Hello,
Based on your code, I added Korean tokens (using a Korean emotional dataset) to the tokenizer and fine-tuned the model with the LibriTTS-R dataset. The Korean dataset is slightly less than 3…
-
## Overview
HPO is a separate ticket and will likely spearhead this work, but here's a ticket for the rest of them: Mondo, and what is used in OBO2OMOP.
SO for the Genomics WG
## Sub-task li…
-
In the past we have promoted the use of the line `marine, harvested by OBIS` in the [additionalMetadata](https://eml.ecoinformatics.org/schema/eml_xsd.html#eml_additionalMetadata) EML element to indic…
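The `additionalMetadata` element can carry arbitrary well-formed XML inside its `metadata` child, so the convention above might be expressed as in the following sketch (the inner `description` element name is illustrative, not an OBIS or EML requirement):

```xml
<additionalMetadata>
  <metadata>
    <!-- inner element name is illustrative; EML allows any
         well-formed XML inside <metadata> -->
    <description>marine, harvested by OBIS</description>
  </metadata>
</additionalMetadata>
```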
-
We at [SmartMfg](https://www.w3.org/community/smartmfg/) are developing a manufacturing extension called "mfg" for schema.org.
![4](https://user-images.githubusercontent.com/22487882/31403276-43028…
-
# How To Order Coffee In English
Hey guys! It's Ariannita la Gringa and welcome back to my YouTube channel.
Can you guys guess where I'm at today?
Today I'm at Starbucks, and as you can see b…
-
I've been running a couple billion words through word2vec, using a 32-core EC2 instance. Actually training the model is blazing fast (relatively), but populating the vocab dict is painfully slow.
I f…
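If the slow part is per-word dict lookups in pure Python, one way to speed up the vocab pass is to batch counts through `collections.Counter`, whose `update` on a whole token list runs in C. A minimal sketch, not gensim's API (the `build_vocab` helper and the toy corpus are hypothetical):

```python
from collections import Counter

def build_vocab(lines, min_count=5):
    """Count tokens with one Counter pass instead of
    per-word dict get/set; prune rare words up front."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return {w: c for w, c in counts.items() if c >= min_count}

corpus = ["the quick brown fox", "the lazy dog", "the fox"]
vocab = build_vocab(corpus, min_count=2)
# Only "the" (3 occurrences) and "fox" (2) survive the threshold.
```

Streaming the corpus line by line like this also keeps memory flat while the counts accumulate.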
-
## Describe the bug
I am trying to create an fcm from a tokens object but receive the error below.
I am assuming this is because the problem is quite large. The tokens object consists of 5,674,5…
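For context on why size matters here: a feature co-occurrence matrix is V×V in the number of features, so counts must stay sparse. A rough sketch of windowed pair counting (this is not quanteda's implementation; the `cooccurrence` helper and window semantics are illustrative only):

```python
from collections import Counter

def cooccurrence(docs, window=2):
    """Count unordered feature pairs whose positions are at most
    `window` apart; a Counter keyed by pairs stays sparse,
    unlike a dense V x V array."""
    counts = Counter()
    for toks in docs:
        for i, w in enumerate(toks):
            for v in toks[i + 1 : i + 1 + window]:
                counts[tuple(sorted((w, v)))] += 1
    return counts

pairs = cooccurrence([["a", "b", "a"]], window=2)
# ("a", "b") co-occurs twice, ("a", "a") once.
```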
-
Hello,
I'm trying to reproduce the paper's results, but I get an assertion error after a few epochs while running "make train".
Do you have any suggestions?
Thanks!
```
Example 700
Original Source:…
-
@dennybritz
Hi,
First of all, many thanks for sharing your code. I am trying to use pretrained word embeddings instead of randomly initialized word embeddings based on the vocabulary size.
My pre…
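One common pattern is to pre-fill the embedding matrix from the pretrained vectors and leave small random rows for out-of-vocabulary words. A hedged sketch (the `build_embedding_matrix` helper, shapes, and init range are assumptions, not taken from @dennybritz's code):

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """vocab: word -> row index; pretrained: word -> vector.
    Words missing from the pretrained set keep small random
    vectors so they can still be learned during training."""
    rng = np.random.default_rng(seed)
    mat = rng.uniform(-0.25, 0.25, size=(len(vocab), dim))
    for word, idx in vocab.items():
        vec = pretrained.get(word)
        if vec is not None:
            mat[idx] = vec
    return mat.astype(np.float32)

vocab = {"hello": 0, "world": 1, "unk": 2}
pretrained = {"hello": np.ones(4), "world": np.zeros(4)}
emb = build_embedding_matrix(vocab, pretrained, dim=4)
```

The resulting array can then be passed as the initial value of the embedding variable in place of the random initializer.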
-
| Title (Goal) | Manage taxonomy terms in a repo |
| --------------- | ------------------------------------ |
| Primary Actor | Collections Manager…