Hello, Mr. Olah.

First things first, thank you for your great work. I am learning with pleasure from your articles.

Here I would like to address something that is difficult for me to understand. The two sentences below are from the section "Shared Representations":

This general tactic – learning a good representation on a task A and then using it on a task B – is one of the major tricks in the Deep Learning toolbox.

Instead of learning a way to represent one kind of data and using it to perform multiple kinds of tasks, we can learn a way to map multiple kinds of data into a single representation!

This was confusing for me, because "the trick" and its "counterpart" seem to have very similar descriptions. Below is my current understanding.

For the first one, the representation can learn from other kinds of data incrementally, whenever new tasks come in and the word embedding tries to solve them. For the second one (its counterpart), assuming we already have multiple tasks and their corresponding kinds of data, the single representation can learn all of this data at once.

Since I have been working on a Korean translation of this article, it would be great if my understanding could be confirmed and corrected.

Best regards,
Don
Hi Don!
The first one is about using a single representation for multiple tasks:
This general tactic – learning a good representation on a task A and then using it on a task B – is one of the major tricks in the Deep Learning toolbox.
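To make this concrete, here is a minimal PyTorch sketch of the first tactic; the vectors, sizes, and names are made up for illustration. A representation learned on task A (here a stand-in random matrix) is frozen and reused as the input representation for a new task-B classifier:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim, num_classes = 10_000, 100, 5

# Stand-in for word vectors actually learned on task A (e.g. a language model).
pretrained_vectors = torch.randn(vocab_size, embed_dim)

class TaskBClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Reuse the task-A representation; freeze=True keeps it fixed
        # while only the new task-B head is trained.
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # Average the word vectors into one sentence vector, then classify.
        return self.head(self.embed(token_ids).mean(dim=1))

model = TaskBClassifier()
logits = model(torch.randint(0, vocab_size, (8, 12)))  # 8 sentences, 12 tokens each
print(logits.shape)  # torch.Size([8, 5])
```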
The second one is about embedding multiple kinds of data into a single representation:
Instead of learning a way to represent one kind of data and using it to perform multiple kinds of tasks, we can learn a way to map multiple kinds of data into a single representation!
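And here is a corresponding sketch of the counterpart, again with made-up vocabularies and sizes, loosely following the bilingual word-embedding example from the article: two kinds of data each get their own encoder, but both encoders map into one shared space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

en_vocab, fr_vocab, shared_dim = 10_000, 12_000, 100

# Two encoders, one per kind of data, both targeting the same shared space.
encode_en = nn.Embedding(en_vocab, shared_dim)  # English words -> shared space
encode_fr = nn.Embedding(fr_vocab, shared_dim)  # French words  -> shared space

def pair_similarity(en_ids, fr_ids):
    # Training would push known translation pairs close together in the
    # shared space; cosine similarity is one common measure of closeness.
    return F.cosine_similarity(encode_en(en_ids), encode_fr(fr_ids), dim=-1)

score = pair_similarity(torch.tensor([42]), torch.tensor([7]))
print(score)  # would be high after training if the two words are translations
```

So the difference is: in the first sketch one representation serves a new task, while in the second one shared space serves multiple kinds of data.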
Yep, now I understand. Thank you for the gist! :)
I will close this issue.
FYI, I finished correcting my translation, which you can check at the link below. http://dgkim5360.tistory.com/entry/deep-learning-nlp-and-representations-kr