uchicago-computation-workshop / nicolas_masse

Repository for Nicolas Masse's presentation at the CSS Workshop (1/13/2019)

Learning is complex #2

Open · liu431 opened this issue 5 years ago

liu431 commented 5 years ago

Thank you for your talk.

Your paper discussed how the context-dependent gating signal works in supervised and reinforcement learning. I am wondering whether the method could also be useful for unsupervised learning tasks, such as learning better representations of sequential input data.
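For reference, here is my rough mental model of the gating mechanism, with made-up layer sizes and gating fraction rather than the paper's actual settings: each task is assigned a fixed random binary mask over the hidden units, so different tasks end up training largely non-overlapping sub-networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes, not the ones used in the paper.
n_input, n_hidden, n_output = 784, 200, 10
n_tasks = 5
gate_fraction = 0.8  # fraction of hidden units silenced for each task

# One fixed random binary gate per task, chosen before training and never updated.
gates = (rng.random((n_tasks, n_hidden)) > gate_fraction).astype(np.float32)

# Toy weights standing in for a trained network.
W_in = rng.normal(scale=0.1, size=(n_input, n_hidden)).astype(np.float32)
W_out = rng.normal(scale=0.1, size=(n_hidden, n_output)).astype(np.float32)

def forward(x, task_id):
    """Forward pass with the hidden layer masked by the task's gate."""
    h = np.maximum(0.0, x @ W_in)   # ReLU hidden activity
    h = h * gates[task_id]          # context-dependent gating
    return h @ W_out

x = rng.random((1, n_input)).astype(np.float32)
print(forward(x, task_id=2).shape)  # (1, 10)
```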

Also, the machines "learn" a task from one type of input at a time, such as images, text, or speech, whereas human beings can learn from many forms of input. For example, to classify cats and dogs, machines find patterns in the pixels of a large dataset, while kids learn from pictures in books and speech from teachers, and may also play with real cats and dogs. So I am wondering whether people remember previous tasks because the different forms of input combine into a composite representation that reinforces learning.

policyglot commented 5 years ago

Great question as always from Li! I'd like to dig deeper into the idea of 'reinforcement' that he mentioned. I'll refer to the field of education and linguistics, which I'm very passionate about. I found that after learning one foreign language, the next ones became remarkably simpler. Yet this one 'task' can be approached through multiple routes: immersion in a culture, or focusing first on textbook grammar. This would mirror what Li said about learning about cats and dogs by reading about them as opposed to actually playing with them. For computers, has it ever been the case that a task can be learnt through multiple avenues? Also, as with distinct languages, how can we know that two tasks are sufficiently different to be considered distinct, rather than one directly affecting the learning of the next?

nmasse commented 5 years ago

I would love to add an unsupervised component to this model! It is very likely that most learning we do is unsupervised, as we learn to construct a model of our world. Going forward, it's probably crucial that our networks learn to "properly" represent stimuli before we teach them tasks (what "properly" means here is currently up for debate). The challenge is actually implementing this. Would be happy to chat further.
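Very roughly, the kind of arrangement I have in mind looks like the sketch below. It is only a toy: PCA stands in for a real unsupervised learner (an autoencoder, a contrastive method, etc.), the sizes are placeholders, and none of this is from the paper. The point is just that the per-task gates would sit on top of a representation learned without any task labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random data standing in for real stimuli; sizes are placeholders.
n_input, n_hidden, n_tasks = 784, 200, 5
X = rng.random((512, n_input)).astype(np.float32)

# Unsupervised stage: learn a representation of the stimuli before any task
# is defined. PCA via SVD is only a stand-in for a richer unsupervised learner.
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
encoder = Vt[:n_hidden].T                  # (n_input, n_hidden) projection

# Task stage: fixed per-task random gates applied on top of the learned
# representation, just as they would be on a raw hidden layer.
gates = (rng.random((n_tasks, n_hidden)) > 0.8).astype(np.float32)

def represent(x, task_id):
    h = (x - X_mean) @ encoder             # unsupervised features
    return h * gates[task_id]              # context-dependent gating

print(represent(X[:1], task_id=0).shape)   # (1, 200)
```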

nmasse commented 5 years ago

As for policyglot's question about knowing whether two tasks are sufficiently distinct, we don't really know. I reckon that pairs of tasks don't fall neatly into "same" or "different", but probably live on some continuum. I mentioned somewhere else that we're not even sure what a "context" really means, and eventually we'll have to figure that out.