uchicago-computation-workshop / Spring2024

Spring Workshop 2024, Thursdays 9:30-11:50am

Questions for Uri Hasson concerning his talk on "Deep language models as a cognitive model for natural language processing in the human brain." #4

Open jamesallenevans opened 2 weeks ago

jamesallenevans commented 2 weeks ago

Post your questions for Uri Hasson about his talk and paper: Deep language models as a cognitive model for natural language processing in the human brain. Naturalistic experimental paradigms in cognitive neuroscience arose from a pressure to test, in real-world contexts, the validity of models we derive from highly controlled laboratory experiments. In many cases, however, such efforts led to the realization that models (i.e., explanatory principles) developed under particular experimental manipulations fail to capture many aspects of reality (variance) in the real world. Recent advances in artificial neural networks provide an alternative computational framework for modeling cognition in natural contexts. In this talk, I will ask whether the human brain's underlying computations are similar to or different from the underlying computations in deep neural networks, focusing on the neural processes that support natural language processing in adults and language development in children. I will provide evidence for some shared computational principles between deep language models and the neural code for natural language processing in the human brain. This indicates that, to some extent, the brain relies on overparameterized optimization methods to comprehend and produce language. At the same time, I will present evidence that the brain differs from deep language models as speakers try to convey new ideas and thoughts. Finally, I will discuss our ongoing attempt to use deep acoustic-to-speech-to-language models to model language acquisition in children.
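The autoregressive objective the abstract alludes to — predicting the next word from prior context using statistical patterns learned from data — can be illustrated with a toy bigram predictor. This is a deliberately minimal sketch with an invented tiny corpus; actual deep language models use transformer networks with far richer context and millions of parameters:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count word-to-next-word transitions in a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most probable next word and its conditional probability."""
    following = counts[word.lower()]
    total = sum(following.values())
    best, n = following.most_common(1)[0]
    return best, n / total

# Hypothetical two-sentence corpus, just to exercise the predictor.
corpus = [
    "the brain predicts the next word",
    "the model predicts the next token",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → ('next', 0.5)
```

Even this crude model shows the key point raised in several questions below: prediction can succeed on statistics alone, with no explicit semantic or syntactic analysis.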

zcyou018 commented 2 weeks ago

Considering that deep language models and the human brain may share some computational principles, how can we leverage this similarity to improve our understanding of human cognition, particularly in the context of natural language processing, while still addressing the key differences between these models and human-centric attributes?

anzhichen1999 commented 2 weeks ago

In your study, you align the computational principles of autoregressive deep language models (DLMs) with the cognitive processes of the human brain during language processing. Knowing that these models do not engage in semantic or syntactic analysis but rather rely on statistical language patterns learned from large data sets, how might this alignment influence our understanding of the neurobiological underpinnings of language?

Dededon commented 2 weeks ago

Hi Professor Hasson, thank you for your talk! I am interested in how current Large Language Models could be applied to reasoning in neural network research.

Jessieliao2001 commented 2 weeks ago

Thanks for sharing, Prof. Hasson! I have a question: To what extent can deep language models serve as an accurate cognitive model for natural language processing in the human brain, considering both the shared computational principles and the disparities observed during language comprehension and production?

bhavyapan commented 2 weeks ago

Thank you for sharing your work! Do the insights from such studies vary across languages, or are they applicable only to specific settings? I recall a study in which, contrary to predictions, large-scale language models proved ineffective at representing children's utterances: the authors used a deep learning-based language development screening model, built on word and part-of-speech features, to investigate its effectiveness in detecting language learning impediments in children.

Oh, Byoung-Doo, Yoon-Kyoung Lee, Jong-Dae Kim, Chan-Young Park, and Yu-Seop Kim. 2022. "Deep Learning-Based End-to-End Language Development Screening for Children Using Linguistic Knowledge" Applied Sciences 12, no. 9: 4651. https://doi.org/10.3390/app12094651

AnniiiinnA commented 2 weeks ago

The study suggests that large-scale language models, although effective in general natural language processing tasks, do not perform as expected in representing the nuances of children's speech. Given the unique linguistic features of early language development, such as incomplete grammatical structures and limited vocabulary, could the adaptation of these models to focus more on phonetic and prosodic features rather than traditional lexical and syntactic elements improve their applicability in developmental language screening? What other modifications could be considered to enhance the performance of these models in the context of early language development?

jiayan-li commented 2 weeks ago

Hi Professor Hasson, I'm intrigued by how cognitive research intersects with various deep learning model architectures, such as LSTM and Transformers, along with techniques like prompt engineering tailored for Large Language Models (LLMs). Given the dynamic nature of NLP research within computer science, how do we navigate the integration of these diverse approaches into cognitive studies? Are we primarily focused on exploring the fundamental parallels with neural networks in our cognitive research endeavors?

ymuhannah commented 2 weeks ago

Thanks for sharing! One of your studies focuses on predictions and neural responses primarily within English-speaking contexts. How might these computational principles adapt or vary when applied to languages with significantly different syntactic or morphological structures, such as agglutinative languages like Turkish or languages with varying script like Chinese?

XiaotongCui commented 2 weeks ago

Thanks for sharing! Given the current findings, how should future research further explore and utilize deep language models to enhance our understanding of the mechanisms underlying human language processing?

ana-yurt commented 2 weeks ago

Hi Prof. Hasson, thanks for sharing your fascinating research. What would be the link from modeling individual language processing to modeling collective cultural behaviors? How might language models assist in such an endeavor?

PaulaTepkham commented 2 weeks ago

Thank you for sharing your paper with us! From my perspective, the most interesting question to think about further is whether we should develop computers to think like humans at all, since there is such a variety of human thought and ways of life, some of it quite erratic. I still think it is a good idea to be able to distinguish between computer thought and human thought. I am looking forward to the talk!

shaangao commented 2 weeks ago

Thank you so much for sharing your research with us, Uri! One major question regarding the human brain vs. LLM comparison is the amount of training data available: LLMs are trained on a huge amount of data, whereas human children learn language from a very limited number of examples. I look forward to hearing about the findings of your language acquisition project, and more generally your take on developmentally plausible language models.

Kevin2330 commented 1 week ago

Thank you, could you elaborate on which specific computational mechanisms are shared, and which are distinctly different when the brain processes new ideas compared to deep neural networks? Based on your findings, what are the next steps in refining these models to better align with human brain functions?

ecg1331 commented 1 week ago

Thank you so much for sharing your research!

Given the similarities you have pointed out between DLMs and the human brain, do you think there are more yet to be discovered? And if not, do you think it's possible for DLMs to continue developing to become more similar?

kexinz330 commented 1 week ago

Thank you for sharing your research! You discuss the use of artificial neural networks to model cognition in natural contexts. Could you elaborate on how these models might be integrated into existing cognitive neuroscience methodologies to enhance our understanding of brain function?

natashacarpcast commented 1 week ago

Thanks for the research! I remember from a class in Cognitive Science, that we discussed that maybe there are some variations in how people perform at specific tasks (such as counting or discriminating between colors) based on the language they speak, since their conceptual schemes (constructed through language) are different. Do you think that these models would be able to be fine-tuned to how different languages operate in the brain? And be useful to observe that kind of differences in task performance?

alejandrosarria0296 commented 1 week ago

It is very interesting to see how deep language models are similar to human computation. At the same time, I'm curious about the interesting ways in which they may differ. Have you found any such instance that you find relevant?

Daniela-miaut commented 1 week ago

Thanks for sharing your work! I'm interested in whether your findings suggest something about the cognitive process by which people understand language (make meaning of it), and what these implications are.

Hai1218 commented 1 week ago

Professor Hasson, considering the inherent limitations of DLMs in modeling complex human cognitive processes such as the understanding and production of novel ideas, to what extent do you believe these models need to evolve to more accurately reflect the nuanced, dynamic nature of human cognition?

yuhanwang7 commented 1 week ago

Thanks for sharing your research. My question is whether there are specific aspects or characteristics of direct-fit models that are particularly biologically plausible or implausible. How well do these models align with known physiological and neurological data?

yuzhouw313 commented 1 week ago

Hello Professor Uri, thank you for presenting your research to us! Considering the temporal dynamics of the human brain's language processing, including rapid comprehension during conversation and the incremental nature of speech production, how do artificial neural network models handle the temporal aspects of language processing?

hchen0628 commented 1 week ago

Thank you very much for sharing! Considering the computational strategies shared by deep language models (DLMs) and humans in understanding language, and their differences in generating new ideas and handling complex linguistic details, what neuroscientific knowledge could future DLMs apply to better mimic these human capabilities? What steps should be taken to enhance the models' ability to process real-time conversational understanding and generate new, meaningful content?

ksheng-UChicago commented 1 week ago

Thanks for sharing. I would like to know more about how humans can work with deep language models to train their speaking capabilities since there are many similarities and major differences. Is it more like a collaboration than competition in the years to come?

zhian21 commented 1 week ago

Thank you for sharing your work! Your research mentioned the shared computational principles between deep language models and the neural code for natural language processing in the human brain. Given that deep neural networks require extensive data and computational resources for training, how do you think these models can inform us about the efficiency of language acquisition in children, who often learn language with much more limited data and exposure?

yunfeiavawang commented 1 week ago

Thanks for this great research! My question is what neuroscientific information or techniques future deep language models might utilize to better emulate human talents, given the discrepancies between DLMs and humans' ways of processing intricate linguistic details.

lbitsiko commented 1 week ago

How might the principles you present differ when applied to other structured forms of communication, such as mathematical logic, or more loosely structured forms of communication, such as art? What implications could these differences have for our understanding of cognition?

ethanjkoz commented 1 week ago

This research is really interesting. In the Goldstein et al. paper, you mention the three main ways that human brains and DLMs are alike when it comes to word prediction, but I am also curious about the significant differences between DLMs, both at the time of that study and now (like GPT-4), and the human brain. What are the next steps for bridging the gap? Furthermore, is there any chance of selection bias from using epileptic patients?

fabrice401 commented 1 week ago

Thanks for sharing your work! Considering the overparameterized nature of deep learning models and their ability to process language similarly to the human brain, how can we incorporate insights from the human brain’s unique capacity for generating new ideas and thoughts into the development of more innovative and contextually aware language models?

essicaJ commented 1 week ago

Hi Professor Hasson. Thanks so much for sharing your work! My question is where do you see the biggest divergences between how deep language models process language compared to the human brain? You note differences emerge as speakers try to convey new ideas - can you elaborate on that? Thanks!

wenyizhaomacss commented 1 week ago

Thanks for sharing your work! Since the brain differs from deep language models when speakers try to convey new ideas and thoughts, what are some of the crucial human-centric properties missing in these machine learning models, and how do they impact language comprehension and production? What are the differences between the trade-off between understanding and competence in deep neural networks and in the human brain during language processing?

lguo7 commented 1 week ago

Thank you for sharing! Here is my question: Considering the direct-fit approach discussed in the article, what are the ethical implications of using such models in social science research, especially when they are applied to predictive policing, hiring practices, or loan approvals?

zhuoqingli526 commented 1 week ago

Thank you for sharing the meaningful work! The article (Shared computational principles for language processing in humans and deep language models) mentions the capabilities of DLMs in simulating human brain language processing. How do DLMs perform in handling multimodal tasks that involve non-linguistic information, such as visual or auditory signals? Is it possible to develop models that can simultaneously process and understand multiple sensory inputs?

Weiranz926 commented 1 week ago

Thank you for your sharing! My question is: In light of the similarities between the computational principles of deep language models and natural language processing in the human brain, how can these findings be applied to develop educational tools or interventions, particularly for children with atypical language development? Additionally, what ethical considerations should be addressed when implementing these AI-inspired methods in real-world educational settings?

HamsterradYC commented 1 week ago

Thank you for sharing this work! The article mentions that although artificial neural networks (ANNs) are inspired by biological neural networks (BNNs), there are significant differences in structure and function between the two. When designing ANNs, how do you think we should balance biological fidelity with computational efficiency? In the process of mimicking biological neural networks, which biological features are worth emulating, and which might be unnecessary?

Marugannwg commented 1 week ago

I really enjoy contemplating the link Professor Hasson draws between deep learning architecture and human cognitive capacity. Today, many people perceive the extraordinary capacity of AI as something inhuman, and I think it is worth distinguishing which facets of that capacity resemble human cognitive processes and which are alien to human features: where are the role and uniqueness of humans in learning (e.g., language), compared to using a neural architecture to build a generative/predictive model?

CanaanCui commented 1 week ago

Thanks for sharing! Could you elaborate on how the shared computational principles between deep language models and the neural code impact our understanding of language acquisition in children, especially in terms of the differences observed when adults generate novel ideas?

Brian-W00 commented 1 week ago

Thank you for sharing. How can we distinguish and exploit the similarities and differences in language processing between the brain and deep neural networks when applying these findings in practice?

binyu0419 commented 1 week ago

Thank you for sharing! I am wondering how current deep language models are adapted or modified to more accurately reflect the specific neural processes of the human brain for language simulation?

isaduan commented 1 week ago

Thanks for sharing your research with us! What do you think other processes of the brain (e.g., long-term planning) suggest for the next generation of deep language models? Does the analogy make sense there too? How big do the models need to be, and how finely tuned their parameters, to achieve human-like performance?

secorey commented 1 week ago

Thanks for presenting your work. What do you think is the biggest limitation in the relationship you draw between the development of AI models and evolution?

jinyz1220 commented 1 week ago

What specific evidence supports the notion that the human brain relies on overparameterized optimization methods for language processing, and how does this insight challenge traditional views of linguistic cognition?

yuy123337 commented 1 week ago

Hi Professor Hasson. I am wondering how you anticipate that code-switching in your research might influence the understanding of language processing mechanisms in deep acoustic-to-speech-to-language models.

jialeCharloote commented 1 week ago

Hi Professor Hasson, considering your findings that deep neural networks share computational principles with the human brain's language processing, yet differ significantly when generating new ideas, how do we address the potential limitations of using such models to fully replicate the complexity and adaptability of human cognitive processes, especially in dynamic real-world scenarios? Furthermore, what are the implications of these differences for the validity of deep neural networks as models for language development in children?

Caojie2001 commented 1 week ago

Thank you for sharing your interesting work! I wonder whether any new approaches can simulate the processing procedures of the human brain better than deep language models do.

C-y22 commented 1 week ago

Thank you for sharing! How do you see the findings of your research being applied in practical settings, such as in education or artificial intelligence development?

zihua-uc commented 1 week ago

Thank you for sharing this interesting research! I just wonder how the language formulation process differs for bilinguals, or individuals who speak more than two languages. Are there any striking differences between these polyglots and deep neural networks in natural language processing, or between them and individuals who speak only one language?

HongzhangXie commented 1 week ago

Thank you for your presentation. I am interested in whether the deep language models in the brains of different human individuals differ. Do these differences affect the effectiveness of language communication between individuals?

xiaowei-v commented 1 week ago

It is so interesting that the language model shares a similar way of predicting words from contextual information. The paper stressed that DLMs indicate that linguistic competence may be insufficient to capture thinking. I would expect that machines might also demonstrate some characteristics of human thinking, yet what we observe in reality by interacting with generative AI (e.g., ChatGPT) is that there might be ways to distinguish between AI and humans. However, it would be interesting to test whether people are really capable of differentiating between them in interaction without telltale language (e.g., "I am a language model and do not have the information you are asking for").

QIXIN-ACT commented 1 week ago

Thank you for sharing your insight! Given the shared principles between deep neural networks and brain function in language processing, do you see a potential for direct applications of these findings in technology or education, particularly in improving how we teach language to both children and adults? Could you elaborate on how this research could be applied practically? Could you provide some examples?

iefis commented 6 days ago

Thanks for sharing! Could you give your insight on how the deep language model might help explain natural language processing and the evolution of language in the human brain, especially given the diverse set of social and cultural stimuli that keep language in constant, dynamic change?