
Google fellowship essay #12

Closed falcondai closed 4 years ago

falcondai commented 4 years ago

I have a broad range of research interests in reinforcement learning, natural language processing, and computer vision. The consistent theme of my research is to ask novel questions and to bring into focus new aspects of old problems. These questions are often inspired by applied research and formulated to distill the key theoretical aspects. In many ways, my undergraduate study in physics and mathematics informs my aesthetics in research. Before I venture to speculate on the impact of my research with two personal stories, I wish to say that I have deeply benefited from the impact of others. Partly due to my interests and partly out of pure intellectual curiosity, I keep a diverse collection of works to read, and their sheer imaginativeness never ceases to amaze me. Twice, the niche questions I studied turned out to be independently and simultaneously studied by others. I cannot help but recognize the world-historic moment we live in and the whisper of ideas that is all around us.

The first story is about how I decided to apply for a PhD in computer science. In late 2013, I was in the audience when DeepMind first presented their DQN algorithm playing Atari games at a deep learning workshop at NeurIPS. I was stunned. Not knowing anything about reinforcement learning, I immediately realized, with the help of the presentation by Vlad Mnih, that this problem setting can capture some aspects of scientific research: its experimentation and exploration. I love physics dearly, and the prospect of explicating its core challenges, and even replicating a curious mind in silico, captivated me. I applied and started my PhD shortly afterwards, walking away from several industry opportunities. RL has been my main focus ever since. My research is driven by my own intellectual curiosity about the problem of intelligence.

The second story is about how I found a passion for working on socially relevant problems. In late 2018, I met an outsider to machine learning research at the end of the NeurIPS conference. She asked me a simple question: whether I think the expected effect of developments in AI is net positive. Her followup question was pointed: if one does not strongly believe in that, then why should they work on AI? There are many ways to dismiss this kind of question, but I took it seriously and personally. Against the backdrop of the exuberance of industry and academia on display at NeurIPS, and the many recent collective reckonings with the misuse of technologies, I may also have felt responsible for representing my community to a thoughtful skeptic. We talked for hours, and in the end I realized that we have to ensure that positive outcome. We, the AI research community, uniquely capable of understanding the technical nuances, have a unique social responsibility. I do not believe in some kind of unified voice or position for the whole community. But given the prevalence and rapid adoption of machine learning technologies, we must stop being naive about the implications of our own work. Indeed, I have found research problems based on humanistic desiderata (fairness, interpretability, etc.) and more realistic assumptions (human cognitive biases, etc.) to be immensely exciting. Currently, I am pursuing a critical study of the temporally extended effects of recommendation systems through the lens of RL.

By working on both applied and theoretical problems, I wish both to bring more clarity and theoretical import to certain applied problems through formalization, and to communicate certain emergent challenges to theory-minded researchers. And by being mindful of the societal issues of our times, I hope to work on problems whose solutions will limit the misuse of these technologies, and to encourage others in our research community to do the same.

falcondai commented 4 years ago

Reframe it as my visits to NeurIPS ’13 and ’18, and my re-visiting of Tahoe, where ’13 was held. My experience at ’13 inspired even the research I do today.

falcondai commented 4 years ago

Reflections on NeurIPS

I have attended NeurIPS in 2013, 2018, and 2019. With the benefit of hindsight, I can clearly see the impact of those early experiences.

2013

The first story is about how I decided to apply for a PhD in computer science. In late 2013, I was in the audience when DeepMind first presented their DQN algorithm playing Atari games at the Deep Learning workshop at NeurIPS (then shorthanded as NIPS). I was stunned. Not knowing anything about reinforcement learning (RL), I immediately realized, with the help of the presentation by Vlad Mnih, that its problem setting can capture some aspects of scientific research: experimentation and exploration. I love physics dearly, and the prospect of explicating its core challenges, and even replicating a curious mind in silico, captivated me. I applied and started my PhD shortly afterwards. And RL has remained my main focus.

2018

The second story is about how I found a passion for working on socially relevant problems in machine learning (ML). In late 2018, I met an outsider to machine learning research at the end of the NeurIPS conference. She asked me a simple question: whether I think the expected effect of developments in AI is net positive. Her followup question was pointed: "If one does not strongly believe in that, then why should they work on AI?" There are many ways to dismiss this kind of question, but I took it seriously and personally. Against the backdrop of the exuberance of industry and academia on display at NeurIPS, and the many recent collective reckonings with the misuse of technologies, I may also have felt responsible for representing my community to a thoughtful skeptic. We talked for hours, and in the end I realized that we have to ensure that positive outcome. We, the AI research community, uniquely capable of understanding the technical nuances, have a unique social responsibility. I do not believe in some kind of unified voice or position for the whole community. But given the prevalence and rapid adoption of machine learning technologies, we must stop being naive about the implications of our own work. Indeed, I have found research problems based on humanistic desiderata (fairness, interpretability, etc.) and more realistic assumptions (human cognitive biases, etc.) to be immensely exciting. Currently, I am pursuing a critical study of the temporally extended effects of recommendation systems through the lens of RL.

2019

As 2019 came to an end, I presented my first publication at the main conference of NeurIPS, on the subject of regret complexity in RL. During the conference I also finished a manuscript on a formal issue related to word2vec, which, like RL, I first encountered at NeurIPS 2013. Both works involve making loopy arguments, and I came full circle back to my first NeurIPS visit when I revisited South Lake Tahoe on a ski trip shortly after New Year's Day of 2020.