Hi Dr. Rahwan, what an honor it is to have you speak at our workshop! Your works The Moral Machine and Machine Behaviour are required readings in multiple classes in our department. Machine Behaviour, with its call for cross-disciplinary collaboration, has especially stayed with me. As questions around AI regulation have grown to a fever pitch in public discussion (op-eds, grandstanding in the US Congress), the fact that the creators of an algorithm are usually also its evaluators has remained more or less unchanged. As the creation and interpretation of deep learning models become more opaque, what can AI designers do to increase transparency for the benefit of both other researchers and the public? I have seen practical examples like Model Cards and Datasheets from researchers like Timnit Gebru, but I was curious what your thoughts were.
Thank you so much for this fascinating and meaningful research. Reading The Moral Machine, I was struck by the differences between ethicists' and the public's morality. You reference the need for policymakers to consider both, and I was curious: in your own opinion, how should we weigh these views, and why?
Dr. Rahwan,
Thank you for sharing your research with us! Having read the Moral Machine paper in my undergraduate machine learning class, I am super excited to discuss the ethical implications of what we termed "the new trolley problem". I have always been particularly curious about the effect of context when a participant makes a judgment in the Moral Machine experiment. While autonomous vehicles are more common than they once were, we are still far from an age where they are the norm. When sampling across countries with wildly different socioeconomic breakdowns, do you think it is possible that participant responses could be biased by the belief that autonomous vehicles are too far from their reality at the time of response? I could see this becoming a larger issue in countries with a lower global socioeconomic status, where it may be multiple decades before autonomous vehicles are seen on the roads.
Thanks so much for coming to our workshop and sharing your work with us, Dr. Rahwan! I'm super excited because last quarter, in Prof. Solotoff's course "Big Data and Society", I read the Moral Machine paper in the first week's readings [course ad alert LOL]. The paper itself is a great demonstration of how scientists from different disciplines can collaborate on AI ethics problems, as the authors came from backgrounds as varied as psychology, philosophy, and computer science.
The Moral Machine emphasized that differences in country, religion, ethnicity, and culture result in different ethical priorities. Given AI systems' lack of ethical commitment and ontological sensitivity of judgment, it is hard for them to evaluate their own decisions and creations, or to shoulder further responsibility. My question is how we (as humans) should incorporate human judgment into the use of these powerful "reckoning" systems and evaluate AI's performance. I would emphasize the how: certain human characteristics are obvious, as listed in the paper, and I'd like to learn more about particular ways to implement them.
I'll be participating asynchronously, but I'm still looking forward to your talk. Thanks again for coming!
Thanks for coming to give us this talk! I read the Moral Machine paper in college for a class on the philosophy of mind and its relationship with ethics, so I'm very excited to watch the talk (I have a class at the time, so I will unfortunately be watching asynchronously). One question I have had since first reading the paper is whether there is interesting information in the experiment's outlier data. What is going on with the extremely abnormal participants? This is one of the rare cases where the outliers are extremely scientifically interesting in and of themselves!
Thank you so much for coming to give us this talk. Looking forward to the presentation.
Thank you for coming and speaking with us today! My question is: should society design machines/programs concurrently with discussions of their ethics, or should we focus on developing the technology first and deal with the ethical issues later? If we choose the former route, are we hindering technological advancement by focusing on challenging ethical issues?
Thanks in advance for the inspiring talk! How should we incorporate human characteristics into the "reckoning" systems? My question is similar to @Lynx-jr's.
Thanks for sharing your work with us! My question is: how would the ethical framework described in your paper (i.e., basing self-driving cars' decisions on human views about whom to save in an impossible situation) evolve as new data comes in? For example, what if, after self-driving cars become more mainstream, it becomes clear that there is actually a social preference for saving old people over young people? What would the turnaround time be for implementing this new data? Would the car be able to "learn" from this sort of experience, or could it only be instilled in the next generation of cars? What are the implications of that latency period? And do you anticipate that self-driving cars' ethical decision-making frameworks will vary by country, along the lines of the results you've discussed, or be more homogeneous?
Excited to hear your presentation, Dr. Rahwan! In an era where technology and platform design influence so many of our personal tendencies (happiness, attention span, curiosity...), in a market largely regulated by producer design and consumer demand, what role do you think ethical intervention will (and realistically can) play in future technology design, with self-driving cars and beyond? Do you imagine the framework applied in the Moral Machine being applied to lower-stakes circumstances?
Thank you, and I look forward to the presentation. As machines participate in more and more decision-making processes, we must understand them better; their false-positive and false-negative rates could be crucial.
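To make concrete which rates I mean, here is a minimal sketch with invented counts (not from any real system) showing how the two rates are computed from a confusion matrix:

```python
# Minimal sketch: computing false-positive and false-negative rates
# from a confusion matrix. All counts below are invented for illustration.
true_negatives = 850   # machine correctly said "no"
false_positives = 50   # machine said "yes" when the truth was "no"
false_negatives = 30   # machine said "no" when the truth was "yes"
true_positives = 70    # machine correctly said "yes"

fpr = false_positives / (false_positives + true_negatives)  # 50 / 900
fnr = false_negatives / (false_negatives + true_positives)  # 30 / 100

print(f"False-positive rate: {fpr:.1%}")  # 5.6%
print(f"False-negative rate: {fnr:.1%}")  # 30.0%
```

The point of the sketch is that a system can look accurate overall while one of the two error rates stays dangerously high.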
Thank you very much for sharing this very interesting and important work!
Do you think early regulations on autonomous vehicles, such as those in Germany, will have a profound impact on the future of centralized AV regulation, or will they serve more as a test to gauge the global public response? Since AV rules are among the earliest regulations of moral machine decision-making, will they break ground for centralized regulation of AI as a whole?
I will be joining the Technische Universität Berlin summer school virtually this year, so I am eager to learn more about these regulations from within a German institution!
Thank you so much for this fascinating and meaningful research. Your research reminds me of a book I recently read about how AI-supported technology for calculating the probability that a child is being abused or neglected is highly biased by historical data and by what data people can collect (garbage in, garbage out). Therefore, my question is: given that machines are just doing calculations based on the data people feed in, how can we actually avoid the ethical problems without abandoning "bad" machines altogether, like machines passing judgment on gender/race/etc.? Thank you.
Hi Dr. Rahwan, thanks for sharing your work with us! This is a really interesting research topic, and it supplements last week's workshop questions from another angle. How to keep a balance between using, controlling, and trusting machines is a crucial problem. About the papers, no offense meant, but I am a little curious to what extent you think the moral machine frameworks you propose could be applied in practice. What are policymakers' views on the "moral machine"? Thanks!
I agree that the interaction between machines and human beings has reached unprecedented importance, and your work does shed light on this research question. However, this paper is beyond my field of interest, and I don't want to venture into commenting on it. Looking forward to your presentation, Dr. Rahwan.
Thanks so much for sharing! I'm slightly curious about how such ethics or morals will be decided or altered when there are conflicts between different cultures or beliefs, since different cultures can hold strikingly divergent ethics. Thank you!
Thank you so much for the sharing. I would like to hear more cases in which machines failed our trust. Also, I think different people or countries may hold different opinions on the definition of "trustworthy." Should we, and how should we, take that into consideration? Thank you, and looking forward to the presentation.
Thank you so much for the great presentation! How do you think we can apply this methodology to other disciplines of social science such as economics? Look forward to your talk tomorrow.
Thanks very much for sharing this wonderful paper with us. Look forward to your presentation tomorrow!
Thank you for your presentation! I read the Moral Machine as a starter for the Big Data and Society course, and it sparked lots of discussion and laid foundations for the rest of the course. What are your thoughts on the Moral Machine results, the current ethical frameworks around AI, and the idea of Artificial General Intelligence (supposing we deem it necessary)?
Hi Dr. Rahwan,
Thank you for presenting in our workshop! It was very interesting to learn about the concerns about morality within the realm of artificial intelligence. I enjoyed learning about this in the context of autonomous cars, which are being used more and more today.
Thank you, and I look forward to your presentation!
Thanks for coming! How can these research results be applied to other fields?
Thank you very much for sharing. As mentioned by @YuxinNg, I wonder how researchers in different disciplines think differently about what makes machine learning "trustworthy" when translating it into academic insights.
Thanks for presenting your work!
Thank you so much for sharing. As we rely more and more on recommendation algorithms to learn about the world, how can we avoid being trapped inside the information cocoons created by AIs?
Great! Following @luxin-tian and @YuxinNg, the trustworthiness question in machine learning baffles me. Hope you can share your thoughts!
Thank you for presenting your work. It is interesting to learn about the implementation of the Moral Machine. As you mention in the paper, you investigated cultural and economic variation in predicting the vector of moral preferences, and you introduced the Gini coefficient as one of the variables considered. My question is how you measure the predictive power of the variables you chose, and what would happen if you used other variables in this prediction task? Thank you.
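One way I could imagine operationalizing "predictive power" is comparing out-of-sample fit with and without a given covariate. Here is a minimal sketch on synthetic data; the covariates, coefficients, and sample size are all invented, and this is not the paper's actual analysis pipeline:

```python
# Sketch: comparing the predictive power of country-level covariates
# for a moral-preference score. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_countries = 130
gini = rng.uniform(0.25, 0.60, n_countries)       # hypothetical Gini index
log_gdp = rng.normal(9.0, 1.0, n_countries)       # hypothetical log GDP per capita
# Invented "ground truth": preference depends on both covariates plus noise.
preference = 0.8 * gini + 0.1 * log_gdp + rng.normal(0, 0.1, n_countries)

X_full = np.column_stack([gini, log_gdp])
X_reduced = log_gdp.reshape(-1, 1)  # drop Gini to see how much it adds

for name, X in [("with Gini", X_full), ("without Gini", X_reduced)]:
    r2 = cross_val_score(LinearRegression(), X, preference, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```

The drop in cross-validated R^2 when a variable is removed gives one rough measure of how much predictive power that variable carries.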
Thank you in advance, Dr. Rahwan, for speaking with us about your extremely meaningful work! I am very interested in learning more about the impacts of trusting machines from an intersectional perspective. How have the discussions around ethics in AI benefited some but not others (in terms of demographics, level of development, etc.)?
Thanks for bringing this really interesting topic here! I am wondering to what extent can the Moral Machine results measure the congruence of universal machine ethics.
Dr. Rahwan, thank you for presenting your work in our workshop. In the domain of natural language processing, there are more and more attempts to increase the transparency of language models. In particular, the field is testing machine learning models with specially designed difficult sentences, such as sentences with a non-canonical thematic role assignment (e.g., "the customer served the waitress"). I was wondering what you think of using particularly difficult test cases to make machine learning more transparent?
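To illustrate the kind of probe I mean, here is a minimal sketch using the Hugging Face transformers fill-mask pipeline; the model choice and probe sentence are my own illustrative assumptions, not a method from any particular paper:

```python
# Sketch: probing a masked language model with a sentence whose thematic
# roles are non-canonical ("the customer served the waitress"). The model
# and probe sentence are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

probe = "The customer served the waitress, so the [MASK] received the meal."
for candidate in fill(probe, top_k=5):
    print(f"{candidate['token_str']:>12}  p={candidate['score']:.3f}")
```

If the model prefers "customer" over "waitress" here, it is likely defaulting to the canonical role assignment (waitresses serve customers) rather than parsing the sentence, which is exactly the kind of failure such challenge sets are built to expose.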
Thanks for your presentation Dr Rahwan! Excited to learn more about your work.
Thank you for this great presentation! I wonder how we could create machines that both follow moral rules and execute commands at maximum efficiency. And how do we set a political boundary that technicians can easily understand?
Thank you for the presentation. My question is similar to @MkramerPsych. Looking forward to the presentation.
I am very interested in human-centered machine learning and human-computer interaction. I wonder how we can further enable machines to learn and work according to human ethics, and to respond to human cultures.
Thank you so much for sharing your research. I'm curious about the human-machine ecology in AI application.
Thank you for your sharing! Your research on the Moral Machine is really interesting. It seems that the goal is to build a machine with ethical considerations grounded in global ethical variation, so that the moral dilemmas faced by autonomous vehicles can be resolved.
I wonder how this kind of machine moral standard would be applied in the real world. Besides, since you train the machine with the world's ethical heterogeneity in mind, I wonder how you ensure that no subjective judgment from the researchers enters the training.
Thank you. Looking forward to your presentation.
Thanks for sharing. Could you explain more about how to balance ethicists' and the public's moral views in policymaking?
Thank you for presenting your work. I would like to hear more about the problem of accountability for autonomous AI systems such as the ones leveraged by self-driving car technologies. What would be a useful framework to think about whom to hold accountable for the faults within those systems? How should legal systems around the world adapt to AI?
Thanks for sharing your work! My question is similar to some students': how should we deal with the potential conflict between ethicists and public morality?
Thank you for presenting at our workshop! What is your perception of how well informed the different stakeholders in the problem of machine morality are? From the technologists who build these systems, to the people who use them, to the policymakers and politicians who lobby and legislate over them, there is clearly a gradient of technical understanding. Algorithms are rarely marketed to people as such; instead they are packaged as 'products' that 'do things.' The Cambridge Analytica hearings revealed that many US lawmakers struggle to conceptualize an algorithm at all, let alone an ensemble of deep neural networks interpreting multisensor data (as in the case of a self-driving car).
Is there a baseline of technical knowledge required across groups for there to be productive discussion on these ethics? How can we achieve that standard?
Thank you for presenting your work to us, Dr. Rahwan! I am wondering: do you think it is ethical to let machines make moral decisions? What I mean is that emotional behavior and concern are among the major differences between humans and machines. Thanks again for your presentation in advance!
Thanks for your sharing! Could you elaborate on how you satisfied the moral standard using this new approach?
Thank you for your presentation. My question is how this conclusion can contribute to economics research.
Thank you for sharing your work with us! I’m curious about the application of human-centered machine learning in other fields. Looking forward to your presentation.
Thank you for presenting your work! One question: how likely do you think something like the movie The Matrix is to happen? Do you think the moral standards of humans might be altered by our reliance on machine decisions (as they might be more impartial)? Will machines become more emotional, like humans, or will humans become less emotional, like machines, in the long run?
Could you please explain Moral Machines in more detail?
Thanks very much for your sharing! I think the moral machine is indeed an interesting research topic. My question is about its cultural diversity: will it take different forms in different countries?
Hi Professor Rahwan, I've been a great fan of yours since your time at the Media Lab. Welcome to our workshop. My question is about the nature of morality in the age of machine intelligence. In particular, do you think crowdsourced moral opinion will be more legitimate than professional opinion? That is, should morality be descriptive or prescriptive?
Thank you for sharing. Looking forward to the presentation.
Looking forward to watching your presentation asynchronously, Dr. Rahwan.
Comment below with questions or thoughts about the reading for this week's workshop.
Please make your comments by Wednesday 11:59 PM, and upvote at least five of your peers' comments on Thursday prior to the workshop. Reactions only count toward 'top comments' if you use the 'thumbs-up' emoji, but you may add other emojis on top of the thumbs-up.