Got more than halfway done last night. Still need to tie some things together at the end and reword a few things. After that, I still have to make it fit with the nearby sections (once I see drafts of those), make some edits for clarity, and throw in a few citations.
Hey Peter,
I’ve got an outline for my section, and I imagine that I’ll be able to transform it into something workable within the next couple of hours. Here’s the outline:
BACKGROUND:
-brief background on self-driving cars
APPLICATION:
-three main stakeholders: public, government, corporations
-explain that stakeholders can vary depending on culture, value systems, and economic status, BUT for the purpose of this application, we’ll focus on generalized American culture, values, and economic status
-try to define American culture
-weigh importance of the 3 stakeholders in American culture
-apply our solution (utilitarian, prevent harm/violation of core values, with caveat: can’t violate core values of autonomy of self, welfare/well-being, and economics) to self-driving cars:
-what would a utilitarian approach look like? Pros/cons
-explain what makes our approach different from/better than strict utilitarianism
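If it ever helps to make that constrained-utilitarian rule concrete, here’s a rough, purely illustrative sketch (the names, scores, and structures are placeholders I’m making up, not anything from the draft): pick the highest-aggregate-welfare action among those that don’t violate a core value.

```python
# Illustrative only: constrained utilitarianism as "maximize total welfare,
# subject to not violating any core value." All field names are placeholders.

def choose_action(actions, stakeholders):
    """actions: list of dicts like
    {"name": ..., "welfare": {stakeholder: score}, "violates_core_value": bool}"""
    permissible = [a for a in actions if not a["violates_core_value"]]
    if not permissible:
        return None  # no acceptable option; defer to a stronger rule / human review
    return max(permissible, key=lambda a: sum(a["welfare"][s] for s in stakeholders))

# Example with our three stakeholders and two made-up candidate actions:
stakeholders = ["public", "government", "corporations"]
actions = [
    {"name": "swerve", "welfare": {"public": 5, "government": 1, "corporations": -2},
     "violates_core_value": False},
    {"name": "brake", "welfare": {"public": 3, "government": 2, "corporations": 1},
     "violates_core_value": False},
]
print(choose_action(actions, stakeholders)["name"])  # "brake": total 6 beats "swerve" at 4
```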
Best wishes, Brad
Sweet. That sounds pretty good; I'll let you know if I come up with any suggestions that seem helpful. At the moment, I'm gonna keep working on the background.
(I'll paste this outline into the tex file for that section, just for ease of tracking on my part, and so the organization will become more visible by glancing at the PDF.)
So, in talking about self-driving cars, I remembered a concrete example you can use when talking about balancing the stakeholders against each other in different situations, with an eye toward historical precedent.
During the Paris attacks, Uber prices went way up when everyone started panicking and trying to get away from the dangerous areas. This supposedly resulted from their algorithm's default behavior, which causes prices to surge as demand increases. In this case, the interests of Uber and the interests of its users were directly opposed. The company was widely regarded as having stepped over the line, and they publicly stepped in and lowered prices so people could afford to get out of the affected areas. They did so once it became clear that they faced moral outrage: it broadly looked like they were taking advantage of a bad situation for their own gain, which is an ethical stance you don't want to be seen taking. This relates to our point about how our system falls back to a series of stronger rules in boundary cases, as opposed to applying a strictly utilitarian calculation.
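To make the fallback idea concrete, here's a hypothetical sketch (not Uber's actual pricing code, just made-up names) of a demand-driven multiplier where a stronger rule overrides the default in a boundary case like a declared emergency:

```python
# Hypothetical sketch, not Uber's real algorithm: the default multiplier just
# tracks demand vs. supply, and a stronger rule overrides it in boundary cases.

def surge_multiplier(demand: int, supply: int, emergency: bool = False) -> float:
    default = max(1.0, demand / max(supply, 1))  # default: price rises as demand outstrips supply
    if emergency:
        return 1.0                               # stronger rule: never surge during an emergency
    return default

print(surge_multiplier(500, 100))                  # 5.0 -- ordinary demand spike
print(surge_multiplier(500, 100, emergency=True))  # 1.0 -- same spike, but the rule kicks in
```

The point is just that the default behavior stays utilitarian-ish, and the harder constraint only activates when a core value is at stake.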
Thanks, Peter. I’ll work that in.
This section needs to be written.