
EA has not justified its core assumptions #3

Open paul-crowe opened 1 year ago

paul-crowe commented 1 year ago

Example:

Summary:

Date:

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses:

paul-crowe commented 1 year ago

Example: Need to strengthen the case for betting hard on AI

Summary: Not enough work has gone into analyzing the case for prioritizing AI. Existing published arguments may be less convincing than people assume.

Date: 9th Feb 2019

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses:

paul-crowe commented 1 year ago

Example: Poor standards of metaethics

Summary: "I would like to see more humility from people involved in effective altruism regarding metaethics, or at least better explanations for why EAs' metaethical positions are what they are. [...] More broadly, I think that rather than having a 'lying problem,' EA has an 'epistemic humility problem' -- both around philosophical questions and around empirical ones, and on both the community level and the individual level."

Date: 2nd Jul 2017

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses: Overlaps with "Over-confident".

paul-crowe commented 1 year ago

Example: The methodology of existential risk study needs work

Summary: Current ways of talking about existential risk and long-term goals focus far too much on transhumanism, utilitarianism, and strong longtermism, and so fail to account for the fact that most people do not subscribe to the full forms of those beliefs. Roughly, we should therefore (1) separate the study of existential risks from moral theory about what to do about them (see page 12/13), and (2) establish democratic ways of discussing these problems (existential risk and long-term goals) and how to approach them, so that the discussion is not controlled primarily by those who hold those beliefs.

Date: 28th Dec 2021

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses:

paul-crowe commented 1 year ago

Example: EA needs a reasonable answer to population ethics

Summary: As in the title.

Date: 2nd Dec 2013

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses:

paul-crowe commented 1 year ago

Example: Making Beliefs Pay Rent (in Anticipated Experiences)

Summary: "Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. " Ensuring you have a complete mental model of your beliefs by testing what you would expect to see happen, or develop from a given situation. Where do those anticipations come from and what are they based on?

Date: 29th Jul 2007

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses:

paul-crowe commented 1 year ago

Example: 'Value-aligned' is insufficiently well-defined

Summary: "According to the public writing I found, value-alignment could mean any of the following: supporting and spreading EA, having shared worldviews, focussing on the most important problems or doing the most high-value thing. Importantly, not being value-aligned is seen as having downsides: it can dilute, simplify or lead to wrong prioritisation. It is probably in the interest of EA to have a more concise definition of value-alignment."

Date: 7th Feb 2017

Status:

Lag to response:

Current canonical instance:

Prior status of critic:

Fundamental criticism:

Public responses: