
EA Elicitation Service #17

Open OAGr opened 2 years ago

OAGr commented 2 years ago

I think we could get pretty far by: 1) Parameterizing key cruxes around EA. 2) Surveying people, particularly senior EAs, on where they stand on these cruxes. 3) Posting the results publicly.

If any of these parameters seem particularly exciting/promising, we could then turn them into forecasting questions.

For example, I had a list of some "Very different stances on AGI", some of which could be turned into cruxes and surveyed. https://forum.effectivealtruism.org/posts/SZFDtA4pjZzepdacv/13-very-different-stances-on-agi
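To make the shape of steps 1-3 concrete, here's a minimal sketch of cruxes as parameterized survey questions plus a simple aggregation over stances, assuming a numeric agreement scale. All type and field names are hypothetical, not an existing QURI API:

```typescript
// Sketch: a crux as a parameterized question, responses on an
// agreement scale, and a per-crux summary. All names hypothetical.

type Crux = {
  id: string;
  statement: string; // e.g. "Transformative AGI arrives before 2040"
  scaleMin: number;  // 1 = strongly disagree
  scaleMax: number;  // 7 = strongly agree
};

type Response = {
  cruxId: string;
  respondent: string; // anonymized ID
  stance: number;     // position on the agreement scale
};

// Median and spread of stances per crux; a wide spread flags a crux
// that might be worth turning into a forecasting question.
function summarize(cruxes: Crux[], responses: Response[]) {
  return cruxes.map((crux) => {
    const stances = responses
      .filter((r) => r.cruxId === crux.id)
      .map((r) => r.stance)
      .sort((a, b) => a - b);
    const median = stances[Math.floor(stances.length / 2)] ?? NaN;
    const spread = stances.length
      ? stances[stances.length - 1] - stances[0]
      : NaN;
    return { statement: crux.statement, median, spread };
  });
}
```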

uvafan commented 2 years ago

This is one of the ideas I'm most excited about (starting a 2-tiered reaction system: <3 vs. thumbs-up).

While you're at it, I'd propose eliciting short explanations from people about the reasoning behind their answers, then writing up a summary of the patterns.
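One way this could slot into the same hypothetical schema from above: attach a free-text rationale to each response and bucket rationales by rough stance, so a human can write up the patterns from each side's reasoning (again, all names are illustrative):

```typescript
// Hypothetical extension: each response carries a short rationale,
// bucketed by stance for a human-written summary of patterns.

type ExplainedResponse = {
  cruxId: string;
  stance: number;      // same agreement scale as above
  explanation: string; // "briefly, why do you hold this stance?"
};

// Assumed midpoint of 4 on a 1-7 scale; adjust per crux.
function bucketExplanations(responses: ExplainedResponse[], midpoint = 4) {
  const buckets: Record<"agree" | "disagree" | "neutral", string[]> = {
    agree: [],
    disagree: [],
    neutral: [],
  };
  for (const r of responses) {
    const key =
      r.stance > midpoint ? "agree" : r.stance < midpoint ? "disagree" : "neutral";
    buckets[key].push(r.explanation);
  }
  return buckets;
}
```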

OAGr commented 2 years ago

> While you're at it, I'd propose eliciting short explanations from people about the reasoning behind their answers, then writing up a summary of the patterns.

Yep, I think this would be preferable, if it's easy/possible to get them to do it. Otherwise, we could have a system that polls people, and then we reach out to some of them directly.

NunoSempere commented 2 years ago

The "survey experts" step kind of bugs me.

To gesture at why: it seems to me that not that many people really dig deep into a topic. I'd expect experts' shallow takes to be better than the counterfactual, but I'd expect it to be more valuable for someone to dig in deep, and I'd prefer the latter as a way of building consensus.

OAGr commented 2 years ago

I'd note that "survey experts" could come with augmentations.

For example, we could hire a research assistant or two to spend more time doing deeper research.

The "survey" could also assume a lot of time spent. Like, the experts are expected to spend 2 hours per question, if we want.