w3c/process

W3C Process Document
https://www.w3.org/policies/process/drafts/

Approval Vote Experiment Questions #521

Open · fantasai opened this issue 3 years ago

fantasai commented 3 years ago

In light of concerns raised over the years by various AC reps and AB members about the STV method we use for AB and TAG elections, the AB has asked the W3C Team (W3M) to run an approval vote experiment alongside the upcoming AB election, and will likely ask for the same in the upcoming TAG election.

This will not in any way affect the actual vote tallies. It is merely an informational survey, whose individual data the Team will keep confidential.

This issue is to collect ideas for questions we might want to ask the Team about the results.

Here are some questions to start:

How many voters cast approval ballots for one candidate? Two candidates? Three candidates? etc.

Please suggest any more questions you'd like the AB to ask the Team. :)

dwsinger commented 3 years ago

For how many voters was the set of candidates they ranked in STV different from the set that they Approved of in AV?
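For concreteness, a minimal sketch of how this could be tallied, assuming each voter's STV ranking and approval ballot can be lined up under an opaque voter id. The data structures, names, and ballots below are illustrative assumptions, not the Team's actual ballot format or tally code:

```python
# Illustrative only: assumes stv_rankings and av_approvals are keyed by the
# same opaque voter ids; neither the structure nor the naming reflects the
# Team's real ballot data.

def count_divergent_voters(stv_rankings, av_approvals):
    """Count voters whose set of STV-ranked candidates differs from the
    set of candidates they approved on the AV ballot."""
    divergent = 0
    for voter, ranking in stv_rankings.items():
        approved = av_approvals.get(voter, set())
        if set(ranking) != set(approved):
            divergent += 1
    return divergent

# Made-up example: v1 ranked and approved the same two candidates,
# while v2 ranked only C but approved both C and A.
stv = {"v1": ["A", "B"], "v2": ["C"]}
av = {"v1": {"A", "B"}, "v2": {"C", "A"}}
print(count_divergent_voters(stv, av))  # -> 1
```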

michaelchampion commented 3 years ago

Did STV generate winners who were disapproved by a majority?
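As a sketch of one way to check this, reading "disapproved by a majority" as "approved on fewer than half of the approval ballots cast". That threshold reading, and the winners and ballots below, are assumptions for illustration only:

```python
# Illustrative only: treats "disapproved by a majority" as "approved on fewer
# than half of the approval ballots cast"; winners and ballots are made up.

def majority_disapproved(stv_winners, av_ballots):
    """Return (winner, approval_count) for each STV winner approved on
    fewer than half of the approval ballots."""
    n = len(av_ballots)
    flagged = []
    for winner in stv_winners:
        approvals = sum(1 for ballot in av_ballots if winner in ballot)
        if approvals < n / 2:
            flagged.append((winner, approvals))
    return flagged

# Made-up example: B won under STV but was approved on only 1 of 4 ballots.
ballots = [{"A"}, {"A", "B"}, {"C"}, {"C"}]
print(majority_disapproved(["A", "B"], ballots))  # -> [('B', 1)]
```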

Was the set of winners under STV significantly more diverse (in terms of geography, gender, web technology specialization, whatever) than under approval voting?

It would be hard to get data on the counterfactual, but does it seem likely that the winners of an approval ballot would have come to consensus on the key issues more readily than the actual STV winners? (For example, if in hindsight some AB member blocked consensus on some issue, and that person would not have been elected under approval voting.) Obviously subjective, but this gets to the heart of the discomfort many feel with Meek STV in the W3C context.

I'd urge the AB to keep the experiment going until there is significant variance on key variables such as ratio of candidates to open seats, turnout, and general contentiousness of issues in elections.

Also, while I don't expect the team to agree, it would be a much better experiment if anonymized data were made available so independent analysts could verify the results and explore the data in unanticipated ways.

swickr commented 3 years ago

@michaelchampion wrote

it would be a much better experiment if anonymized data were made available so independent analysts could verify the results and explore the data in unanticipated ways.

With the number of ballots we typically receive, I believe it will be very difficult to provide data that is sufficiently unidentifiable with respect to all candidates. We won't know for certain until after the ballots are received, of course.

chaals commented 3 years ago

@michaelchampion

Did STV generate winners who were disapproved by a majority?

This is an interesting question. I would generally expect the answer to be "yes". If it isn't the case, it would indicate there is a very strong consensus on the outcomes.

I believe our history suggests a notable minority of votes identify only one or two candidates, which suggests those people understand how to vote strategically to maximise the value of their vote. Likewise, I believe the history also suggests some people don't understand that with STV you get the most value by ranking all candidates, and so they rank only one.

@fantasai:

How many voters cast approval ballots for one candidate? Two candidates? Three candidates? etc.

This is a core question. If voters approve significantly fewer candidates than there are seats, it suggests a very strong strategic vote, where people give the full value of their vote to a single candidate or a small number of candidates, at the expense of any further influence over the outcome.
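A minimal sketch of the tally that question asks for, assuming each approval ballot is available as the set of candidates approved (the ballots below are made up):

```python
# Illustrative only: each ballot is modelled as the set of candidates a voter
# approved; the ballot data is invented for the example.
from collections import Counter

def approvals_per_ballot(av_ballots):
    """Map 'number of candidates approved' to 'number of ballots'."""
    return Counter(len(ballot) for ballot in av_ballots)

ballots = [{"A"}, {"A", "B"}, {"A", "B", "C"}, {"B"}]
print(sorted(approvals_per_ballot(ballots).items()))
# -> [(1, 2), (2, 1), (3, 1)]: two single-candidate ballots, one approving two, one approving three
```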