usds / equity_practice

A repository for all equity toolkit items (for now)

Feedback Log: Equity Tools #39

Open jeremyzitomer-usds opened 6 months ago

jeremyzitomer-usds commented 6 months ago

Running tracker for constructive (and positive) feedback we get from stakeholders/USDSers on our equity tools -- we can use this ticket to generate other one-off tickets for tool improvements/fixes.

jeremyzitomer-usds commented 6 months ago

Feedback from Cortney Sanders, from the SSA Equity team, on "What's the Problem?" and "From Problem to Inequity" in the context of the SSA CX data sprint, Fall 2023

Cortney-Sanders-SSA-Feedback-WTP-FPTI.pdf

natasha-jamal commented 6 months ago

General feedback on the Participation Spectrum: it's A LOT of information and takes time (often about 10 minutes) just to understand what we are trying to get at. We will likely need to at least split the content onto separate pages, and perhaps there are other ways to simplify the existing content too.

natasha-jamal commented 6 months ago

Reach/Burden from CDC - the scale is not as clear on this one as on the Participation Spectrum. What if multiple of the problem-framing categories apply all at once? I think this is also part of Mina's concern with the exercise: can you really just move from one to another?

jeremyzitomer-usds commented 6 months ago

this is a test of tagging @natasha-jamal

jeremyzitomer-usds commented 6 months ago

this is a test of a tagless message for natasha

jeremyzitomer-usds commented 6 months ago

this is a new test for @natasha-jamal

jeremyzitomer-usds commented 5 months ago

From Victor, 1/12:

POSITIONALITY WHEEL

How did you determine that you needed to create this? - I ask because this and the participation spectrum are things that already exist. There are multiple positionality and power wheels. Why couldn't one of those be used? Was there a reason to create a new one?

These tools seem to be based on the implicit assumption that awareness leads to action - This comment and the one above apply to such tools in general, and specifically to positionality and power wheel tools. The reason they are not foundational to the ways in which I have been a team member in community-led work is that . . . well, they seem to be focused on designers or people with power (which makes sense) and founded on the belief that if people are aware, then change will happen. I've seen time and time again that a team does something like this and it only brings up feelings of being stuck, guilt, uncertainty about what to do, or a sense of lacking the organizational authority to change the situation. To me, you want someone or a team or org to fix the imbalance, not try a bandaid approach (well, what do we do now that we realize we have bias on our team - oh, OK, let's remain vigilant and aware of the power of that bias to creep into our work - now let's keep going). These kinds of tools can suffer from elite capture: teams use them, feel good that they used them without fundamental change, and then keep moving on in their project. My experience is biased, so there may be other experiences that are different, and maybe those other experiences are more indicative or usual, but I have a feeling that what I have seen is more typical.

REACH/BURDEN SPECTRUM

How did you determine which categories of things would constitute a row and which would not? - For me, some possibilities are missing. For example, from psychology, learning, and behavioral theory, people may not do something for a host of reasons. Designers often assume it's one reason, like knowledge. But people can have the knowledge and still not do something because they lack the skill. People could have the knowledge and the skill and still not do something due to any of a host of "non-cognitive" factors (relational, motivation, grit/perseverance, etc.). People might have the knowledge, skill, and motivation (etc.), but still not do something or take part due to structural or environmental reasons. In learning theory this is important because the way you design for each of those is different. I see knowledge and environment/structure in the spectrum; I don't see skill, and I don't see some of the "non-cognitive" factors I mentioned.

The wording of the framings is "people-centered," and unfortunately absolves the government of agency - I would actually make those government-centered. Instead of "People do not receive ongoing care or adequate support," I would write "The government does not provide adequate care or ongoing support to the people" or "WE do not provide adequate care or ongoing support to the people." Instead of "People do not receive effective information," I prefer "The government does not provide or communicate effective information to the people" or "WE do not provide or effectively communicate information to the people." Etc.

Why did you choose a linear scale for the score? - To me the levels are not linear like that. Structural problems are systemic. I would rather have chosen a multiplicative score, or better yet, a logarithmic or exponential scoring system. Scores of 6, 8, and 10 are orders of magnitude above the rest and above each other. So more like 1, 10, 100, 1,000, 10,000, or other exponential options. Even a Fibonacci sequence would be better. A service is an order of magnitude above information, and so on.
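
(A toy sketch of what those alternative scales could look like side by side; the level names and exact weights below are illustrative assumptions, not the tool's actual framings or scores.)

```python
# Compare a linear scale with the exponential and Fibonacci alternatives
# suggested above. Level names are hypothetical stand-ins.
levels = ["information", "service", "support", "structural", "systemic"]

linear = {name: 2 * (i + 1) for i, name in enumerate(levels)}   # 2, 4, 6, 8, 10
exponential = {name: 10 ** i for i, name in enumerate(levels)}  # 1, 10, 100, 1000, 10000

def fibonacci_weights(names):
    """Assign consecutive Fibonacci numbers (1, 1, 2, 3, 5, ...) as weights."""
    a, b, weights = 1, 1, {}
    for name in names:
        weights[name] = a
        a, b = b, a + b
    return weights

fib = fibonacci_weights(levels)  # 1, 1, 2, 3, 5

# On the linear scale, "systemic" is only 5x "information";
# on the exponential scale it is 10,000x.
for label, scale in [("linear", linear), ("exponential", exponential), ("fibonacci", fib)]:
    print(f"{label:12} {scale}")
```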

Why are you using scoring at all? In this way, the positionality wheel is stronger. Scores can be gamed.

Unfortunately, a tool like this always encourages people to run a 60-minute workshop and output an answer, which I do not believe you can really do here - It MAY be possible when accessing non-mainstream types of knowledge (such as collective embodied wisdom brought together in a room). In normal Western ways of working, the best way to actually use this activity (in my opinion) would be AFTER research. Otherwise, it's just guesses, which can be wrong.

The tool seems to imply that all problems exist at all levels, and you just choose which level you want to work at - This is somewhat strange to me. I do agree problems can exist at multiple levels, and some problems do exist at all levels. But some problems exist only at a subset of levels (from some level up through the more superficial or symptomatic ones). Part of the work is to distinguish what is A problem (versus not a problem) AND to FURTHER distinguish what is THE problem (versus A problem). As long as you don't work on the fundamental, root, or systemic problems, the problem at the higher (more superficial or symptomatic) level you choose to work at will always recur, even if you temporarily resolve it. This is the whole problem with Design Thinking and HCD/UCD for social problems.

PARTICIPATION SPECTRUM

Why is the scale linear?

Why are you using scoring at all? In this way, the positionality wheel is stronger. Scores can be gamed. I'll talk more about the scoring later.

Even though participation spectra already exist, I do appreciate that this one brings some new elements and offers extra help with the spectrum.

Q1 is problem-based, need-based, or deficit-based. Communities can do work for aspirational, inspirational, motivational, goal-based, vision-based, futures-based, etc. reasons.

Q1 removes the agency of communities in vital implementation work. It's not just about framing, researching, analysis, synthesis, ideation, decisions, and communication (and even prototyping and creating); it's also about delivering (launching, running, maintaining, etc.).

[Relates to score gaming] Q1 only tries to measure when participation started, but it fails to consider participation ending - Sarah Fathallah has a 2D graph that is basically your Q1 and Q2 put together, and she maps a typical or specific project showing how it moves in and out of participation. So you can start participation before framing but then drop it before defining success. This is actually how normally practiced PD works (an in-and-out effect). So the framing of Q1 and the question it asks doesn't seem to work for the reality of what happens; it only catches a subset of situations.

[Relates to score gaming] Q2 has the same problem: it assumes you are only ever at one level of engagement. In reality, as you move from phase to phase of your work, the level of engagement can change (just as WHETHER people are participating at all can change). This can be seen in Sarah Fathallah's graph, where the line is a measure of time and Q1 and Q2 are the axes. Because of this, the scoring sheet really only captures a tiny minority of real cases, or it allows someone to score in a way that doesn't match what happened, because they are forced to choose one Q1 answer and one Q2 answer.
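
(A rough sketch of that trajectory idea; the phase names and engagement levels are hypothetical, and this approximates rather than reproduces Sarah Fathallah's graph.)

```python
# Sketch: record the engagement level per project phase instead of a single
# (Q1, Q2) pair, so participation starting AND ending stays visible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhaseEngagement:
    phase: str            # where in the work you are (the Q1 axis)
    level: Optional[str]  # engagement level (the Q2 axis); None = no participation

project = [
    PhaseEngagement("framing", "collaborate"),
    PhaseEngagement("defining success", None),  # participation dropped here
    PhaseEngagement("research", "consult"),
    PhaseEngagement("delivery", "inform"),
]

# A single "when did participation start?" answer plus a single engagement
# level cannot represent this in-and-out trajectory:
for p in project:
    print(f"{p.phase}: {p.level or 'no participation'}")
```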

Q2 assumes informing is the worst level, but you can have situations where people are not even informed.

Q2 Involved is too vague because there are TONS of ways to use input. Its definition also intermingles with collaboration.

Q2 Collaborate is too vague as there are multiple ways to collaborate. Arnstein adds more specificity just by distinguishing between partnership and delegation, for instance.

Q2 Empowerment is very gov-centered. Also, empowerment reinforces the hierarchy. Again, I like Arnstein's "citizen control." i2S uses the top level of support (professional support to community-led work).

Q2 Empowerment sounds like review - The definition seems to just say people make the final decision, but if they are not involved in the creation of the choices, it's more like a community review board.

Q3-5 seem like they are geared toward different audiences. Maybe Q3 and Q5 are for participants? Q4 looks like it's for government professionals?

Q3-5 aren't independent. This wouldn't really matter if you weren't computing an additive score below. (Q3 is independent.) Because you are adding them up, the dependency between Q4 and Q5 means it will be impossible to have, for example, a 10 for Q4 and a 0 for Q5. For an additive score, independent factors make more sense; otherwise, you should probably use a different way of bringing the scores together.

Q3-5: this type of VERY COMMON quantitative scoring of qualitative questions is not my preference due to the bias I see in practice. I go against the grain in such survey design. In my experience, you get different results when you ask a neutral question with non-neutral answer options (and thus I feel it is better to do so). For example, for Q3 I would ask, "What was your experience learning about the participation opportunity?" and have five options: "It was very easy / easy / not easy and not hard / hard / very hard to learn about it."

Q3-5: this type of VERY COMMON quantitative scoring of qualitative questions is also not my preference due to score anchoring and the bias it creates in practice. I prefer to have the neutral option (neither agree nor disagree) at 0, strongly agree at 4 (or whatever you choose), and strongly disagree at -4. You get different answers (and I believe better ones) when the scale is anchored around 0 as neutral, with negative numbers for what we associate with a negative experience and positive numbers for what we associate with a positive one. It's more balanced: a 5 out of 10 is perceived differently than a 0 on a scale from -4 to 4. The neutral score ends up not being neutral.
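
(A minimal sketch of the two anchorings side by side; the answer labels and numeric mappings are illustrative assumptions, not the tool's actual options.)

```python
# The same five answers scored on a 0-10 style scale versus a
# zero-anchored -4..4 scale.
UNANCHORED = {  # neutral lands at a middling-looking "5 out of 10"
    "strongly disagree": 0,
    "disagree": 2.5,
    "neither agree nor disagree": 5,
    "agree": 7.5,
    "strongly agree": 10,
}

ANCHORED = {    # neutral lands at a true 0, negatives read as negative
    "strongly disagree": -4,
    "disagree": -2,
    "neither agree nor disagree": 0,
    "agree": 2,
    "strongly agree": 4,
}

answer = "neither agree nor disagree"
print(UNANCHORED[answer], ANCHORED[answer])  # 5 vs 0: "middling" vs "neutral"
```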

jeremyzitomer-usds commented 5 months ago

Discussion of yay/nay:

PARTICIPATION SPECTRUM

jeremyzitomer-usds commented 5 months ago

Reflections:

High level takeaways:

jeremyzitomer-usds commented 5 months ago

RBS

AlexBornkesselUSDS commented 5 months ago

FYI: To help wrap my brain around feedback received to date, I documented items from the above feedback log into this MURAL and propose we use this MURAL to capture feedback moving forward. (It's still in progress - this is a snapshot in time).

celestemespinoza commented 5 months ago

Adding this feedback from OMB Equity @jeremyzitomer-usds @AlexBornkesselUSDS

OMB Equity Team Feedback on USDS Equity Materials_02-07-2024.docx

AlexBornkesselUSDS commented 4 months ago

@celestemespinoza @jeremyzitomer-usds FYI - I added the OMB Equity team's feedback into the feedback log.