Letterbook / Letterbook

Sustainable federated social media built for open correspondence
https://letterbook.com/
GNU Affero General Public License v3.0

Admin/Moderator interview guide #262

Open jenniferplusplus opened 5 months ago

mattly commented 5 months ago

When I interview people for the purpose of building tools to help their work, I am typically trying to learn five things:

  1. What is their mental model for approaching their work?
  2. What parts of their domain data are vitally important to performing that work?
  3. What are the operational contexts in which they're performing their work?
  4. What does and does not work about their current toolset for performing their work?
  5. What potential consequences of their actions are they trying to understand?

The latter two questions are, I feel, fairly direct and straightforward; at least when I'm talking to professionals, they're often able to articulate answers with a minimum of coaxing, and it's possible to learn a lot without dancing around the point.

The first three questions tend to be the opposite. Few people are introspective enough to be able to give straightforward answers to these questions without a lot of coaxing and guidance.

I don't have a template for these sorts of interviews: my approach as someone with ADHD is to a) be present, and b) improvise. Everyone is different; they think about their work differently, they'll have different ideas about what's important and what's easy, they'll weigh consequences differently, and they'll even use different domain language to talk about their work. Additionally, the more distant someone is from the front line of the work, the less they typically understand the nuances of the problems involved or the potential for a mistake to turn into catastrophe. But sometimes the opposite is true: a manager might have a much deeper understanding of the consequences of mistakes than any of the front-line workers.

Generally my approach is to start by asking them to describe their typical work session. In our case, that means it's time to do some moderation work. How do they know this needs to be done? What are they thinking about in the space between "oh, it's time to put on the moderator hat" and finding out why they need to? If there are multiple things to do, how do they decide what to do first? How do they evaluate claims and make decisions? There will typically be a lot of opportunity to sidestep into particulars. This is also a good point to find out what's frustrating about the workflow, what works well, and hopefully to ask how they know whether they've made a good decision.

After this initial pass, and particularly as I proceed through talking with multiple people, I'll typically ask what advice they might give someone new to doing this type of work: what are the important things to watch out for, what are the places they need to be extra careful? As I learn more about the domain from previous conversations, I'll sometimes propose a scenario and potential approach to later participants – this is partly to calibrate how well my understanding is coming along, but also partly to see how their approaches differ. Mostly this is to learn about their mental models and attempt to break into the unspoken assumptions about domain data.

I'll typically close by describing some approaches to problems I've been thinking about and asking for their feedback. In this case, I might mention that I've been thinking about improving the defederation interface: you enter an instance's URL and it tells you how many people the block would impact. How many people on your instance would be affected? How many people on the target instance do people on your instance interact with? I might show them a sketch or mockup of such an interface – this is less about validating the idea and more about listening to their response. What does this idea bring up for them? It's another good opportunity to calibrate my understanding of the problem so far and discover the unspoken assumptions about the domain data.
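
To make that idea concrete, here's a rough sketch of what the impact summary could compute. The data shape and names here are illustrative assumptions, not Letterbook's actual model:

```typescript
// Hypothetical sketch of the defederation impact summary described above.
// The follow-edge shape and all names are assumptions for illustration only.

interface FollowEdge {
  followerId: string;
  followerInstance: string; // instance hosting the follower
  followeeId: string;
  followeeInstance: string; // instance hosting the account being followed
}

interface DefederationImpact {
  localAccountsAffected: number;  // accounts on your instance that would lose a connection
  remoteAccountsInvolved: number; // accounts on the target instance they're connected to
}

function estimateDefederationImpact(
  localInstance: string,
  targetInstance: string,
  follows: FollowEdge[],
): DefederationImpact {
  const localAccounts = new Set<string>();
  const remoteAccounts = new Set<string>();

  for (const edge of follows) {
    // A local account following someone on the target instance loses that connection.
    if (edge.followerInstance === localInstance && edge.followeeInstance === targetInstance) {
      localAccounts.add(edge.followerId);
      remoteAccounts.add(edge.followeeId);
    }
    // So does a local account that someone on the target instance follows.
    if (edge.followeeInstance === localInstance && edge.followerInstance === targetInstance) {
      localAccounts.add(edge.followeeId);
      remoteAccounts.add(edge.followerId);
    }
  }

  return {
    localAccountsAffected: localAccounts.size,
    remoteAccountsInvolved: remoteAccounts.size,
  };
}
```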

jenniferplusplus commented 5 months ago

This is great. I think this is exactly the kind of thing I was hoping for.

I wanted to go into the interviews and recruiting with some common understanding of what questions we're trying to answer with the research, and how we would find those answers. And also, at least for myself, some specific strategies that are worth pursuing during the interviews.

mattly commented 3 months ago

Here are some high-level questions I'm hoping we can answer through this research:

I'm thinking about more specific questions to ask that will help get at these topics, and will post them separately.

jenniferplusplus commented 3 months ago

This seems great!

One thing I've considered a few times is whether reports are actually the correct abstraction to center for moderator actions. Does it make more sense to organize around something like cases? Something that can encapsulate reports, involved parties, notes, actions taken, and maybe other things? (Are there other things? Is that the right set of nouns?) How often would that framework be more effort than it's worth?

mattly commented 3 months ago

> Does it make more sense to organize around something like cases? Something that can encapsulate reports, involved parties, notes, actions taken, and maybe other things?

I like this; it occurs to me as a natural abstraction, one that could be made mostly invisible, would allow for easier linking/cross-referencing between cases and their subject matter (users, notes, etc.), and could become polymorphic to encompass things initiated by items other than reports.
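
To make the shape of that abstraction a bit more concrete, here's a rough sketch of what a case might encapsulate. None of these names come from Letterbook's codebase; they're illustrative assumptions, not a proposed schema:

```typescript
// Illustrative sketch of a "case" as the central moderation abstraction.

type CaseSubject =
  | { kind: 'account'; accountId: string }
  | { kind: 'post'; postId: string }
  | { kind: 'instance'; domain: string };

interface Report {
  id: string;
  reporterId: string;
  subject: CaseSubject;
  comment: string;
  receivedAt: Date;
}

interface ModeratorNote {
  authorId: string;
  body: string;
  createdAt: Date;
}

interface ModerationAction {
  kind: 'warn' | 'silence' | 'suspend' | 'defederate' | 'dismiss';
  subject: CaseSubject;
  performedBy: string;
  performedAt: Date;
}

// A case gathers reports, involved parties, notes, and actions taken, and could
// later be opened by triggers other than a report (e.g. an automated filter),
// which is the polymorphic angle mentioned above.
interface ModerationCase {
  id: string;
  openedAt: Date;
  status: 'open' | 'resolved' | 'escalated';
  subjects: CaseSubject[];
  reports: Report[];
  notes: ModeratorNote[];
  actions: ModerationAction[];
  relatedCaseIds: string[]; // cross-references to other cases
}
```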

mattly commented 3 months ago

Here are some questions I think would be good to ask moderators in an interview:

mattly commented 3 months ago

A few more questions for the research findings after today's first (wildly successful) interview:

jenniferplusplus commented 3 months ago

And another I'd like to explore: