mlissner opened this issue 4 years ago
Lots of things to think about if we're going to start putting an editorial thumb on the scale.
A thought experiment that I've been using is to consider whether we'd ever use something like this for Representative Devin Nunes' various politically motivated lawsuits. He consistently sues media organizations, and whenever he does, his cases get a lot of traffic from people who think they mean something (they don't; they're pure politics). Do his cases merit a warning along the lines of:
This case is politically motivated and appears to lack legal merit. The lead plaintiff in this case is known for abusing the legal system for political purposes.
Extreme? Perhaps. Useful to the public? Perhaps! I'm not proposing we do this, but it's useful to think about the boundaries.
I'm definitely concerned about who makes such editorial decisions and what guidelines are used, especially in terms of how to do this at any scale and particularly over the long term.
One idea: could we think about such a warning on any pro se cases? Even outside of this particular circumstance, I always have to warn new paralegals or interns doing any case research to be extremely wary of pro se cases, because using them as examples of, say, form, structure, or reasoning can be problematic if they are not read with particular care. While I realize there is definite inequity in doing this only for pro se cases, it is something that might scale, and I think there's value to it beyond the current problem. It also has a more general appearance of impartiality than other approaches, and is thus less likely to result in attacks on CourtListener itself.
It does not address the issue of politically motivated lawsuits, though, since those are usually blessed by some attorney signing off on them. I'm struggling to think of a way to do this other than human editors/reviewers, though, and flagging these kinds of cases seems like it could even more quickly grow into a problematic morass of accusations of political bias, etc., etc.
A quick post to summarize some of the replies we've seen in various places and to reply to the ones above.
This seems to be people's biggest concern. Perhaps I'm naive to think it's really not much of an issue, but I guess my envisioned approach would be to only do something like this for the worst and most obvious cases. I'm not particularly interested in doing something that scales or that handles edge cases. I'm more just interested in doing this for very obviously very terrible cases (that are getting enough traffic to show up on somebody's radar).
I think that handles the slippery slope generally, but since it's slippery in general and doubly so over the long term, perhaps it's worth enshrining that in some sort of policy that says:

1. We wouldn't do anything for a case unless it was really obvious.
2. Doing so requires unanimous agreement by some sort of content board that the case is "bad" or "likely to confuse" or whatever.
I mentioned Nunes in a couple places as a thought provoker of sorts, to encourage thinking about the slippery slope problem. All responses about him are that doing anything with his cases is a terrible idea that'd incur reputational cost. I agree! Perhaps we need better edge cases. OTOH, perhaps the policy above of doing this for "obviously bad" cases only when "unanimously supported" will head off the issue.
I don't expect to use this feature much if we built it.
@krisnelson said, above:
could we think about such a warning on any pro se cases?
I think that'd be a different, bigger, question since it'd capture less obvious stuff. I think it's not a horrible idea, but I think we'd want a really light touch there if we did anything. For now I'd want to pass on anything that aggressive though.
(There's also a real challenge figuring out what is a pro se case, but I'm setting that aside for the moment.)
I didn't say we'd block access, but one person suggested we "tarpit" these users, and another responded that we'd better not block access to our information. We won't block access to our content under any scheme I've envisioned.
Two people suggested that if we do something here, we mostly limit it to just providing a link to an authority on the matter. YouTube is doing this now, for example:
https://www.cnbc.com/2018/03/13/youtube-wikipedia-links-debunk-conspiracy.html
I think that's a smart part of any solution, but I don't know that it'll always work for us. The case that got me thinking about this, for example, lacks any useful debunking that I can find so far, but perhaps just linking to something debunking QAnon in general is fine for this instance.
One other update. This case has been picked up by at least one bot that continues spreading it around, and this has become one of the top cases on CL ever. I have begun harboring a theory that the case itself was filed so it could be spread via bots, though I have no idea to what end.
One could highlight e.g. orders of dismissal for lack of merit, filings by someone actually ruled to be vexatious, etc. Basically, the outcome and context of any given motion. This would be useful in general.
I doubt that people taken in by this would be affected by, e.g., a reminder that party submissions are merely that party's allegations, not rulings, but I think it'd be beneficial to highlight the filings of the court. E.g., people sometimes confuse a draft order (which is routine and often required for a motion) with an actual granted order.
There is some excellent information about combating conspiracy theories in this: https://www.climatechangecommunication.org/wp-content/uploads/2020/03/ConspiracyTheoryHandbook.pdf
Two updates here:

1. I talked to a major publisher yesterday, and they told me that their policy has never been published, but that the slippery-slope problem is really easy to solve in practice: just focus only on the most egregious things.
2. They shared that there's some kind of fact-checking network (reach out to Duke, maybe), and that there's even fact-checking schema markup we could add if we ever come back to this.
@mlissner Could you elaborate on #2 in your last comment? It's new to me.
Sure, check out: https://reporterslab.org/, and https://developers.google.com/search/docs/data-types/factcheck
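For what it's worth, the markup Google documents there is schema.org's ClaimReview type, embedded in a page as JSON-LD. Here's a minimal sketch of what generating it might look like in Python; every value below (the claim, the URLs, the rating) is a hypothetical placeholder, not anything we've decided on:

```python
import json

# Hypothetical ClaimReview markup per Google's fact check structured
# data docs (developers.google.com/search/docs/data-types/factcheck).
# All values are placeholders for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://www.courtlistener.com/docket/12345/example/",
    "claimReviewed": "This filing proves the conspiracy is real.",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {
            "@type": "CreativeWork",
            "url": "https://example.com/viral-post",
        },
    },
    "author": {"@type": "Organization", "name": "Free Law Project"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,  # lowest value on the scale below
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the human-readable verdict
    },
}

# This would be rendered into the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```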
Right now, the QAnon folks are linking to CourtListener like crazy. The case they're linking to can charitably be described as "bonkers." With a straight face it alleges killer robots and bees are out to get us.
But...the QAnon people think it means something. And there are a lot of QAnon people. As much as it's easy to write off people who are so far gone — and much of the response to my complaining about this on Twitter was in that vein — there's a truth that our education system has failed these people. I believe that we run a site that spreads information, and that gives us a position of power and a responsibility to help make things better if we can.
I also think that there's a spectrum of people that have been looking at this link. Sure, some high percentage of QAnon-ers are too far gone. They're too far down the conspiracy hole. But some percentage are not, and we can help those people.
So what to do? I'm open to lots of ideas, but a simple one is as follows:
1. On the Docket and RECAPDocument models, add a Boolean field called "content_warning" or something that we can flip.
2. If content_warning is true for a docket or PDF, we show a big click-through warning that users have to click past when they see the item.
3. By default, the click-through warning shows something generic. Suggestions welcome on the wording; whatever we start with won't be great, but we can work on it. At the bottom of the click-through, there's a button you can click to see the content.
4. On the RECAPDocument and Docket models, we have an additional field that allows an override to the string above. It can take HTML and link to good places or whatever we want it to do. Possibly, this could instead be a join to another table with various options in it. (A rough sketch of these fields follows below.)
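To make that concrete, here's a minimal sketch of those fields, assuming Django (which CourtListener is built on). The mixin name and help text are mine, and the join-table alternative is only gestured at in a comment; nothing here is settled:

```python
from django.db import models


class ContentWarningMixin(models.Model):
    """Hypothetical abstract model mixing the proposed fields into
    both Docket and RECAPDocument."""

    content_warning = models.BooleanField(
        default=False,
        help_text="When True, show a click-through warning before "
                  "displaying this item.",
    )
    content_warning_override = models.TextField(
        blank=True,
        help_text="Optional HTML shown in place of the generic warning "
                  "text, e.g. linking to a debunking.",
    )
    # Alternatively, replace the override field with a ForeignKey to a
    # ContentWarning table holding a set of reusable warning messages.

    class Meta:
        abstract = True
```

The template would then render the generic warning unless content_warning_override is set, with the click-through button below it.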
I'm open to other designs. The hope is that we can do some good in this world of people who are mostly too far gone.