bluesky-social / proposals

Bluesky proposal discussions

[0001] How is this supposed to work in a federated context? #18

Open sneakers-the-rat opened 1 year ago

sneakers-the-rat commented 1 year ago

re: https://github.com/bluesky-social/proposals/blob/751dc2781dfe7680b63167e758fce28e1ab637ff/0001-user-lists-replygating-and-thread-moderation/README.md

These are good ideas - I have written about and implemented similar list functions for Mastodon and elsewhere, and reply gating and thread-based moderation are also good - but I'm not seeing how these are possible to enforce in a federated context without encryption and, e.g., capabilities.

The only way I can see to make some of these features work is to place more of the interaction functionality in the graph indexing servers and feed generators, which effectively defeats the point of federation.

Re: Lists

The proposed lists functionality seems possible, although too much emphasis is placed on displaying list membership rather than on safety when you are added to, e.g., a hateful list.

E.g.:

> We shouldn’t necessarily show all the lists somebody is added to because they won’t always be nice or useful.

The problem isn't that the list might be mean, it's that trolls and abusers use lists to target abuse, and this is made worse by being able to share them. Since I presume a list will be stored on a single PDS, there is no way to ensure that PDS implementation is well behaved and will comply with a request to remove someone.
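To make concrete why removal depends on someone else's server, here is a rough TypeScript sketch of how I'd expect a list and its membership entries to be shaped - both records living in the list owner's repo. The collection names and fields here are my assumption about the shape, not a quote from the proposal.

```typescript
// Sketch (my assumption, not the proposal's schema): a list and one membership
// entry, both stored as records in the *list owner's* repo. The person who was
// added holds nothing in their own PDS, so "please remove me" is purely a
// request to someone else's (possibly badly behaved) PDS.
interface ListRecord {
  $type: 'app.bsky.graph.list';      // hypothetical collection name
  name: string;
  description?: string;
  createdAt: string;                 // ISO 8601
}

interface ListItemRecord {
  $type: 'app.bsky.graph.listitem';  // hypothetical collection name
  subject: string;                   // DID of the person who was added
  list: string;                      // at:// URI of the ListRecord above
  createdAt: string;
}
```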

People can always make lists for targeted harassment out of protocol, true, so the risk is not unique so much as elevated by having it in protocol.

Blocks don't fix this - particularly since the identity model of atproto is based on a single DID without proxy identities, if I'm reading it right. You can block the creator of a list, but they can very easily create another DID; ban evasion is cheap.

Reply Gating

This seems impossible to implement except by being built into the graph indexing services and feed generators.

So posts are identified by IPLD CIDs, right? And that's a content address, so it is intrinsic to the content of the post? Mentions are to DIDs, and replies are strongRefs, right? And repos are strongly keyed to DIDs and can't trivially be changed?
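As I read the lexicons (com.atproto.repo.strongRef and app.bsky.feed.post), the relevant shapes look roughly like this in TypeScript - treat the field names as my reading, not a spec quote. The point is that a reply is just a record holding URI + CID pointers, and nothing in it encodes who is allowed to create it:

```typescript
// Rough shapes as I understand them; illustrative, not authoritative.
interface StrongRef {
  uri: string;   // at://<did>/<collection>/<rkey> -- anchored to the author's DID
  cid: string;   // IPLD CID, i.e. a hash of the record content itself
}

interface ReplyRef {
  root: StrongRef;    // the post that started the thread
  parent: StrongRef;  // the post being replied to
}

interface PostRecord {
  $type: 'app.bsky.feed.post';
  text: string;
  reply?: ReplyRef;   // present on replies; nothing here says who may reply
  createdAt: string;
}
```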

So Alice makes a post and only Bob is supposed to be able to reply. Charlie is on a well-behaved client, so when receiving the feed the client resolves the reply allow list, and since they're not on it their interface doesn't even give them the option. So far so good. Denise, however, is on a forked client that is not well-behaved, and so they create a new post object with a replyRef pointing to the root post in their PDS. Since a user is in full control of their PDS, and this is merely creating a record in their own repo, there is no reason Denise can't do that.
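To spell that out, here's a minimal sketch of what Denise's forked client would do: call com.atproto.repo.createRecord against her own PDS with a reply pointing at Alice's root post. The DIDs, CIDs, and token below are placeholders; the point is that her PDS is only being asked to store a record in her own repo, so there is nothing for it to refuse.

```typescript
// Sketch of a misbehaving client writing a disallowed reply into its own repo.
// All identifiers below are placeholders.
async function postDisallowedReply(pdsUrl: string, accessJwt: string, myDid: string) {
  const aliceRoot = {
    uri: 'at://did:plc:alice-example/app.bsky.feed.post/3kexamplerkey',
    cid: 'bafyreib2examplerootcid',
  };

  const record = {
    $type: 'app.bsky.feed.post',
    text: 'replying even though I am not on the allow list',
    reply: { root: aliceRoot, parent: aliceRoot },
    createdAt: new Date().toISOString(),
  };

  // com.atproto.repo.createRecord writes a record into the caller's own repo;
  // the reply gate on Alice's post never enters into it.
  const res = await fetch(`${pdsUrl}/xrpc/com.atproto.repo.createRecord`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessJwt}`,
    },
    body: JSON.stringify({
      repo: myDid,
      collection: 'app.bsky.feed.post',
      record,
    }),
  });
  return res.json(); // { uri, cid } of the newly created (disallowed) reply
}
```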

OK, that might be fine: the OP's client doesn't need to recognize the post; it can drop it and not return it when you do a getPostThread. In the current implementation this happens in the OP's PDS by returning a BlockedPost.
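Here's roughly what that well-behaved filtering amounts to, as a sketch of my own (not the actual getPostThread implementation): resolve whatever allow list is attached to the root post and prune replies from DIDs not on it while assembling the thread. A misbehaving service simply skips this step.

```typescript
// Illustrative only: prune a reply tree down to the allow list before serving
// it. Any service that chooses not to run this check serves the full tree.
interface ThreadNode {
  authorDid: string;
  post: unknown;
  replies: ThreadNode[];
}

function pruneToAllowList(nodes: ThreadNode[], allowedDids: Set<string>): ThreadNode[] {
  return nodes
    .filter((node) => allowedDids.has(node.authorDid))
    .map((node) => ({
      ...node,
      replies: pruneToAllowList(node.replies, allowedDids),
    }));
}
```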

But most people won't be getting their feeds directly from the PDS, correct? They'll be getting them from big graph servers and app views and whatnot. So the big graph service indexes the disallowed reply, and say another poorly-behaved app view decides it wants to ignore the reply gate - this could be malicious, or it could just be people who are nosy and curious and want to reply and read where they shouldn't!

To Alice, with her well-behaved client, nothing has happened; the reply effectively doesn't exist to her. For that to be the case for everyone else, they all need well-behaved clients that receive data from well-behaved big graph services and feeds from well-behaved app views. AKA you have to have no bad actors, or even merely curious actors, in the network. The only person who for sure wouldn't know there's a whole tree of abusive replies under her post is the one who most needs to know - Alice.

So either a) it's impossible to implement except cosmetically, b) it requires a few authoritative, well-behaved client apps like bsky.app to be the only way to interact with the network, or c) it requires a few authoritative big graph services to drop posts that are not allowed and refuse to serve them to app views.

This is what I mean when I say this is either impossible or makes federation pointless. It's a relatively well-known problem in existing federated and p2p social networks: you can't stop a bad actor from creating replies that other actors/instances/etc. are willing to display. On the fediverse this is one of the roles of instances being comparatively expensive to generate compared to DIDs, and on SSB they use a variety of tactics including proxy identities, capabilities, and encryption.

These decisions can be cumulative. If this is approved into the spec, and it turns out that dang, we really do need this to be implemented at the big graph service level, and then over time lots more functionality that is easy to imagine in a centralized context but very difficult in a federated one gets shunted onto the indexing and algorithmic services, then the network becomes federated in name only, with most of its operation outside your control.

So I pose this as a genuine question - how is this supposed to work in a federated context? I could be badly misreading the protocol and implementation, so please let me know if I am.

sneakers-the-rat commented 1 year ago

I see the issue with lists was already partially raised here https://github.com/bluesky-social/proposals/issues/1#issue-1772155353

agentjabsco commented 1 year ago

I will not articulate the actual solution being overlooked here except to say it is possible

sneakers-the-rat commented 1 year ago

> I will not articulate the actual solution being overlooked here except to say it is possible

I didn't post this just to be an asshole; I was genuinely asking how it is possible to address. If there's an actual solution being overlooked, please enlighten me.

Note how I didn't say it was impossible, just that there are tradeoffs:

> So either a) it's impossible to implement except cosmetically, b) it requires a few authoritative, well-behaved client apps like bsky.app to be the only way to interact with the network, or c) it requires a few authoritative big graph services to drop posts that are not allowed and refuse to serve them to app views.

agentjabsco commented 1 year ago

Based on everything I see about the protocol's design, the way I see this shaking out is effectively option c: BGS operators will be something akin to telecommunications' "trunk carriers" or the "autonomous systems" of Internet backbones, and certain classes of abusive behavior will subsequently lead to increasing levels of consequence in the Real World, the nature of which you should never need to know about.