Closed — dchoi27 closed this issue 1 year ago.
While there's definitely some work that could be done here on the implementation side, I suspect that in order to have a strong answer to questions around content filtering we need work on the policy + collaborators side of things.
In particular, adding a few commands to go/js-ipfs that let users preemptively block themselves from fetching certain multihashes seems like some, but not an insane amount of, work to prototype. However, the strength of content filtering comes from a user being able to subscribe to filters from sources they find reputable and using those to screen out data. Even when a user receives a certain content filter, they would probably prefer that the filter be scoped by the general reason the content was deemed filterable. Assembling a few groups that are willing and able to put together filters for users to subscribe to will likely take some time and effort to accomplish.
This could be done in two steps: content filtering, and later shared filtering lists.
The first step would 1) allow node operators to deal with DMCA requests, which is definitely needed, and 2) establish a framework to build more advanced features upon.
Adding built-in support for denylists would be an immediate win by removing the need for public gateway operators to write their own glue code in nginx/lua.
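For reference, the kind of nginx glue this would make unnecessary looks roughly like the following (a minimal sketch of what an operator might write today; the CID and upstream address are placeholders, not anything real):

```nginx
# Rough sketch of per-gateway blocking glue (goes inside the http{} block).
# The CID below is a made-up placeholder.
map $uri $denied {
    default                        0;
    "~^/ipfs/bafyexamplecid123"    1;  # hypothetical denylisted CID
}

server {
    listen 80;
    location /ipfs/ {
        if ($denied) { return 410; }       # 410 Gone for denylisted content
        proxy_pass http://127.0.0.1:8080;  # local go-ipfs gateway
    }
}
```

Every operator maintaining a variant of this by hand is exactly the duplication that built-in denylist support would remove.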
The initial ability to share lists in an interoperable manner could be facilitated by a manual ipfs acl import|export command. I imagine we would need to support filtering not only content but also peers. Think allow|denylists for CIDs, content types, peer IDs, and multiaddrs (protocols; in the case of IP, also jurisdictions based on GeoIP, AS numbers, etc.).
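To make that concrete, a purely hypothetical interchange format for such a command (every field name below is invented for illustration; this is not an existing spec) could look like:

```json
{
  "name": "example-operator-denylist",
  "version": 1,
  "action": "deny",
  "entries": [
    { "type": "cid",          "value": "bafyexamplecid123" },
    { "type": "content-type", "value": "application/x-msdownload" },
    { "type": "peerid",       "value": "12D3KooWExamplePeer" },
    { "type": "multiaddr",    "value": "/ip4/192.0.2.0/ipcidr/24" }
  ]
}
```

A flat list of typed entries like this would let one format cover all four filter dimensions mentioned above while staying trivial to diff and merge.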
A big chunk of the year would be figuring out automation, policies, and governance for publicly maintained lists that the community can opt into (many questions to answer: should "official" IPFS nodes ship with some lists enabled by default for legal reasons? should the IPFS project maintain "official" lists for public gateways, or would gateway operators band together and maintain their own to limit the admin overhead related to DMCA? should we partner with existing organizations that maintain lists of bad bits? etc.)
The initial ability to share lists in an interoperable manner could be facilitated by a manual ipfs acl import|export command.
Just a note about this: for a node operator, having the ability to centralize the source of truth for this list would be a big win and would simplify deployments. This doesn't have to materialize into an actual command, but at least having that option in the underlying APIs would be great.
We're going to have to figure this out in any case as a service provider. I actually don't know if I would want this at the go-ipfs server layer -- what I need is a "bad hash list". I happen to own the .com of that domain because I've been thinking about it for a while 👍
@turt2live and @ara4n -- I was really impressed by the moderation policy tooling you designed and implemented for Matrix/Element. I'm curious if you have any advice or lessons learned to inform this roadmap proposal around creating affordances for IPFS node operators to manage their own nodes' content policy?
@momack2 Is this the proposal PL wants to focus on during 2021?
A note about public blocklists: I think they should list sha256(CID) values, not CIDs, or there should at least be an option for it. Otherwise, if the CID of some malware tooling is listed (for example; not to name even shadier content), then even if some nodes refuse to serve it, other nodes will learn the exact CID from the public list and can try to fetch it from another node.
Going further, it should probably be sha256(multihash) instead of sha256(CID), to be independent of the (irrelevant here) encoding.
Blocklists are a dangerous move for IPFS and in the worst case could marginalize this piece of software, especially when the lists/filters are centralized.
In recent times we have had more than enough centralized services deplatforming important opinions about the most recent crisis. We have also had the case of Catalonia and of Wikipedia in Turkey. I understand the wish to block crime on IPFS. Personally and emotionally I even support safe content. But who decides what is appropriate? Different cultures, times, generations, and systems all have different values. This technology must stay neutral and at most give each user the voluntary option to use such filters as an add-on; it should not be written into the core repo, nor come from a central authority, which could be hijacked by certain interest groups, as we have seen over and over again...
Please keep IPFS free of opinions and let it do what it was designed for -> run the distributed web (neutral and unopinionated).
I stumbled upon these guys (https://www.corbettreport.com/?s=ipfs), who rely on IPFS, and this proposal should be discussed in a broader, more philosophical round including all stakeholders (e.g. a James Corbett), not only devs. Thank you!
@Weedshaker Thanks for raising this, but I think we all agree on that. No list should be baked into IPFS or subscribed to by default. Subscribing to an external blocklist must be opt-in only, and will be a choice of the node operator. The tech should and will be neutral. On the other hand, if a node chooses not to be neutral, it should be able to. If I run a blog, I want to be able to block malware for my users. I don't see any problem with a centralized list of malware that nodes can choose to block. I see a problem only with a centralized list of malware that nodes must block. But that is not what this proposal is about, fortunately.
@momack2 hey - thanks for thinking of us :) Our work on MSC2313 has evolved a bunch more, as per https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix-without-backdoors - and we now have a trust & safety team dedicated to building out relative reputation systems that folks can use if they want. None of this is particularly specific to Matrix, and we'd love to collaborate with IPFS; we've also been doing a bunch of work on it in the context of bluesky. For instance, https://matrix.org/blog/2020/12/18/introducing-cerulean#whats-with-the-decentralised-reputation-button is the first instance of decentralised rep in Matrix in the wild in our twitter-on-matrix PoC.
Murmuration Labs has put together a rough draft discussion/idea document about how decentralized content moderation & assessment systems might work, which we'd love to get feedback on from the IPFS community specifically, and the dweb/web3 community more broadly:
https://github.com/Murmuration-Labs/songbird-decentralized-moderation/
If there is interest, we would also like to organize a workshop to discuss this in more detail and hear your ideas for improvement. Let us know if that's something you'd like to participate in. Thanks!
Super cool! Thanks for sharing @tim-murmuration! @ara4n @MichaelMure @bertrandfalguiere @bmann - curious if this does a good job summarizing the user-agency we think the dweb ecosystem should encode around these types of issues?
Pinging this conversation, as I think there's something here that can be further developed through collaboration.
Would also love to hear @jbenet thoughts on Songbird sometime when you have a chance! Thanks. (https://github.com/Murmuration-Labs/songbird-decentralized-moderation/)
We do not need a content-filtering system and censorship in IPFS at the node-to-node level; we transfer data blocks and act like telecom providers. We do not have it at the IP-packet level either. Do we have content filters in databases? No, but we might have content filters at the application level. Would you use a storage system that could disrupt your application by false-positive censoring? Why take this risk?
As soon as it gets implemented, block requests from state actors will follow. You advertise IPFS as a good technology for making uncensorable sites, but instead of government censorship you aim to develop community censorship. Let the technology be neutral.
We can deal with DMCA requests case by case, as we all already do. Let's get real and not hunt ghosts. Did I get a request to block a hash on my public gateway? OK, I can add it to my nginx config and I am done. This is very different from proactively blocking offensive things. IPFS is not anonymous: anybody can get the list of nodes hosting specific content and send a DMCA notice, and law enforcement can identify IPFS nodes distributing unauthorized content and investigate.
As soon as you add blocking, the crypto community will leave; there are other storage systems in development, and they do not need IPFS. Nobody will store and pay for an NFT knowing that if someone finds the art offensive it will be blocked and the investment voided. It's too risky to invest in that.
Word! I would certainly add other options to peerweb.site if IPFS introduced content filtering.
@hsn10 @Weedshaker I think you're missing what this issue is about. In general with IPFS everyone is empowered to make their own decisions. In fact right now the way go-ipfs works is that each user decides what data they're going to fetch and store locally. For example, nobody else can make my node store some file for them.
Given that this is the case, some users might want to be more liberal in what types of content they download but want some kind of content filter to fall back on. For example, I might be more willing to download a file over IPFS from a not so trustworthy source if I knew that I had some local filters I could enable (e.g. from an antivirus vendor I'm a customer of) that would protect me from downloading files I didn't want to have. Similarly, if I run a gateway and am getting an endless supply of DMCA requests, I might want to ask a friend who also runs a gateway for a list of content they've had to block so that my life becomes a little easier. This isn't censoring the network; it's just automating some tasks for the user that they might want to do anyway. Additionally, those concerned about censorship of gateways can just run a local IPFS node and actually join the p2p system instead of relying on some gateway to proxy for them.
To give a counter example of where someone might advocate for filtering in IPFS and where I'd strongly object is in the public IPFS DHT. Kademlia is a cooperative protocol that relies on everyone following basically the same rules. If every peer in the public network made their own decisions about the types of content they will/will not help others locate then the system becomes exceedingly complicated to reason about and generally just less efficient.
@aschmahmann, you open Pandora's box with this! Just stop the whole content filtering/censorship topic and use your energy for something more useful, e.g. making IPNS work in js-ipfs. PS: aschmahmann quote: "If every peer in the public network made their own decisions about the types of content they will/will not help others locate then the system becomes exceedingly complicated to reason about and generally just less efficient." This is exactly the point of decentralization. If this bothers you, there are plenty of centralized file-sharing services.
Wow, no need to be offensive, @Weedshaker. @aschmahmann is one of the main contributors to the IPFS project, so you should be constructive in your remarks, or you should be the one forking the project for your own needs. Now, just as TCP/IP is content-agnostic but websites are not, web services (web3 or otherwise) should be able to manage their content and what they host. Nobody should be able to tell a node operator what to censor, yet nobody should be able to tell them what to host. IPFS is big enough for controversial content to be hosted somewhere. All the people in this thread agree that any blocklist would be opt-in, and it will be. If I understand you correctly, your concern is that state actors would more easily enforce censorship via block requests to node operators who don't want to comply. They will try either way, and will provide lists to block, and nodes will have to comply or decide not to; having a tool for it won't change that. On the other hand, node operators willing to control what they serve (not what the network can serve) should have the tool to do it.
Anyway, we should continue this conversation on discuss.ipfs.io, as GitHub is for actionable issues.
@bertrandfalguiere & @aschmahmann , Sorry I didn't mean to be offensive at all.
I haven't read through all the comments in this thread in detail, but from what I can see this sounds like a V E R Y bad idea and a slippery slope into censorship land. I am in favor of letting people be responsible for the content they want to see and not rely on external "authorities" in any form to filter content.
I am in favor of letting people be responsible for the content they want to see
That's precisely what this issue is about.
Please implement a filter based on netmasks. I will use it to filter access from filtering nodes. No DHT or anything for them.
Agreed. Luckily, there is no "authority" in this proposal. This proposal is precisely about giving more control to the people.
People tend to think freedom of speech is only about being able to say what you want to say. That is usually the more threatened side of the coin. But it is also about not saying what you don't want to say. Imagine the opposite case: a state forcing me to host and serve its propaganda. Is my free speech protected then? We need tools for censorship evasion and for content filtering. Please read the whole thread.
(Concerns about the slippery slope are obviously valid, though. We have to stay alert that the intent is not tweaked in the future.)
Are we doing "strong content filtering", a.k.a. censorship, to make Filecoin more corporate-friendly? Profit has always been a bad advisor! IPFS has been perfect; N O T H I N G needs to be adjusted or changed in this area.
The corporate trends should be obvious to everyone by now. Slippery slope is indeed a very valid concern here. I see no good reason to provide any sort of content filtering. For the case of someone being forced to produce state propaganda, I don't see how this would help. If they can compel you to do one thing, they could compel you to remove any sort of "protective" filter. Don't implement any kind of content filtering mechanism for IPFS or it will come back to haunt us all. The risk is far greater than any potential reward that would come from such filtering.
The opposite is true too. If a state actor wants you to remove content, they won't take "we don't have the right tool for it" as an excuse. Complying or not with a state actor is orthogonal to having a tool for it; that is an issue at the legal level. Put another way, not having a filtering tool will not protect us from authoritarian governments. That is a very important topic, but independent from this one.
Renaming this theme proposal to hopefully clarify its scope (since I think a lot of folks are jumping into this thread with concerns based on the theme title). I think https://github.com/Murmuration-Labs/songbird-decentralized-moderation/ does a good job describing the sorts of decentralized, autonomous decision making around content the IPFS network wants to empower. Any tooling for node operators to manage their own configurations would be fully optional and enhance their individual user agency (right now it takes advanced knowledge of IPFS to configure it to your personal needs as a node operator, which limits growth/adoption to more technically savvy users).
Note, this is part of the 2021 IPFS project planning process - feel free to add other potential 2021 themes for the IPFS project by opening a new issue or discuss this proposed theme in the comments, especially other example workstreams that could fit under this theme for 2021. Please also review others’ proposed themes and leave feedback here!
Theme description
Introduce the ability for node operators to better manage their own nodes (e.g. to filter content they store/retrieve/provide in useful ways), helping mitigate IPFS's susceptibility to negative uses like objectionable content and expanding the set of use cases for IPFS.
Hypothesis
It’s critical that node operators have simple tools to scalably manage their node’s participation in the open and permissionless IPFS network as it grows and evolves.
Vision statement
Content filtering works well out of the box for any IPFS based solution. Similar to safebrowsing in browsers, no user has to think about it - but the power is in the user’s hands.
Why focus this year
As the user base of IPFS continues to expand quickly, creating a strong content filtering story will both help accelerate this expansion and mitigate potential risks.
Example workstreams
Implementation of content filtering
Other content