amyntaranta opened this issue 2 years ago
Hello,
I have no objections to adding such messaging in the UI. Perhaps an even stronger deterrent would be to communicate that, due to the nature of the network, queries are visible to other users, and that anyone who cares about the reputation of their MuWire persona should be careful what they search for. (Or maybe this will just cause more persona churn, hard to tell.)
Taking this a step further, I'm happy to have syndicated drop/block rules, as long as they are opt-in. By a syndicated drop/block rule I mean the ability to subscribe to another user's content control rules (currently available only in the 0.8.12 beta series).
The point where I will draw the line is an automated content suppression mechanism - I will personally never implement such a thing, as it defeats one of the core goals of MuWire, which is to prevent a majority from silencing a minority. Due to the nature of the I2P network, a "majority" is a very loose term: anyone can write a script that generates "enough" personas to become a majority.
Some practicalities: I release four times a year, in cadence with the OpenJDK security updates. The next release is planned for mid-April, which means the strings must be "frozen" by the beginning of April to give translators enough time to translate. Do you think you can implement this in the next two weeks, or at least finalize the wording in that time? I will gladly help you with the actual coding :)
Thanks for bringing this up!
zlatinb
Thinking a bit more about how this could be presented, I believe the most effective approach would be to block the entire search tab.
That is, if a CSAM term is detected, instead of opening a regular search tab MuWire opens a special tab that contains the CSAM-specific warnings as well as a warning that others are able to see the queries. I believe the user should then be presented with the option to perform the search anyway - otherwise it will just provide an incentive for third parties to fork the code and remove the warning tab altogether.
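To make the interception concrete, here is a minimal sketch of the kind of term check that could sit in front of the search-tab logic. The QueryFilter class and its method names are hypothetical, invented for this illustration; they are not MuWire's actual API.

```java
import java.util.Locale;
import java.util.Set;

/** Hypothetical term matcher; a real implementation would load the terms from the filter list. */
public class QueryFilter {

    private final Set<String> flaggedTerms;

    public QueryFilter(Set<String> flaggedTerms) {
        this.flaggedTerms = flaggedTerms;
    }

    /** Returns true if any whitespace-separated token of the query matches a flagged term. */
    public boolean matches(String query) {
        for (String token : query.toLowerCase(Locale.ROOT).split("\\s+")) {
            if (flaggedTerms.contains(token)) {
                return true;
            }
        }
        return false;
    }
}
```

At the point where a search tab would normally open, the UI would call something like `queryFilter.matches(query)` and, on a match, open the special warning tab instead, still offering a "search anyway" action.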
Regarding the list of terms that trigger the warning, I would really prefer if they were kept in a separate git repository. That way I don't have to become an arbiter of what is a CSAM term and what isn't; also the terms can be kept up-to-date more efficiently by volunteers. The muwire-pkg packaging scripts which I use to build the all-in-one binaries can then fetch the terms from that repository.
Hey there. I'm sorry it's been so long since my last comment on this issue. I have been in contact with Elliot from Prostasia to see what they might be able to do in terms of making this more official, but no word from them just yet. I also tried reaching out to a couple of MuWire users who have broadcast queries for CSAM, but unfortunately none of them responded, though that was a bit of a long shot anyway. I did, however, collect a sizable list of search terms which I think is a good base. I'd like to be quite careful about where I put this list. I know it will be open source anyway, but distributing what could be construed as "instructions" for getting CSAM is nothing to take lightly. Where and how should I proceed in uploading the list?
Oh yes, and one more thing about resources to include: StopItNow would be at the top of my list, but I wanted to get your opinion on whether or not to include them. Their UK branch, StopItNow-UK, is cooperating with the government's smear campaign against encryption. I know of at least one privacy-minded person who has disconnected themselves from StopItNow as a whole for that reason.
Hi,
I've been thinking about adding a generic query interception and prevention mechanism to MuWire that can be used for any kind of content. I am happy to do the necessary plumbing changes to enable this mechanism, but under the following conditions:
- The warning shown to the user should state that "${TERM} is illegal in ${JURISDICTION}". ("All jurisdictions" is a valid jurisdiction too.)
- The warning should also contain a "${LINK_TO_COMMUNITY}".
".Regarding your other question whether to include specific resource or not, I would be very careful about the wording of the message and the links to the resources. Ideally these should be done by professionals - those who deal with such issues daily. My concern is that if I or some non-trained person just comes up with something to say that could do more harm than good. Specifically for StopItNow, I would say their position on encryption in general isn't relevant and the only factor in deciding whether to include them should be whether they are an effective resource.
To expand more on that last point, I'm not going to interfere in any way in the decision process of building the term lists and the associated warning messages. What matters to me is that it be a transparent, community-driven process. There is a very simple form of checks and balances here: if the entity (or entities) in charge starts to abuse their power, people will just fork the MuWire code and remove the warnings.
On to technicals: I am still finalizing the architectural/plumbing details of how this is going to work, but most likely it will end up something like this:
- The filter lists and warnings will be packaged in a .jar file. The MuWire gui project will then add these jars as a dependency and fetch them during build time. In order to keep builds reproducible, the version of the jar will need to be fixed, and the lists/warnings updated once per release.
- The lists will be in .csv format, together with the warning text as well as localizations of the warning text. This allows the community managing the filter to work with translators independently of the current MuWire relationship with Transifex.
- The MANIFEST.MF in the .jar file will carry metadata in order to support multiple filters simultaneously. I'm still working out the details of this.

So before you upload the list you have compiled anywhere, let me do the necessary plumbing. Then I will create a repository here on GitHub with a sample filter list and warning files, as well as the required build scripts to produce the .jar artifact.
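Once such a filter jar exists, loading it on the MuWire side could look roughly like the sketch below. The terms.csv entry name and the Filter-Name manifest attribute are placeholders invented for this example; the actual layout is still being worked out.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;
import java.util.jar.JarFile;

public class FilterLoader {

    /** Reads the flagged terms from a terms.csv entry inside the filter jar. */
    public static Set<String> loadTerms(JarFile filterJar) throws IOException {
        Set<String> terms = new HashSet<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                filterJar.getInputStream(filterJar.getEntry("terms.csv")),
                StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // assume the term is the first CSV column
                String term = line.split(",")[0].trim().toLowerCase();
                if (!term.isEmpty())
                    terms.add(term);
            }
        }
        return terms;
    }

    /** Reads a hypothetical "Filter-Name" manifest attribute so several filters can coexist. */
    public static String filterName(JarFile filterJar) throws IOException {
        return filterJar.getManifest().getMainAttributes().getValue("Filter-Name");
    }
}
```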
zlatinb
Hello again,
I've created a new repository with an example filter as well as the necessary build scripts to publish the filters to a Maven repo from which MuWire can pick them up.
I've added you as a contributor, and you should have write access. The repository is public, i.e. viewable by everyone, so the decision of whether to upload the keyword list you have compiled falls to you.
Once you tell me a keyword list and the associated warning(s) are ready for inclusion, I will set up a Maven repo and update the build script to publish the artifacts there. At the same time, I will update the MuWire build scripts to fetch the filter artifacts from that repo.
zlatinb
MuWire is used on a daily basis to exchange child sexual abuse material (CSAM). I left my node running for 48 hours, listening to all queries broadcast to it, and 3.5% of them were explicit searches for illegal content of this nature. While an entirely technological solution to this problem is impossible, and undesirable for a privacy- and security-focused application like MuWire, it is nonetheless possible to introduce non-intrusive features that gently push users toward help-seeking behavior. I believe this can benefit the health of the network of MuWire users, the users seeking this material themselves, and the children victimized by these materials.
My suggestion is that a part of the UI become visible, or pop up, upon searches for keywords closely related to CSAM. This UI component or popup could list resources for those who have an interest in stopping their consumption of CSAM (a large proportion, up to 50%, in case you are unaware).
I think a balance must be struck between pushing too lightly and pushing too hard. Ignoring for a moment that it would be against the intended purpose of MuWire, simply blocking access to likely CSAM will only force those users to migrate elsewhere, where they have a lower chance of receiving help. At the same time, meek requests and pleas to stop using CSAM may prove entirely ineffective.
I have created some UI mockups to illustrate my suggestion. Quite frankly, both of these appear pretty meek. If I were to redo them, I would replace "Don't show again" with "Hide for 24 hours." A more intrusive idea is to cover the entire screen with the message, only allowing the user to browse results after dismissing it. The exact wording of the message, and what happens upon clicking "Learn more...", is not final. (It was derived from Google's similar warning upon searching for CSAM.)
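For what it's worth, the "Hide for 24 hours" behavior only needs a single stored timestamp. Here is a minimal sketch, assuming the value is kept in a java.util.Properties object; the property name and class are illustrative, not MuWire's actual settings code.

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class WarningDismissal {

    // Hypothetical property key used only for this sketch.
    private static final String HIDDEN_UNTIL = "csamWarning.hiddenUntil";

    /** Records that the user dismissed the warning for the next 24 hours. */
    public static void hideFor24Hours(Properties props) {
        long until = System.currentTimeMillis() + TimeUnit.HOURS.toMillis(24);
        props.setProperty(HIDDEN_UNTIL, Long.toString(until));
    }

    /** Returns true if the warning should currently be shown again. */
    public static boolean shouldShow(Properties props) {
        long hiddenUntil = Long.parseLong(props.getProperty(HIDDEN_UNTIL, "0"));
        return System.currentTimeMillis() >= hiddenUntil;
    }
}
```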
As this seems like it would be a simple change after discussions are finished, I am more than willing to implement this myself.
This feature request was in part inspired by Prostasia's CP Deterrence campaign.