element-hq / element-meta

Shared/meta documentation and project artefacts for Element clients

Is there a way we could somehow detect people searching for suicide advice and suggest links to suicide prevention hotlines? #1600

Open · Yoric opened this issue 4 years ago

Yoric commented 4 years ago

If someone is searching for suicide advice, we could suggest links to suicide prevention hotlines.

This might require both server- and client-side changes.

Bumbadawg commented 4 years ago

If you start making exceptions, you will end up extending this to a bazillion potentially detectable real-life threats: beating, rape, fire, terror attacks. Matrix is not 911, and it is not a search engine that can contextualize a query and provide localized help (which is, in my opinion, the only internet option for that). 911, local authorities and hotlines can create rooms to help people and redirect them to the proper services. If someone in a group chat feels another person is at risk, they can already point them to a hotline. Automating specific cases means analyzing any public content for this "XXX imperious topic", which compromises privacy and freedom in the name of security. Using emotional appeals as a Trojan horse for security measures needs no introduction; history has proved the point.

lampholder commented 4 years ago

There's no need for this to compromise privacy.

The client can have a list of keywords + message copy (either supplied to the client as configuration options, or pulled from an endpoint on the server and cached) and use that to identify and handle/redirect troublesome searches.

Either client or homeserver admins could then choose whether or not to specify keywords - this would only impact users using either their client instance or their homeserver instance. And privacy wouldn't be compromised in any way.
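Below is a minimal sketch of the client-side mechanism lampholder describes, assuming a hypothetical `SearchIntervention` config shape; the keyword list, message copy, and placeholder URL are illustrative only and are not part of any real Element or Matrix API.

```typescript
// Hypothetical shape of the configuration lampholder describes: a keyword
// list plus the message copy to show, either bundled with the client or
// fetched once from a homeserver endpoint and cached locally.
interface SearchIntervention {
  keywords: string[]; // terms that trigger the intervention
  message: string;    // copy to show alongside the normal search results
}

// Illustrative defaults; the URL is a placeholder, not a real resource.
const DEFAULT_INTERVENTION: SearchIntervention = {
  keywords: ["suicide", "kill myself"],
  message:
    "If you are struggling, help is available. " +
    "See https://example.org/crisis-hotlines (placeholder) for local hotlines.",
};

// Pure client-side check: the query is matched against the cached list on
// the device and is never sent anywhere for this purpose.
function interventionFor(
  query: string,
  config: SearchIntervention = DEFAULT_INTERVENTION,
): string | null {
  const normalised = query.toLowerCase();
  const matched = config.keywords.some((kw) =>
    normalised.includes(kw.toLowerCase()),
  );
  return matched ? config.message : null;
}

// Usage: run alongside the normal search flow and, if non-null, render the
// returned copy as a banner above the search results.
const banner = interventionFor("how to commit suicide");
if (banner !== null) {
  console.log(banner);
}
```

Because the match happens entirely on the device against a cached list, the homeserver never learns what the user searched for; admins only influence which keywords and copy get distributed to their own client or homeserver instance.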

Bumbadawg commented 4 years ago

Setting aside the technical difficulty of language and its varying contextual complexity: if the server side is involved, it compromises privacy. Client-side doesn't by default (assuming the client hasn't been tampered with), so you're right on that point.

It's a philosophical question you're indirectly raising.

I don't want the software that receives my keyboard and mic input to passively analyze what is on my mind, EVER. I don't want my tools to hand-hold me, make suggestions to me, influence me, steer me, or at worst decide for me. I am the active master of my tools, not their passive human vessel.

There are already tools that people use actively and on their own initiative, like search engines, and there are even humans ready to answer calls for help.

But let's imagine for a moment that it is implemented client-side; what happens when it gets extended? Every time you mention a word like "bomb" you get tagged as a terrorist, every time you talk about children you get tagged as a pedophile, every time you talk about Germany in 1933 you get tagged as a Nazi, and so on. I'm pointing at the extremes here to make a point: once you agree to slip passive analysis into your data, you allow gradual enforcement of control over your life, as small or as large as it may be, and you yield a part of your freedom of consciousness to a machine.

But guess what: you don't need to worry about typing XXX-threat-related terms on your computer if you use macOS or Windows 10. It's already on record, and more specifically, on permanent record.

If this trend of passively analyzing people's minds (on any topic) becomes the norm in the future, would that be a desirable future for you? To know that your big-brother, big-brained A.I. knows what's best for you? I'm pushing the reflection to its extreme to show the slippery slope and the ethical questions lying behind this simple, well-intended "suicide hotline" suggestion of yours.