Noah670 opened this issue 1 year ago
There's actually an offensiveUserCounter.increment() call
that I found and tried to delete before, in this PR: https://github.com/twitter/the-algorithm/pull/345
I don't know if this is the only place where something like that could happen, but this value could be shown on our settings page if we had a GET endpoint for the specific user.
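For illustration, a minimal sketch of what such a read-only status endpoint could look like, assuming a hypothetical per-user visibility store; none of the route or field names below are real Twitter APIs.

```python
# Hypothetical sketch only: a read-only endpoint that would let users see their
# own visibility flags. Store, route, and field names are all invented.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for whatever internal store holds per-user moderation state.
FAKE_VISIBILITY_STORE = {
    "12345": {"offensive_user_count": 3, "search_banned": True, "shadow_banned": False},
}

@app.route("/1.1/users/<user_id>/visibility", methods=["GET"])
def get_visibility(user_id: str):
    record = FAKE_VISIBILITY_STORE.get(user_id)
    if record is None:
        return jsonify({"error": "user not found"}), 404
    # Only expose the requesting user's own status; auth is omitted in this sketch.
    return jsonify({"user_id": user_id, **record})

if __name__ == "__main__":
    app.run(port=8080)
```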
The only site that reliably detected my search ban (not shadow ban) is https://taishin-miyamoto.com/ShadowBan/ My account is https://twitter.com/SmolderinCorpse As you can see, I have rarely tweeted in the past 10 years, I'm not toxic, I'm not a bot, and my tweets are SFW.
I think there should also be some transparency about the reasons why users get search/shadow banned. At least give users some instructions on how to get rid of the ban and how to regulate their own behavior; otherwise, banning real people loses its point.
I think you've misinterpreted the results from that website, understandably; it is confusing how they display them. I believe the green results you are seeing mean that you are 'safe' from that type of ban, not that it is affecting you. I imagine the results would be red if a ban were detected. I tried multiple accounts and they're all green.
Yesterday I was search banned; now I'm unbanned. The way I check for a search ban is to open Firefox in private browsing and search for "Fibonacci time signature", which is my latest tweet. I'm 100% sure I couldn't find my tweet yesterday.
Even so, I still have another account (I won't say the username) which got search banned and only posts selfies (SFW).
I've heard it's either because I changed my username recently or because I followed too many NSFW accounts, including models and artists (really, that's a reason?), but since there's no official advice, I have no way of knowing how to get rid of the search ban.
Edit: But since the algorithm unbanned my main account immediately, it is reliable to some extent. Maybe the reason I got banned at the time was that I was using a VPN from another location or something.
https://taishin-miyamoto.com/ShadowBan/ doesn't even work for my account, LOL. It just says error, but your account works. It's all clear now, yeah.
But again, the actual secret notes in Guano are what make the difference, as with viral content.
I thought it was pretty clear that Twitter admins can do that... I thought everyone knew it.
I live in China. For me, it's hard to believe that any social platform doesn't do that. I know what kind of place China is, don't remind me, pity me instead. But I also have to remind people that the US is not a utopia...
If Twitter already shadow bans people using the algorithm, why wouldn't they do it manually too? Of course they do.
@rambda
Happens all the time. There are several documented cases.
I usually warn anyone who shares intimate photos through these tools. Even with P2P encryption it is possible for a user with elevated privileges to access the photos. The only thing that would really stop it is the supposed moral values of whoever has control over it. https://www.yahoo.com/entertainment/book-claims-facebook-fired-52-012010053.html
I know. This is no different from Weibo, QQ, WeChat, etc. Their admins also spy. No one can supervise these Twitter admins other than themselves, so it's impossible to stop them from prying into users' privacy. I use Telegram when I really need relatively higher privacy. Relatively.
I said that I don't want to expose my selfie account, because apparently no one wants to do that.
Check @cawldha Success.
https://taishin-miyamoto.com/ShadowBan/
But I really wonder why?
Twitter rules: Violent Speech, Violent & Hateful Entities, Child Sexual Exploitation, Abuse/Harassment, Hateful conduct, Perpetrators of Violent Attacks, Suicide, Sensitive media, Illegal or Certain Regulated Goods or Services, Private Information, Non-Consensual Nudity, Account Compromise, Platform Manipulation and Spam, Civic Integrity, Misleading and Deceptive Identities, Synthetic and Manipulated Media, Copyright and Trademark, Third-party advertising
Translate my tweets and read them: which rule did I break?
I still only had a search ban yesterday, and now I've got a shadow ban. My only activity today was to repost some life advice and reply to some good tweeters, as you can see from my homepage.
POLITICS ALERT:
I really don't want to talk about minority rights and politics here, because this is GitHub, but if politics is so important to Twitter's algorithm, I have no choice.
Anyway, I think I gave a perfect example that an ordinary user, without violating Twitter's rules at all, may be shadow banned for unknown reasons.
I repeat my advice again: I think there should also be some transparency about the reasons why users get search/shadow banned. At least give users some instructions on how to get rid of the ban and how to regulate their own behavior; otherwise, banning real people loses its point.
Edit: OK, I got unbanned in one day, but I'm still search banned. Why torture me? Edit 2: No, every comment of mine has been folded by Twitter. Why? WHY?
The best way to determine if your replies are being censored is to create a test account using a browser (with all cookies cleared), preferably using a different IP. Then, load the pages you replied to. You'll probably be surprised: Twitter lies to millions of people each day via "shadow ordering".
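A rough sketch of automating that check with Selenium, assuming you already have the conversation URLs you replied to; the handle check is a naive page-source search, and note that a later comment in this thread points out that logged-out views no longer show replies, so a throwaway logged-in session may be needed in practice.

```python
# Sketch only: open each conversation in a fresh private-browsing session (no
# stored cookies, so nothing ties it to your real account) and check whether
# your handle shows up among the visible replies.
import time

from selenium import webdriver
from selenium.webdriver.firefox.options import Options

MY_HANDLE = "@SmolderinCorpse"  # example handle from this thread
CONVERSATIONS = [
    "https://twitter.com/someuser/status/1234567890",  # placeholder URLs you replied to
]

options = Options()
options.add_argument("-private")  # private window: no cookies or saved session

driver = webdriver.Firefox(options=options)
try:
    for url in CONVERSATIONS:
        driver.get(url)
        time.sleep(5)  # crude wait for replies to render
        visible = MY_HANDLE.lower() in driver.page_source.lower()
        print(f"{url}: reply {'visible' if visible else 'NOT visible'} while logged out")
finally:
    driver.quit()
```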
@Noah670 A clear metric should be given for every user to determine this status for greater transparency.
In what way would a first party exposing such a metric NOT completely undermine the purpose of shadow banning for malicious accounts? Third-party methods that try to detect shadow banning off-platform are irrelevant to the core of this type of problem. Shadow banning is a moderation tool businesses can use to corral the efforts of potentially malicious accounts by letting them continue to operate as if they were having an effect. It is not a paradox that they are allowed to keep consuming the platform's resources instead of being fully banned to save resources; it's a tradeoff. If a malicious account knows it's shadow banned, it will keep creating new accounts, consuming even more resources with each full-ban cycle than if it had remained a shadow-banned account, ignorant that it's wasting its time.
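To make the claimed tradeoff concrete, here is a toy cost comparison with entirely invented numbers (not Twitter data); it only illustrates the shape of the argument, not its correctness.

```python
# Illustrative only: every number here is invented, not Twitter data. It compares
# serving one shadow-banned account vs. repeated full-ban cycles in which the
# attacker keeps registering replacement accounts.
COST_SERVE_ACCOUNT_PER_DAY = 0.01  # hypothetical cost to host one quiet account
COST_DETECT_AND_BAN = 1.00         # hypothetical cost of one detection + ban cycle
DAYS = 180

# Policy A: shadow ban once; the account keeps posting into the void.
shadow_ban_cost = COST_SERVE_ACCOUNT_PER_DAY * DAYS

# Policy B: full ban; assume the attacker re-registers every 10 days, so each
# cycle costs a new detection/ban plus serving the replacement account.
CYCLE_DAYS = 10
cycles = DAYS // CYCLE_DAYS
full_ban_cost = cycles * (COST_DETECT_AND_BAN + COST_SERVE_ACCOUNT_PER_DAY * CYCLE_DAYS)

print(f"shadow ban: {shadow_ban_cost:.2f}, repeated full bans: {full_ban_cost:.2f}")
```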
@PaulNewton
A malicious account knows 100% whether it's shadow banned, because it can simply query those shadow ban detection sites.
Sadly, however, an ordinary account that got mistakenly shadow banned most likely doesn't even know shadow banning exists.
Because, simply, even most people in China don't know shadow banning is a thing. It's so hypocritical that it's beyond an average person's imagination.
@rambda I already preempted this; read:
Third-party methods that try to detect shadow banning off-platform are irrelevant to the core of this type of problem.
The barrier of difficulty does not need to be removed just because potential attackers are assumed to have omniscient technical sophistication; not every malicious user is a coordinated troll farm with its own developers. That's how security works: you don't get rid of doors because some thieves have lockpicks, nor do you start handing out keys either.
Twitter does show some messages when they (almost always incorrectly) think you're misbehaving.
Several years ago I started a new account and put a few handles in the first tweet. They kept me from sending the tweet due to the # of handles.
More recently, I replied to someone with a link that had "idiotic" in it. That was a reference to a contest, not a person. Twitter showed me their "Most tweeters don't tweet things like this" screen, forcing me to tell them they're wrong and tweet it verbatim.
With shadowbanning and shadow ordering, Twitter is lying to millions of users each day. That's no way to run a company. Each and every day, Twitter treats millions of legitimate users as if they were spammers.
If Twitter were technically competent and opposed censorship, instead of doing that they'd evaluate the content of a user's tweets. I'm sure a technically competent company could identify outright commercial spam in almost all cases while not heavily impacting real users. Instead, Twitter bases that on their highly flawed algorithms and farms that out to unreliable parties. For an example of the latter, see DrPanMD. He's extremely pro-censorship and in one case he hid 99 out of the 100 replies to one of his tweets. Criticize him and he's likely to hide your reply or block you. Instead of holding that against him, Twitter holds his blocks against his critics.
Not only is Twitter technically incompetent, they have zero interest in open debate and protecting speech.
@PaulNewton
All your comments are quite sophisticated in vocabulary and grammar. They're hard for me to understand, since I don't speak English natively.
Let me try to break it up:
Third-party methods that try to detect shadow banning off-platform are irrelevant to the core of this type of problem.
I don't agree. It's relevant, because at least some malicious accounts use these detection sites.
The barrier of difficulty does not need to be removed just because potential attackers are assumed to have omniscient technical sophistication
So you mean, "we cannot assume EVERY malicious account uses these detection sites, so we cannot show shadow ban status"? I assume that's what you mean, because the next sentence is similar.
not every malicious user is a coordinated troll farm with its own developers.
True. To my understanding, there are two types of malicious users; I go through both below.
get rid of doors
I assume you mean getting rid of the shadow banning system and real-banning those accounts that meet the shadow ban standard.
handing out keys
I assume you mean to show shadow ban status.
For the first type of user, I think it's pretty clear they know whether they're shadow banned. I mean, I'm a very junior programmer: I opened Google and searched "why I cannot find my own tweets in search". I then learned it's a search ban, and that there is also a shadow ban. Then, naturally, I found out how to check my shadow ban status, because these detection sites are at the top when you search "twitter shadow ban". So of course developers who can automate hundreds of accounts know about these sites.
For the second type of user, you inform them that they were shadow banned. They either stop using Twitter, or watch their language and behave better to get out of the ban, which is exactly what we want.
Finally, I think you missed something. No one here is a malicious user, and no one in this issue had even been talking about malicious users. We don't have to talk about malicious users at all; this is not about them. We're talking about transparency and the user experience for well-behaved users.
It's pretty clear the truth is that Twitter lacks the technical ability to identify malicious users (because I'm still banned; check my tweets, I'm not a malicious user). So we can conclude that the search/shadow ban system not only helps Twitter block malicious users, but also damages the experience of well-behaved users.
If there is transparency, people who were wrongly shadow banned can know that they're banned and simply seek help from Twitter support. If there is no shadow ban at all, they can also seek help after getting real-banned.
Indeed, according to Twitter's user agreement, Twitter has the right to weigh which one is more important.
From Twitter's standpoint, the former is more important, so supporting this issue does them no good. Which is also your position.
However, I think that for the small group of well-behaved users who have been banned by mistake, supporting this issue is clearly in our own interest.
So, from my point of view, it's very reasonable to support this issue.
Imagine you're not a tech type of person. You never thought of using private browsing to see whether your tweets could be found (but you can find tons of ads). Only after using Twitter for a whole week (or possibly longer; I've personally been search banned for a week now and occasionally shadow banned) while no one reacts to your tweets do you finally find out that you have been search banned. Do you think this banning algorithm should really run without any transparency...
Edit: fixed typos.
Usually this means others did not tweet that link. The comment with "idiotic" or stronger words just gets hidden.
So, you claim that if I tweeted any of the thousands of results from the search "site:nytimes.com idiotic", it would just be hidden?
That is impossible. Failure to send the tweet always just means a bad network connection.
You're welcome to prove me wrong: post a brand new account where the first tweet contains multiple handles. If you can do that, they've changed the rules since when I tried it ("several years ago").
There is a button to show hidden tweets.
They only recently added it or showed it to my accounts. It also doesn't have a label or a mouseover popup.
When logged in, you can see the button on DrPanMD/status/1602004039385874436. Agree or disagree with the hidden replies, can anyone justify someone like Pan being able to impact the visibility of his opponents' other tweets? A blue check blocking or hiding your tweets impacts your replies to others. What other powers would Twitter give Pan if they could?
@rambda Finally, I think you missed something. No one here is a malicious user.
Personal special interests are irrelevant; they only serve to undermine moderation tools.
"Shadow banning", rank deboosting, etc are tools to prevent or limit the growth of bad-faith actors.
For any proposal to be an improvement, it has to recognize why the tool exists as it does.
Ignoring that for innocent personal motives is a defective circular argument.
Ignoring that for maliciousness is an attack vector.
Do you think this banning algorithm should really run without any transparency.
A tool for protection should not make itself useless because some users do X, or need Y.
Public transparency about "shadow banning" not being used has been addressed by twitter. https://blog.twitter.com/en_us/topics/company/2018/Setting-the-record-straight-on-shadow-banning
Individual transparency requires actual proposals of technical changes, not wishful philosophical complaints.
So, from my point of view, it's very reasonable to support this issue. Imagine, if you're not the tech type person
To support transparency effectively you cannot ignore why these systems are currently opaque.
Notice how none of the comments in support of undermining the ranking system have anything of implementable value; not even code references or a mention of the ranking system. It's all anecdotes motivated by special treatment, leading to circular arguments.
@PaulNewton
Thank you for your response. I'd like to clarify some points and express my views on the matter.
Sometimes I use complex words because Google Translate suggests them, but that doesn't mean I'm proficient in English or familiar with writing techniques. I'm having a really hard time reading what you're saying. I must ask you again: could you please write English in a way that's easier for a non-native speaker to understand?
Also, I think we'd better discuss this line by line, so you don't miss my point.
If we can't make shadow banning more transparent or remove it entirely, we should at least improve the transparency of search banning.
Still, let me try to understand your statements line by line.
Personal special interests are irrelevant; they only serve to undermine moderation tools.
3.1. "Personal" is inaccurate. The people being search/shadow banned are not any single person. If you search for "search ban" in Twitter search, you can find, let me conservatively estimate, thousands of Twitter users who have been search banned and later unbanned. At the same time, there are also many Twitter users who are search banned right now.
Perhaps in your moral philosophy, a minority group of thousands of people still does not escape the definition of "personal" from a utilitarian perspective. But in my opinion, the interests of thousands of people are important enough. I tend not to agree with sacrificing the rights of a very small group to protect the experience of the majority, especially when I'm part of that group. You tried to prove that my rights (and some other people's) not to get banned are less important than some greater good (stopping malicious accounts). I'm not interested in that.
3.2. "Special" is inaccurate. According to this, Twitter claimed, "We may suspend accounts that violate the Twitter Rules." I think the right of well-behaved users not to get banned is very natural. It's not special.
they only serve to undermine moderation tools.
Do you mean moderation tools are more important than anyone's right not to get banned here? If that's what you mean, you were repeating your earlier statement.
See my last comment:
Indeed, according to Twitter's user agreement, Twitter has the right to weigh which one is more important:
- to block malicious users to protect the experience of most users and, in the meantime, to obey the orders of the US government (which is good).
- to maintain the experience of a very small number of normal users who have been wrongly banned. From Twitter's standpoint, the former is more important, so supporting this issue does them no good. Which is also your position.
As you can see, I already knew your point, and I agree with it. But I don't think your point of view is more correct or better than mine. I think our views are parallel and equal. They are two different moral-philosophical viewpoints, two different political viewpoints; objectively there is no right or wrong.
Just leave it to Twitter's technical staff and management to decide whose opinion they care more about (obviously not mine).
"Shadow banning", rank deboosting, etc are tools to prevent or limit the growth of bad-faith actors. For any proposal to be an improvement is has to recognize why the tool exists as it does. and To support transparency effectively you cannot ignore why these systems are currently opaque.
- Yes, I know what they're for. I think everyone here knows this fact. See my last comment: to block malicious users to protect the experience of most users, in the meantime, to obey to the orders of the US government (which is good). This proves I knew it.
I didn't ignore it. I know why it's currently opaque, it's because you have said these and I agreed:
Shadow banning is a moderation tool businesses can use to coral the efforts of potentially malicious accounts by making them continue to operate as if they are having an effect. It is not a paradox that they are allowed to continue to use the platforms resources versus a complete ban to save resources, it's a tradeoff. If a malicious account knows it's shadow banned it will make more new accounts using even more resources each time a full-ban happens using more service resources than if that account had continued being shadow-banned account ignorant it's wasting it's time.
Even though I know its function, I still think this system fails both theoretically and factually. (See below.)
Ignoring that for innocent personal motives is a defective circular argument. Ignoring that for maliciousness is an attack vector. It's all anecdotes motivated by special treatment, leading to circular arguments.
- Again, it's not personal. It's a group of people. Just search for "search ban".
- Circular arguments. Did you mean my point is "To prevent myself from getting banned, I need transparency; to protect myself from malicious users, I have to accept the search banning system", and that because these two points conflict, they're circular? If that's what you mean, that's really not my point, because:
I don't think shadow banning is that effective at stopping malicious users. See this:
For the first type of user, I think it's pretty clear they know whether they're shadow banned. I mean, I'm a very junior programmer: I opened Google and searched "why I cannot find my own tweets in search". I then learned it's a search ban, and that there is also a shadow ban. Then, naturally, I found out how to check my shadow ban status, because these detection sites are at the top when you search "twitter shadow ban". So of course developers who can automate hundreds of accounts know about these sites. For the second type of user, you inform them that they were shadow banned. They either stop using Twitter, or watch their language and behave better to get out of the ban, which is exactly what we want.
Based on this, I think the shadow ban system fails both theoretically and factually; transparency plus a real ban is better.
In fact, a pure real-ban system can still block a considerable number of malicious users. Let me give some examples:
At least, I really can't find an example where a search ban makes sense. Also, in fact, I still see tons of ad spammers under many tweets on Twitter. Every other platform has fewer ads.
Removing the shadow ban will not increase those spammers that much and will not fool those abusers for that long, but it will save thousands of wrongly banned users.
I reject this kind of protection from Twitter. I'd rather see a few more ads and accept a slightly higher chance of getting abused than have my tweets be completely unfindable in search. Anyone who feels the same will also 👍 this issue. Since we are giving up that part of the protection ourselves, I don't consider this a circular argument.
Public transparency about "shadow banning" not being used has been addressed by twitter. https://blog.twitter.com/en_us/topics/company/2018/Setting-the-record-straight-on-shadow-banning
I think this doesn't matter here. The two authors didn't know at the time that too many people were getting search/shadow banned wrongly. They also didn't know Twitter would become source-available in the future. If they had known, they would probably have answered this issue directly, but there is no point in making assumptions like that either.
Individual transparency requires actual proposals of technical changes, not wishful philosophical complaints.
Yes, it would certainly be better if we improved Twitter's search banning algorithm so that I don't get banned by mistake, but I'm not capable of doing that. I don't have that kind of technical ability. Also, I'm pretty sure this part of the algorithm was not open sourced, so we cannot do that at this moment.
Given the limits of my abilities, I endorse the solutions that I think will work. If Twitter thinks this proposal is useless, or contrary to their interests, just have some official staff member comment "We will not adopt this proposal" and close this issue. That is the customary practice of every GitHub repo. Then I won't talk about this on GitHub again, as I shouldn't.
@PaulNewton
If you'd clicked my profile you'd see that I'm the author of an alternative ranking algorithm. It's in my open source Twitter censorship checker right here on Github (for now it requires a special build of Selenium but I can post the latest version + instructions on request).
The reports (linked from the Github page) show how extensive Twitter censorship is and that it hasn't changed much.
For instance, in 2019, Vijaya was censoring about half the replies to the president of Iran. As of Feb 2023, Musk was censoring about half the replies to the head ayatollah of Iran.
The reports also compare my ranking algorithm with Twitter's. Where they elevate things like childish GIFs, I elevate tweets with long words. Twitter censors people based on others blocking them; I know from years of leaving comments that that's a recipe for disaster.
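As a rough illustration of that kind of reranking (this is not the actual more-speech code, just a guess at the general idea of scoring replies by word length rather than engagement):

```python
# Toy reranker: score each reply by average word length, a crude proxy for
# "substantive text over GIFs and one-word reactions". Purely illustrative,
# not the actual more-speech implementation.
def score(reply_text: str) -> float:
    words = [w for w in reply_text.split() if w.isalpha()]
    if not words:
        return 0.0  # media-only or emoji-only replies rank last
    return sum(len(w) for w in words) / len(words)

replies = [
    "lol",
    "Good grief.",
    "The municipal procurement records contradict that claim entirely.",
]

for text in sorted(replies, key=score, reverse=True):
    print(f"{score(text):.2f}  {text}")
```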
For quick, tangible examples of how flawed the Twitter algorithm is, see the link. Twitter thinks "good grief" is an abusive swear word.
@TolstoyDotCom It's in my open source Twitter censorship checker(https://github.com/TolstoyDotCom/more-speech)
Conflating deranking with censorship and shadow-banning just undermines any attempts at change.
This is really stretching the idea of shadow banning to fit an agenda by ignoring content ranking. Freedom of speech, not freedom of reach. Just because someone has posted something doesn't mean the rest of us need to read whatever some want to deem "high quality". Not everything gets to be on the front page, because of literal conceptual limits and resource usage. And endless scrolling is an anti-pattern, not a solution to that problem.
For a tangible example, consider this from GovNedLamont/status/1643011987356393472 .... Whether you agree with the last tweet in the image or not, you have to admit that it's higher quality than the tweets above.
This injects a lot of personal opinion and assumes others must agree with a political agenda of what counts as "higher quality". Meanwhile the ranking did its job and deranked an off-topic reply trying to hijack a conversation for political motives; ranking working as intended, even if it works like an annoying debate lord that seems to be against "accountability". Because:
Where they elevate things like childish GIFs, I elevate tweets with long words
high effort != high quality; that's as flawed as SEO that ranked only on keywords.
Then the second example insinuates the replier is a bot yet shows nothing being censored. You're trying to prove a negative with irrelevant examples.
Then there's the goat story example:
And yet the ranking reflects people engaging with the goat problem; ranking is working fine, even though it may seem contradictory with the sports wagering tweet.
Twitter thinks "good grief" is an abusive swear word. No it doesn't; again, conflation. It literally says "Show additional replies, including those that may contain offensive content". That means that, of all the additional replies, some MAY contain offensive content; it's not that ALL additional replies are offensive.
I get the sentiment being applied here, but this ain't it; there are fundamental flaws in the argument, bad-faith conflations, and really, really bad opening examples that should be addressed.
@ValZapod did you mean pull request #660, "Limit penalization on blocks / mutes for a cooldown of 180 days. Fix #658" (https://github.com/twitter/the-algorithm/pull/660), and issue #658 (https://github.com/twitter/the-algorithm/issues/658)?
Or another pull request? I searched but couldn't find anything more relevant. Or did you just mean they should make a pull request for what they want 😏.
I came across this file
So my account must have triggered some of these safety labels: DoNotAmplify, CoordinatedHarmfulActivityHighRecall, UntrustedUrl, MisleadingHighRecall, NsfwHighPrecision, NsfwHighRecall, CivicIntegrityMisinfo, MedicalMisinfo, GenericMisinfo, DmcaWithheld, HatefulHighRecall, ViolenceHighRecall, HighToxicityModelScore.
As I understand it, does this mean there is some kind of machine learning model that detects the behavior that triggers these labels?
Anyway, I'm 100% sure I didn't trigger any of these. My tweets are 100% healthy (use Google Translate; they're just plain and simple Chinese). But I'm still banned now.
Does this mean search banning is like blocks/mutes, with an extremely long penalty time? Or is one of my tweets still mislabeled by some machine model as harmful speech?
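For what it's worth, here is a conceptual sketch of how labels like these could gate search visibility; the label names come from the file mentioned above, but the mapping to actions is my assumption, not the repo's actual logic.

```python
# Conceptual sketch only: the actions below are guesses at how per-user or
# per-tweet safety labels might translate into visibility restrictions.
from enum import Enum

class Action(Enum):
    SHOW = "show"
    DROP_FROM_SEARCH = "drop_from_search"        # roughly a "search ban"
    DROP_FROM_REPLIES_AND_SEARCH = "drop_all"    # closer to a "shadow ban"

# Hypothetical mapping; the actual rules are not reproduced here.
LABEL_ACTIONS = {
    "DoNotAmplify": Action.DROP_FROM_SEARCH,
    "NsfwHighRecall": Action.DROP_FROM_SEARCH,
    "HighToxicityModelScore": Action.DROP_FROM_REPLIES_AND_SEARCH,
    "CoordinatedHarmfulActivityHighRecall": Action.DROP_FROM_REPLIES_AND_SEARCH,
}

def visibility(labels: set[str]) -> Action:
    # The most restrictive triggered action wins.
    actions = [LABEL_ACTIONS.get(label, Action.SHOW) for label in labels]
    if Action.DROP_FROM_REPLIES_AND_SEARCH in actions:
        return Action.DROP_FROM_REPLIES_AND_SEARCH
    if Action.DROP_FROM_SEARCH in actions:
        return Action.DROP_FROM_SEARCH
    return Action.SHOW

print(visibility({"NsfwHighRecall"}))  # Action.DROP_FROM_SEARCH
```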
So, based on your experience, if I want to report this kind of wrongful search ban, how should I open an issue?
Also, why has no one mentioned search bans except me? Is it because those who are search banned are mainly Chinese and Japanese users (who don't speak English)? I really don't like to talk about politics on GitHub, but why are accounts using 汉字 (Chinese/Japanese characters) more likely to be search banned? Shouldn't Russian accounts be more likely to be search banned? I don't want to imply political issues here, but I'm terribly confused. To be honest, I'm an ordinary person with a deep lack of technical knowledge and political sensitivity, a stupid person. Can some smart person please tell me what to do to get rid of the search ban?
@PaulNewton
Shadow banning is a moderation tool businesses can use to corral the efforts of potentially malicious accounts by letting them continue to operate as if they were having an effect. It is not a paradox that they are allowed to keep consuming the platform's resources instead of being fully banned to save resources; it's a tradeoff. If a malicious account knows it's shadow banned, it will keep creating new accounts, consuming even more resources with each full-ban cycle than if it had remained a shadow-banned account, ignorant that it's wasting its time.
In other words: allowing people accused of a crime access to a lawyer in a court of law would undermine crime fighting, since a fair trial for each accused person comes at a cost to the country. Rather than trying suspects in a court of law, it is more efficient to simply convict all suspects without giving them the right of defense. Many innocents will be arrested in the process, but this is irrelevant to you, since the important thing is that criminals will be arrested at a small cost. Fair trials in a court of law are a waste of time.
The barrier of difficulty does not need to be removed just because potential attackers are assumed to have omniscient technical sophistication; not every malicious user is a coordinated troll farm with its own developers. That's how security works: you don't get rid of doors because some thieves have lockpicks, nor do you start handing out keys either.
You don't get rid of a system without the right of defense in favor of a system with the right of defense, because that would be like getting rid of a lock. The personal interests of innocent people trapped in this system are irrelevant and only serve to get in the way of fighting crime.
Except that violating the principle of the presumption of innocence is a crime in itself, and a violation of human rights.
Ignoring that for innocent personal motives is a defective circular argument. Ignoring that for maliciousness is an attack vector.
To ignore that it is easier to authorize the police to shoot anyone holding a banana than to train the police properly is to be naive. Allowing people with bananas in hand to walk freely is an attack vector.
A tool for protection should not make itself useless because some users do X, or need Y.
The police should not be made useless just because some innocent citizens don't want to be killed by the police without actually committing a crime.
Conclusion: People demand compliance with the presumption of innocence and their right to defense as set out in human rights.
Even if it increases the attack surface and the number of breaches. No breach should be used as a pretext to treat humans as false positives and ignore collateral damage. Punishing innocent people is unacceptable and if it occurs, it must produce redress. Restricting the right of defense is unthinkable in all scenarios.
@ValZapod
No defense is allowed in FISC, FISCR and grand jury part of lower courts.
I don't know what FISC is, but violation of the right of defense is a violation of human rights.
In my country, if a murderer has his right to defense violated, his sentence is annulled and the accused receives compensation. Even if the accused is guilty. Even poor people are given the right to a state-funded lawyer. My country takes the right of defense seriously.
From here we get this impression that the US does not uphold human rights in certain circumstances. I'm sorry that US citizens have to go through this.
Does changing the username lead to getting shadowbanned?
@rambda is correct, shadowbans are beyond the average person's imagination.
I just wrote an article about this called The Internet's Red Army.
The state of affairs is much worse than even this thread reveals. Every platform does this, and not just to whole accounts, but also to individual comments. So you might receive interaction on some content while other commentary appears to fall flat. Such lonely, childless comments may truly have been uninteresting, or they may have been shadow moderated.
Re the substack discussion in comments at the last link, I got a blog at Medium suspended. I complained and they admitted it was a mistake within a few days or so. I don't know if it was an honest mistake or they just decided not to put up a fight. Due to censorship elsewhere, it's not like I have a following there anyway.
Twitter suspended one of my accounts for four months before finally admitting they made a mistake.
As I described in comments on another issue, I created a Change dot org petition. The cowards didn't respond to emails sent to their general address and their CEO and they didn't respond to phone calls.
I was banned from DailyKos years ago for calling him on claiming a pic was from a major news site when it was actually from some obscure blog. For balance, I was banned from American Thinker for criticizing them on my own site. Not to mention dozens of other bans just for showing sites wrong.
What's scary is how little support there is for open debate. The great majority of those who discuss politics online are very eager for authoritarianism and tend to have little respect for the Constitution.
@TolstoyDotCom you're describing bans you were notified about, which should be regarded as a good thing as compared to the vast amounts of secret suppression that happens.
This YouTube video shows what I'm talking about. There is no notification and the content still appears to you as if it hasn't been removed.
I have been banned on Twitter for 3 months and no one from support or the owner gets back to you about it. I was given no warning or notification or anything. A status of the account would be great.
To be clear, I'm very very familiar with Twitter censorship. I wrote a censorship checker that browses Twitter using a regular web browser and compiles data showing how your replies are censored, or how Twitter censors replies to specific users. See the reports. Despite discussing that with "reporters" from Buzzfeed, USA Today, etc and even buying ads, no one gives a darn that Twitter heavily censors replies to those like the head ayatollah of Iran.
I've also repeatedly been put in gitmo. One account was completely shadowbanned, yet Twitter kept showing me my tweets as if they were right there at the top of the lists.
I have some experience with Reddit shadowbans. For instance, I was recently banned from r/Twitter for repeatedly showing them wrong (and, as with others, their coward mods refused to reply to me even tho one of the places I bought ads for my checker was their subreddit). Reddit still shows me my replies as if I wasn't banned. It's even less honest than Twitter: whereas Twitter will tell you someone banned you, Reddit doesn't. There's just no Post button, etc.
As for Lisag123, I have no solution other than to create a new account. I switch back and forth between my accounts, using one until it's shadowbanned and then switching to the next; rinse and repeat.
Thanks Tolstoy. Yes, I will most likely have to create a new account, but it is bound to be shadow banned as well. I gained 16k followers over the past year and losing motivation for using the site. I am on other social media sites and haven't been reprimanded before. I do talk about politics but I am very careful not to use any derogatory language or be offensive. However, I was vocal about disagreeing with the new management and that's when I received a shadow ban that never went away. I also pay for a blue check which makes me a clown. I read something about tweeting less than 10 tweets a day and staying off twitter for a week. Neither of these approaches work. There seems to be shadow bans that are removed after time then there are other accounts where the shadow ban is permanent. I would not have minded if this happened on other platforms that don't tout freedom of speech.
I simply replied to a deboosted (or whatever you want to call it when you have to click "show more replies") comment and instantly got the same treatment for doing so. I was on a path to paying for a subscription until this. It's irrevocably broken, and/or you've got sleepers still working there. Start a complete rewrite.
I reply to those people all the time. How am I supposed to know not to reply to them? Are we sure engineers still work there? It may all be run by AI/automation at this point. I stopped going on. Who wants to be on a social media site where you get shadow banned for no reason and then have no support team?
Who wants to be on a social media site where you get shadow banned for no reason and then have no support team?
They could have a support team the size of a government and it would make no difference. The effect of a shadowban is you don't know to ask for support.
Plus, all platforms do this. So one reason to use social media is to share that this practice continues.
True, but it hasn't happened to me on other sites. My posts are relatively plain. I'm deleting all my tweets, replies, and retweets over the next month and then going to post inspirational quotes randomly or something. Only specific accounts are lifted up these days, especially the large content creators, so being on there doesn't feel organic or social anymore.
True, but it hasn't happened to me on other sites.
How can you be certain? The nature of this style of moderation is that you are not told, and it actively hides itself from you. I've recorded thousands of instances of people who have no idea this happens. Just look at what they say:
It happens on Facebook, Truth Social, TikTok etc. There is little refuge. You must be an expert to know, and even then you cannot be certain.
FYI, my checker (linked above) has extensive info on what "show more replies" means; there are four levels of tweets.
It should be intensely easy to force changes at Twitter. They censor millions of users each day and if those users knew that they'd send a message to Twitter by not tweeting.
I've tried for years to get someone with a megaphone to let those censored users know what Twitter does but I can't find anyone. To make it worse, GOP/MAGA scumballs like Parscale, Elder, etc pretend they're the only ones Twitter censors. Twitter loves them doing that.
To make it worse, GOP/MAGA scumballs like Parscale, Elder, etc pretend they're the only ones Twitter censors.
Certainly conservatives are not the only ones getting shadowbanned. @rambda is an example of that, and so is this Washington Post article from December: Shadowbanning is real: Here’s how you end up muted by social media
It should be intensely easy to force changes at Twitter.
Maybe, but you would need to tell a good story. You can no longer easily show users their shadowbanned content directly. As you know, as of earlier this year, when you view Tweets while logged out, the replies do not appear. So tools like yours require a second account to check the status of replies. Yet users would have difficulty following your app's install steps, let alone creating a second account. So to share shadowbans with users, you would need another approach, like telling stories about people who have been shadowbanned, like the WP article did.
However, Twitter's change means Twitter is now less of a space for public conversations, and there is a huge opportunity for a newcomer to fill that gap. That gap may be part of what motivated Meta's Threads. I expect Threads also shadowbans, but it will be easier to show shadowbans to users on Threads than on Twitter, since Threads still publicly shows replies.
When a social media platform's content is more publicly visible, it becomes a double-edged sword. Meta's Threads may get more views and thus more users, yet it will eventually also face more scrutiny than Twitter's less public offering. So the catch is that you can't be the most popular and go unquestioned; the two are diametrically opposed.