twitter / the-algorithm

Source code for Twitter's Recommendation Algorithm
https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm
GNU Affero General Public License v3.0
62.05k stars 12.14k forks

Excessively penalizing accounts for being blocked or reported #658

Open setlightlyupon opened 1 year ago

setlightlyupon commented 1 year ago

Either you've experienced it or you're very lucky. Sometimes, even during civil conversation, someone will be in a bad mood and suddenly block you - even someone who you've regularly interacted with!

Please reduce the penalty for this. Also, shadowbanning and/or reducing account "reach" silently, without transparency, is wrong (but much appreciation for this major step towards transparency). Please show us exactly what we've done to be deboosted. That way, we can correct the behavior and remain allowed into the public square.

Please indicate which of our posts have been reported so that we can correct the behavior.

It's truly frustrating (and Orwellian) when a previously well-functioning account is suddenly extremely throttled. Please increase the speed at which an account can recover after being sent to the e-gulag.

<3 Musk and the new twitter. You're the good guys, and we all know it! You're saving democracy and so much more. Free speech is the bedrock of everything. "Freedom of speech, not freedom of reach" is the antithesis of free speech, however.

syrusakbary commented 1 year ago

I actually thought of this as well, and was in the process of fixing it.

I think we should stop penalizing blocks and mutes long-term, and we can do that by establishing a windowed count limit for blocks and mutes.

Xpenzz commented 1 year ago

People that spam blocks & mutes should be penalized, not the other way around.

setlightlyupon commented 1 year ago

People that spam blocks & mutes should be penalized, not the other way around.

Agreed! The current system means that someone aware of this, with flexible morals, can just block all their political enemies.

khatharr commented 1 year ago

Please reduce the penalty for this.

Sorry, but can you point to where in the source this "penalty" is issued?

Sqaaakoi commented 1 year ago

People that spam blocks & mutes should be penalized, not the other way around.

obviously you are not a minority in any form

goonette commented 1 year ago

crazy

darkdevildeath commented 1 year ago

I believe that there should, in principle, be a credibility scale for accounts. The credibility level should be visible at least to the account's owner. The scale could consider:

  1. Account verification
  2. Positive vs negative engagement
  3. Whether the account is new or old
  4. Number of blocks and mutes
  5. Number of reports received
  6. Confirmed phone number
  7. Activity in Community Notes

This scale should be used in various mechanisms of Twitter, including the application of penalties for blocking and muting. Accounts with a higher score on the scale have a greater impact on a user's reach when that user is blocked. New accounts, without verification, with few followers, few likes, many reports, and little reputation in Community Notes would have almost no impact on the penalization of third parties.

PaulNewton commented 1 year ago

Sorry, but can you point to where in the source this "penalty" is issued?

From the changed files in PR #660: https://github.com/twitter/the-algorithm/pull/660/files#diff-026dfe956965210a79449841d84abe9fb9a4fd63ffc11f288f932ca3a8b6cc0fR53

Then the rubber starts meeting the road around L122, at negativeFeatures.saveAsCustomOutput: https://github.com/twitter/the-algorithm/blob/ac1aa2a720170fcffe6450fa4995be33ca20b92f/src/scala/com/twitter/interaction_graph/scio/agg_negative/InteractionGraphNegativeJob.scala#L122

and around ~L109 in the original file: https://github.com/twitter/the-algorithm/blob/main/src/scala/com/twitter/interaction_graph/scio/agg_negative/InteractionGraphNegativeJob.scala#L109
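For intuition, the job linked above aggregates negative interactions (blocks, mutes, reports, etc.) into per-user-pair edge features. A rough, self-contained sketch of that kind of aggregation follows; the real job is a Scio/Beam pipeline, and the case class, weights, and single combined score here are invented for illustration only:

```scala
// Illustrative sketch only: NegativeEdge, the weights, and score() are
// assumptions for this example, not the repo's actual types or values.
final case class NegativeEdge(src: Long, dst: Long, blocks: Int, mutes: Int, reports: Int)

object NegativeScoreSketch {
  // Hypothetical weights; the real job emits multiple features, not one score.
  val BlockWeight: Double = 1.0
  val MuteWeight: Double = 0.5
  val ReportWeight: Double = 2.0

  /** Collapse the raw negative interactions on one edge into a single score. */
  def score(e: NegativeEdge): Double =
    e.blocks * BlockWeight + e.mutes * MuteWeight + e.reports * ReportWeight

  /** Group events per (src, dst) pair and sum, mimicking the pipeline's aggregation. */
  def aggregate(events: Seq[NegativeEdge]): Map[(Long, Long), Double] =
    events.groupBy(e => (e.src, e.dst)).view.mapValues(_.map(score).sum).toMap
}
```

The point of contention in this thread is essentially how (and for how long) scores like these feed into downstream ranking.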

PaulNewton commented 1 year ago

I believe that there should be a credibility scale for accounts in principle

Same, though I'd think it needs to be clearly contrasted with a de facto "social credit" score, because that's the kind of semantics people like to latch onto for tools like this (i.e. conflating a low ranking with being banned). Though they would no longer be wrong if other factors slowly crept in over time, such as social status, class, sex, religion, regionality, etc.

Alternative packaging: Curation (curation scale, score, or skill). "Credibility" is already socially ambiguous, more so once you mix words like positive or negative into its measurement. One can be perceived negatively and still have credibility, and vice versa.

Accounts with a higher score on the scale have a greater impact on a user's reach when that user is blocked.

That is just another system of abuse in an attempt to prevent abuse, one that guarantees positions of power against dissent. A minimum fix would be limiting the blocked user's reach within the higher-scored poster's immediate reach/sphere for a time, but that's still somewhat vague. While blocked users shouldn't be able to platform themselves through abuse, platformed users definitely shouldn't be able to build moats of abuse.

Number of blocks and mutes

Just to clarify: an account should not be penalized for how many other accounts it blocks itself, only for how many other accounts have blocked it.

Order suggestion, splitting into categories:

Verification metrics (real world)

  1. Account verification
  2. Confirmed phone number
  3. Whether the account is new or old

Behavior metrics (usage)

  4. Activity in Community Notes
  5. Positive vs. negative engagement with the account, by others and by followers
  6. Positive vs. negative engagement with the accounts of others, by followers

Penalties

  1. Number of blocks and mutes
  2. Number of reports received
  3. Cool-off period since the peak counts of #1 and #2
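The categories and cool-off above could be sketched as one score function. Every field name, weight, and the exponential-decay form of the cool-off below are assumptions made up for this illustration, not anything taken from the repo:

```scala
// Sketch of the proposed "curation score": verification and behavior metrics
// add to the score, penalties subtract, and penalties decay after their peak.
final case class AccountSignals(
    verified: Boolean,
    phoneConfirmed: Boolean,
    accountAgeDays: Int,
    communityNotesActivity: Int,
    blocksReceived: Int,
    reportsReceived: Int,
    daysSincePeakNegatives: Int)

object CurationScoreSketch {
  /** Exponential cool-off: penalty weight halves every halfLifeDays after the peak. */
  def coolOff(daysSincePeak: Int, halfLifeDays: Double = 30.0): Double =
    math.pow(0.5, daysSincePeak / halfLifeDays)

  def score(a: AccountSignals): Double = {
    // Verification metrics (real world): capped so age alone can't dominate.
    val verification =
      (if (a.verified) 2.0 else 0.0) +
        (if (a.phoneConfirmed) 1.0 else 0.0) +
        math.min(a.accountAgeDays / 365.0, 2.0)
    // Behavior metrics (usage).
    val behavior = math.min(a.communityNotesActivity * 0.1, 1.0)
    // Penalties, faded by the cool-off since the peak of blocks/reports.
    val penalties = (a.blocksReceived * 0.2 + a.reportsReceived * 0.5) *
      coolOff(a.daysSincePeakNegatives)
    verification + behavior - penalties
  }
}
```

With a decay like this, an account that stops accumulating blocks and reports automatically recovers its score over time, which addresses the "speed of recovery" complaint in the original issue.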