Alovoa / alovoa

Free and open-source dating platform that respects your privacy
https://alovoa.com
GNU Affero General Public License v3.0

snapchat, onlyfans, spam, bots, phishing, etc. #278

Open spbeach46 opened 1 year ago

spbeach46 commented 1 year ago

A robust way to prevent users from abusing the platform to promote paid "premium content" is sorely needed.

One thing that makes competitors' apps like Tinder so painful to use is having to sift through profiles that are very clearly marketing paid sexual content and "hookups" outside of the app. It hurts to see our open source alternative suffer the same fate. Alovoa is already inundated with these types of profiles. While allowing these profiles to exist gives the app an artificially inflated user base and might be good in the short term for onboarding, it ultimately hurts the project.

Giving users a built-in tool such as a regex filter to block profile names and bios that contain any sneaky variation of Snapchat and OnlyFans handles could be a decent start. Additionally, keeping an up-to-date, community-driven filter list in the repo would be cool.
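
A minimal sketch of the kind of filter I have in mind, assuming a hypothetical `ProfileFilter` class and a couple of illustrative patterns (nothing here exists in Alovoa today); the actual regexes would come from the user's own settings or a community list:

```java
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical, illustrative sketch of a user-configurable profile filter.
// The pattern strings would come from the user's own settings or from a
// community-maintained list; the ones in main() are examples only.
public class ProfileFilter {

    private final List<Pattern> patterns;

    public ProfileFilter(List<String> regexes) {
        this.patterns = regexes.stream()
                .map(r -> Pattern.compile(r, Pattern.CASE_INSENSITIVE))
                .toList();
    }

    // True if the profile name or bio matches any blocked pattern.
    public boolean shouldHide(String name, String bio) {
        String text = name + " " + bio;
        return patterns.stream().anyMatch(p -> p.matcher(text).find());
    }

    public static void main(String[] args) {
        ProfileFilter filter = new ProfileFilter(List.of(
                "s\\s*n\\s*a\\s*p(chat)?",  // "snap", "s n a p", "snapchat", ...
                "only\\s*fans?"
        ));
        System.out.println(filter.shouldHide("jane", "add my s n a p"));     // true
        System.out.println(filter.shouldHide("jane", "check my 0nly fans")); // false: the zero defeats the naive pattern
    }
}
```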

Nonononoki commented 1 year ago

Easier said than done. Even something simple like spam email domain detection is a lot of work. They can just write "sc" instead of Snapchat, which makes it nearly undetectable.

I have had several different ideas for combating these kinds of profiles.

  1. More tags - One for "I offer paid services" and one for "I want to see paid services". Those who offer such services but don't have that tag will theoretically be blocked more often (maybe).
  2. Block filter - Let the user filter out users that have been blocked by X other users. X can just be set by the user individually.
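
A minimal sketch of idea 2, assuming a simplified in-memory model with made-up `Profile` and `Block` records; in practice this would of course be a query against the real block table:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical, simplified sketch of the "block filter" idea: hide profiles
// that have been blocked by more than `threshold` (X) other users.
// Record and field names are illustrative, not Alovoa's actual model.
public class BlockCountFilter {

    record Profile(long id, String name) {}
    record Block(long blockerId, long blockedId) {}

    // Count how many users blocked each profile, then keep only the
    // profiles at or below the viewer's chosen threshold X.
    public static List<Profile> filter(List<Profile> candidates,
                                       List<Block> allBlocks,
                                       long threshold) {
        Map<Long, Long> blockCounts = allBlocks.stream()
                .collect(Collectors.groupingBy(Block::blockedId, Collectors.counting()));
        return candidates.stream()
                .filter(p -> blockCounts.getOrDefault(p.id(), 0L) <= threshold)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Profile> candidates = List.of(new Profile(1, "alice"), new Profile(2, "spambot"));
        List<Block> blocks = List.of(new Block(10, 2), new Block(11, 2), new Block(12, 2));
        // With X = 2, profile 2 (blocked by three users) is filtered out.
        System.out.println(filter(candidates, blocks, 2));
    }
}
```
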
spbeach46 commented 1 year ago

It is indeed a challenging problem. Any semi-effective patch would set this app far above competitors. While brute-force server-side detection would be nice, I do think that route might prove very difficult for a number of reasons. It would likely require training machine learning models on user-flagged profiles or something heavy like that.

I do like the idea of disincentivizing abusers by allowing real users to filter and block profiles by some means. I'm still convinced that letting individuals apply their own regex filters to bios and usernames, in addition to maintaining a filter list in the repo sorted by efficacy (based on a voting system), would do wonders. I'm not sure how difficult that is to implement, however, and I certainly don't take such work for granted.
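
To make the community list idea a bit more concrete, here is a hedged sketch of a loader for a hypothetical `filters.txt` kept in the repo, where each line carries a vote count and a regex; the file name, format, and class are all assumptions for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical loader for a community-maintained filter list kept in the
// repo. Assumed line format: "<votes><TAB><regex>", e.g. "142    only\s*fans?".
// Entries are sorted by descending vote count so the patterns the community
// found most effective are applied first.
public class CommunityFilterList {

    record Entry(int votes, Pattern pattern) {}

    public static List<Entry> load(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .filter(line -> !line.isBlank() && !line.startsWith("#"))
                .map(line -> line.split("\t", 2))
                .map(parts -> new Entry(Integer.parseInt(parts[0].trim()),
                        Pattern.compile(parts[1].trim(), Pattern.CASE_INSENSITIVE)))
                .sorted(Comparator.comparingInt(Entry::votes).reversed())
                .toList();
    }
}
```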

Your suggestion of letting a user filter out profiles that others have blocked could also be a step in the right direction. One issue I see is that the reasons one user blocks another can be arbitrary and ambiguous. However, I have noticed that users do tend to block highly suspicious profiles, which is a good sign.

I'm not sure more tags would work, since abusers simply don't play by the rules. They are already posing as real users, so I would not be surprised if none of them used the appropriate tags.

Any and all ideas regarding this issue are welcome.

ctlw83 commented 1 year ago

Yeah, I already reported at least one profile where it was provable that the picture/name had previously been used in catfishing scams.

ip6li commented 1 year ago

A theoretical approach regarding a machine learning solution: Fake Profile Detection Using Machine Learning Techniques. Obviously that is a really big project and should be developed outside of Alovoa. Some hints for detecting fake accounts:

jeffrey734 commented 10 months ago

I've often thought that it would be interesting to have a rating/review system based on the interactions we have had with a person. The exact criteria could be debated, but something like how talkative the person is, whether he/she offers or asks for sexual services, how respectful the person is, etc.

Sure, there's the problem of fake/robot profiles, but there are also a lot of bad users who are real people and active on these services. For example, if a guy's profile has been flagged multiple times for inappropriate requests, or a girl only promotes her OnlyFans page, it would be way easier to decide whether to give a like and start talking to that person.

The concept could be discussed and reworked maybe in a different post eventually.

BGebken commented 9 months ago

Combining the rating/review system with filtering by how many users have blocked a profile could greatly disincentivize "bad" behavior without having to predefine all the potential types of bad behavior. Imagine someone sending explicit photos as soon as they match, or an influencer sending links to paid content: if interactions are rated, such users would eventually only match with each other, which would disincentivize them from continuing to use the platform while improving the experience for everyone else.

This could "nudge" better behavior: users would rate interactions from 1 to 5 and recognize that if they display anti-social behavior, their scores are likely to go down and they might not get the opportunity to match with the people they are seeking.

Of course, there would probably need to be a time axis to the rating, to ensure that users don't keep creating new accounts to avoid ratings. One solution would be that users don't have a rating until they receive 10 ratings or some other minimum number, essentially borrowing from how the gaming industry does matchmaking and penalizes anti-social behaviors that ruin the experience for other users. https://www.researchgate.net/publication/260086948_Matchmaking_in_multi-player_on-line_games_Studying_user_traces_to_improve_the_user_experience
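
A rough sketch of how the two safeguards could fit together: no visible score until a minimum number of ratings, and older ratings weighted down so a score reflects recent behavior. All names and constants below are purely illustrative:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.OptionalDouble;

// Hypothetical sketch of the rating idea: interactions are rated 1-5, no
// score is shown until a minimum number of ratings is collected, and recent
// ratings carry more weight via an exponential decay. Names and constants
// are illustrative only.
public class InteractionScore {

    record Rating(int stars, Instant time) {}   // stars in 1..5

    private static final int MIN_RATINGS = 10;        // no score until this many ratings
    private static final double HALF_LIFE_DAYS = 90;  // a rating's weight halves every ~90 days

    // Returns empty until the user has enough ratings to show a score.
    public static OptionalDouble score(List<Rating> ratings, Instant now) {
        if (ratings.size() < MIN_RATINGS) {
            return OptionalDouble.empty();
        }
        double weightedSum = 0, weightTotal = 0;
        for (Rating r : ratings) {
            double ageDays = Duration.between(r.time(), now).toHours() / 24.0;
            double weight = Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
            weightedSum += weight * r.stars();
            weightTotal += weight;
        }
        return OptionalDouble.of(weightedSum / weightTotal);
    }
}
```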