elvirmuslic opened 4 years ago
This sounds incredibly useful and reasonable. I would love this feature and think this could be a great advantage.
Let's get started. What about the device emulating first? How could we get all necessary information out of our android phone to add to device.py?
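To get the fields off a real phone, one option is `adb shell getprop`, which dumps the device's build properties. Here's a minimal sketch that parses that output into a profile dict; the field names and the exact shape device.py expects are assumptions, so adjust them to match the file:

```python
# Sketch: pull device identity fields off a real Android phone via adb.
# The PROP_MAP keys are a guess at what device.py wants -- adjust as needed.
import subprocess

PROP_MAP = {
    "manufacturer": "ro.product.manufacturer",
    "model": "ro.product.model",
    "device": "ro.product.device",
    "android_version": "ro.build.version.sdk",
    "android_release": "ro.build.version.release",
}

def parse_getprop(output: str) -> dict:
    """Parse `adb shell getprop` output ([key]: [value] lines) into a dict."""
    props = {}
    for line in output.splitlines():
        if line.startswith("[") and "]: [" in line:
            key, _, value = line.partition("]: [")
            props[key.strip("[]")] = value.rstrip("]")
    return props

def device_profile(props: dict) -> dict:
    """Map raw build properties onto a device-profile dict."""
    return {field: props.get(prop, "") for field, prop in PROP_MAP.items()}

def fetch_profile() -> dict:
    """Run adb and build the profile (requires a connected device)."""
    raw = subprocess.run(["adb", "shell", "getprop"],
                        capture_output=True, text=True).stdout
    return device_profile(parse_getprop(raw))

# Usage (phone plugged in, USB debugging enabled):
#   print(fetch_profile())
```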
We need to figure out what is causing their system to recognize the bot. I've tried using different 'devices', i.e. changing the emulated device and even matching it to my own... still got blocked. That means either they can tell a device is fake or there's another main point of recognition. The device emulation you mentioned already exists, but it isn't sophisticated. One more worry of mine is that detection has something to do with the API itself, since every bot uses the same API in the same way. If we could build a proof of concept without the API that is currently being used (i.e. the Instagram API), we might be able to go on from there.
Based on how the API actually works, I don't reckon this is what's causing the problem.
On Instabot's Android client, where is the Instagram cookie actually stored? Could we pull it off our own devices and pass it to the bot?
We technically could, but that doesn't seem to be the issue: I've almost fully cloned my device's information onto the bot and it didn't help much. Later on, however, this would be much more useful, as the bot would inherit the so-called trust factor of our own legitimate devices.
For now, this doesn't seem to be the thing that exposes the bot.
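For reference, passing an existing session along usually just means sending the right cookie header. A minimal sketch, assuming the cookie is named `sessionid` as on instagram.com web (the user-agent string here is a hypothetical placeholder):

```python
# Sketch: present a session cookie copied from a real logged-in device
# instead of doing a fresh bot login. Cookie name assumes web behavior.
def auth_headers(sessionid: str, user_agent: str) -> dict:
    """Request headers that carry a pre-existing Instagram session cookie."""
    return {
        "Cookie": f"sessionid={sessionid}",
        # Should match the device the cookie came from (placeholder value)
        "User-Agent": user_agent,
    }

headers = auth_headers("PASTE-REAL-VALUE", "Instagram 100.0.0.0 Android")
```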
@elvirmuslic so you would first focus on simulating a real person's behavior, am I right? Or would it be more useful to implement a history-stats function to detect under which circumstances we get blocked? For example we could log: follower/following counts of our own account, the request that leads to the block, as well as a summary of the delays and total actions (likes/follows/comments) since starting the bot.
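The stats idea above could be sketched like this; the field names and structure are made up for illustration, not taken from the codebase:

```python
# Sketch of the history-stats idea: log account state and every action so
# blocks can later be correlated with delays and volumes.
import json
import time
from dataclasses import dataclass, field

@dataclass
class SessionStats:
    followers: int
    following: int
    actions: list = field(default_factory=list)

    def record(self, kind: str, delay: float, blocked: bool = False):
        """Log one action (like/follow/comment) with its preceding delay."""
        self.actions.append(
            {"t": time.time(), "kind": kind, "delay": delay, "blocked": blocked}
        )

    def summary(self) -> dict:
        """Aggregate view: totals per action kind and which action got blocked."""
        kinds = [a["kind"] for a in self.actions]
        return {
            "followers": self.followers,
            "following": self.following,
            "total_actions": len(self.actions),
            "per_kind": {k: kinds.count(k) for k in set(kinds)},
            "blocked_on": next(
                (a["kind"] for a in self.actions if a["blocked"]), None
            ),
        }

stats = SessionStats(followers=150, following=300)
stats.record("like", delay=12.0)
stats.record("follow", delay=30.0, blocked=True)
print(json.dumps(stats.summary()))
```

Dumping the summary as JSON after each run would let people share comparable block reports in this thread.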
Is this problem still open?
With further analysis I found out that this isn't the main issue, or the main way the bot is getting detected. There were calls to a logger that tracked everything: mouse movement, timing, mouse positions, misclicks, API calls.
Basically it comes down to this: Instagram tracks very low-level, primitive signals that bots don't replicate. No bot moves the mouse; it just clicks precisely on the center of the button. It isn't humanly possible to click four times in a row exactly in the middle of the target, and even then there is no mouse movement in between, just a JavaScript call that scrolls, for instance.
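To make the point concrete, here is a toy sketch of what "human-like" cursor input could look like: jittered waypoints with eased pacing, ending slightly off the geometric center. The parameters are invented; nothing here reflects what Instagram actually measures:

```python
# Sketch: generate a human-looking cursor path instead of teleport-clicking
# dead center. Purely illustrative; jitter/offset values are guesses.
import random

def human_path(start, end, steps=25, jitter=3.0):
    """Points from start toward end with per-step noise, ending near (not on) end."""
    x0, y0 = start
    # Aim slightly off the exact center, like a real click would land
    x1 = end[0] + random.uniform(-4, 4)
    y1 = end[1] + random.uniform(-4, 4)
    path = []
    for i in range(steps + 1):
        t = i / steps
        # Smoothstep easing: accelerate, then decelerate, not constant speed
        t = t * t * (3 - 2 * t)
        nx = 0.0 if i in (0, steps) else random.uniform(-jitter, jitter)
        ny = 0.0 if i in (0, steps) else random.uniform(-jitter, jitter)
        path.append((x0 + (x1 - x0) * t + nx, y0 + (y1 - y0) * t + ny))
    return path

pts = human_path((0, 0), (200, 100))
```

Replaying such paths would require driving a real input layer (browser or device), which the API-based bot doesn't have, so this mainly applies to web automation.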
I found many of the parameters behind Instagram's tracking, but it is too complex for me to understand and spoof. Hopefully someone can find a solution to this.
If you can share all those endpoints and how Instagram checks them, that would be very helpful.
I'm also getting the same problem with my automation library instauto.
I think @elvirmuslic is right. It sounds reasonably easy for Instagram to distinguish bots from humans by whether, and how, they move the mouse pointer and scroll.
If you go to Instagram web and watch the network tab while scrolling and clicking, you can see it makes a few different calls:
POST https://graph.instagram.com/logging_client_events
POST https://www.instagram.com/logging/falco
POST https://www.instagram.com/logging/arwing
POST https://www.instagram.com/ajax/bz
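A small helper for anyone reverse engineering this: flag requests to the telemetry endpoints listed above, e.g. to filter them in or out of a traffic capture. This only matches the four URLs from this thread, nothing official:

```python
# Sketch: classify a request URL as one of the client-side telemetry
# endpoints observed above, for filtering a traffic capture.
from urllib.parse import urlparse

TRACKING_PATHS = {
    ("graph.instagram.com", "/logging_client_events"),
    ("www.instagram.com", "/logging/falco"),
    ("www.instagram.com", "/logging/arwing"),
    ("www.instagram.com", "/ajax/bz"),
}

def is_tracking_call(url: str) -> bool:
    """True if the URL hits one of the known logging/telemetry endpoints."""
    u = urlparse(url)
    return (u.netloc, u.path) in TRACKING_PATHS

print(is_tracking_call("https://www.instagram.com/logging/falco"))  # True
```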
Alternatively, they may just be looking at the statistics of a particular device's actions. If they see mostly mass actions performed over long periods of time, it is unlikely to be a human, so they flag it.
Instauto kept giving "action blocked" when using Puppeteer browser automation. I tried deleting cookies, a different user agent, running from a different IP; still action blocked. However, using Chrome directly on my computer I am not getting action blocked.
I managed to get rid of the message by using this plugin with Puppeteer: https://github.com/berstend/puppeteer-extra/tree/master/packages/puppeteer-extra-plugin-stealth So it may indicate that Instagram detects bots by their "browser signature".
We use the API rather than a browser, so we cannot use this fix.
I was running on a different (but still residential) IP than the Wi-Fi that my phone with the Instagram app was connected to (in a different country). I consistently got action blocked on the first unfollow action every day, even when deleting cookies after each run and sleeping 24 hours after each action block. But after switching to the IP of the Wi-Fi my phone is on, it ran 150 actions overnight without any problem.
Since Instagram takes device, usage, and similar signals into account, it would be wise to implement a fake-action seeder, i.e. behave like a human for a period of time. We should also collect statistics on how many actions it takes before a profile gets blocked, so we can build a better strategy for how to proceed.
What do I mean? Run a human-like bot for a few days that just likes posts and so on, and only then start the actual bot actions such as mass following.
Why? Instagram notices if you log in from a new device and suddenly start mass liking/following. It takes a few days until Instagram recognizes and starts trusting the new device.
Technical: Instagram seems to keep a trust score for the device, among other things. Implement a feature, i.e. a script, that will roam around and do everyday tasks.
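The warm-up idea could be as simple as a per-day action cap that ramps up while the device accrues trust. All numbers below are guesses to illustrate the shape, not known-safe limits:

```python
# Sketch of a warm-up schedule for a new device: start small, ramp slowly.
# base/ceiling/doubling rate are illustrative guesses, not tested limits.
def daily_action_limit(day: int, base: int = 10, ceiling: int = 150) -> int:
    """Allowed actions on the Nth day with a new device (day 1 = first day)."""
    if day < 1:
        return 0
    # Roughly double every two days until hitting the ceiling
    return min(ceiling, base * 2 ** ((day - 1) // 2))

schedule = [daily_action_limit(d) for d in range(1, 11)]
# schedule -> [10, 10, 20, 20, 40, 40, 80, 80, 150, 150]
```

The bot would then pick only low-risk actions (likes, story views) while the cap is small and enable follows/unfollows once the cap reaches its ceiling.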
We could skip this step if we could emulate OUR own regularly used devices; the bot would then automatically inherit our trust factor.