
OWASP API Security Project
https://owasp.org/www-project-api-security/

2023RC API8 - Human Detection prevention recommendation - believe not viable #69

Closed: MrPRogers closed this issue 1 year ago

MrPRogers commented 1 year ago

In the "How to Prevent" section there is a bullet about human detection. I don't believe this prevention is a viable option for an API, because ultimately the API consumer is code, so CAPTCHA or biometric controls aren't going to work (they belong more at the UI level). Yes, you could use CAPTCHA etc. to better secure the UI web app / mobile app, but that doesn't make any underlying API(s) more secure; i.e. it should be in OWASP's web application recommendations instead of here.

Personally, I would recommend removing this bullet, as the following bullet about non-human patterns covers this in a way that can actually be implemented. As well as time-based patterns, you could also consider adding something about secondary checks / validations / monitoring, e.g. [linked to your 2 example scenarios] limiting (or alerting) based on billing address, or on the rate of referral crediting per account, etc.
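A rough Python sketch of what such a secondary check / alert might look like (the account model, threshold, and in-memory storage are purely illustrative assumptions, not a recommendation):

```python
# Illustrative sketch only: names, thresholds, and storage are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

REFERRAL_CREDIT_LIMIT_PER_DAY = 5  # hypothetical per-account threshold

_referral_credits = defaultdict(list)  # account_id -> list of credit timestamps


def alert_security_team(account_id: str, count: int) -> None:
    print(f"ALERT: account {account_id} credited {count} referrals in 24h")


def record_referral_credit(account_id: str, now: datetime | None = None) -> bool:
    """Record a referral credit; alert (or block) if the per-account daily rate looks suspicious."""
    now = now or datetime.utcnow()
    window_start = now - timedelta(days=1)
    events = [t for t in _referral_credits[account_id] if t > window_start]
    events.append(now)
    _referral_credits[account_id] = events
    if len(events) > REFERRAL_CREDIT_LIMIT_PER_DAY:
        alert_security_team(account_id, len(events))  # or deny, depending on policy
        return False
    return True
```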

ynvb commented 1 year ago

I second that and completely agree with the above. I would also add that (regardless of whether OWASP decides to remove the "human-based" detection or not), for the machine-interaction part of this section, the prevention suggestion sounds too vague. The bullet says: "Secure and limit access to APIs consumed directly by machines (such as developer and B2B APIs). They tend to be an easy target for attackers because they often don't implement all the required protection mechanisms."

So what precisely is the prevention recommendation here? How exactly should I limit my APIs? This specific category talks about automated attacks that are often very hard to correlate (using different IP addresses, different user agents, and so on). Therefore, the suggestion here is an empty one, since there is no natural way to limit the API to block this attack.

I don't like the "Human Detection" recommendations in the first part, both because of what was said in the comment above and also because "Human Detection" is a particular vendor field and points to a particular set of solutions. This can create a bias toward using this specific type of solution vs. other solutions that might also be applicable to block this kind of attack.

Regarding machine interactions, perhaps a better suggestion would be to try to automatically find deviations from the standard business logic / regular user flows that might indicate an attack on an API. This could also apply to human interaction, but it's simply the only applicable prevention method I can think of here; I would love to hear if anyone in the community has other ideas on how this can be prevented.
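One very simplified way to picture that kind of deviation detection, as a Python sketch (the baseline data, flow names, and threshold are all invented for illustration):

```python
# Sketch: flag clients whose per-flow request rate deviates strongly from a
# learned per-flow baseline. All data and names here are illustrative.
import statistics

# Hypothetical baseline: requests per hour per client, observed historically.
baseline = {"apply_coupon": [2, 1, 0, 3, 1, 2, 1], "create_user": [1, 0, 1, 1, 0]}


def is_anomalous(flow: str, observed_rate: float, z_threshold: float = 3.0) -> bool:
    """Return True if observed_rate is far outside the historical distribution."""
    history = baseline.get(flow)
    if not history or len(history) < 2:
        return False  # not enough data to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    z_score = (observed_rate - mean) / stdev
    return z_score > z_threshold


print(is_anomalous("apply_coupon", 120))  # True: 120 coupon calls/hour stands out
```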

LaurentCB commented 1 year ago

"there is no natural way to limit the API to block this attack"

Rate limiting is one natural way, but it is not sufficient on its own and must be combined with other approaches.

Besides, I'm not sure Human Detection, in the form of challenges, is a particular vendor field; IMO anyone can implement their own solution (given the time...).

But AI trained to solve that kind of challenge will surely lead to new threats in the near future, if it hasn't already.

inonshk commented 1 year ago

Thanks for the feedback, it is deeply appreciated.

The goal of this category is to address bot-related attacks like "scalping", "spamming", and others listed in the OWASP Automated Threats project. Based on our research, attackers tend to target APIs to perform these types of attacks.

The reality is that, unlike vulnerabilities such as "XSS" or "Injection", there is currently no single straightforward solution to this type of threat. We want to bring up this category to encourage security practitioners to think more in this direction. We don't have the perfect answers when it comes to "how to protect".

Even though not all APIs are consumed by humans, we see that attackers tend to exploit the business flows that are designed to be accessed by humans, such as "purchasing a product", "creating a new user", or "applying a coupon code". At the same time, we see that one of the common approaches to preventing excessive access to these business flows is using Human Detection types of patterns.

I acknowledge and agree that those patterns can be consumed by machines as well. Based on the above and your comments, I suggest the following:

I'll be happy to hear more from you about how you would suggest protecting against this type of attack.

LaurentCB commented 1 year ago

I totally agree with you @inonshk, except on the point that "rate limiting should be more strict to API that are consumed by machines".

I personally see rate limiting as a "safeguard" that has to be enabled on every API workflow that can be automated, regardless of its intended consumer (human or not, B2B or not, etc.).

As I pointed out in my last message, there are several services (some based on AI, which is developing at a fast pace) that enable script-based tools to complete challenges intended for human beings.

Thus rate limiting is, for me, the last fence ensuring that a particular node in our system won't fail even if it gets flooded, for instance by an advanced DDoS attack using a large botnet.

And of course API rate limiting should be implemented in layers, gradually narrowing toward a client-centric perspective (global RL > grouped-client RL based on IP range or similar > per-client RL, e.g. an IP + browser pair or similar).
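A toy Python illustration of that layered idea (global, then group, then per-client limits); the limits, key derivation, and in-memory counters are all made up for illustration, not a recipe:

```python
# Sketch only: limits, key functions, and storage are illustrative assumptions.
from collections import defaultdict
import time

WINDOW_SECONDS = 60
LIMITS = {"global": 10_000, "group": 1_000, "client": 100}  # per window, hypothetical

_counters = defaultdict(list)  # key -> recent request timestamps


def _hit(key: str, limit: int, now: float) -> bool:
    """Count a request against a key and report whether it is still under the limit."""
    recent = [t for t in _counters[key] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _counters[key] = recent
    return len(recent) <= limit


def allow_request(client_ip: str, user_agent: str) -> bool:
    """Apply the layers in order: global, then IP range, then IP + user-agent pair."""
    now = time.time()
    group_key = ".".join(client_ip.split(".")[:3])  # crude /24 "range" grouping
    client_key = f"{client_ip}|{user_agent}"
    return (_hit("global", LIMITS["global"], now)
            and _hit(f"group:{group_key}", LIMITS["group"], now)
            and _hit(f"client:{client_key}", LIMITS["client"], now))
```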

I'm no expert, just arguing. RL seems to me to be a very powerful tool that can be enforced in different ways, many of which can be combined. And, again in my view, with the solutions available nowadays it can be very cheap to add it at the application level in an efficient way, so why not? 🙂

Tatsuya-hasegawa commented 1 year ago

Sorry to jump in from the side, but I found the conversation interesting.

I've used a lot of APIs, mainly in OSINT security-related services. Most of those APIs had a rate-limit feature. Even outside security services, APIs with a certain amount of access volume are rate-limited to maintain latency.

On the other hand, we believe that API servers such as those used for temporarily launched campaigns could be vulnerable. This is because it is difficult to predict the number of accesses, and it is unclear to the developer whether the API is an attractive target for an attacker. We believe that rate limiting would be more widely implemented if there were a packaged AI that could predict the rate-limit threshold in advance based on the access statistics of similar campaign services, or dynamically adjust it based on the actual number of accesses.
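For instance, a dynamic threshold could be something as simple as a multiple of the recently observed request rate; here is a rough Python sketch (the multiplier, floor, and percentile choice are arbitrary illustrative assumptions):

```python
# Sketch: derive a rate-limit threshold from recent traffic instead of a fixed value.
# The multiplier and floor are arbitrary illustrative choices.
def dynamic_threshold(recent_rates: list[float], multiplier: float = 3.0,
                      floor: float = 10.0) -> float:
    """Return a per-client requests/minute limit based on observed rates."""
    if not recent_rates:
        return floor
    recent_rates = sorted(recent_rates)
    p95 = recent_rates[int(0.95 * (len(recent_rates) - 1))]  # rough 95th percentile
    return max(floor, multiplier * p95)


print(dynamic_threshold([10, 12, 11, 13, 50]))  # 39.0: adapts to the observed traffic
```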

inonshk commented 1 year ago

@Tatsuya-hasegawa thank you for your feedback. Your point makes sense. The idea of rate limiting (under #8) isn't to implement a generic solution. On top of the generic rate-limiting solution you might have (e.g., block IP addresses if they generate more than 100 calls/second), you should address the more sensitive business flows, such as buying a product or creating a new user, and restrict access to them. Naive example: one IP address shouldn't access the create_user endpoint more than once a day.
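To put that naive example into code, a purely illustrative Python sketch (the flow names, limits, and in-memory storage are all hypothetical):

```python
# Sketch: per-business-flow limits on top of a generic rate limit.
# Flow names, limits, and storage are illustrative assumptions only.
from collections import defaultdict
from datetime import datetime, timedelta

FLOW_LIMITS = {                      # (max calls, window) per source IP, per flow
    "create_user": (1, timedelta(days=1)),
    "purchase_product": (20, timedelta(hours=1)),
}

_calls = defaultdict(list)           # (ip, flow) -> timestamps


def allow(ip: str, flow: str, now: datetime | None = None) -> bool:
    """Allow or deny a call to a business flow, using a stricter limit for sensitive flows."""
    now = now or datetime.utcnow()
    limit, window = FLOW_LIMITS.get(flow, (100, timedelta(minutes=1)))  # generic fallback
    recent = [t for t in _calls[(ip, flow)] if now - t < window]
    recent.append(now)
    _calls[(ip, flow)] = recent
    return len(recent) <= limit


print(allow("203.0.113.7", "create_user"))  # True on the first call
print(allow("203.0.113.7", "create_user"))  # False: second create_user from the same IP today
```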

Regarding the AI-based solutions: we are trying to keep the list vendor-neutral and refer to technologies that are open source / free to use.