OWASP / Top10

Official OWASP Top 10 Document Repository

Proposal to Remove/Replace A7 #82

Closed m1spl4c3ds0ul closed 6 years ago

m1spl4c3ds0ul commented 7 years ago

Copied from mailing list ( http://lists.owasp.org/pipermail/owasp-topten/2017-June/001522.html )

======== Proposed entry "A7 - Insufficient Attack Protection" is an inappropriate and potentially dangerous addition to the 2017 OWASP Top 10 Application Security Risks list. We recommend replacing proposed entry A7 with a defensible security risk prior to finalizing the new Top 10 list and making it generally available.

In general, we caution against including any elements that prescribe security controls or particular security testing in the Top 10 Application Security Risks list, as the newly proposed entries A7 and A10 do. We believe that such inclusions muddle the clarity and purpose of the OWASP Top 10 as well as reduce its utility.

[Why the OWASP Top 10 matters] The OWASP Top 10 has had a huge impact on our industry. Over the years, the project grew from an awareness spearhead to a foundational plank of application security compliance, particularly pertaining to PCI. Changes to the Top 10 drive mandated spend across industries.

Even accepting the effect of long-standing data collection, process, and transparency problems with the OWASP Top 10, A7 is a particularly extraordinary proposal. Our proposal to remove the proposed entry A7 rests on three primary concerns:

[Formulation as risk or prescription?] The OWASP Top 10 list bills itself as an enumeration of "Application Security Risks" [SR]. Proposed entry A7 is not formulated as an application security risk (of injection/scripting or forgery, of impersonation or escalation of privilege, exposure, or any other). A7 is authored to mandate use of an unspecified amount of a security control. Using language of presumptive close, A7 advocates,

            "Be sure to understand what types of attacks are covered by
             attack protection."

With this sentence, the proposed A7 conjures a new security control category, "attack protection," promoting it to the level of 'Authentication' or 'Encryption.' 'Attack protection,' however, is not an accepted type of security control: it is not indicated in OWASP's definition of a security control [SC], nor is it present in source material from ISACA or COBIT. It is interesting to note, however, that Contrast Security's website is the first non-ad result in Google when searching for "attack protection" and "application security." It is also the only result on the first page of search results that uses the words together in this context with similar meaning. IBM, Radware, and others use the two words together, but not as a moniker for a single class of control, let alone as a valid category of risk that would befit the Top 10 list. Additionally, "attack protection" is not indicated as a control type by OWASP's "Proactive Controls" project [PC].

By coining a new control type without the benefit of broad industry acceptance, the proposed entry advocates a specific means of protection (a "how"):

             "You can use technologies like WAFs, RASP, and OWASP
              AppSensor to detect or block attacks, and/or virtually patch
              vulnerabilities."

This can be interpreted to mean exclusively Gartner's emerging product category: RASP. The listed alternatives offer a false choice. Notwithstanding its OWASP "flagship" status, AppSensor hasn't had a major release announcement in about two years [15]. The wiki, mailing list, and GitHub repository are neither up-to-date, rich in content, nor dynamic. Though conceived of and maintained by gifted individuals, AppSensor cannot be considered viable for industry-level adoption by either SMBs or enterprises. To be clear, we believe it is possible to get value out of AppSensor, but only after expending considerable effort, much of it in green fields. Indicating AppSensor as a drop-in control is analogous to indicating that using Express.js gives you a website.

WAFs, the other listed alternative, have been shown ineffective even after years of maturation. In particular, testing shows that a lack of application-layer visibility and stateful context prevents WAFs from resisting manual attack effectively. In our professional-services vulnerability discovery experience, we see little-to-no attack resistance afforded by WAFs. We also see very little customer interest in WAFs; previous adopters are moving on. That leaves RASP. Thus, the proposed A7 is written to say, "if you don't want A7, get RASP."

Another challenge exists with A7's proposed formulation: it speaks to virtual patching in addition to the introduction of "attack protection." Elements of A7's description ("Threat Agents," "Attack Vectors") demand runtime protection. Others focus on the speed at which organizations respond and patch ("Technical Impacts" and "Business Impacts" particularly). Still others ("Security Weakness," "Am I vulnerable?," "How do I prevent?") cover both runtime protection and speed of patching. As was the case with its coined term "attack protection," neither A7's description nor the "Virtual Patching" best practice page https://www.owasp.org/index.php/Virtual_Patching_Best_Practices list a viable industry-wide solution to the virtual patching problem (though, again, it represents great work by gifted teams).

The main issue is that if the proposed A7 were re-worded as a risk, it is unclear which risk we would be considering. It could be the risk resulting from either the absence of runtime attack detection and protection, or an organization's inability to respond to and patch vulnerabilities quickly.

Even in committed DevOps cultures, these two capabilities are separate and owned by different stakeholders. They are two complementary and additive concepts. Process concerns and security capabilities, such as patch management, have no place in the OWASP Top 10's list of application security risks. Instead, they belong in OpenSAMM or a related project. Calls for use of a particular tool (such as RASP) also have no place in the Top 10.

Some may counter that while the OWASP Top 10 list of application security risks exists in the application domain, risks may be reasonably countered by controls in a different domain. Indeed, OWASP and other standards documentation indicate that training or security testing (activities within a security initiative) and WAFs (a tool deployed by an initiative) are valid security controls. Yet, if the 2017 OWASP Top 10 RC intends to shine light on this dramatically broader scope, then why has every other Top 10 entry in its history been confined to the scope of software vulnerability? Why has A7 been documented as a control prescription and not a risk? Though less flagrant, some may perceive the proposed A10 entry as another pitch for RASP and/or IAST technologies. This item explicitly indicates that:

             "Dynamic and sometimes even static tools don't work well
              on APIs"

then warns,

             "Be sure your security analysis and testing covers all your
              APIs and your tools can discover and analyze them all 
              effectively."

Wichers admits to re-treading other entries in the context of APIs [DW]. This is direct admission that A10 as proposed does not reflect a unique application security risk but the application of other risks to a specific application design element: the API. One could consider limitations in the formulation of A10 at length in its own right or the 'supportive' effects it has on A7, but those topics are beyond this article.

[Data-driven] The lack of meaningful connection drawn between the published data trove <https://github.com/OWASP/Top10/blob/master/2017/datacall/OWASP Top 10 - 2017 Data Call-Public Release.xlsx?raw=true> and the proposed Top 10 entries has been pointed out by others. In his articles Glas [BG1][BG2] concludes:

           "There is data (to some extent) for each of the Top 10
             categories, with the exception of the new ones (A7 & A10).
             <snip> Without data, we have to look a little more at where 
             the other entries may have come from. The only references
             I could find to A7 and A9 were recommendations from 
             Contrast to add them to the Top 10. While I don't disagree 
             that these are issues, I'm trying to determine the
             justification behind making them the only two new entries
             in this Top 10."

Glas points out that he cannot find any satisfying justification for A7 among mailing lists, the slack channel, or elsewhere.

[Conflating design with products] Conflating a single product category, like WAF or RASP, with an important property of secure design leads organizations in the wrong direction. It leaves them vulnerable to classes of attack even when the products work as intended, subject to DoS or worse.

When maturity and capability allow, and risk appetite indicates, we believe that organizations should design software to respond to conditions observed at runtime with security in mind. We have been designing systems that do this for over two decades, and we have spoken on the topic at length at developer conferences like SecAppDev https://www.secappdev.org/. Because of this, we can speak to both the challenge and value of such an approach.

Like any cross-cutting concern, an application's ability to detect and respond to conditions at runtime affects a host of other security controls and requires a broad range of features/functions, individually or acting in concert. A few classes of response include: behavioral access control, dynamic log level adjustment, velocity throttling, and user and administrative notifications. The tools and frameworks listed by A7 have no claim to behavioral access control and only proof-of-concept functionality to address log adjustment, velocity throttling, or notification.
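To make two of these response classes concrete, here is a minimal sketch of velocity throttling combined with dynamic log-level adjustment and an administrative notification hook. It is our own illustration, not code from AppSensor or any RASP product; the window size, budget, and function names are assumptions.

```typescript
// Illustrative only: a tiny in-memory velocity throttle that also escalates
// the log level when a single principal exceeds a request budget. A real
// deployment would need shared state, eviction, and policy integration.

type LogLevel = "INFO" | "WARN" | "DEBUG";

interface Counter {
  windowStart: number; // epoch ms when the current window began
  count: number;       // requests seen in the current window
}

const WINDOW_MS = 60_000;   // 1-minute window (assumed policy)
const MAX_PER_WINDOW = 100; // per-principal budget (assumed policy)

const counters = new Map<string, Counter>();
let logLevel: LogLevel = "INFO"; // consumed by the application's logger

function notifyAdmins(principal: string, count: number): void {
  // Placeholder: route to whatever alerting channel the organization uses.
  console.warn(`velocity threshold exceeded by ${principal}: ${count} requests`);
}

function recordRequest(principal: string, now: number = Date.now()): "allow" | "throttle" {
  const c = counters.get(principal);
  if (!c || now - c.windowStart >= WINDOW_MS) {
    counters.set(principal, { windowStart: now, count: 1 });
    return "allow";
  }
  c.count += 1;
  if (c.count > MAX_PER_WINDOW) {
    logLevel = "DEBUG";               // dynamic log-level adjustment: capture more detail
    notifyAdmins(principal, c.count); // user/administrative notification
    return "throttle";                // velocity throttling: deny or delay the request
  }
  return "allow";
}
```

Even this toy example shows how much policy (windows, budgets, escalation) an organization must own before any listed tool delivers value.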

Countless other domain-specific dynamic response categories exist based on organizations' business logic. Neither RASP products nor AppSensor have the abstractions or integration necessary to provide a policy-driven dynamic runtime response across response categories and/or in areas of business logic. Neither has an ability to specify and manage detection or response at a level of organizational or business unit policy description, as an SMB or enterprise would need.

[Proposed actions] The scope of this letter is to demonstrate the weakness of A7's proposed formulation and show that it is inappropriate for inclusion in a list of Top 10 application security risks before the end of the comment period. Providing specific and actionable advice for replacing A7 (and potentially A10) with alternative entries is beyond that scope.

However, consideration of the evolution of OWASP's Top 10 list from 2003 to the 2017 RC shows the maintainers have continued to wrestle with the challenge of characterizing authentication/authorization risks, particularly as they pertain to access to resources provided by APIs (object or account references, for instance). Data from OWASP community participants in the Top 10 project can be used to tune the previous release candidate and produce a practical Top 10 risk list that both addresses the persistent ontological challenges and provides a fresh update for those building systems with modern languages, frameworks, and platforms. That, not a call for RASP, is what the industry deserves.

[Footnotes]

  1. [BG1] - https://nvisium.com/blog/2017/04/18/musings-on-the-owasp-top-10-2017-rc1/

  2. [BG2] - https://nvisium.com/blog/2017/04/24/musings-on-the-owasp-top-10-2017-rc1-pt2/

  3. [DW] - http://lists.owasp.org/pipermail/owasp-topten/2017-May/001480.html

  4. [PC] - https://www.owasp.org/index.php/OWASP_Proactive_Controls

  5. [SC] - https://www.owasp.org/index.php/Category:Control

  6. [SR] - https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

  7. [15] - https://www.owasp.org/index.php/OWASP_AppSensor_Project

As we were crafting this response, important changes to the Top 10 occurred, principally due to the OWASP summit. Andrew van der Stock has taken ownership of the project. We pledge to work with him and high-quality partners like Brian Glas to reconsider the data in pursuit of a vendor-neutral and impactful release candidate. Additionally, Andrew recently accepted a position with SNPS Consulting Services. Welcome to SNPS, Andrew!

Moving forward, I, personally, will redouble my focus on neutrality. The community should as well. Re-evaluating the data is essential, and since the community sees little or no tie between the available data and RC1's A7, we're hopeful. The remaining challenge, for those who, like myself, sincerely support the concepts behind A7, is the one that faced A9 ("OSS risk") in the Top 10's last evolution. OWASP's new leadership will have to wrestle with how best to draw attention to those issues, vital to the industry, for which there is a fundamental lack of visibility due to the limitations of existing vulnerability discovery tools and services.

m1spl4c3ds0ul commented 7 years ago

Based on conversations I've had, I would suggest the following steps accompany a revisitation of the data:

1) Properly define "Attack protection", if the intent is to coin a new phrase. Consider elements thus far discussed including: 1a) Identity and Access Management, and vulnerability associated with impersonation or forgery; 1b) Anti-automation, for its own sake, from click-jacking and CSRF protection to preventing brute force or enumeration attacks, as well as other mentioned categories; and 1c) Prevention of abuse of logging/monitoring facilities, or those vulnerabilities that prevent the proper attribution or aggregation of usage within an application or service.

2) Address and attempt to solve structural problems with top ten surrounding failures in identity proofing, authentication, and authorization mechanisms. Recombobulate accordingly.

3) Collate the prescriptive advice and export it to an appropriate OWASP project outside of the Top Ten.

m1spl4c3ds0ul commented 7 years ago

Here are some thoughts on what "attack protection" may entail (this is a straw man off the cuff, not a well-edited and reviewed list):

1) Upgrade RBAC to C/APAC, and architect "behavioral access control" instead of binary authN/Z;
2) Automate measures to prevent probing/discovery of sites and their vulnerabilities;
3) Respond with the appropriate level of logging AND monitoring to production conditions at runtime;
4) Resist automation attacks, including: a. Click-jacking; b. CSRF; c. other impersonation-based attacks; d. Brute force of credentials; and e. so forth and so on;
5) Identify and prevent invalid state-transitions (see the sketch after this list): a. Force-browsing; b. (in)direct object reference with or without the correct user's tokens; and c. Business logic/flow, e.g. ship before purchase;
6) And so forth, and so on.
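As a rough sketch of item 5 only (not something proposed in this thread), an application can encode its legal state machine and reject any transition outside it; the order states below are invented for illustration.

```typescript
// Sketch: enforce a legal order-state flow so that a "ship before purchase"
// transition is rejected as an invalid state change and surfaced to monitoring.

type OrderState = "created" | "paid" | "shipped" | "cancelled";

// Allowed transitions; anything not listed is treated as an attack or a bug.
const ALLOWED: Record<OrderState, OrderState[]> = {
  created:   ["paid", "cancelled"],
  paid:      ["shipped", "cancelled"],
  shipped:   [],
  cancelled: [],
};

function transition(current: OrderState, next: OrderState): OrderState {
  if (!ALLOWED[current].includes(next)) {
    // Raise rather than silently fail, so detection/response can act on it.
    throw new Error(`invalid state transition: ${current} -> ${next}`);
  }
  return next;
}

// transition("created", "shipped") throws: shipping before purchase is blocked.
```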

From my perspective, aspects of AppSensor's taxonomy of "events" (page 125 of the AppSensor Guide) are inappropriate for this list. For instance, input validation, encoding, and escaping form a regime in their own right, and one clearly handled amply by the Top Ten elsewhere.

Were one to assert, "injection and input filtration/output encoding are classes of attack within the attack protection moniker", then one would be making the category sufficiently inclusive and general as to be meaningless. Such a move, I feel, detracts from the purpose of the top ten (which conveys knowledge through taxonomy, dividing and categorizing vulnerability so as to educate) and makes it less actionable.

vanderaj commented 7 years ago

Thanks John. The OWASP Top 10 has always had forward looking issues that weren't supported by the data. For example, the OWASP Top 10 2007 included CSRF, because at the time, 100% of apps had no CSRF defenses. That has changed, and now it's just part of access control.

At the OWASP Summit in mid June, which had good participation, we came to the following outcomes:

a) We will re-open the data call until just before AppSec USA, and @infosecdad will assist in doing data analysis. As you'll see below, data is not the only way items end up in the OWASP Top 10; it's primarily a method to order the issues. I have teed up a time to talk with Brian tomorrow, and there will be an open call soon.
b) There will be an RC2 at AppSec USA.
c) There will be more independence of project leadership. This has occurred.
d) We discussed and approved that there can be up to TWO (2) non-data-driven issues in the OWASP Top 10. This allows us to look forward and include things like CSRF or API protection.
e) I published many other minor outcomes, but I think these address many of your points, and overall I think you might be happy with the direction.

We are working to re-open the data call and re-imagine what the list would look like with additional data. The data itself only affects the ordering, and it has a 1/6 weighting when you follow the OWASP Risk Rating methodology used in the banner of each "risk". The problem I had from 2010-2013 is that calling things a "risk" assumes we know the impact, but the business impact of a risk is unknowable and unique for each app and data asset. Therefore, I might find out if folks want to change the view slightly to be about vulnerability.

This then leaves a scope for the OWASP Proactive Controls, where application monitoring (which is A7 in a nutshell) is already a control. Proactive Controls are developer focused. There's obviously a missing OWASP Top 10 Defenses list; maybe that needs to be built out for the blue team view. Let's see where the data opens up, and be a lot more specific about A7's language, and possibly it might still be included. I'm not fussed and I'm not tied to it in particular, but I would hate to see a major revision of the OWASP Top 10 go out without some form of detection included.

Does it need to be RASP / WAF? No. OWASP is open. ALL of the OWASP Top 10 controls should be achievable without purchasing a tool. There are open source alternatives for A7, but possibly we need to generalize it out. I will look through in detail where we can make the text more inclusive and yet specific (i.e. testable), because otherwise, we will have vendors claiming "OWASP Top 10" compliance when no such thing is possible.

I, for one, will always welcome strong, opinionated, but positive feedback of anything we do. This makes our Flagship projects better, adoptable, and defensible. I think we have started the journey to get there. Please work with us to get this out the door this year.

jmanico commented 7 years ago

For example, the OWASP Top 10 2007 included CSRF, because at the time, 100% of apps had no CSRF defenses.

Andrew, at some level this looks like a clearly good data point to me. We have a serious vuln that folks need to be aware of, and a clear strategy at the time to address it (synchronizer token). I do not think it's fair to compare CSRF in 2007 with the issues around A7/A10. A7 is especially hard to quantify - ever - in my opinion. It's too nebulous, IMO, and not nearly as important as risks like XXE.
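For readers unfamiliar with the synchronizer token pattern mentioned here, a minimal sketch follows; the session-store shape is an assumption and not tied to any particular framework.

```typescript
import { randomBytes, timingSafeEqual } from "crypto";

// Minimal synchronizer-token sketch: issue a per-session token, embed it in
// forms, and verify it on every state-changing request.
const sessionTokens = new Map<string, string>(); // sessionId -> CSRF token

function issueToken(sessionId: string): string {
  const token = randomBytes(32).toString("hex"); // unpredictable, per session
  sessionTokens.set(sessionId, token);
  return token; // embed in a hidden form field or custom request header
}

function verifyToken(sessionId: string, submitted: string): boolean {
  const expected = sessionTokens.get(sessionId);
  if (!expected || submitted.length !== expected.length) return false;
  // Constant-time comparison avoids leaking token bytes via timing.
  return timingSafeEqual(Buffer.from(submitted), Buffer.from(expected));
}
```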

This then leaves a scope for the OWASP Proactive Controls, where application monitoring (which is A7 in a nutshell) is already a control. Proactive Controls are developer focused. There's obviously a missing OWASP Top 10 Defenses list. May be that needs to built out for the blue team view. Let's see where the data opens up....

I'm happy to help with a Top Ten Defense list or some list that is developer centric beyond the proactive controls project.

Aloha, Jim


m1spl4c3ds0ul commented 7 years ago

Andrew,

I appreciate the reply. This will be short and sweet:

1) The method of re-evaluation makes sense at the highest level, as does the way you've staffed it;

2) The reply does not address the principal published arguments against A7, which include: A) Ontological problems - "attack protection" has not been defined, and no stable, accepted, and testable resolution for this concept exists; B) Taxonomical problems - 1) "attack protection", if defined (2A), appears inappropriate to a list of "Top Ten Security Risks/Vulnerabilities" but suited to a list of proactive controls; 2) both A7 and A10 are, by their very nature, "cross-cutting" of the other list items, and thus do not "stand on their own" without adjustment.

Some commitment is explicitly stated toward ensuring the item can be satisfied by adopting organizations without a tool purchase. What standard of measure/proof do you imagine applying to non-vendor alternatives (such as AppSensor) were A7 to remain, in light of the argument made against AppSensor's applicability with respect to maturity and coverage?

In short, if the OWASP Top Ten project's reply to the call for removal is the email to which I'm replying, that reply does not address let alone resolve the main planks of the call for removal.

-jOHN


vanderaj commented 7 years ago

John,

I have reviewed your original blog post and the supplemental follow ups.

I accept the fundamental premise of your argument for non-inclusion, and we will work to resolve the inclusion, ontological and taxonomical issues around the existing draft. However, there's a fair amount of effort between accepting the premise of your argument, and producing an eventual RC2 that either resolves this issue or at least tries to make it significantly clearer and defensible.

I have drafted a blog post / letter to the OWASP Top 10 mailing list and shared it with the Top 10 leadership to ensure that we are all on the same page. Once the other leaders have had a chance to weigh in, I am sure you'll be happier with the direction I personally want to see the OWASP Top 10 take. I do not want to pre-judge the results of a second data call or any survey results, as otherwise why bother with them, consulting the community or massaging the data? This is how we ended up here, and I do not want to repeat that mistake.

I think it will be difficult for A7 or A10 to survive in their current form, especially if we address precisely what "attack protection" is. Personally, I see both of these issues best off in the Proactive Controls, but I would hate to see an OWASP Top 10 in 2017 that doesn't mention or recommend monitoring or adaptive behavioral responses.

This doesn't mean A7 and A10 are automatically out, and thus resolve this issue, but it does mean that A7 and A10 will have to be significantly redrafted if they survive, and if they do survive, the notes from this issue will be taken into account.

jmanico commented 7 years ago

I sent this suggestion to the Proactive Control Team. I'll keep you all posted.

PS: We already have a sort of proactive attack protection/detection but I'll talk to the team about expanding it.

-- Jim Manico @Manicode


jmanico commented 7 years ago

I think John's set of concerns and Andrew's reply were both thoughtful and fair.

Andrew is juggling the concerns of a large number of people and is stepping into significant controversy that impacts our multi-billion dollar industry. He needs a little wiggle room.

John, please keep commenting. I'm thrilled to see you participating in this important conversation. I like the raw science and depth you're bringing to these issues. We need more John to help keep us intellectually honest.

Andrew, hold on! You signed up for a rather difficult leadership position that will require all of your abilities. If you need a hand, please reach out to the various folks you trust. I am grateful you're triggering a second data call and are waiting to see that dataset before jumping to any conclusions. I'm also super glad you're bringing a reputable AppSec data scientist into the project.

I'm excited to see where this new OT10 era will take us!

Aloha,

Jim Manico @Manicode


m1spl4c3ds0ul commented 7 years ago

Andrew,

This seems reasonable and I believe more fully addresses the concern.

-jOHN


yohgaki commented 7 years ago

Why not simply reintroduce OWASP TOP 10 2004 A1 Unvalidated Inputs? We know input validation is the most important and effective security control, but most applications do not validate all inputs. Proper attack protections could be explained in this section.

There are many web application validation libraries/features that lack the most important validation feature. Many libraries merely check "string type" and do not check string chars, encoding, formats and length even if web applications are built on top of "string interfaces" such as HTML, SQL, JavaScript, LDAP, XML, etc.

There are too many developers who do not know the relation between "input control" and "output control", i.e. that input and output controls are independent.

There are too many developers who do not care about "invalid (unvalidated) inputs" even though computer programs are not able to operate properly with invalid inputs, e.g. "len = INT_MAX + 1", "print '' + html_escape(unvalidated_product_id) + ''", "SELECT * FROM tbl LIMIT 999999999999", and so on.

Even when developers care about input validation, many do not comply with the "Fail Fast" principle; i.e. validations in models or libraries are too late. Software-boundary input validations are mandatory under this principle, but only a few applications do this.
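A minimal sketch of fail-fast validation at the software boundary in the spirit described here; the parameter names and rules are invented for illustration.

```typescript
// Sketch: reject requests whose parameters fail format/length checks before
// any business logic runs. Rules validate "appearance" only; logical checks
// (does this ZIP match this address?) belong to the business layer.

const RULES: Record<string, RegExp> = {
  productId: /^[0-9]{1,10}$/,          // numeric id, bounded length
  phone:     /^\+?[0-9\-\s]{7,20}$/,   // phone-number appearance only
  zip:       /^[0-9]{5}(-[0-9]{4})?$/, // US ZIP format
};

function validateBoundary(params: Record<string, string>): void {
  for (const [name, value] of Object.entries(params)) {
    const rule = RULES[name];
    if (!rule) throw new Error(`unexpected parameter: ${name}`); // fail fast
    if (!rule.test(value)) throw new Error(`invalid ${name}`);   // fail fast
  }
}
```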

Although input validation is one of the most fundamental security requirements for software, most applications (and web frameworks) do not treat inputs sufficiently and end up attackable. "Unvalidated inputs" is not an ignorable application security risk. Current OWASP guidance recommends validation, but it seems most developers ignore it.

jmanico commented 7 years ago

We know input validation is the most important and effective security control....

I think that is a dangerous message. When developers depend on input validation, they end up with legal email addresses with SQL injection. They end up trying to validate open text, which is not a reasonable path.

Validation is a good way to reduce the attack surface and it's certainly a good control, but it's not the "most important control" especially when addressing injection. To really stop injection we need query parameterization, XML parser configuration, escaping/encoding and similar.
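To illustrate the point, here is a small sketch (not from the thread): a syntactically legal email address that carries a SQL fragment, handled safely by parameterization. The node-postgres placeholder style is shown, but any parameterized API works; the value and table name are invented.

```typescript
import { Pool } from "pg"; // node-postgres; any parameterized client would do

// A syntactically legal email address that still smuggles a SQL fragment:
// RFC 5322 allows apostrophes and '=' in the local part, so format validation
// alone accepts it (value is illustrative, in the spirit of Security Shepherd).
const hostileEmail = "jdoe'OR'1'='1@example.com";

const pool = new Pool(); // connection settings assumed to come from the environment

// Unsafe pattern (do not use): string concatenation lets the quotes rewrite
// the query: "SELECT * FROM users WHERE email = '" + hostileEmail + "'"

// Safe: the value is bound as data and never enters the SQL grammar.
async function findUser(email: string) {
  return pool.query("SELECT * FROM users WHERE email = $1", [email]);
}
```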


Neil-Smithline commented 7 years ago

I think that the inclusion of both "A1 2004 Unvalidated Input" and "A6 2004 Injection Flaws" is a bit confusing. Injection is almost always due to unvalidated inputs. Do you see a way to include both of them @yohgaki?

jmanico commented 7 years ago

I strongly disagree with this line of thinking Neil. Please proceed with caution. This is not at all the right defensive advice. Injection is due to a lack of escaping, lack of XML configuration, lack of query parameterization, lack of data sanitization and other core injection defenses.

If you tell developers that validation is all you need to stop injection, then HTML input, JSON, email addresses, URLs, and other more complex forms of input that are fully legitimate from a validation point of view will still cause injection.

Look at Security Shepherd. One of the labs demands a legal email address that still contains a SQL injection attack.

Jim Manico


Neil-Smithline commented 7 years ago

OK. I can't argue with that @jmanico.

jmanico commented 7 years ago

Thanks for being willing to listen. Neil, if you schedule time with me I'll walk you through a few code samples and attacks that illustrate my point.

Aloha, Jim


Neil-Smithline commented 7 years ago

Kind of you, Jim. I'll see you in Orlando in a month.

vanderaj commented 6 years ago

This risk is going to be removed as per community feedback at the Project Summit, survey data and revised data call. Thank you for your input on this matter.

yohgaki commented 6 years ago

I agree that there is a risk that developers rely only on input validation for software security. In fact, I have seen so much code do this, both past and present.

Developers must be aware that "input handling" and "output handling" are independent security measures.

The reason is rather obvious. When an app takes inputs from other software and validates them, the app can (and should) only validate the value format for the input context, i.e. phone number, name, ID, etc. Every input has a certain format, length, range, character set, and encoding that can be valid. Input validation can (and should) only validate an input value's "appearance".

Input validation has nothing to do with output security because it cannot determine where an input will be used as output; the input could even be used in multiple output contexts, e.g. HTML, JSON, mail, etc. Therefore, it is impossible to encode/escape inputs in input handling code.

Let's focus on output handling code now. Proper encoding/escaping for the output context is the responsibility of output handling code. (Anything that can be encoded/escaped should be encoded/escaped without exception; developers should not rely on input validation code nor on data types.)

I totally agree that injections can be avoided by output encoding/escaping. However, we have to ask: is it valid that an app outputs '' for an int, a phone number, or some code, and is that kind of app behavior risk free? Relying only on output encoding/escaping is neither valid nor risk free. Output encoding/escaping can get rid of "injections", but it leaves the invalid-value issue and the risks that come with invalid values. Proper input validation can remove/reduce those risks (e.g. WordPress fixes placeholder bugs in 4.8.2).

Now, let's face real-world code. Almost no apps have proper input validation that validates input values for the context (e.g. the Struts2 mess), and almost no apps have proper output escaping/encoding that assures 100% output value safety for the context (e.g. they rely on data types, assume "identifier variables are safe" in SQL prepared statements, or assume out-of-range values are handled safely somehow by underlying libraries; int overflows cannot be ignored, see the Python 2.7.14 release). This fact not only makes it a lot easier for attackers to find "attackable" vulnerabilities, but also makes security audits a lot harder for both automated and manual code review.

IMO, the Top 10 should stress the importance of both "input" and "output" handling. Currently, almost all apps ignore secure coding practices for both inputs and outputs (output handling fares better, but it's partial in most cases). After all, code can only work correctly with valid inputs; attackers supply invalid inputs to cause misbehavior/attacks. Not mentioning input control flaws does not make much sense to me.

BTW, "logical data validation", zip code and address matches/etc, is not responsibility of input/output handling code. "logical data validation" is business logic responsibility, e.g. M in MVC. Please this keep in mind. I see many confusions in past by mixed up responsibilities.

Please keep this in mind also - There are only 3 types of inputs

jmanico commented 6 years ago

The point is, if you validate all your data as best you can, you can still have XSS on a web page, especially for input that is open text or other rich content.

But if you escape all your data properly, you will avoid XSS. There might still be functional errors with your code, but escaping properly in your UI will avoid XSS. It's so important that new frameworks like GO Templates, AngularJS and ReactJS do this escaping by context - automatically.

But often, even in these fancy new frameworks, developers sometimes disable escaping - why?

User-submitted chunks of HTML! If your users are allowed to submit HTML via TinyMCE, or you just allow certain tags in posts like Slashdot does, you cannot escape that HTML without breaking it. So in the case of HTML chunks you need to use an HTML sanitizer.

In my own code I tend to BOTH sanitize chunks of HTML on the server before storing them AND render that potentially dangerous HTML through a sandbox like DOMPurify or JS-XSS.
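A small sketch of the client-side half of that approach using DOMPurify's sanitize API; the allow-list, sample markup, and element id are invented for illustration.

```typescript
import DOMPurify from "dompurify";

// Escaping would break legitimate user markup, so sanitize instead: strip
// dangerous content while keeping an allow-listed subset of tags.
const userHtml = '<p>hello <img src=x onerror="alert(1)"></p>';

const clean = DOMPurify.sanitize(userHtml, {
  ALLOWED_TAGS: ["p", "b", "i", "a", "ul", "ol", "li"], // illustrative allow-list
  ALLOWED_ATTR: ["href"],
});

// `clean` keeps the <p> content but the onerror handler is gone. The same idea
// applies server-side with an HTML sanitizer library before persisting the post.
document.getElementById("post")!.innerHTML = clean;
```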

So when dealing with XSS, make sure every variable is:

- Escaped in the right context, or
- Run through a robust HTML sanitizer.

And for DOM XSS issues:

- Parse JSON safely, and
- Use safe JavaScript sinks (see the short example after this list).
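As a quick illustration of safe sinks (again a sketch, not code from the thread; the element id and query parameters are invented):

```typescript
// Assign untrusted data to sinks that treat it as text, not as markup or code.
const params = new URLSearchParams(window.location.search);
const userName = params.get("name") ?? "";

// Unsafe sink: the value is parsed as HTML, so markup in `name` executes.
// document.getElementById("greeting")!.innerHTML = "Hi " + userName;

// Safe sink: the value is inserted as plain text.
document.getElementById("greeting")!.textContent = "Hi " + userName;

// Parsing JSON safely: JSON.parse never executes code, unlike eval().
const settings = JSON.parse(params.get("settings") ?? "{}");
console.log(settings);
```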

Aloha, Jim


Neil-Smithline commented 6 years ago

@yohgaki - would you mind opening a new issue with your thoughts about output encoding? I think that you have some good ideas, but I'm afraid they'll be lost being at the bottom of a closed issue.