OWASP is pleased to announce the release of the OWASP Top 10 - 2017
After a difficult gestation, the OWASP Top 10 Final is out.
You can get it from here: https://github.com/OWASP/Top10/tree/master/2017
As many of you know, there was a lot of passion within the application security community about the OWASP Top 10 2017 RC1, so it was critical that we worked with the community to firm up the data and obtain a consensus view of how to proceed.
After the change of leadership from Dave Wichers and Jeff Williams to Andrew van der Stock in late May 2017, we broadened the leadership team by adding Neil Smithline, Torsten Gigler, and Brian Glas. Each leader brings their own experience and point of view to the OWASP Top 10, making it far stronger. I couldn't have done this by myself, and it would have been a far weaker document if it were just little old me. I thank my co-leaders from the bottom of my heart. I also thank the founding leadership of Dave Wichers and Jeff Williams for creating the OWASP Top 10, and for trusting us to get this done.
In June, Dave Wichers and Brian Glas attended the OWASP Project Summit in London, and I participated remotely. During the summit, as a community, we agreed on governance, methodology, data analysis, and transparency improvements. The highlights are:
- A diversity of leadership at all times (at least two unrelated leaders). This has been an incredible win for the OWASP Top 10, and I hope more OWASP Flagship projects consider doing it.
- The methodology was improved by confirming that we will use risk, rather than any other metric, and by agreeing that up to two items will be selected by the community to capture up-and-coming risks
- Data analysis, led by Brian Glas, in particular how to stop largely automated findings from swamping manual findings, as well as re-opening the data call to obtain 2016 data and surveying the community for the two forward-looking items
- Transparency is now aligned with OWASP's values - we work in the open on GitHub, and folks can see who suggested an improvement or issue, and how it was resolved in the text. For the first time, there is strong traceability between the data submitted by participating data contributors and the OWASP Top 10. This means that if you want, you can fork the OWASP Top 10, re-analyze the data to suit your needs and create your own version. (Just don't call it the OWASP Top 10 :-) )
The data call was very successful. We obtained a great deal of new data covering previous years, including 2016, from a wide variety of consultancies and vendors. We have data from over 40 contributors, 23 of which were used in the final analysis. Those 23 data sets cover over 114,000 applications, making this one of the biggest application security data sets anywhere, and you can download it from our GitHub repo. At the last minute, we also received data from BugCrowd. The interesting thing about bug bounty data is that kudos and payouts only occur once a finding is fully validated, and it shows what sits at the top of the list from a bug bounty point of view. That data backed up our prevalence analysis, so we were definitely on the right track.
The survey was wildly successful. We received over 500 responses, so I think we can safely claim consensus on the two new items - Insecure Deserialization and Insufficient Logging and Monitoring. These two items were obviously top of mind for many this year, given that the era of the mega breach is not slowing down. We discuss our methodology in more detail within the OWASP Top 10 - 2017 itself, as many will wonder why we didn't take the survey's top two items directly. The short answer - and this should be no surprise - is that some of the other highly ranked issues, such as XXE and access control, were already in the OWASP Top 10 on the strength of prevalence data.
I will address one of the frequently asked questions - why have CSRF and unvalidated redirects and forwards been removed? It's time to move on. The data for these is no longer strong enough to warrant inclusion, especially when we only have eight data-supported spots under our new methodology, and these two items didn't rank in the community survey. This is actually a sign of success; the fact that CSRF is finally going away shows that the OWASP Top 10 has been successful at its mission. Back when I included CSRF in 2007 as a forward-looking item, there was no data for it. At all. But roughly 100% of applications had CSRF at that time. Now it's found in less than 5% of applications. If you use a modern framework, you're pretty much covered without doing anything. That's a huge success.
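To make that "covered without doing anything" point concrete, here is a minimal sketch assuming a Spring Security (4.x / 5.x) application; the configuration class itself is hypothetical. CSRF protection is on by default, so the only CSRF-specific line you would ever write is the one that turns it off:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Minimal sketch: Spring Security enables CSRF defences by default for browser clients.
// Nothing CSRF-specific needs to be written; the commented-out line is how you would lose them.
@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests().anyRequest().authenticated()
            .and()
            .formLogin();
        // http.csrf().disable();  // opting OUT of CSRF protection is the explicit step, not opting in
    }
}
```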
That removal then leads into the discussion about renumbering. We risk-rated the resulting list over a roughly five-hour meeting, and this is the result. I asked the Twitter community whether they wanted a risk-based order, a likelihood order, an impact order, or the order from previous OWASP Top 10s. Risk-based ordering won overwhelmingly. Interestingly, previous editions of the OWASP Top 10 had simply carried the earlier order forward, but fewer than 10% of respondents wanted that, compared to over 55% for risk-based ordering. So that's what happened. What surprised me is that after re-rating, many of the existing items didn't move - particularly SQL injection, but because we include all forms of injection (which theoretically can cover XSS), it remained in the A1:2017 position. This is because we combine three likelihood factors (prevalence, detectability, and exploitability) with impact. We have strong prevalence data, but the other factors were our best judgement. You can look at what we decided and review our work; I encourage everyone to do so.
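For anyone who wants to reproduce the ordering, the arithmetic is simple. A minimal sketch, assuming the three likelihood factors (each scored 1-3) are averaged and then multiplied by technical impact, which matches the scores published in the document; the Injection factor values used in the example are my reading of the published ratings, so do check the document itself:

```java
// Sketch of the Top 10 risk-rating arithmetic: average the three likelihood factors
// (each scored 1-3), then multiply by technical impact (also 1-3).
public class RiskScore {

    static double score(int exploitability, int prevalence, int detectability, int impact) {
        double likelihood = (exploitability + prevalence + detectability) / 3.0;
        return likelihood * impact;
    }

    public static void main(String[] args) {
        // Illustrative values for A1:2017 Injection: easy to exploit (3), common (2),
        // easy to detect (3), severe impact (3) -> (3 + 2 + 3) / 3 * 3 = 8.0
        System.out.println(score(3, 2, 3, 3));
    }
}
```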
The last common discussion we've had is why we didn't roll XSS up into injection, since it's essentially HTTP, HTML, or JavaScript injection. The reality is that it would have swamped the important discussion of other injections, and the defences against XSS are significantly different from those against OS command injection or SQL injection. I will defend this decision until the day we see XSS go the way of CSRF, and I can't see that day ... yet. There is hope in the form of CSP and XSS-resistant frameworks such as Ruby on Rails 3 and React, but there's a lot of code out there that is still vulnerable.
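To illustrate why the defences differ, here is a minimal servlet-style sketch using the OWASP Java Encoder; the request parameter and the users table are hypothetical. XSS calls for context-aware output encoding plus CSP as a second layer, while SQL injection calls for parameterised queries - neither defence substitutes for the other:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.owasp.encoder.Encode;

// Sketch only: contrasts the XSS defences (output encoding + CSP) with the
// SQL injection defence (parameterised queries).
public class SearchHandler {

    void render(HttpServletRequest request, HttpServletResponse response) throws IOException {
        String query = request.getParameter("search");
        // XSS defence 1: encode untrusted data for the HTML context it is emitted into.
        String safe = Encode.forHtml(query);
        // XSS defence 2: a restrictive Content-Security-Policy as a second layer.
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        response.getWriter().println("<p>Results for: " + safe + "</p>");
    }

    ResultSet lookup(Connection db, String query) throws SQLException {
        // SQL injection defence: bind untrusted data as a parameter, never concatenate it.
        PreparedStatement stmt = db.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, query);
        return stmt.executeQuery();
    }
}
```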
The new or heavily updated risks need little explanation:
- We cover APIs as well as web apps throughout the entire Top 10: mobile, single-page apps, RESTful APIs, and traditional web apps.
- A3:2017 Sensitive Data Exposure is now firmly about privacy and PII breaches, and not stack traces or headers.
- A4:2017 XXE is a new data-supported item, so tools and testers need to learn how to find and test for XXE, and developers and DevOps need to understand how to fix it (see the parser-hardening sketch after this list).
- A6:2017 Security Misconfiguration now encompasses cloud security issues, such as open buckets.
- A8:2017 Insecure Deserialization is a critical issue that the community asked for. It's time for tools to learn how to find it, and for testers to understand what Java, PHP, and other serialization formats look like so it can be fixed (see the deserialization filter sketch after this list).
- A10:2017 Insufficient Logging and Monitoring. Many folks think this is a missing control rather than a weakness, but it was selected by the community, and while organizations still take over half a year to detect a breach - usually via external notification - we have to fix this. The way forward for testers is to ask the organization whether it detected the testing activity, and whether it would have responded without being prompted. Obviously, we want testing to pass through the security devices (whitelisted rather than blocked), so that logging, escalation, and incident response can be assessed as well.

These new items are modern-era issues, and I hope that in the next three years the industry can make headway on them.
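For the two items above that most need hands-on familiarity, here are minimal Java sketches. They illustrate the general hardening approach rather than a complete defence: the JAXP parser features are the standard ones, ObjectInputFilter requires Java 9 or later, and the allow-listed package in the filter is a placeholder for whatever your application genuinely needs to deserialize.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// A4:2017 XXE - harden the XML parser so it cannot resolve external entities.
public class SafeXmlParsing {

    static DocumentBuilderFactory hardenedFactory() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // The single most effective switch: forbid DOCTYPE declarations entirely.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces in case DTDs must remain enabled elsewhere.
        dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
        dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```

And for A8, a look-ahead deserialization filter:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// A8:2017 Insecure Deserialization - only accept an explicit allow-list of classes (Java 9+).
public class SafeDeserialization {

    static Object readTrusted(InputStream in) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(in);
        // "com.example.model.*" is a placeholder: allow only what the application expects,
        // cap the object graph depth, and reject ("!*") everything else.
        ois.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("maxdepth=5;java.lang.*;com.example.model.*;!*"));
        return ois.readObject();
    }
}
```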
So after more than 370 closed issues and 650 commits, we are finally finished. We received a lot of feedback from the community, and we thank those who reviewed and QA'd the document extremely closely, such as Osama Elnaggar, Dirk Wetter and Jim Manico, as well as over 40 others. For a full list of reviewers, please see the acknowledgement page.
What is the future of the OWASP Top 10? If anything, the community's passion this time around shows how important the OWASP Top 10 is. It is widely adopted, and a lot of folks care about it very deeply. This was a time for us to listen and learn from the process, and that will result in improvements for the OWASP Top 10 - 2020.
We will start the data collection process much earlier, and we will improve our methodology, particularly in relation to the survey, to provide more choices (we only had 25 CWEs). On top of that, we need to work with NIST / MITRE to keep CWE up to date, because some of the biggest up-and-coming (and, to be fair, some of the existing) weaknesses do not have a CWE entry.
But first, we need a break. Thank you to everyone who participated in making the OWASP Top 10 a much stronger and more evidence-based standard. The OWASP Top 10 - 2017 is by far the best-sourced and most-reviewed application security standard out there. I encourage everyone to download it and start cracking on the new and updated items. We need translations as well, so if you want to help with one, please contact us at @owasptop10 on Twitter or via GitHub.