Tag: Security

Setting up an OWASP Juice Shop CTF

Last updated: 18-March-2018

Introduction

I recently used the very excellent OWASP Juice Shop application, developed by the very excellent Björn Kimminich, to run an internal Capture the Flag (CTF) event for my department. It went really well and got great feedback, so I thought I would jot down some practical notes on how I did it.

One important point before you start: you should note the disclaimer that there are plenty of solutions for this challenge on the Internet.

Someone asked how I addressed that in this case; the short answer is the hints PDF, which I explain below.

Anyway, let’s get into the details of how I did the CTF.

RTFM (Read The Full Manual)

First of all, there are some great instructions about how to use Juice Shop in CTF mode in the accompanying ebook, see this section specifically. In this blog post, I want to talk about some of the more specific choices I made on top of those instructions.

Obviously, your mileage will vary but hopefully the information below will help you with some of the practicalities of setting this up in the simplest way possible.

The target applications

I originally thought about getting people to download the Docker image onto their own laptops and work on that, but in the end I decided to go with the Heroku option from the Juice Shop repository. As long as you don't hammer the server (which shouldn't be necessary for this app), you can host it there for free! (Although you do need to supply them with credit card details just in case.)

The only thing to do is make sure you set the correct environment variable to put the application into CTF mode, see the screenshot below.

[Screenshot: Heroku environment variable settings]
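If you prefer the command line to the Heroku dashboard, you can set the variable with the Heroku CLI. As a rough sketch (if I recall correctly, NODE_ENV=ctf is the variable the Juice Shop ebook describes for enabling CTF mode; the app name here is hypothetical):

heroku config:set NODE_ENV=ctf --app team1-juiceshop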

I split the participants into teams myself to make sure the teams were balanced, and I set up multiple application instances so that each team shared one instance: they would see their shared progress but not interfere with other teams. I also made sure each instance had a unique name to stop teams messing with each other.

Spinning up the CTF platform

I had experimented with the CTFd platform when I first planned this event a year or so ago, so I was confident about using it as the scoring system and hosting it myself on an AWS EC2 instance.

When I headed over to their GitHub repository, I could see there were a number of different deployment methods, and I decided on the "docker-compose" method because I like the simplicity of Docker. Things got a bit messy as I stumbled into a known performance issue (which has since been fixed), and I also realised that there was no obvious way of using TLS, which I had decided I wanted as well.

The guys on the CTFd Slack channel were really helpful (thanks especially to kchung and nategraf), and eventually I used a fork nategraf had made which fixed the performance issue and also had a different version of the "docker-compose" script which included an nginx reverse proxy to manage the TLS termination.

I used an EC2 t2.medium instance for the scoreboard server (mostly because of the original performance problems) but you could probably get away with a much smaller instance. I chose Ubuntu 16.04 as the operating system.

I installed Docker based on the instructions here, up to and including sudo apt-get install docker-ce. I then added the local user to the "docker" group using sudo usermod -aG docker ubuntu (you may need to log out and back in after this) and then used the Linux instructions from here to install "docker-compose" (don't make the mistake I made initially and install it via apt-get!).
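For reference, the installation boiled down to something like the following (repository details and the docker-compose version will have moved on since then, so treat this as a sketch and check the current documentation rather than copying it verbatim):

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo usermod -aG docker ubuntu  # then log out and back in

# docker-compose as a standalone binary rather than via apt-get
sudo curl -L "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose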

If you just want to use the scoreboard without TLS then you can just clone the CTFd repository, run docker-compose up from within the cloned directory, and you are away.
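In other words, the no-TLS route is roughly:

git clone https://github.com/CTFd/CTFd.git
cd CTFd
docker-compose up  # the scoreboard listens on port 8000 by default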

Using TLS with the CTF platform

If you do want a (hopefully) simple way to use TLS: the fork I initially used no longer exists, so I have created a deployment repository which uses the same docker-compose file that was in the original fork (including the nginx reverse proxy) but which will also pull the latest version of CTFd. You can clone that repository from here.

Once you have cloned that, you will need to get yourself a TLS certificate and private key. I used a subdomain of my personal domain (joshcgrossman.com) and pointed the subdomain at the EC2 server's IP address by adding an A record to my DNS settings. I then used the EFF's "certbot", which generates certificates using Let's Encrypt, to produce my certificate.
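For illustration, the A record is just the subdomain pointing at the server's IP (the address below is a documentation placeholder, not the real one):

ctfscoreboard.joshcgrossman.com.  300  IN  A  203.0.113.10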

I installed it using the instructions here and then, while no other web servers were running, I ran the command sudo certbot certonly --standalone -d ctfscoreboard.joshcgrossman.com, which automatically created the certificate and private key I needed for my chosen domain (make sure port 80 or 443 is open!).

[Screenshot: certbot output]

I then renamed the "fullchain" file to ctfd.crt and the "privkey" file to ctfd.key and saved them inside the "ssl" directory, which you will have if you cloned my deploy repository above. (The nginx.conf file I used for the TLS version of the deployment looks for these files.)
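Certbot leaves those files under /etc/letsencrypt/live/<your-domain>/ by default, so the renaming amounts to something like:

sudo cp /etc/letsencrypt/live/ctfscoreboard.joshcgrossman.com/fullchain.pem ssl/ctfd.crt
sudo cp /etc/letsencrypt/live/ctfscoreboard.joshcgrossman.com/privkey.pem ssl/ctfd.key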

You then just need to make sure that the hostname in the "docker-compose-production.yml" file matches the hostname of your server (in my case ctfscoreboard.joshcgrossman.com). You can then run docker-compose -f docker-compose-production.yml up -d from within your cloned directory, and it should start listening on port 443 with your shiny new TLS certificate!

[Screenshot: CTFd scoreboard]

Loading the Juice Shop challenges

This part was easy: I followed the instructions from here to run the tool that exports the challenges from Juice Shop, and steps 4 and 5 from here to import the challenges into CTFd.
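For the export step, the tool in question is the juice-shop-ctf-cli npm package; roughly (its interactive prompts produce a file you can then import into CTFd):

npm install -g juice-shop-ctf-cli
juice-shop-ctf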

Setting the stage

I wanted to provide some brief instructions for the teams and also set some ground rules. For most of them, this was their first CTF, so I deliberately kept the instructions brief but made myself available to answer questions throughout the CTF. I only had four teams, so that was a manageable workload.

I gave the teams the following instructions:

  • Each team has their own, Heroku hosted, instance of the vulnerable application. Your scope is limited to that URL, port 443.
  • Before the CTF starts, you need to go register your team details in the scoreboard app: https://appteam-ctfscoreboard.joshcgrossman.com (one account per team)
  • Once the CTF starts, you can use the “Challenges” screen to enter your flags. You should search for the challenge name on the challenges screen.
  • If you miss your flag for some reason, you can go to the scoreboard screen of the vulnerable application and click on the green button to see it again.
  • The clock will start at 16:15 and stop at 18:45 at which point you will not be able to record any additional flags.
  • Be organised and plan your efforts! (Divide and Conquer!)

I also set down the following ground rules:

  • You may not attack or tamper with https://ctfscoreboard.joshcgrossman.com/ in any way whatsoever.
  • You may not try and DoS/DDoS your vulnerable application or indeed anything else related to the challenge.
  • You may not tamper with another team’s instance, another team’s traffic or anything else related to another team or the organisers.
  • You may not use Burp Scanner – it probably won’t help you much and even if it does trigger a flag you won’t understand why it worked.
  • You may not search the Internet or ask anyone other than the organisers for anything related to the specific application, the specific challenges or the application’s source code. You may only search for general information about attacks. You have a PDF containing lots of hints about the challenges.
  • You may not tamper with the database table related to your challenge progress.
  • If you aren’t sure about anything, ask 🙂
  • You may have points deducted if you break the rules!

Giving some help

I mention above a PDF with hints. As I said, the teams were not allowed to search the Internet for Juice Shop specific clues, but I still wanted them to benefit from hints to help them out. Björn prepared an ebook with all the hints in it, but it contained the answers as well. In order to save my competitors from temptation, I created a fork with all the answers removed, which you can find here.

Other notes

During the course of the CTF, I projected the CTFd scoreboard onto the big screen and overlaid a countdown timer as well so people knew how long they had to go. I just used a timer from here although it was a little ugly…

I froze the scoreboard for the last 15 minutes to add to the suspense and cranked up some epic music to keep people in the mood.

Final Thoughts

I’ll leave you with the main guidance I gave to the teams before they started:

  • Have fun – that is the main goal of tonight
  • Learn stuff – that is the other main goal of tonight
  • Don’t get stressed about the time, easy to get overwhelmed
  • Team Leaders:
    • Divide up tasks
    • Decide priorities
    • Time management – avoid rabbit-holes
    • Escalate questions
    • Help those with less experience

Everyone had a great time and I got really good feedback so if you have the opportunity to run something like this, I strongly suggest you take it.

If you have any other questions or feedback let me know, my Twitter handle is above.

Updates: 18-March-2018

Team instances

Someone asked about team members sharing an instance. I deliberately organised the CTF with teams of 3-4 people. The primary reason was that our department covers a wide spectrum of skill-sets so I still wanted everyone to take part, enjoy and learn something. I therefore carefully balanced the teams based on abilities. (It also meant I could split my direct reports across different teams so no one could accuse me of favouritism 😉)

My logic in having each team share an instance was to allow progress to be shared and prevent duplicated effort, although I think more than four people in a team would not have been manageable. Overall I think that aspect worked well.

Another thought is that if each team member had their own instance, it is more likely that they would all see the solution to each challenge rather than one person completing it and just telling the others. However, this would have slowed things down, which in the time we had available probably wouldn't have been worth it.

Instance resets

One thing I didn’t do beforehand was practice resetting an instance and restoring progress which caused issues when one team created too much stored XSS and another team somehow accidentally changed the admin password without realising it!

Resetting an instance is possible by saving the continue code from the cookie, restarting the instance (that is easy in Heroku) and then sending a PUT request to the app looking like this:

PUT /rest/continue-code/apply/<<CONTINUECODE>> HTTP/1.1

Reflections on attending and presenting at AppSec Israel 2017

https://appsecil.org/

For various reasons, this year was the first year I made it to OWASP AppSec Israel, the national Application Security conference here in Israel. Not only that, but I was honoured to be accepted to present as well. It was a long day, including a speakers/organisers dinner in the evening, and as well as being tired I was really buzzing with excitement, so I thought I'd jot down a few notes about the day.

The agenda

There were a bunch of really great talks on the agenda (credit to Irene Abezgauz who chaired the content committee) with a big emphasis on talks aimed at sharing ideas and experiences for defenders and builders (with a few cool hacks thrown in as well). I thought having the agenda balanced in that way was really great as, like Avi said in his opening comments, defenders and builders are the main audience for OWASP.

The atmosphere

The overall atmosphere seemed really positive, supportive and open. People were socialising and making an effort to talk to other people, and there seemed to be a really happy buzz in the communal areas.

Presenting at the conference

This was my first time presenting at a major conference and I was pretty nervous. Ultimately I had practiced hard and I think it went OK (if a little fast) and hopefully people will get some benefit out of the ideas I shared. (Eventually I will try and post a blog based on the talk for those who missed it.) Despite my nerves, having friends, colleagues and my boss attending and supporting really made it special and made me feel a lot better. The organisers were really supportive as well with Or telling me a joke just before I was about to start.

Seeing friends and colleagues

It was great to hang out with friends who I work with, friends who I used to work with and friends who I've never worked with, especially catching up with those who I don't see very often. As a presenter, having them there also made it more special. It was also great seeing colleagues who I've worked with on different client projects and catching up with them. A great thing about being a consultant is working with a wide range of different people, and it was great to see some of them there.

The sponsors

It was great to see so many local organisations sponsoring the conference including my employer, Comsec Group. Having these sponsors meant that the conference could be high quality but free to attend and it was great to see these organisations contributing back to the community.

I also thought that the sponsors area had a nice buzz to it, with companies raising their profiles whilst also searching for new talent (and giving away some nice goodies as well, like a showerproof Bluetooth speaker ☺). It seemed like a win-win for everyone and I didn't notice much aggressive attention seeking.

Fringe activities

The main conference was two tracks, but there was also the CTF and workshops put on by GE Digital as part of their "Diamond" sponsorship of the conference, as well as CV review sessions to help job seekers. Again, I thought these added extra facets to the day.

Meeting new people

This was a great day for meeting new people as well, including fellow speakers and people I'd had Twitter conversations with but not met face-to-face before.

Particular highlights were meeting local InfoSec superstar Keren Elazari and chatting to Tiffany Long, the OWASP Community Manager but I also had loads of great conversations with other presenters and other attendees, LobbyCon was definitely going strong.

OWASP Israel

“OWASP works!” — https://youtu.be/TfIky1agmDY?t=794

A few months back, Ian Amit gave a slightly brutal closing keynote at BSidesTLV lamenting the decline of the local InfoSec community. In that talk, he specifically praised the Israeli OWASP chapter for keeping regular meetings going and just generally staying active. The conference today was a great illustration of that strength, and it's a credit to the OWASP Israel board (led up to now by Avi Douglen, with Or Katz taking the lead going forward) that the Global OWASP annual conference, AppSecEU, is going to be in Tel Aviv for 2018.

These are exciting times for the local AppSec and InfoSec community and I’m looking forward to getting more involved in local and international OWASP activities in the future.

Thanks again to Avi, Or, Ofer, Hemed, Yossi and Irene (and all the others who volunteered their time and effort) for such a great conference!

HPKP is pinning^W pining for the fjords – A lesson on security absolutism?

Introduction

Scott Helme, champion of web security, posted a blog this week saying that he is giving up on HTTP Public Key Pinning (HPKP). Whilst other experts have started making similar noises (such as Ivan Ristic's similar blog post last year), Scott is especially passionate about web security standards (and I would strongly recommend following his Twitter feed and blog), so this would seem like a pretty serious "nail in the coffin" for this particular standard.

Scott’s blog does a great job of breaking down the various reasons for his decision but I want to try and pull out some wider points from this story about Information Security in general.

What is HPKP?

Once again, Scott does a great job of explaining this but, in a nutshell, HPKP is a way of telling a browser that it should only allow a user to browse an HTTPS site if the site certificate’s public key matches a public key which the site supplies in an HTTP header (which is subsequently cached). This means it is not enough for the certificate to be valid for the site, it must also be a specific certificate (or be signed by a specific certificate).
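For illustration, an HPKP header looks something like the following. The pin values here are placeholders, and note that the standard requires at least one backup pin as well as the pin matching the live certificate chain:

Public-Key-Pins: pin-sha256="PRIMARYKEYPINPLACEHOLDER="; pin-sha256="BACKUPKEYPINPLACEHOLDER="; max-age=5184000; includeSubDomains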

Whilst this adds an additional layer of security, it is hard to manage, and a small mistake can potentially lead to the site becoming inaccessible from all modern browsers with no practical way of recovering.

So, what can we learn from this? In the points below, I am purely using HPKP as an example and the purpose is not to give an opinion on HPKP specifically.

Cost/Benefit Analysis

When you are considering implementing a new Information Security control, do you understand the effort involved? That should include the upfront investment and the ongoing maintenance and consider not only actual monetary outlay but also manpower outlay.

HPKP sounds easy to implement (just add another header to your web server), but behind that header you actually need to implement multiple complex processes to be carried out on an ongoing basis, plus "disaster recovery" processes to address the risk of losing a particular certificate.

Also, how well do you understand the associated benefit? For HPKP, the associated benefit is preventing an attack where the attacker has somehow managed to produce a forged but still valid certificate. Now this is certainly possible but it’s hardly an everyday occurrence or an ability within the reach of a casual attacker.

Given the benefit, have you considered what other controls could be put in place for that cost but with a higher benefit? Is the cost itself even worth the disruption involved in implementing the control?

Business Impact

That brings us onto the next point, how will the new control impact the business? Is the new control going to bring operations to a screeching halt or is there even a risk that this might happen? How does that risk compare to the security risk you are trying to prevent? Have you asked the business this question?

For example, if your new control is going to make the sales team’s job take twice as long, you can almost certainly expect insurmountable pushback from the business unless you can demonstrate a risk that justifies this. Even if you can demonstrate a risk, you will probably need to find a compromise.

In the case of HPKP, in the short term there is potentially an immediate increase in workload for the teams responsible for managing certificates, and the operational risk of a permanent site lockout is always a possibility.

To summarise these two points, if you want to suggest a new security control, you had better make sure you have a solid business case that shows that it’s worth the effort.

This brings us neatly onto my final point.

The A+ Security Scorecard

A tendency has developed, especially with TLS configuration, cipher configuration and security header configuration, to give websites/web applications a score based on the strength of their security configuration. I believe that these scorecards are really useful tools for giving a snapshot of a site's security using a number of different measures.

However, this story makes me wonder if we understand (and articulate) the cost/benefit of achieving a high score well enough and whether the use of these scores may encourage “security absolutism” if improperly explained. This concept is nicely described by Troy Hunt, another AppSec rock star, but effectively represents the idea that if you don’t have every security control then you are not doing well enough. This is clearly not the right way to InfoSec.

In his blog, Scott says:

Given the inherent dangers of HPKP I am tempted to remove the requirement to use it from securityheaders.io and allow sites to achieve an A+ with all of the other headers and HTTPS, with a special marker being given for those few who do deploy HPKP instead.

I think maybe the real challenge here is not to change the scorecard but rather to change the expectation. Maybe we shouldn’t expect every site to achieve an A+ on every scorecard but rather to achieve a score which matches their risk level and exposure and maybe this should be clearer when generating or using this type of scorecard.

Additionally, we need to ensure that we are presenting this type of scorecard in the correct context alongside other site issues. There is a risk that getting a high score will be prioritised over other, more pressing or serious issues which do not have such a convenient way of measuring them or where the fix cannot be as neatly demonstrated.

Conclusion

Scorecards are really useful and convenient tools (and I certainly appreciate the people who have put the effort into developing them) but they may lead to poor security decisions if:

  1. Every site is expected to get the highest score regardless of the relative risk.
  2. We cannot demonstrate the relative importance of a high score compared to other, non-scorable issues.

Next time you produce or receive a security report, make sure you take this into account.

The OWASP Top 10 — An update and a chance to have your say

New developments

You can read my previous blog post about the flap around RC1 of the OWASP Top 10. Since then, there have been a number of important developments.

The first and biggest was the decision that the previous project leaders, Dave Wichers and Jeff Williams, would be replaced by Andrew van der Stock, who himself has extensive experience in AppSec and OWASP. Andrew later brought in Neil Smithline and Torsten Gigler to assist him in leading the project, and Brian Glas (who performed some excellent analysis on RC1) to assist with analysis of the newly collected data.

Next, the OWASP Top 10 was extensively discussed at the OWASP summit in May considering both how it got to where it is today and how it should continue in the future.

Key Outcomes and my Thoughts

The outcomes from the summit can be seen here and here, and the subsequent decisions by the project team are documented here. The most important points (IMHO) that came out of these sessions and the subsequent decisions were as follows:

  • There is a plan in place to require multiple project leaders from multiple organisations for all OWASP flagship projects to try and avoid the independence issues I discussed in my previous post.
  • It is almost certain that the controversial A7 (Insufficient Attack Protection) and A10 (Underprotected APIs) from RC1 will not appear in the next RC or the final version. The reason given is similar to the one in my previous post: these aren't vulnerabilities (or, more specifically, vulnerability categories). I am really pleased with this decision and I think it will make it much more straightforward to explain and discuss the Top 10 in a coherent way.
  • A7 and A10 were intended to be occupied by "forward looking" items. This will remain the case, and the discussion will be opened up to the community by way of a survey where AppSec professionals can provide their feedback on the additional vulnerability categories which they expect to be most important over the next few years. The survey is only open until 30th August 2017 and is available here. I would strongly recommend that anyone with AppSec knowledge/experience takes the time to complete this for the good of the new Top 10 version.
  • Additional time is being provided to supply data to be used in assessing the rest of the Top 10. The window is only open until 18th September 2017 and details are available here. I'm honestly not sure what the benefit of gathering additional data on the 8 current vulnerability categories is, aside from practice for next time.
  • An updated release timeline has been set with RC2 being targeted for 9th October 2017 to allow for feedback and additional modifications before the final release targeted for 18th November 2017.
  • In general, transparency is to be increased with feedback and preparation processes to be primarily based in the project’s Github repository going forward.
  • The OWASP Top 10 is “art” and not science. It is partially data based but intended to be significantly judgment based as well. We need to be clear about this when we are talking about the project.
  • The OWASP Top 10 is for everyone, but it is especially for CISOs rather than for developers. It is intended to capture the highest-risk vulnerability categories. Once again, developers, especially those working on a new project, should be using the OWASP Top 10 Proactive Controls project as their first reference rather than the main OWASP Top 10.

Conclusion

I am very pleased with the way this has turned out so far. I think that the concerns over such an important project have been taken seriously and steps have been taken to protect the integrity of the project and safeguard its future. I think Andrew, Neil, Torsten and Brian are in a great position to carry on the huge efforts which Dave and Jeff put into this project and maintain its position as OWASP's de facto #1 project.

At the same time, I think that this episode has provided an insight into the efforts and contributions required to progress an OWASP project, shown how an open approach leads to better feedback and contributions, and also highlighted other OWASP projects which are complementary to the Top 10. Overall, I think people should see this story as a positive outcome of a collaborative approach and feel encouraged to take part and contribute to this project and other OWASP projects.

WannaCry — Do you feel lucky?

Was it just ransomware?

During the hysteria around the WannaCry ransomware outbreak, a thought struck me.

A bit later on, I responded to another post where someone had suggested that we were “lucky” that it was only ransomware.

https://twitter.com/JoshCGrossman/status/863991233869484032

As I responded, the only thing that was "lucky" about this story was the fact that the WannaCry outbreak finally brought well-deserved attention to the incredibly dangerous exploits leaked by TheShadowBrokers in April.

How bad is a vulnerability?

I blogged about these exploits on my employer's blog precisely because we wanted to make sure that the company's clients had the relevant information to protect themselves. We don't blog about every vulnerability or issue that comes to light, but the unique danger posed by this leak meant that we decided it was important to prepare advice. As I said there, this is probably the worst Windows vulnerability since 2008.

Given the overall panic, it seems that despite the leak of *working exploits*, MS17-010 was still not taken seriously enough by Microsoft or by the industry. Microsoft in particular stuck to their guns and didn't patch XP and 2003 for non-paying customers until it became prudent from a PR perspective.

We in the Information Security industry need to find a better way of communicating the risk posed by a security issue. The frustrating “branded vulnerabilities” trend has led to risk assessment based on logo quality rather than actual potential for damage.

The fact is that, as noted in the 2016 DBIR (page 16), the most dangerous vulnerabilities are those for which real public exploits exist, such as in Metasploit. TheShadowBrokers' release effectively met that criterion, but despite this the outcry was less than for something like Heartbleed or the damp squib which was Badlock.

Just patch it?

It is clear that there are plenty of organisations where patching is difficult or impossible. For organisations where it is difficult, security professionals need to help them "choose their battles", and MS17-010 should have been a battle that was fought and won as soon as TheShadowBrokers' leak of live exploits was shown to be related to it.

However, for places where patching is impossible, and to be honest in all mature organisations, the focus needs to shift to an "anticipate breach" posture, whereby it is almost assumed that attackers will get in through unpatched vulnerabilities or just plain old phishing. In this model, the goal becomes preventing and detecting lateral movement through segmentation and better behavioural logging.

In Conclusion

So, in some ways maybe we got lucky that WannaCry drew so much attention to MS17-010, because it should now be easier to get buy-in to patch some of these specific flaws, just in time for the Metasploit module release (remember the DBIR metric above?).

However, despite this, we must now ask ourselves: what other, quieter malware was able to infiltrate company networks whilst this flaw remained unpatched or unconsidered? We have already seen a couple of examples of this.

We need to be better at articulating risk regarding known issues but we also need the detective controls to be ready for the unknown issues as well.

OWASP Top 10 2017 — What should be there?

https://www.flickr.com/photos/samchurchill/4182826573

But first

Before I start on that, I think it is important to acknowledge the enormous amount of work which Jeff Williams, Dave Wichers and others have put into the OWASP Top 10. Their efforts have made it the best known OWASP project and certainly the one thing that anyone in technology knows about Application Security. The current controversy and discussion has only arisen due to the project's high profile, and it is important to give credit to those who made that happen.

My background

I have a decade of IT Risk experience with the last few years mostly focussed on Application Security testing. In this time I have seen, tested and found vulnerabilities in web applications of many different sizes, types and technologies. At the same time I have also had experience explaining these vulnerabilities to client contacts and helping developers with practical mitigations. As such, whilst I cannot provide detailed statistics, I think I can provide a fair assessment of the key issues that web application developers are struggling with today.

More importantly, I work as a security consultant and have no actual or perceived allegiance to any solution or service. My interest is that we have effective tools and materials to help clients better understand application security risk overall.

So what should the Top 10 look like?

I have seen at least one criticism of the OWASP Top 10 which states that most of the categories should no longer be relevant. Unfortunately, recent experience has shown that many companies are still struggling with the basics and therefore many of the existing categories which have stayed in the OWASP Top 10 should remain. Here are my thoughts about the changes.

The Good: Removal of 2013 A10 — Unvalidated Redirects and Forwards

A good change in my opinion. This is clearly still a risk but is probably not serious enough to be in the Top 10. I had a call with a client not long ago where I was trying to mentally run through the Top 10 to guide the conversation and this one didn’t come to mind at all.

The Good con’t: 2017 A4 — Broken Access Control

This is a great change that makes explaining the list a lot easier. 2013 A4 and 2013 A7 were just too similar for such a short list and it made explaining things difficult.

The Bad(ish): 2017 A10 — Underprotected APIs

I can appreciate that this is a big enough issue that it merits its own item, even though fundamentally the security risks of APIs will include many of the other items in the Top 10.

Currently the text of 2017 A10 just talks about standard vulnerabilities that can affect all application types. I think that this item should be a little more focussed on issues which are specific to APIs or "AJAX"-style applications which use APIs to populate their web pages.

For example, it should specifically talk about mass assignment style vulnerabilities, where the API framework blindly accepts parameters and updates the database without checking them against a whitelist, or the opposite issue, where it provides too many data items in a response, e.g. the password field from the database.
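As a purely hypothetical illustration of mass assignment (the endpoint and field names are made up), consider an API that persists whatever JSON fields it receives:

POST /api/users HTTP/1.1
Host: api.example.com
Content-Type: application/json

{"username": "newuser", "password": "s3cret", "isAdmin": true}

If the framework maps the unexpected isAdmin field straight onto the database model without a whitelist, the attacker has just promoted themselves to administrator.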

It should also highlight the perils of misconfiguring Cross-Origin Resource Sharing (CORS) headers, which can effectively disable the same-origin policy. Maybe also the risks of JSONP.
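The classic CORS misconfiguration is reflecting whatever Origin the browser sends back in the response while also allowing credentials, which lets any site read authenticated responses. Illustrative response headers (the origin is a made-up attacker site):

Access-Control-Allow-Origin: https://evil.example.com
Access-Control-Allow-Credentials: true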

I would rename it to simply "API Vulnerabilities".

The Ugly: 2017 A7 — Insufficient Attack Protection

Let's set aside the independence issues that I have previously discussed.

I have spent a long time wrestling with defining IT Risk and IT Security Risk and one of the key principles I have found is that a risk cannot just be the absence of a control.

This new item describes the absence of controls in an application. Other items in the list describe broken controls, but this is the only one which actually talks about the absence of a new set of controls.

I 100% agree that the future of application security is applications which can better protect themselves. Clearly this is a widely-held view, which is why OWASP already has the OWASP Top 10 Proactive Controls, which includes "Implement Logging and Intrusion Detection" as its #8. That seems like the correct place for explaining what attack protection measures should be implemented.

I therefore think that this item should not appear in the list at all; rather, the "Implement Logging and Intrusion Detection" control should be enhanced with this content, leaving the Top 10 Risks containing only actual risks.

One spot left

So I have one spot currently unfilled in the Top 10. What will I choose?

Ironically, I agree with one of Contrast Security’s suggestions. Deserialisation Vulnerabilities should have their own spot on the Top 10.

I have the following reasons for this:

  • These issues have been around for a long time but have never received enough attention. They only really came to light in 2015 and are still poorly understood.
  • I think one of the reasons for this is that they are hard to understand and hard to casually exploit, especially within the confined time-frame of security testing.
  • They clearly affect a number of heavily used languages.
  • The severity is often critical leading to full Remote Code Execution on the web server, usually from the external Internet.
  • There are plenty of off-the-shelf products which are vulnerable to this. Some of them have been patched to fix it, the older ones have not.
  • Fixing the issue is not always straightforward or trivial.

In conclusion

I have already said previously that I think the OWASP Top 10 risks concept needs revamping and I stand by that.

However, in the short term, I think that keeping the focus on actual security risks, especially those which are poorly understood, will add the most value to the OWASP Top 10 2017. The Top 10 is a key tool for helping companies to understand and focus their application security efforts, but this will only remain the case if the list remains internally consistent and relevant.

The OWASP Top 10 — Response to the controversy from Jeff Williams

https://www.owasp.org/

The official response

Following my previous post about the OWASP Top 10 as well as the reaction from many others, Steve Ragan at CSO Online reached out to Contrast Security for their comments on the inclusion of “Insufficient Attack Protection” as the new A7.

Jeff Williams, who is one of the OWASP Top 10 co-authors as well as the CTO of Contrast Security, provided the response. My thoughts follow:

Contributions to the Project

The project is open for anyone to participate in. Unfortunately, like most OWASP projects, it is a huge amount of work and very very few contribute.

I think this is a fair and important point. As I previously highlighted, there was a minimal response to the original call for data, with only 11 companies responding with relatively large datasets and 13 additional companies with smaller datasets. As I said in my previous post, and in my conclusion again here, OWASP needs more feedback and more contributors.

The proactive addition of items

The project uses the open data call data to select and prioritize issues, but has also always looked to experts for ideas on what we could include that would drive the appsec community to get in front of problems instead of being reactive. In 2007 it was CSRF, which is still a top ten item supported by tons of data. In 2013 it was use of libraries with known vulnerabilities, again an obvious yet serious and underappreciated problem, and the T10 helped to refocus the industry on it.

Again, I think this is a fair point. Being forward-looking and pro-active is important in such a fast-moving industry.

Certainly in hindsight, "CSRF" and "Libraries with Known Vulnerabilities" were worthy additions to previous releases of the Top 10, but note that they are both "Risks", i.e. an issue/problem in the application. This is in keeping with the official title of the OWASP Top 10, which is "the OWASP Top 10 Web Application Security Risks".

In this case, “Insufficient Attack Protection” is not a “Risk”, it is the lack of a “Control”. Note that OWASP already has a less famous but also very valuable Top 10 Proactive Controls list which already has as its item #8 “Implement Logging and Intrusion Detection”.

Moreover, despite the assertion above that the project has "looked to experts for ideas", there is still no evidence of any discussion or consultation about the inclusion of A7, as I discussed in my previous post, nor is any further information on this provided in the response.

Lack of a Control ≠ a Risk

Depending on where you observe the problem from, isn’t the lack of a defense a security vulnerability? It just depends on what we expect from our code, our vantage point on security.

Lack of defence is certainly an issue, but in order to decide which controls should be put in place to defend an application, we first have to decide which risks/vulnerabilities are most concerning and prioritise accordingly.

The OWASP Top 10 was supposed to highlight the biggest risks to consider and including a control as part of this list confuses this assessment and takes up a space which could be taken by an actual application security risk.

Much of the appsec industry is focused on creating clean code, rather than protecting against attacks. But clearly we need both, as all the focus on hygiene hasn’t worked.

Agreed and again, I imagine this is why “Implement Logging and Intrusion Detection” is on the Top 10 Proactive Controls list.

In conclusion

Disappointingly, the response does not substantively address what I think is one of the key concerns with the latest release which is the lack of an appearance of independence. The response does not provide any further information to fill in the gap between the raw data and the final list nor demonstrate which other experts may have been consulted outside of the project team.

I think the response’s final sentence clearly demonstrates what the next steps should be:

I hope everyone interested in helping with the OWASP T10 will participate in the process, and discuss the pros and cons of this latest release candidate.

I set out my opinions for the future of the Top 10 risks project in my previous post but it is clear that there will still be a 2017 release.

The official instructions on the OWASP Top 10 site state:

Constructive comments on this OWASP Top 10–2017 Release Candidate should be forwarded via email to OWASP-TopTen@lists.owasp.org. Private comments may be sent to dave.wichers@owasp.org. Anonymous comments are welcome. All non-private comments will be catalogued and published at the same time as the final public release. Comments recommending changes to the items listed in the Top 10 should include a complete suggested list of 10 items, along with a rationale for any changes. All comments should indicate the specific relevant page and section.

I would therefore urge anyone in the application security industry to provide public comments by June 30, 2017 as has been requested by the project team. If enough constructive comments are submitted in the requested format, we will be in a good position at the final release of the list to assess to what extent the project team has taken the industry’s feedback into consideration.