Month: August 2017

HPKP is pinning^W pining for the fjords – A lesson on security absolutism?

Introduction

Scott Helme, champion of web security, posted a blog post this week saying that he is giving up on HTTP Public Key Pinning (HPKP). Whilst other experts have made similar noises (such as Ivan Ristic’s blog post last year), Scott is especially passionate about web security standards (and I would strongly recommend following his Twitter feed and blog), so this would seem like a pretty serious “nail in the coffin” for this particular standard.

Scott’s blog does a great job of breaking down the various reasons for his decision but I want to try and pull out some wider points from this story about Information Security in general.

What is HPKP?

Once again, Scott does a great job of explaining this but, in a nutshell, HPKP is a way of telling a browser that it should only allow a user to browse an HTTPS site if the site certificate’s public key matches a public key which the site supplies in an HTTP header (which is subsequently cached). This means it is not enough for the certificate to be valid for the site, it must also be a specific certificate (or be signed by a specific certificate).
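As a concrete illustration, a site opting into HPKP sends something like the following response header (the pin values here are placeholders, not real hashes):

```http
Public-Key-Pins: pin-sha256="PLACEHOLDER_PRIMARY_PIN_BASE64=";
                 pin-sha256="PLACEHOLDER_BACKUP_PIN_BASE64=";
                 max-age=5184000; includeSubDomains
```

Each pin-sha256 value is a base64-encoded SHA-256 hash of a public key, the specification requires at least one backup pin for a key not currently in use, and max-age (here 60 days) controls how long browsers cache and enforce the pins.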

Whilst this adds an additional layer of security, it is hard to manage, and a small mistake can potentially leave the site inaccessible from all modern browsers, with no practical way of recovering until the cached pins expire.

So, what can we learn from this? In the points below, I am purely using HPKP as an example and the purpose is not to give an opinion on HPKP specifically.

Cost/Benefit Analysis

When you are considering implementing a new Information Security control, do you understand the effort involved? That should include both the upfront investment and the ongoing maintenance, and should consider not only actual monetary outlay but also manpower.

HPKP sounds easy to implement (just add another header to your web server), but behind that header you need multiple complex processes to be carried out on an ongoing basis, plus “disaster recovery” processes to address the risk of the loss of a particular certificate.
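To illustrate part of that operational burden: the pin values are not certificates but base64-encoded SHA-256 hashes of the DER-encoded SubjectPublicKeyInfo, which an operator must compute correctly for the live key and every backup key. A minimal sketch of that computation in Python, assuming you have already extracted the SPKI bytes (e.g. with openssl):

```python
import base64
import hashlib


def hpkp_pin(spki_der: bytes) -> str:
    """Compute an HPKP pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")


def pin_from_pem(pem: str) -> str:
    """Strip the PEM armour from a public key and pin the underlying DER bytes."""
    body = "".join(
        line for line in pem.strip().splitlines() if not line.startswith("-----")
    )
    return hpkp_pin(base64.b64decode(body))
```

Getting this wrong (for example, hashing the whole certificate instead of the SPKI) still produces a syntactically valid header that browsers will cache and enforce, which is exactly the lockout failure mode Scott describes.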

Also, how well do you understand the associated benefit? For HPKP, the associated benefit is preventing an attack where the attacker has somehow managed to produce a forged but still valid certificate. This is certainly possible, but it is hardly an everyday occurrence, nor an ability within the reach of a casual attacker.

Given the benefit, have you considered what other controls could be put in place for that cost but with a higher benefit? Is the cost itself even worth the disruption involved in implementing the control?

Business Impact

That brings us onto the next point: how will the new control impact the business? Is the new control going to bring operations to a screeching halt, or is there even a risk that this might happen? How does that risk compare to the security risk you are trying to prevent? Have you asked the business this question?

For example, if your new control is going to make the sales team’s job take twice as long, you can almost certainly expect insurmountable pushback from the business unless you can demonstrate a risk that justifies this. Even if you can demonstrate a risk, you will probably need to find a compromise.

In the case of HPKP, in the short term there is an immediate increase in workload for the teams responsible for managing certificates, and the operational risk of permanent site lockout is always a possibility.

To summarise these two points, if you want to suggest a new security control, you had better make sure you have a solid business case that shows that it’s worth the effort.

This brings us neatly onto my final point.

The A+ Security Scorecard

A tendency has developed, especially with TLS configuration, cipher configuration and security header configuration, to give websites/web applications a score based on the strength of their security configuration. I believe that these scorecards are really useful tools for giving a snapshot of a site’s security across a number of different measures.

However, this story makes me wonder whether we understand (and articulate) the cost/benefit of achieving a high score well enough, and whether the use of these scores may encourage “security absolutism” if improperly explained. This concept is nicely described by Troy Hunt, another AppSec rock star: effectively, it is the idea that if you don’t have every security control then you are not doing well enough. This is clearly not the right way to InfoSec.

In his blog, Scott says:

Given the inherent dangers of HPKP I am tempted to remove the requirement to use it from securityheaders.io and allow sites to achieve an A+ with all of the other headers and HTTPS, with a special marker being given for those few who do deploy HPKP instead.

I think maybe the real challenge here is not to change the scorecard but rather to change the expectation. Maybe we shouldn’t expect every site to achieve an A+ on every scorecard, but rather a score which matches its risk level and exposure, and maybe this should be made clearer when generating or using this type of scorecard.

Additionally, we need to ensure that we are presenting this type of scorecard in the correct context, alongside other site issues. There is a risk that achieving a high score will be prioritised over other, more pressing or serious issues which do not have such a convenient way of being measured or whose fix cannot be so neatly demonstrated.

Conclusion

Scorecards are really useful and convenient tools (and I certainly appreciate the people who have put the effort into developing them) but they may lead to poor security decisions if:

  1. Every site is expected to get the highest score regardless of the relative risk.
  2. We cannot demonstrate the relative importance of a high score compared to other, non-scorable issues.

Next time you produce or receive a security report, make sure you take this into account.

The OWASP Top 10 — An update and a chance to have your say

New developments

You can read my previous blog post about the flap around RC1 of the OWASP Top 10. Since then, there have been a number of important developments.

The first and biggest was the decision that the previous project leaders, Dave Wichers and Jeff Williams, would be replaced by Andrew van der Stock, who himself has extensive experience in AppSec and OWASP. Andrew later brought in Neil Smithline and Torsten Gigler to assist him in leading the project, and Brian Glas (who performed some excellent analysis on RC1) to assist with analysis of newly collected data.

Next, the OWASP Top 10 was extensively discussed at the OWASP Summit in May, considering both how it got to where it is today and how it should continue in the future.

Key Outcomes and my Thoughts

The outcomes from the summit can be seen here and here, and the subsequent decisions by the project team are documented here. The most important points (IMHO) that came out of these sessions and the subsequent decisions were as follows:

  • There is a plan in place to require multiple project leaders from multiple organisations for all OWASP flagship projects to try and avoid the independence issues I discussed in my previous post.
  • It is almost certain that the controversial A7 (Insufficient Attack Protection) and A10 (Underprotected APIs) from RC1 will not appear in the next RC or the final version. The reason given is similar to the one I gave in my previous post: these aren’t vulnerabilities (or, more specifically, vulnerability categories). I am really pleased with this decision and I think it will make it much more straightforward to explain and discuss the Top 10 in a coherent way.
  • A7 and A10 were intended to be occupied by “forward looking” items. This will remain the case, and this discussion will be opened up to the community by way of a survey where AppSec professionals can provide their feedback on the additional vulnerability categories which they expect to be most important over the next few years. The survey is only open until 30th August 2017 and is available here. I would strongly recommend that anyone with AppSec knowledge/experience takes the time to complete this for the good of the new Top 10 version.
  • Additional time is being provided to supply data to be used in assessing the rest of the Top 10. The window is only open until 18th September 2017; details are available here. I’m honestly not sure what the benefit of gathering additional data on the 8 current vulnerability categories is, aside from practice for next time.
  • An updated release timeline has been set with RC2 being targeted for 9th October 2017 to allow for feedback and additional modifications before the final release targeted for 18th November 2017.
  • In general, transparency is to be increased with feedback and preparation processes to be primarily based in the project’s Github repository going forward.
  • The OWASP Top 10 is “art” and not science. It is partially data based but intended to be significantly judgment based as well. We need to be clear about this when we are talking about the project.
  • The OWASP Top 10 is for everyone, but especially for CISOs, rather than for developers. It is intended to capture the highest-risk vulnerability categories. Once again, developers, especially those working on a new project, should be using the OWASP Top 10 Proactive Controls project as their first reference rather than the main OWASP Top 10.

Conclusion

I am very pleased with the way this has turned out so far. I think that the concerns over such an important project have been taken seriously and steps have been taken to protect the integrity of the project and safeguard its future. I think Andrew, Neil, Torsten and Brian are in a great position to carry on the huge efforts which Dave and Jeff put into this project and maintain its position as OWASP’s de facto #1 project.

At the same time, I think that this episode has provided an insight into the efforts and contributions required to progress an OWASP project, shown how an open approach leads to better feedback and contributions, and also highlighted other OWASP projects which are complementary to the Top 10. Overall, I think people should see this story as a positive outcome of a collaborative approach and feel encouraged to take part and contribute to this project and other OWASP projects.