Scott Helme, a champion of web security, posted a blog this week saying that he is giving up on HTTP Public Key Pinning (HPKP). Whilst other experts have started making similar noises (Ivan Ristic made a comparable point in a blog post last year), Scott is especially passionate about web security standards (and I would strongly recommend following his Twitter feed and blog), so this seems like a pretty serious "nail in the coffin" for this particular standard.
Scott's blog does a great job of breaking down the various reasons for his decision, but I want to try to pull out some wider points from this story about Information Security in general.
What is HPKP?
Once again, Scott does a great job of explaining this but, in a nutshell, HPKP is a way of telling a browser that it should only allow a user to browse an HTTPS site if the site certificate's public key matches a public key which the site supplies in an HTTP header (which is subsequently cached). This means it is not enough for the certificate to be valid for the site; it must also be a specific certificate (or be signed by a specific certificate).
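For illustration, the header in question looks roughly like this (the pin values below are placeholders, not real hashes; the spec also requires at least one backup pin for a key you hold in reserve):

```
Public-Key-Pins: pin-sha256="PRIMARY_KEY_HASH_BASE64="; pin-sha256="BACKUP_KEY_HASH_BASE64="; max-age=5184000; includeSubDomains
```

Each pin-sha256 value is the base64-encoded SHA-256 hash of a certificate's public key, and max-age (here 60 days) tells the browser how long to cache the pins. That caching is precisely why a mistake is so hard to recover from: browsers that have cached a bad pin will keep rejecting the site until the cache expires.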
Whilst this adds an additional layer of security, it is hard to manage and a small mistake can potentially lead to the site becoming inaccessible from all modern browsers with no practical way of recovering.
So, what can we learn from this? In the points below, I am purely using HPKP as an example and the purpose is not to give an opinion on HPKP specifically.
When you are considering implementing a new Information Security control, do you understand the effort involved? That should include both the upfront investment and the ongoing maintenance, and should consider not only monetary outlay but also staff time.
HPKP sounds easy to implement (just add another header to your web server), but behind that header you need to implement multiple complex processes to be carried out on an ongoing basis, plus "disaster recovery" processes to address the risk of losing a particular certificate.
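As a sketch of the kind of process hiding behind that header, this is how a pin value can be computed with openssl. A freshly generated throwaway key stands in here for a real certificate key; in practice you would run this against your live key and a securely stored backup key, and repeat it on every key rotation:

```shell
# Compute an HPKP pin: base64(SHA-256(DER-encoded public key / SPKI)).
# A throwaway 2048-bit RSA key stands in for a real certificate key here.
openssl genrsa 2048 2>/dev/null \
  | openssl rsa -pubout -outform DER 2>/dev/null \
  | openssl dgst -sha256 -binary \
  | base64
```

The output is a 44-character base64 string that goes into a pin-sha256 directive. Keeping that value, the backup key, and the deployed header in sync across every rotation is exactly the ongoing operational burden described above.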
Also, how well do you understand the associated benefit? For HPKP, the associated benefit is preventing an attack where the attacker has somehow managed to produce a forged but still valid certificate. Now this is certainly possible but it’s hardly an everyday occurrence or an ability within the reach of a casual attacker.
Given the benefit, have you considered whether other controls could be put in place for the same cost but with a higher benefit? And is the benefit even worth the disruption involved in implementing the control?
That brings us on to the next point: how will the new control impact the business? Is the new control going to bring operations to a screeching halt, or is there even a risk that this might happen? How does that risk compare to the security risk you are trying to mitigate? Have you asked the business this question?
For example, if your new control is going to make the sales team’s job take twice as long, you can almost certainly expect insurmountable pushback from the business unless you can demonstrate a risk that justifies this. Even if you can demonstrate a risk, you will probably need to find a compromise.
In the case of HPKP, there is an immediate increase in workload for the teams responsible for managing certificates, and the operational risk of a permanent site lockout is ever-present.
To summarise these two points, if you want to suggest a new security control, you had better make sure you have a solid business case that shows that it’s worth the effort.
This brings us neatly onto my final point.
The A+ Security Scorecard
A tendency has developed, especially with TLS, cipher and security header configuration, to give websites/web applications a score based on the strength of their security configuration. I believe that these scorecards are really useful tools for giving a snapshot of a site's security using a number of different measures.
However, this story makes me wonder if we understand (and articulate) the cost/benefit of achieving a high score well enough and whether the use of these scores may encourage “security absolutism” if improperly explained. This concept is nicely described by Troy Hunt, another AppSec rock star, but effectively represents the idea that if you don’t have every security control then you are not doing well enough. This is clearly not the right way to InfoSec.
In his blog, Scott says:
Given the inherent dangers of HPKP I am tempted to remove the requirement to use it from securityheaders.io and allow sites to achieve an A+ with all of the other headers and HTTPS, with a special marker being given for those few who do deploy HPKP instead.
I think the real challenge here is not to change the scorecard but rather to change the expectation. Perhaps we shouldn't expect every site to achieve an A+ on every scorecard, but rather a score that matches its risk level and exposure, and perhaps this should be made clearer when generating or using this type of scorecard.
Additionally, we need to ensure that we are presenting this type of scorecard in the correct context alongside other site issues. There is a risk that getting a high score will be prioritised over other, more pressing or serious issues which cannot be measured so conveniently or whose fix cannot be so neatly demonstrated.
Scorecards are really useful and convenient tools (and I certainly appreciate the people who have put the effort into developing them) but they may lead to poor security decisions if:
- Every site is expected to get the highest score regardless of the relative risk.
- We cannot demonstrate the relative importance of a high score compared to other, non-scorable issues.
Next time you produce or receive a security report, make sure you take this into account.