The Grinch who stole AppSecEU

A cultural experience

As an Orthodox Jew, I only ever saw Christmas, and the meaning, stories and culture associated with it, second-hand.

However, when it was announced earlier this year that OWASP’s AppSecEU conference, one of the few truly global application security conferences, was going to be held on my doorstep in Tel Aviv in 2018, it truly felt like Christmas was coming. My excitement built from the energy of the OWASP Summit in May to my first time speaking at an OWASP local chapter meeting in June about the difficulties and improvements with the OWASP Top 10 project (which I later spent some time proofreading and offering minor fixes for).

It continued with my presentation at the regional OWASP conference, AppSecIL (over 700 participants), and with spending a little time contributing to the OWASP Top 10 Proactive Controls project and the OWASP Juice Shop project. On that high, I had started preparing CFP submissions for AppSecEU and had even included the high-quality training that usually comes with the conference in our company’s training plan for next year.

(Before the shock…)

However, this came to a crashing halt last night when I came back online after the Jewish Sabbath and discovered that this December, the Grinch truly had stolen Christmas. In what appears to be an unprecedented move, the OWASP Global Board had voted at their December meeting to arbitrarily move the conference to the UK (again) instead of Tel Aviv, and had waited until Friday night, the 23rd of December, to announce this. After the build-up throughout 2017, this felt like a kick in the gut.

Of course, what I felt would have been nothing compared with how the local organisers must have felt, having spent hundreds of volunteer hours planning this conference together with the global OWASP team.

But why?


At stupid o’clock on Saturday night, I dug out the meeting recording to try and figure out what had happened. A number of reasons were discussed in the meeting (more on those later) but the thing that stuck out was pretty much the very first question:

Tom Brennan (Board Secretary): “Is anyone representing the local team…on this call to give their comments and feedback on those statements.”

Karen Staley (Executive Director): “I have spoken to…Avi in great detail…What I share with you…is absolutely what we discussed over the phone…”

I was truly astonished by this, not to mention by the remainder of the segment, where the entire discussion of expected problems with the conference seemed to be framed as though these concerns were coming from the local OWASP chapter, or as though the issues were the fault of a disorganised local chapter.

The board went on to accept this at face value (although I appreciate there was some pushback from some members). In relatively short order, the board voted unanimously to take the conference away from Tel Aviv and move it somewhere else, specifically to London. This despite Tel Aviv being the only city other than Redmond where Microsoft holds its own BlueHat security conference, and despite the timing, which would have coincided with, but not clashed with, CyberWeek at Tel Aviv University, an event which last year had 6,000 attendees from over 50 countries.

Miscommunications


It sounded to me like there had been some sort of miscommunication, as from my interactions with the local team it seemed that planning was well underway. OWASP had even sent an employee over to attend AppSecIL and check out the venue which had been agreed. Additionally, I know that Avi, the conference chair, has lived and breathed application security, and especially OWASP, for years now.

I waited impatiently to hear from the local chapter, and once their statement was released, it became clear just how badly the local chapter had been screwed over. As I said, Avi is a very strong proponent of OWASP, and for him to have written such a strongly worded statement tells you something about the circumstances.

The statement from OWASP Israel

I would strongly recommend reading the full statement to understand the situation: whilst it is long, it comprehensively explains the extent to which the Israeli team have been shoddily treated.

However, I do want to pull out a few key sentences from that statement:

“The OWASP Israel chapter is vehemently opposed to this move, and we do not accept nor agree with the official statement in any way.”

“It should be noted that this decision was made WITHOUT consulting with the local chapter and conference committee, or even gathering the relevant information from us.”

“Regardless of what the OWASP Leadership believes about the AppSec community in Israel, I have the privilege of being part of one of the strongest, most active OWASP communities in the world.”

“For those companies that usually support or sponsor OWASP Foundation and AppSec conferences, I call on you to continue to support the OWASP communities and its mission — but support the local chapters that are actually doing the work.”

Closing thoughts

The conference that never will be?

The time I have spent writing this was supposed to be set aside for polishing up and sending some more CFP submissions for AppSecEU. Right now, I don’t know if I want to do that. If I get a CFP entry accepted, I don’t relish having to get approval for travel and accommodation from my company for this conference after what the OWASP board has done.

I call on the OWASP Board to urgently consider the following points and act to fix this injustice, ideally restoring AppSecEU 2018 to Tel Aviv:

  • Can the December 6th vote on AppSecEU really be considered valid, given that the entire discussion was predicated on the local chapter’s agreement? Surely it is clear that the board needs to receive a presentation from the OWASP Israel team on their position, as it was not fairly presented at the board meeting.
  • How was it considered acceptable to release this news on Friday night, 23 December?
  • How can the board ensure that this type of catastrophic misrepresentation does not occur again?
  • How does this action create a “stronger” and “more engaged” community?
  • How is it possible that several months ago the OWASP board withdrew support for the Project Summit 2018, yet the new Executive Director has effectively based the change to AppSecEU on having spoken to that summit’s organiser and apparently joining up with it (rather than speaking with the London chapter leaders)?
  • Is it appropriate that such a major decision was considered to be “one little thing” (1:22:52 in the recording)?

I have been excited to get more and more involved with donating my time and energy to OWASP during the course of this year. I will be closely monitoring how this issue is addressed and I will have to consider my future OWASP involvement on this basis.

HPKP is pinning^W pining for the fjords – A lesson on security absolutism?

Introduction

Scott Helme, champion of web security, posted a blog this week saying that he is giving up on HTTP Public Key Pinning (HPKP). Whilst other experts have been making similar noises for a while (such as Ivan Ristic in a blog post last year), Scott is especially passionate about web security standards (and I would strongly recommend following his Twitter feed and blog), so this would seem like a pretty serious “nail in the coffin” for this particular standard.

Scott’s blog does a great job of breaking down the various reasons for his decision but I want to try and pull out some wider points from this story about Information Security in general.

What is HPKP?

Once again, Scott does a great job of explaining this but, in a nutshell, HPKP is a way of telling a browser that it should only allow a user to browse an HTTPS site if the site certificate’s public key matches a public key which the site supplies in an HTTP header (which is subsequently cached). This means it is not enough for the certificate to be valid for the site, it must also be a specific certificate (or be signed by a specific certificate).
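
To make this concrete, a typical HPKP response header looks something like the following (the pin values here are placeholders rather than real key hashes):

    Public-Key-Pins: pin-sha256="<base64 SHA-256 hash of the current key>";
                     pin-sha256="<base64 SHA-256 hash of a backup key>";
                     max-age=5184000; includeSubDomains

The browser caches these pins for max-age seconds (60 days in this example) and will then refuse an otherwise-valid certificate chain for the site unless it contains one of the pinned public keys. The standard requires at least two pins, one of which must be a backup key that is not in the current chain.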

Whilst this adds an additional layer of security, it is hard to manage and a small mistake can potentially lead to the site becoming inaccessible from all modern browsers with no practical way of recovering.

So, what can we learn from this? In the points below, I am purely using HPKP as an example and the purpose is not to give an opinion on HPKP specifically.

Cost/Benefit Analysis

When you are considering implementing a new information security control, do you understand the effort involved? That should include both the upfront investment and the ongoing maintenance, and should consider not only monetary outlay but also manpower.

HPKP sounds easy to implement (just add another header to your web server) but behind that header you need multiple complex processes running on an ongoing basis, plus “disaster recovery” processes to address the risk of losing a particular certificate.
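
To give a flavour of what sits behind that header, this is roughly how a single pin value is generated (a sketch assuming a PEM certificate in cert.pem and standard OpenSSL tooling):

    # Extract the public key, convert to DER, hash it and base64-encode the result
    openssl x509 -in cert.pem -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary \
      | base64

And that is the easy part: you also need a backup key generated and stored securely offline, a process for rotating pins before the max-age window expires, and a rehearsed recovery plan for when a pinned key is lost or compromised.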

Also, how well do you understand the associated benefit? For HPKP, the benefit is preventing an attack where the attacker has somehow managed to produce a forged but still valid certificate. Now, this is certainly possible, but it’s hardly an everyday occurrence or an ability within the reach of a casual attacker.

Given the benefit, have you considered what other controls could be put in place for that cost but with a higher benefit? Is the cost itself even worth the disruption involved in implementing the control?

Business Impact

That brings us onto the next point, how will the new control impact the business? Is the new control going to bring operations to a screeching halt or is there even a risk that this might happen? How does that risk compare to the security risk you are trying to prevent? Have you asked the business this question?

For example, if your new control is going to make the sales team’s job take twice as long, you can almost certainly expect insurmountable pushback from the business unless you can demonstrate a risk that justifies this. Even if you can demonstrate a risk, you will probably need to find a compromise.

In the case of HPKP, there is potentially an immediate short-term increase in workload for the teams responsible for managing certificates, and the operational risk of a permanent site lockout is ever-present.

To summarise these two points, if you want to suggest a new security control, you had better make sure you have a solid business case that shows that it’s worth the effort.

This brings us neatly onto my final point.

The A+ Security Scorecard

A tendency has developed, especially with TLS configuration, cipher configuration and security header configuration, to give websites/web applications a score based on the strength of their security configuration. I believe that these scorecards are really useful tools for giving a snapshot of a site’s security using a number of different measures.
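
To illustrate the idea, here is my own toy sketch of a header scorecard (not the actual logic of securityheaders.io or any other service; it assumes Python with the requests library, and example.com is just a placeholder URL):

    # Toy security header check -- a sketch only, not a real scorecard implementation
    import requests

    SECURITY_HEADERS = [
        "Strict-Transport-Security",
        "Content-Security-Policy",
        "X-Content-Type-Options",
        "X-Frame-Options",
        "Referrer-Policy",
        "Public-Key-Pins",  # the header this post is about
    ]

    def score_site(url):
        # Header lookups are case-insensitive in requests
        headers = requests.get(url, timeout=10).headers
        present = [name for name in SECURITY_HEADERS if name in headers]
        print(f"{url}: {len(present)}/{len(SECURITY_HEADERS)} security headers present")
        for name in SECURITY_HEADERS:
            print(f"  {name}: {headers.get(name, 'MISSING')}")
        return present

    score_site("https://example.com")

The simplicity is exactly the point: a scorecard can only see what is trivially measurable from the outside, which is what makes it convenient and also what makes it incomplete.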

However, this story makes me wonder if we understand (and articulate) the cost/benefit of achieving a high score well enough and whether the use of these scores may encourage “security absolutism” if improperly explained. This concept is nicely described by Troy Hunt, another AppSec rock star, but effectively represents the idea that if you don’t have every security control then you are not doing well enough. This is clearly not the right way to InfoSec.

In his blog, Scott says:

Given the inherent dangers of HPKP I am tempted to remove the requirement to use it from securityheaders.io and allow sites to achieve an A+ with all of the other headers and HTTPS, with a special marker being given for those few who do deploy HPKP instead.

I think maybe the real challenge here is not to change the scorecard but rather to change the expectation. Maybe we shouldn’t expect every site to achieve an A+ on every scorecard but rather to achieve a score which matches their risk level and exposure and maybe this should be clearer when generating or using this type of scorecard.

Additionally, we need to ensure that we are presenting this type of scorecard in the correct context, alongside other site issues. There is a risk that achieving a high score will be prioritised over other, more pressing or serious issues which do not have such a convenient measure or where the fix cannot be as neatly demonstrated.

Conclusion

Scorecards are really useful and convenient tools (and I certainly appreciate the people who have put the effort into developing them) but they may lead to poor security decisions if:

  1. Every site is expected to get the highest score regardless of the relative risk.
  2. We cannot demonstrate the relative importance of a high score compared to other, non-scorable issues.

Next time you produce or receive a security report, make sure you take this into account.

The OWASP Top 10 — An update and a chance to have your say

New developments

You can read my previous blog post about the flap around RC1 of the OWASP Top 10. Since then, there have been a number of important developments.

The first and biggest was that it was decided that the previous project leaders, Dave Wichers and Jeff Williams, would be replaced by Andrew van der Stock, who himself has extensive experience in AppSec and OWASP. Andrew later brought in Neil Smithline and Torsten Gigler to assist him in leading the project, and Brian Glas (who performed some excellent analysis of RC1) to assist with analysis of newly collected data.

Next, the OWASP Top 10 was extensively discussed at the OWASP Summit in May, considering both how it got to where it is today and how it should continue in the future.

Key Outcomes and my Thoughts

The outcomes from the summit can be seen here and here, and the subsequent decisions by the project team are documented here. The most important points (IMHO) that came out of these sessions and the subsequent decisions were as follows:

  • There is a plan in place to require multiple project leaders from multiple organisations for all OWASP flagship projects to try and avoid the independence issues I discussed in my previous post.
  • It is almost certain that the controversial A7 (Insufficient Attack Protection) and A10 (Underprotected APIs) from RC1 will not appear in the next RC or the final version. The reason given is similar to the one in my previous post: these aren’t vulnerabilities (or more specifically, vulnerability categories). I am really pleased with this decision and I think it will make it much more straightforward to explain and discuss the Top 10 in a coherent way.
  • A7 and A10 were intended to be occupied by “forward looking” items. This will remain the case, and the discussion is being opened up to the community by way of a survey where AppSec professionals can provide their feedback on the additional vulnerability categories which they expect to be most important over the next few years. The survey is only open until 30th August 2017 and is available here. I would strongly recommend that anyone with AppSec knowledge/experience takes the time to complete this for the good of the new Top 10 version.
  • Additional time is being provided to supply data to be used in assessing the rest of the Top 10. The submission window is only open until 18th September 2017 and the form is available here. I’m honestly not sure what the benefit of gathering additional data on the 8 current vulnerability categories is, aside from practice for next time.
  • An updated release timeline has been set with RC2 being targeted for 9th October 2017 to allow for feedback and additional modifications before the final release targeted for 18th November 2017.
  • In general, transparency is to be increased with feedback and preparation processes to be primarily based in the project’s Github repository going forward.
  • The OWASP Top 10 is “art” and not science. It is partially data based but intended to be significantly judgment based as well. We need to be clear about this when we are talking about the project.
  • The OWASP Top 10 is for everyone, but especially for CISOs, rather than for developers. It is intended to capture the highest-risk vulnerability categories. Once again, developers, especially those working on a new project, should be using the OWASP Top 10 Proactive Controls project as their first reference rather than the main OWASP Top 10.

Conclusion

I am very pleased with the way this has turned out so far. I think that the concerns over such an important project have been taken seriously and steps have been taken to protect the integrity of the project and safeguard its future. I think Andrew, Neil, Torsten and Brian are in a great position to carry on the huge efforts which Dave and Jeff put into this project and maintain its position as OWASP’s de facto #1 project.

At the same time, I think that this episode has provided an insight into the efforts and contributions required to progress an OWASP project, shown how an open approach leads to better feedback and contributions, and also highlighted other OWASP projects which are complementary to the Top 10. Overall, I think people should see this story as a positive outcome of a collaborative approach and feel encouraged to take part and contribute to this project and other OWASP projects.

Daily Pen Test reports — Pros and Cons

Some clients for whom we perform security testing request that we report all findings on a daily basis.

Now, I am 100% behind reporting progress in terms of what has been tested (assuming there are multiple elements) or, more importantly, reporting problems in progressing as soon as possible. However, some clients expect all findings to be reported daily on top of this.

I wanted to jot down some thoughts on some pros and cons to this approach.

Advantages

A1: Feeling of progress

The client feels like we are working, progressing and finding stuff. (Although status reporting without findings should also mostly accomplish this).

A2: Immediate feedback and fix

The client receives immediate feedback on findings and can start to look at how to fix them even before we finish testing.

They may even be able to fix the finding and allow us to retest before the end of testing. I am always a little wary of the client making changes to an application in the middle of testing, but if fixing one thing is going to break something else, that will happen regardless of whether the change is made during the test or after it.

A3: Enforces reporting as you go

There is a tendency for consultants to save all the reporting for the end of the project. Hopefully they took enough screenshots along the way but, even so, suddenly you are at the end of the project with 20 findings to write up. Having a daily report ensures that findings are written up as they are found, whilst they are still fresh in mind.

Disadvantages

D1: Time consuming

Whilst we would have to write up all findings anyway, it is still more time consuming to prepare a report daily. The report has to go through a QA process every day instead of just once, and if it is necessary to combine reports from multiple people it gets even more complicated, especially if we are using a complex reporting template.

D2: Difficult to update already reported findings

Sometimes we will find something and only afterwards find another angle or another element to the issue which means that the finding needs to be updated. This leads to more duplicated effort with the finding being reviewed multiple times and the client having to read and understand the finding multiple times.

D3: Less time to consider findings in detail

Sometimes it takes time to consider the real impact of a finding. For example, what is the real risk? Can the issue only be exploited by an administrator? Will it only be relevant in certain circumstances? Having to rush the finding out in a daily report loses that thinking time and can lead to an inaccurate initial risk rating.

D4: Getting the report ready in time

Every day becomes a deadline day, with a race to get the report ready in time. This can disrupt the testing rhythm and mean that consultants have to break from testing to prepare the daily report, losing focus and momentum.

D5: Expectation of linear progress

Testing doesn’t progress in a linear fashion. A consultant might spend a lot of time trying to make progress on a particular test one day, then on another day find a bunch of quick, lower risk findings. A daily report creates an expectation of news every day and a feeling that no news means a lack of progress.

D6: Increased likelihood of mistakes

With the increased pressure of daily output, the likelihood of mistakes is also increased as report preparers are under pressure to deliver the daily report by the deadline and reviewers are under pressure to quickly release the report to the client.

D7: It might not even get to the client!

If there are a few people in the review process and just one of them is delayed in looking at the report or has a query, the report may not make it to the client in time to be relevant before the next day’s report is released anyway!

D8: One size doesn’t fit all

Once you get into the habit of expecting daily reports, or you create that expectation with the client, suddenly they are expected for every project regardless of whether they make sense. This can mean that ongoing discussion with the client is discouraged because “we’re doing a daily report anyway”, or alternatively that a project which requires in-depth thought and research is constantly disturbed by unhelpful daily reports.

Conclusions

I agree that it is a bad idea to do a load of testing and then have the client only see output weeks later, especially where there are particularly serious findings that immediately expose the client to serious risk.

However, the need to provide a continual stream of updates wastes time, lowers the quality of findings and disturbs the progression of the test.

As such, whilst the reporting format should be discussed with the client at the start of the project, the aim should be to agree on the following points by communicating the reasons discussed in this post:

  1. If this is a large project where multiple parts are being tested one after the other in a short time-frame, then it is worth reporting on progress across these parts on a daily basis.
  2. Problems with testing should always be reported as soon as possible, with a daily status update on open issues to make sure they are not forgotten.
  3. Critical threats which immediately put the client at severe risk should always be reported as soon as possible.
  4. If the application is currently under development, or there is specific pressure to deliver key findings as fast as possible, then high or medium risk findings can be delivered during the course of the test but should not be restricted to a strictly daily frequency.

Additionally:

  • If this is a short project (up to a week) without lots of different elements, or if this is a long project (several months), then daily status reporting is not appropriate.
  • Reporting of all findings on a strictly daily basis will never be appropriate.

I was recently involved in an application security testing project for a large client, covering around 20 applications with multiple consultants working simultaneously over just three weeks of testing. By discussing the approach with the client up front and agreeing on points 1, 2 and 3 above, we kept the client fully in the loop whilst not burdening ourselves with reporting every tiny detail every day.

I will probably update this post as I think of more advantages/disadvantages but feel free to send me feedback in the comments or via Twitter.

WannaCry — Do you feel lucky?

Was it just ransomware?

During the hysteria around the WannaCry ransomware outbreak, a thought struck me.

A bit later on, I responded to another post where someone had suggested that we were “lucky” that it was only ransomware.

https://twitter.com/JoshCGrossman/status/863991233869484032

As I responded, the only thing that was “lucky” about this story was that the WannaCry outbreak finally brought well-deserved attention to the incredibly dangerous exploits leaked by TheShadowBrokers in April.

How bad is a vulnerability?

I blogged about these exploits on my employer’s blog precisely because we wanted to make sure that the company’s clients had the relevant information to protect themselves. We don’t blog about every vulnerability or issue that comes to light, but the unique danger posed by this leak meant that we decided it was important to prepare advice. As I said there, this is probably the worst Windows vulnerability since 2008.

Given the overall panic, it seems that despite the leak of *working exploits*, MS17-010 was still not taken seriously enough by Microsoft or by the industry. Microsoft in particular stuck to their guns and didn’t patch XP and 2003 for non-paying customers until it became prudent from a PR perspective.

We in the Information Security industry need to find a better way of communicating the risk posed by a security issue. The frustrating “branded vulnerabilities” trend has led to risk assessment based on logo quality rather than actual potential for damage.

The fact is that, as noted in the 2016 DBIR (page 16), the most dangerous vulnerabilities are those for which real public exploits exist, such as in Metasploit. TheShadowBrokers’ release effectively met that criterion, but despite this the outcry was smaller than for something like Heartbleed or even the damp squib that was Badlock.

Just patch it?

It is clear that there are plenty of organisations where patching is difficult or impossible. Where it is difficult, security professionals need to help these organisations “choose their battles”, and MS17-010 should have been a battle that was fought and won as soon as TheShadowBrokers’ leak of live exploits was shown to relate to it.
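
Identifying exposure to this particular battle did not require anything exotic either. For example, assuming you have nmap with its bundled scripts and permission to scan the network in question, a dedicated check was available soon after the patch:

    nmap -p445 --script smb-vuln-ms17-010 10.0.0.0/24

Anything that check flags should have gone straight to the top of the patching queue.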

However, where patching is impossible, and to be honest in all mature organisations, the focus needs to shift to an “anticipate breach” posture, whereby it is almost assumed that attackers will get in through unpatched vulnerabilities or just plain old phishing. In this model, the goal becomes preventing and detecting lateral movement through segmentation and better behavioural logging.

In Conclusion

So, in some ways maybe we did get lucky that WannaCry drew so much attention to MS17-010, because it should now be easier to get buy-in to patch these specific flaws, just in time for the Metasploit module release (remember the DBIR metric above?).

However, we must still ask ourselves: what other, quieter malware was able to infiltrate company networks whilst this flaw remained unpatched or unconsidered? We have already seen a couple of examples of this.

We need to be better at articulating the risk of known issues, but we also need detective controls that are ready for the unknown issues as well.