Signature-based detection
The implication of this tweet is that Web Application Firewalls (WAFs) are blocking requests containing the string “burpcollaborator.net” because it is used by Burp Suite when trying to discover vulnerabilities.
As James says in his tweet, there is a trivially simple workaround: replace the “burpcollaborator.net” string with the server’s IP address instead. Perhaps, in an effort to keep up with the “arms race”, WAF developers will start to block input containing that IP address as well.
Sadly, this is often a standard tactic for WAFs (and indeed other forms of malicious content protection): blocking simple signatures that can be easily bypassed with a minor change to the input.
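To make the weakness concrete, here is a minimal sketch of this kind of naive signature matching. The blocklist, function names, and example URLs are all hypothetical, but the logic mirrors the behaviour described above: the literal domain is caught, while the same probe aimed at an IP address passes straight through.

```python
# Hypothetical illustration of naive signature-based blocking.
BLOCKED_SIGNATURES = ["burpcollaborator.net"]

def waf_allows(request_body: str) -> bool:
    """Return False if the request matches any blocked signature."""
    return not any(sig in request_body.lower() for sig in BLOCKED_SIGNATURES)

# The literal Collaborator domain is blocked...
assert not waf_allows("GET http://x.burpcollaborator.net/probe")
# ...but the identical out-of-band probe aimed at the server's IP is allowed.
assert waf_allows("GET http://203.0.113.7/probe")
```

The bypass costs the attacker one substitution; extending the blocklist to cover it costs the defender another signature, which is exactly the arms race described above.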
Time is money
Whilst these sorts of protections will not stop an even slightly motivated attacker, they do incur a time cost to bypass. Where this becomes an issue is when a client wants us to perform a security test with these protections in place. This is something I come across often during application security testing and something that I discuss in my talk, “How to get the best AppSec test of your life“.
Every time a client tells us that they have a WAF in place, I explain that our preferred approach is to test without being blocked by the WAF and then, only if absolutely necessary, validate findings against the WAF-protected site at the end of the engagement. On a client engagement, we are (usually) being paid to test the client’s application, not the WAF. We could spend a lot of time and effort specifically trying to bypass the WAF for each attack, but that is inefficient for the client.
Another potential issue is that during an attack or other IT incident, a company may be forced to disable its WAF. If the site has not been tested without the WAF, it may still be vulnerable.
Most of the time, clients will accept this approach without issue once the rationale has been explained. Where they don’t accept this, we will usually agree to test anyway but include a disclaimer in the report that the WAF remained active for the duration of the test.
On one memorable occasion, a client decided that I had to verify a finding with their WAF enabled, and I had several rounds of cat and mouse with their WAF vendor: I would bypass the WAF, and the vendor would deploy a bug fix or configuration change to address it. This only ended when I sent a payload that crashed the in-line, cloud-based WAF, rendering the client’s site inaccessible for several minutes every time I sent it. The WAF vendor then claimed victory since “they had blocked the attack”!
Another example is clients whose mobile applications encrypt traffic in transit in addition to the standard TLS encryption. Usually, the client either cannot disable this functionality when we test their applications or is not prepared to do so. Again, such a measure incurs a time cost to bypass: either building a tool to perform the decryption so we can view and edit the traffic, or testing the mobile application and its supporting APIs separately.
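A sketch of the kind of tool mentioned above, in the style of an intercepting-proxy plugin: decrypt each request body so the tester can read and tamper with it, then re-encrypt before forwarding. Everything here is hypothetical, and a trivial XOR stands in for the app’s real cipher (typically AES-GCM with a key recovered from the app), but the decrypt/edit/re-encrypt shape is the point.

```python
import base64

KEY = b"demo-key"  # hypothetical shared key extracted from the mobile app

def xor(data: bytes, key: bytes) -> bytes:
    # XOR stands in for the app's real cipher; the roundtrip logic is what matters.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_body(body_b64: str) -> str:
    """Decrypt an intercepted request body so the tester can view/edit it."""
    return xor(base64.b64decode(body_b64), KEY).decode()

def encrypt_body(plaintext: str) -> str:
    """Re-encrypt the (possibly tampered) body before forwarding it to the API."""
    return base64.b64encode(xor(plaintext.encode(), KEY)).decode()

# Roundtrip: what the proxy plugin does on every request.
wire = encrypt_body('{"user": "alice"}')
assert decrypt_body(wire) == '{"user": "alice"}'
```

Writing and debugging this kind of shim for a real cipher and key-exchange scheme is exactly the time cost that ends up billed against testing the application itself.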
Either way, clients generally don’t want to pay the cost associated with this but also expect a comprehensive test.
If you are paying for an application security test, you want the money to be spent in the most efficient way possible with the maximum amount of time and effort allocated to testing your application. Making your tester’s life as easy as possible is the best way of achieving that.
If you leave these sorts of time-wasting security measures in place, you are going to end up spending money testing those measures rather than your application.
If you are really advanced and really confident in your application, you may want to have someone look at vulnerabilities that only occur when your WAF or other security technology is enabled. We have seen examples like this for CDN and caching technologies but if anyone has any WAF specific examples (I am sure I have seen this but cannot remember where), please let me know via Twitter 🙂