I am delivering training courses on how to build effective processes around application security scanning tools as part of my work for Bounce Security. The course’s official name is “Building a High-Value AppSec Scanning Programme” and its unofficial, more fun but less descriptive name is “Tune your Toolbox for Velocity and Value”. This post will serve as a way of getting more information about the course.
The easiest way to attend this course right now is to sign up for the one-day version focusing on SCA and SAST tools at Virtual Global AppSecEU, which you can do at the registration page here.
You bought the application security tools, you have the findings, but now what? Many organisations find themselves drowning in “possible vulnerabilities”, struggling to streamline their processes and not sure how to measure their progress.
If you are involved in using SAST, DAST or SCA tools in your organisation, these may be familiar feelings to you, and this course aims to address exactly these issues.
This is a topic I have had significant experience with over the last several years providing application security consulting and “on the ground” assistance to various organisations. This has exposed me to a variety of these tools and several ways of working with them, seeing what works and what does not in different contexts.
Being a consultant means I have no vendor allegiance or commitment and allows me to discuss useful war stories (both successful and less successful) without disclosing sensitive client/employer information.
From seeing these organisations and discussing the topic in various forums, this problem certainly seems to resonate, and training like this fills a gap that urgently needs to be addressed. Companies are being told that they need to improve their application security posture and that more tools are the key to doing this efficiently. However, it is becoming clear that without effective processes and strategies for working with these tools, they quickly become a burden and a blocker.
In this course you will learn how to address these problems and more (in a vendor-neutral way), with topics including:
What to expect from these tools
Customising and optimising these tools effectively
Building tool processes which fit your business
Automating workflows using CI/CD without slowing it down
Showing the value and improvements you are making
Faster and easier triage through smart filtering
How to focus on fixing what matters and cut down noise
Techniques for various alternative forms of remediation
Building similar processes for penetration testing activities
Comparison of the different tool types covered
To bring the course to life and let you apply what you learn, you will work in teams on table-top exercises where you design processes to cover specific scenarios, explain and justify your decisions to simulated stakeholders and practice prioritising your remediation efforts.
For these exercises, you will work based on specially designed process templates (which we will provide) which you can use afterwards to apply these improvements within your own organisation.
Be ready to work in a group, take part in discussions and present your findings, and leave the course with clear strategies and ideas on how to get less stress and more value from these tools.
Audio/Visual information about the course
For those of you who prefer to hear your information rather than read it, here are some useful resources.
Elevator pitch for the course – ~2 minutes
In this short video, I give a quick explanation of the course and the ideas around it. Transcript in the original LinkedIn post.
Discussion of the background to the course – ~40 minutes
In this interview with the Application Security Podcast, I talk through the background to the course including where the idea came from and the key takeaways and ideas I want people to get from the course.
Sample of the course material – ~55 minutes
This is an example of some of the course content, albeit pushed together in a less interactive way. The course itself has more discussion and exercises interspersed.
How can I attend this training course?
OWASP Virtual Global AppSecEU (8th June)
The easiest way right now is to sign up for the one-day version of the course focusing on SCA and SAST tools. The specific details of the content covered in this one-day version can be found on the conference website here.
If there is sufficient interest, we may look at running a special session with the other day’s content, especially for attendees to this session.
I am honoured to be listed in the legendary Jim Manico’s training catalogue. Jim’s catalogue is primarily aimed at organisations arranging training for their employees and has a variety of top-class taught training courses. I strongly recommend that anyone looking for the best application and cloud security training takes a close look at what is on offer.
The full training catalogue can be found on the Manicode website and the extracts for my Tools course are below. (I also have an ASVS course available which you can see in the catalogue as well 😀!)
To find out more or to arrange training, you can get in touch with Jim via the Manicode website or contact us directly via info <at> bouncesecurity.com.
I recently had to set up a new laptop and one of the things I wanted was the ability to have both my work and personal GitHub accounts set up in one Linux environment (more specifically WSL). I also wanted to ensure that at least my personal commits were signed using a GPG key.
I discovered quite a few complications in this process so I wanted to include some documentation on how I achieved this. If you are an ssh or git expert then some of this might be obvious but otherwise hopefully it will be helpful!
Git your configuration on
The first step was to get my git configuration set up correctly.
Let me Google that for you
My primary resources for getting the multiple users set up were a combination of the following two links, which were really useful, but I got caught by a few issues on the way (not necessarily the fault of the posts, though).
The first thing I liked from the GitGuardian link was having two separate paths for Work and Personal projects using two separate GitHub identities.
Based on their instructions, I created “Work” and “Personal” folders within my WSL home folder (actually soft-links to other locations), created the relevant ssh keys and then the relevant configuration files. Obviously I also had to copy my ssh public keys into the relevant page of the GitHub UI for each account.
Git configuration files
Here are the git configurations I ended up with. I will add some more information about parts of them below, but they are mostly based on the links above. Note that the heading within which each configuration line sits is important.
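The configuration files themselves have not survived in this version of the post, so here is a minimal sketch of the kind of layout being described, based on the GitGuardian approach. The file name “.pers.gitconfig” is referenced later in the post, but the exact contents, names and email addresses here are assumptions:

```
# ~/.gitconfig – the main file, switching identity based on repository location
[includeIf "gitdir:~/Work/"]
    path = ~/.work.gitconfig
[includeIf "gitdir:~/Personal/"]
    path = ~/.pers.gitconfig

# ~/.pers.gitconfig – identity used for anything under ~/Personal/
[user]
    name = Your Name
    email = personal-address@example.com
[core]
    # make git fetch/push use this account's ssh key
    sshCommand = ssh -i ~/.ssh/personal_key
```

Note that the trailing slash in the `gitdir:` condition matters: it makes the include apply to any repository under that directory.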
There were a couple of issues at this stage that had me scratching my head for a while
Wrong file names
I had used dots in my file names for the personal and work configuration files whereas the main configuration file from the GitGuardian link used hyphens. This took me longer than I care to admit to figure out… This was certainly a PEBKAC issue.
Messed up double quotes
I was getting parsing errors for a very long time on my main configuration file. I tried all sorts of things, including not using soft-links but using the full paths from the root instead. After much faffing, I realised that when I copied the configuration files from the GitGuardian site, they had used “curly double quotes” instead of regular double quotes and this was tripping up git 🤦‍♂️.
I wasn’t previously familiar with using ssh authentication with GitHub so this caused me some challenges as well. I will paste in an example of ssh git configuration file first and then walk through this aspect.
It is possible that there are other/better ways to do this, so please feel free to tell me if you have ideas 😀.
SSH Configuration file
Here is the ssh configuration I ended up with. I will explain some of the key aspects further down.
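The ssh configuration is likewise missing from this version of the post, so here is a sketch reconstructed from the details which appear later: the “ghpers” Host alias and the key file name come from the examples below, while the work entry and its key file name are assumptions:

```
# ~/.ssh/config
Host ghpers
    HostName github.com
    User git
    IdentityFile ~/.ssh/jZZZZZZZZ6_key
    IdentitiesOnly yes

# assumed equivalent entry for the work identity
Host ghwork
    HostName github.com
    User git
    IdentityFile ~/.ssh/work_key
    IdentitiesOnly yes
```

`IdentitiesOnly yes` stops ssh from offering every key it knows about, which matters with GitHub because the server accepts the first valid key it sees and uses that to decide which account you are.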
Making ssh work with git
I was used to using HTTPS for cloning repositories and personal access token authentication for pushing, so this was also a bit of a learning curve. My main (eventual) discovery was that despite the use of the “sshCommand” parameter in the previous git configuration files, this is only used for “git fetch” and “git push” operations (not clone), and only when the repository’s origin is set using the SSH syntax rather than HTTPS.
After some experimentation, I found a few possible ways to clone the repository in a way that would make this all work. In the examples below I have used my personal identity but I could also have used my work identity and cloned to the relevant Work directory.
Option 1 – Without explicitly choosing an account
It is possible to start by cloning the repository using the regular HTTPS clone mechanism within the “Personal” directory. I can copy the clone command straight out of the GitHub UI:
git clone https://github.com/tghosth/testclone
I now have the repository cloned locally, but I need to tell git to use the SSH mechanism instead of the HTTPS mechanism. I can do this as follows:
git remote set-url origin email@example.com:tghosth/testclone.git
Note that at no point did I need to specify the specific identity to use so maybe this could even be automated after a clone operation with some sort of hook…
Either way, if I now do a push, it asks me for the correct key passphrase and works successfully:
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ git push -v
Pushing to firstname.lastname@example.org:tghosth/testclone.git
Enter passphrase for key '/home/josh/.ssh/jZZZZZZZZ6_key':
= [up to date] main -> main
updating local tracking ref 'refs/remotes/origin/main'
My main concern about this approach is how effective it will be if there are multiple remotes or branches. The other disadvantages are that it is a two-step process and that it will not work smoothly for private repositories.
Option 2 – Doing a clean ssh clone choosing the relevant account
The other option is a one-step process, but I need to modify the original clone command. When I copy the clone command for an ssh clone, it will look like this:
git clone email@example.com:tghosth/testclone.git
However, before I use it, I need to change it to tell the clone command which identity I want to use, as otherwise it will return errors. I can use the value from the Host field of the ssh configuration file above for this, so the command changes to the following:
git clone git@ghpers:tghosth/testclone.git
You can see above that “ghpers” was the Host I gave to my personal key in the configuration file.
I can then run this and git will know which SSH identity to use for the clone operation. Once I start doing fetches and pushes, it will be using the identity configured in the relevant git configuration file for this folder tree (.pers.gitconfig).
I like this method because it is a single command. Whilst I have to manually change the clone command rather than just copying it from the GitHub UI, I only have to do that once and then everything works. It will also work smoothly for private repositories.
Option 3 – Using ssh-agent
I actually figured this option out whilst writing this blog post 🙃. The freecodecamp link sort of alludes to this, but not explicitly as a way of easily cloning the repository in the first place.
The ssh-agent program temporarily keeps ssh private keys in memory, and one advantage is that you only have to enter the passphrase once per session rather than on every use. Without ssh-agent, I would need to enter the passphrase for every single git clone, git fetch and git push.
However, another advantage for our use case is that when the key is held in ssh-agent and I do a git clone via ssh, the ssh operation will automatically use that key without needing to be told.
You can see this in the terminal fragment below. I start ssh-agent running in my current terminal (see this explanation of why it needs to be done using eval). I then add the identity I want to use (my personal identity in this case) to the agent.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ eval `ssh-agent -s`
Agent pid 841
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add -l
The agent has no identities.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add ~/.ssh/jZZZZZZZZ6_key
Enter passphrase for /home/josh/.ssh/jZZZZZZZZ6_key:
Identity added: /home/josh/.ssh/jZZZZZZZZ6_key (jZZZZZZZZ6@hotmail.com)
I can then run git clone in my Personal directory without changing the ssh path I copied from the GitHub UI. Note that I used a private repo in this example just to check it would work. It automatically uses my “personal” identity held in ssh-agent (as otherwise the clone would have failed).
I can then do a push operation and it will be using the identity configured in the relevant git config file for this folder tree (.pers.gitconfig). It doesn’t need a passphrase because the key is active in ssh-agent.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git push -v
Pushing to firstname.lastname@example.org:tghosth/testclonepriv.git
= [up to date] main -> main
updating local tracking ref 'refs/remotes/origin/main'
This option is nice because it also avoids having to enter the passphrase every time. Obviously there are security implications to using ssh-agent, but for a single-user local Linux machine it seems like a reasonable solution. If you are jumping between work and personal accounts frequently, it might get fiddly, but on the other hand it matters most for the initial clone operation.
GPG Signing Commits
This was more straightforward overall and GitHub has some good documentation for how to get it set up. At the time of writing, GitHub does not support commit signing using an SSH key so you have to set up a GPG key separately. You will notice that in my “.pers.gitconfig” file above I have user.signingkey and commit.gpgsign configured. (I am not currently using this for my work identity.)
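For reference, the relevant lines in a gitconfig look something like this (the key ID is a placeholder; you can find yours with gpg --list-secret-keys --keyid-format long):

```
[user]
    signingkey = 3AA5C34371567BD2
[commit]
    gpgsign = true
```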
Using the documentation, I was able to set this functionality up quite easily, but once it was set up, commits kept failing with the following error:
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git commit --allow-empty -m "test sign"
error: gpg failed to sign the data
fatal: failed to write commit object
After a painfully long time, I finally found a hint in a blog post somewhere that I needed to run the following command in my terminal first:
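The command itself appears to be missing from this version of the post. The usual fix for this error, and the one consistent with the passphrase prompt behaviour described next, is to tell GPG which terminal to use:

```shell
# Point GPG at the current terminal so it can prompt for the passphrase
export GPG_TTY=$(tty)
```

Adding this line to your ~/.bashrc (or equivalent) makes the fix permanent.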
With that command run, the commit would pop up a GPG window in the terminal prompting me for my GPG passphrase (obviously different to my SSH passphrase) and would then create and sign the commit.
I can use “git log” to show the successful signature.
josh@LAPTOP-ZZZZZZZ:~/Personal/testclonepriv$ git log --show-signature
commit 169a1d725d2ZZZZZZZZZZZZZZZZZZZZZc565ff3d5 (HEAD -> main)
gpg: Signature made Wed Jan 26 09:42:20 2022 IST
gpg: using RSA key 487BBZZZZZZZZZZZZZZZZZZZZZZZZZFB6E4682A9
gpg: Good signature from "Josh Grossman (tghosth) <jZZZZZZZZ6@hotmail.com>" [ultimate]
Author: tghosth <jZZZZZZZZ6@hotmail.com>
Date: Wed Jan 26 09:42:20 2022 +0200
Thanks for reading, I hope this is a useful summary and makes it easier for you to set up this functionality. If you have comments or feedback, the easiest option is to reach out to me on Twitter at @JoshCGrossman!
I recently had the privilege of attending and speaking at the OWASP AppSec USA 2018 conference in San Jose, California, one of OWASP’s global events. OWASP’s global events differ from local or regional events, with the most obvious differences being the size of the event and the fact that they are priced more like a regular industry conference (although still nowhere near the expense of something like Black Hat). This is because the global conferences are intended to act as OWASP’s flagship events as well as to raise funds for OWASP’s ongoing activities. In return, you get to hear talks from and network with some of the top security professionals from all over the world.
This was the first time I had attended an OWASP global event despite having attended chapter meet-ups and regional conferences, so I wanted to take this opportunity to pull out some of my highlights.
1. A focus on fixing
One of my personal frustrations with many Information Security conferences and meet-ups is the significant bias towards talks about breaking things. Breaking stuff is fun but too often the practicalities of what can be done get overlooked.
The programme at AppSec USA was very much the opposite, with most of the talk subjects focusing on themes like “how to use this security measure or feature correctly”, “here’s how we do application security internally” or “introducing a new OWASP project and how it can help you”.
This meant that a large portion of the attendees were in “defender” and “builder” job roles, ultimately responsible for securing software, and that attendees could expect to pick up skills and ideas which would be immediately applicable in their day jobs.
2. Friendly and fascinating community
I was a little nervous going into the conference as I knew almost no one there and am an introvert by nature. Going from that into the ballroom for lunch with about 800 people at tables was a challenging experience but overall I found that people were really friendly and happy to chat.
I got the chance to speak to the leader of what must be one of the largest OWASP chapters in the world as well as the leader of one of the newest. I met various project leads, people I knew only from Twitter and just generally had a lot of conversations with people from a variety of backgrounds and experiences who had come from all over the world to be at the conference.
Along the way I got pulled into a tequila party (although with absolutely no pressure to drink), tried to pick a lock whilst simultaneously holding a conversation with some seriously smart people and got invited to give my talk again at another conference on the west coast.
The networking event on the first night also really helped with this, providing activities and exhibits to interact with which encouraged attendees to work together and discuss.
3. Cutting edge talks and keynotes
With three tracks of talks (plus the Hush talk track and the OWASP project overview track), some hard decisions had to be made, as the overall quality of the talks was really high. Most of the time I was torn in (at least) a couple of directions, so I am glad that the talks were all recorded (see playlist here) and I can catch up on those which I missed.
Most of the talks were highlighting something that I had not already come across and I made an effort to chat with some of the speakers afterwards or later on in the conference to discuss further.
There were also some great keynotes from various leaders in the security and tech industry who provided their high level visions of how application security needs to adapt to the current technology landscape.
4. Big name sponsors
I probably didn’t speak to as many of the sponsors as I should have done although I did spend time talking to some of them, including having some really interesting discussions and meeting some really smart people. As a consultant, it is important for me to be familiar with the companies in the industry in case I have a client with a particular problem or I encounter their products at a client. To be honest, having an awareness of the key players in the industry will be valuable whatever your position.
Certainly, the quantity and quality of the sponsors reflected the high profile of the conference, and if you are a “swag” connoisseur then you will also be happy. 😉 Whilst I am generally too shy to load up on too much swag, I did pick up a nice backup battery for my phone which was invaluable for my sightseeing day in San Francisco after the conference.
5. Supporting OWASP
OWASP is certainly a unique and irreplaceable organisation. By attending a global conference, aside from the other benefits which I have highlighted in this post, you are helping to financially support this vital organisation and ensure that it can continue to support its chapters and projects.
If you are already an OWASP member then you generally get a discount on the conference fee which will cover your membership and if you aren’t already a member then a Global OWASP conference is a great place to sign up 🙂
Members get some dedicated swag but also access to the members’ lounge. Here you could get coffee and snacks all day whilst avoiding the crowds at the buffet during the coffee breaks; it also provided a quieter, less overwhelming environment to meet people and chat.
Just do it!
Overall, it was an incredible experience and I would strongly recommend that anyone in the application and product security space attend one of these events or, even better, submit a talk to one. If you are looking for a solution-focused conference where you can hear practical talks, apply what you have learnt straight away and meet like-minded people, these are the conferences for you. Look out for announcements for the 2019 conferences!
The implication from this tweet is that Web Application Firewalls (WAF) are blocking strings containing the string “burpcollaborator.net” because it is used by Burp Suite when trying to discover vulnerabilities.
As James says in his tweet, there is a trivially simple workaround for this by replacing the “burpcollaborator.net” string with the server’s IP instead although maybe in an effort to keep up with the “arms race”, WAF developers will start to block input containing that IP address as well.
Whilst these sorts of protections will not stop an even slightly motivated attacker, they do incur a time cost to bypass. Where this becomes an issue is when a client wants us to perform a security test with these protections in place. This is an issue I come across often during application security testing and something that I discuss in my talk, “How to get the best AppSec test of your life”.
Every time a client tells us that they have a WAF in place, I explain that our preferred approach is for us to test without being blocked by the WAF and then, only if absolutely necessary, validate findings against the WAF-protected site at the end of the engagement. On a client engagement, we are (usually) being paid to test the client’s application and not the WAF. We could spend a lot of time and effort specifically trying to bypass the WAF for each attack, but that is inefficient for the client.
Another potential issue is that in an attack or other IT incident, a company may be forced to disable their WAF. If their site has not been tested without a WAF, it may therefore still be vulnerable.
Most of the time, clients will accept this approach without issue once the rationale has been explained. Where they don’t accept this, we will usually agree to test anyway but include a disclaimer in the report that the WAF remained active for the duration of the test.
On one memorable occasion, a client decided that I had to verify a finding with their WAF enabled and I had several rounds of cat and mouse with their WAF vendor as I would bypass the WAF and the WAF vendor would deploy a bug fix or a configuration change to address it. This only ended when I sent a payload that crashed the in-line, cloud-based WAF rendering the client’s site inaccessible for several minutes every time I sent the payload. The WAF vendor then claimed victory since “they had blocked the attack”!
Another example is clients whose mobile applications encrypt traffic in transit in addition to the standard TLS encryption. Usually, the client either cannot disable this functionality when we test their applications or is not prepared to do so. Again, such a measure incurs a time cost to bypass, either by building a tool to perform the decryption and allow us to view/edit the traffic or by testing the mobile application and its supporting APIs separately.
Either way, clients generally don’t want to pay the cost associated with this but also expect a comprehensive test.
If you are paying for an application security test, you want the money to be spent in the most efficient way possible with the maximum amount of time and effort allocated to testing your application. Making your tester’s life as easy as possible is the best way of achieving that.
If you leave these sort of time wasting security measures in place, you are going to end up spending money testing these measures rather than your application.
If you are really advanced and really confident in your application, you may want to have someone look at vulnerabilities that only occur when your WAF or other security technology is enabled. We have seen examples like this for CDN and caching technologies but if anyone has any WAF specific examples (I am sure I have seen this but cannot remember where), please let me know via Twitter 🙂
One important point before you start: you should note the disclaimer that there are plenty of solutions for this challenge on the Internet.
Someone asked, how did I address that in this case?
(I’ll explain the PDF below)
Anyway, let’s get into the details of how I did the CTF.
RTFM (Read The Full Manual)
First of all, there are some great instructions about how to use Juice Shop in CTF mode in the accompanying ebook, see this section specifically. In this blog post, I want to talk about some of the more specific choices I made on top of those instructions.
Obviously, your mileage will vary but hopefully the information below will help you with some of the practicalities of setting this up in the simplest way possible.
The target applications
I originally thought about getting people to download the docker image onto their own laptops and work on that, but in the end I decided to go with the Heroku option from the Juice Shop repository, as it appeared that as long as you didn’t hammer the server (which shouldn’t be necessary for this app) you could host it there for free! (Although you do need to supply them with credit card details just in case.)
The only thing to do is make sure you set the correct environment variable to put the application into CTF mode, see the screenshot below.
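The screenshot has not survived in this version of the post. For reference, according to the Juice Shop documentation the setting in question is the NODE_ENV variable, configured on Heroku as a config var (verify the exact name against the ebook for your Juice Shop version):

```
NODE_ENV=ctf
```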
I had split my participants into teams (I did this myself to make sure that the teams were balanced) and set up multiple application instances so that each team shared one instance; they could see their shared progress without interfering with other teams. I also made sure each instance had a unique name to stop teams messing with each other.
Spinning up the CTF platform
I had previously experimented with the CTFd platform when I first planned this event a year or so ago, so I was confident in using it as the scoring system and hosting it myself on an AWS EC2 instance.
When I headed over to their GitHub repository I could see there were a number of different deployment methods and I decided on the “docker-compose” method because I like the simplicity of Docker. Things got a bit messy as I stumbled into a known issue with performance (which has now been fixed) and I also realised that there was no obvious way of using TLS which I decided I wanted as well.
The guys on the CTFd Slack channel were really helpful (thanks especially to kchung and nategraf) and eventually I used a fork which nategraf had made, which had the performance issue fixed and also had a different version of the “docker-compose” script including an nginx reverse proxy to manage the TLS termination.
I used an EC2 t2.medium instance for the scoreboard server (mostly because of the original performance problems) but you could probably get away with a much smaller instance. I chose Ubuntu 16.04 as the operating system.
UPDATE: For the following section up to the “sudo certbot” below command, I created an ugly shell-script with the commands included.
I installed Docker based on the instructions here, up to and including sudo apt-get install docker-ce docker-ce-cli containerd.io. I then added the local user to the “docker” group using sudo usermod -aG docker ubuntu (you may need to log out and back in after this) and then used the Linux instructions from here to install “docker-compose” (don’t make the mistake I made initially and install it via apt-get!)
If you just want to use the scoreboard without TLS then you can just clone the CTFd repository from here, run docker-compose up (using -d if you want it to run in the background) from within the cloned directory, and you are away.
Using TLS with the CTF platform
If you do want a (hopefully) simple way to use TLS, the fork I initially used no longer exists, so I have created a deployment repository which uses the same docker-compose file that was in the original fork (including the nginx reverse proxy) but will also pull the latest version of CTFd, plus an additional docker-compose file to configure the nginx instance for TLS. You can clone that repository from here. You will then need to copy/clone the original CTFd repository from here into that same directory. (The ugly shell-script I mentioned does that automatically.)
Once you have cloned that, you will need to get yourself a TLS certificate and private key. I used the EFF’s “certbot” which generates certificates using Let’s Encrypt to produce my certificate. I installed using the instructions here.
If you used my ugly shell-script, this is where it leaves you and you need to continue following instructions.
I used a subdomain of my personal domain (joshcgrossman.com) and pointed the subdomain at the EC2 server’s IP address by adding an A record to my DNS settings.
Then, while no other web servers were running, I ran the command sudo certbot certonly --standalone -d ctfscoreboard.joshcgrossman.com which automatically created the certificate and private key I needed for my chosen domain (make sure port 80 or 443 is open!).
I then renamed the “fullchain” file to ctfd.crt and the “privkey” file to ctfd.key and saved them inside the “ssl” directory which you will have if you cloned my deploy repository above. (The nginx.conf file I used for the TLS version of the deployment looks for these files.)
You then just need to make sure that the hostname in the “docker-compose-production.yml” file matches the hostname of your server (in my case ctfscoreboard.joshcgrossman.com) and you can then run docker-compose -f docker-compose.yml -f docker-compose-production.yml up -d from within your cloned directory (or use the run_tls.sh file I supply) and it should start listening on port 443 with your shiny new SSL certificate!
Loading the Juice Shop challenges
This part was easy: I followed the instructions from here to run the tool which exports the challenges from Juice Shop, and steps 4 and 5 from here to import the challenges into CTFd.
Setting the stage
I wanted to provide some brief instructions for the teams and also set some ground rules. For most of them, this was their first CTF and I deliberately made the instructions brief but made myself available to answer questions throughout the CTF. I only had four teams so that was a manageable workload.
I gave the teams the following instructions:
Each team has their own, Heroku hosted, instance of the vulnerable application. Your scope is limited to that URL, port 443.
You may not try and DoS/DDoS your vulnerable application or indeed anything else related to the challenge.
You may not tamper with another team’s instance, another team’s traffic or anything else related to another team or the organisers.
You may not use Burp Scanner – it probably won’t help you much and even if it does trigger a flag you won’t understand why it worked.
You may not search the Internet or ask anyone other than the organisers for anything related to the specific application, the specific challenges or the application’s source code. You may only search for general information about attacks. You have a PDF containing lots of hints about the challenges.
You may not tamper with the database table related to your challenge progress.
If you aren’t sure about anything, ask 🙂
You may have points deducted if you break the rules!
Giving some help
I mentioned above a PDF with hints. As I said, the teams were not allowed to search the Internet for Juice Shop-specific clues, but I still wanted them to benefit from hints to help them out. Björn prepared an ebook with all the hints in it, but it contained the answers as well. In order to save my competitors from temptation, I created a fork with all the answers removed, which you can find here.
During the course of the CTF, I projected the CTFd scoreboard onto the big screen and overlaid a countdown timer as well so people knew how long they had to go. I just used a timer from here although it was a little ugly…
I froze the scoreboard for the last 15 minutes to add to the suspense and cranked up some epic music to keep people in the mood.
I’ll leave you with the main guidance I gave to the teams before they started:
Have fun – that is the main goal of tonight
Learn stuff – that is the other main goal of tonight
Don’t get stressed about the time – it’s easy to get overwhelmed
Divide up tasks
Time management – avoid rabbit-holes
Help those with less experience
Everyone had a great time and I got really good feedback, so if you have the opportunity to run something like this, I strongly suggest you take it.
If you have any other questions or feedback let me know, my Twitter handle is above.
Updates: 18 March 2018
Someone asked about team members sharing an instance. I deliberately organised the CTF with teams of 3-4 people. The primary reason was that our department covers a wide spectrum of skill-sets so I still wanted everyone to take part, enjoy and learn something. I therefore carefully balanced the teams based on abilities. (It also meant I could split my direct reports across different teams so no one could accuse me of favouritism 😉)
My logic in a team sharing an instance was to allow progress to be shared and prevent duplicated effort although I think more than four people in a team would not have been manageable. Overall I think that aspect worked well.
Another thought is that if each team member had their own instance, it is more likely that they would all see the solution to each challenge rather than one person completing it and just telling the others. However, this would have slowed things down which in the time we had available probably wouldn’t have been worth it.
One thing I didn’t do beforehand was practice resetting an instance and restoring progress which caused issues when one team created too much stored XSS and another team somehow accidentally changed the admin password without realising it!
Resetting an instance is possible by saving the continue code from the cookie, restarting the instance (that is easy in Heroku) and then sending a PUT request to the app that looks like this:
PUT /rest/continue-code/apply/<<CONTINUECODE>> HTTP/1.1
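The restore step can be sketched from the command line as well. `HOST` here is a placeholder for your instance's URL, and the continue code is the value you saved from the `continueCode` cookie before restarting; the actual `curl` call is commented out since it needs a live Juice Shop instance.

```shell
# Placeholder values -- substitute your instance URL and the continue code
# saved from the "continueCode" cookie *before* restarting the instance.
HOST="https://your-instance.herokuapp.com"
CODE="<<CONTINUECODE>>"

# Once the instance is back up, re-apply the saved progress
# (commented out -- requires a live instance):
# curl -X PUT "$HOST/rest/continue-code/apply/$CODE"

# The full URL that the PUT request targets:
echo "$HOST/rest/continue-code/apply/$CODE"
```

Practising this once before the event (as I note above, I didn't) would have saved some scrambling when instances needed resetting mid-CTF.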
I know lots of people still have questions about OWASP and the AppSecEU 2018 debacle. Other than being a member, I have no formal standing in OWASP, locally or globally, so nothing below represents anything official, but I thought I would prepare some answers based purely on publicly available information.
What happened after the initial backlash?
The surprise announcement was followed by an angry rebuttal and a lot of outcry but after a few days things went quiet. Really quiet. The OWASP board email list has historically been relatively busy with consistent traffic. In the past 10 years, the latest date on which traffic has restarted on that list after the holiday period is January 4th, and only once has there not been a board meeting by January 14th. In 2018 there was complete board silence until January 18th, when a number of OWASP leaders started querying what was going on. A formal follow-up statement about the decision only came on January 23rd. It appears that some discussions were being held behind the scenes, culminating in a recorded conference call between OWASP board representatives and the UK and Israel OWASP leadership on January 22nd.
Why did AppSecEU get moved to the UK?
The follow-up statements seem to indicate that the root cause of the move was that recent operational challenges at the OWASP foundation, due at least in part to understaffing, meant that the foundation felt it was not in a position to provide the required support for the event, especially given that AppSecEU 2017 and AppSecUSA 2017 apparently did not deliver the expected financial benefits.
The impression is that an AppSecEU in the UK is a safe choice whilst the foundation tries to address its internal issues.
We would like to acknowledge the effort of the organizing team, while realizing the required level of support from the foundation was not achieved.
What about the supposed lack of preparedness from the OWASP Israel committee?
On the initial board call in December, a big deal was made that despite the conference only (!) being six months away, various preparations had not been made including no signed contract with the venue.
In fact, on the call on January 22nd, the new Executive Director praised the third party which the Israeli organising committee had engaged to assist with the conference logistics and, more importantly, stated that the foundation would cover the costs of withdrawing from the contract which had in fact been signed with the venue.
So what is next for OWASP and Israel?
On the call on January 22nd, the board expressed strong support for a global OWASP event to take place in 2019, once the foundation had had a year to address its operational challenges. This seems to be how others have interpreted it as well.
I would say so. The plan is next year to be in Israel. So the board decided a swap, as per my understanding.
Given that going forward the Executive Director is keen to start planning OWASP global events up to a year in advance, it remains to be seen over the next few months whether these words are translated into actions.
Additionally, the Israeli chapter have now released their response to the final decision and they are understandably still unhappy about the outcome but also positive about the intentions of the new board to try and repair the relationship and champion an event in Israel for 2019.
I think it is clear to everyone that the initial communication around this decision was not good enough. It is particularly disappointing that the basis for the decision (e.g. the lack of a signed contract and the “support” of the Israeli chapter in the decision) was demonstrably incorrect, and that the initial communication and board discussion made out that the root cause was a lack of preparedness and ability to deliver on the part of the Israeli chapter.
It is encouraging that this has been walked back to a certain extent; however, it is clear that it will take more than that to address the hurt felt by the Israeli chapter leadership.
The support for the Israeli chapter over Twitter and the board discussion of a global event in Israel in 2019 is also encouraging and I hope that the OWASP board will proactively reach out to the Israeli chapter leadership to make sure that this comes to fruition.
Being an Orthodox Jew, Christmas and the meaning, stories and culture associated with it were always something that I only really saw second-hand.
However, when it was announced earlier this year that OWASP’s AppSecEU Conference, one of the few truly global Application Security conferences, was going to be held on my doorstep in Tel Aviv in 2018, it truly felt like Christmas was coming. My excitement built from the energy of the OWASP Summit in May to my first time speaking at an OWASP local chapter meeting in June about the difficulties and improvements with the OWASP Top 10 Project (which I later spent some time proofreading and offering minor fixes for).
However, this came to a crashing halt last night when I came back online after the Jewish Sabbath and discovered that this December, the Grinch truly had stolen Christmas. In what appears to be an unprecedented move, the OWASP Global board had voted at their December meeting to arbitrarily move the conference to the UK (again) instead of Tel Aviv and had waited until Friday night, the 23rd of December to announce this. After the build up throughout 2017, this felt like a kick in the gut.
Of course, what I felt would have been nothing compared with how the local organisers must have felt, having spent hundreds of volunteer hours planning for this conference together with the global OWASP team.
At stupid o’clock on Saturday night, I dug out the meeting recording to try and figure out what had happened. A number of reasons were discussed in the meeting which you will hear about later but the thing that stuck out was pretty much the very first question:
Tom Brennan (Board Secretary): “Is anyone representing the local team…on this call to give their comments and feedback on those statements.”
Karen Staley (Executive Director): “I have spoken to…Avi in great detail…What I share with you…is absolutely what we discussed over the phone…”
I was truly astonished by this, not to mention the remainder of this segment where the entire discussion of expected problems with the conference seemed to be framed around the idea that these concerns were coming from the local OWASP chapter or that the issues were the fault of the local chapter for being disorganised.
The board went on to accept this at face value (although I appreciate there was some pushback from some members.) In relatively short order, the board voted unanimously to take the conference away from Tel Aviv (the only city other than Redmond where Microsoft hold their own BlueHat security conference and where it would have coincided but not clashed with CyberWeek at Tel Aviv University which last year had 6,000 attendees from over 50 countries) and move it somewhere else. Specifically to London.
It sounded to me like there had been some sort of miscommunication as, from my interactions with the local team, it seemed like planning was well underway. OWASP had even sent an employee over to be at AppSecIL and check out the venue which had been agreed. Additionally, I know that Avi, the conference chair, has lived and breathed application security and especially OWASP for years now.
I waited impatiently to hear from the local chapter and once their statement was released, it became clear the extent to which the local chapter had been screwed over. As I said, Avi is a very strong proponent of OWASP and for him to have written such a strongly worded statement tells you something about the circumstances.
The statement from OWASP Israel
I would strongly recommend reading the full statement to understand the situation as whilst it is long, it comprehensively explains the extent to which the Israel team have been shoddily treated.
However, I do want to pull out a few key sentences from that statement:
“The OWASP Israel chapter is vehemently opposed to this move, and we do not accept nor agree with the official statement in any way.”
“It should be noted that this decision was made WITHOUT consulting with the local chapter and conference committee, or even gathering the relevant information from us.”
“Regardless of what the OWASP Leadership believes about the AppSec community in israel, I have the privilege of being part of one of the strongest, most active OWASP communities in the world.”
“For those companies that usually support or sponsor OWASP Foundation and AppSec conferences, I call on you to continue to support the OWASP communities and its mission — but support the local chapters that are actually doing the work.”
The time I have spent writing this was supposed to be set aside for polishing up and sending some more CFP submissions for AppSecEU. Right now, I don’t know if I want to do that. If I get a CFP entry accepted, I don’t really look forward to having to get approval for travel and accommodation from my company for this conference after what the OWASP board has done.
I call on the OWASP Board to urgently consider the following points and act to fix this injustice, ideally restoring AppSecEU 2018 to Tel Aviv:
Can the December 6th vote on AppSecEU really be considered valid, given that the entire discussion was predicated on the local chapter’s agreement? Surely it is clear that the board needs to receive a presentation from the OWASP Israel team on their position, as it was not fairly presented at the board meeting.
How was it considered acceptable to release this news on Friday night, 23 December?
How can the board ensure that this type of catastrophic misrepresentation does not occur again?
How does this action create a “stronger” and “more engaged” community?
How is it possible that several months ago the OWASP board withdrew support for the Project Summit 2018, but that the new Executive Director has effectively based the change in AppSecEU on having spoken to the organizer and apparently joining with this summit (rather than speaking with the London chapter leaders)?
Is it appropriate that this very large decision was considered to be “one little thing”(1:22:52 of the recording)?
I have been excited to get more and more involved with donating my time and energy to OWASP during the course of this year. I will be closely monitoring how this issue is addressed and I will have to consider my future OWASP involvement on this basis.