I am delivering training courses on how to build effective processes around application security scanning tools as part of my work for Bounce Security. The course’s official name is “Building a High-Value AppSec Scanning Programme” and its unofficial, more fun but less descriptive name is “Tune your Toolbox for Velocity and Value”. This post will serve as a way of getting more information about the course.
The first public run of this course was a one-day version focusing on SCA and SAST tools at Virtual Global AppSec EU. That event has now passed, but you can see the feedback from the course below.
We don’t currently have any public dates lined up (although watch this space 😀) but you are welcome to get in touch with me to discuss private training by Bounce Security via email (info <at> bouncesecurity.com) or via Twitter.
You bought the application security tools, you have the findings, but now what? Many organisations find themselves drowning in “possible vulnerabilities”, struggling to streamline their processes and unsure how to measure their progress.
If you are involved in using SAST, DAST or SCA tools in your organisation, these feelings may be familiar to you, and this course aims to address these issues.
This is a topic I have had significant experience with over the last several years providing application security consulting and “on the ground” assistance to various organisations. This has exposed me to a variety of these tools and several ways of working with them, seeing what works and what does not in different contexts.
Being a consultant means I have no vendor allegiance or commitment and allows me to discuss useful war stories (both successful and less successful) without disclosing sensitive client/employer information.
From working with these organisations and discussing the topic in various forums, it is clear that this problem resonates and that training like this fills a gap which urgently needs to be addressed. Companies are being told that they need to improve their application security posture and that more tools are the key to doing this efficiently. However, it is becoming clear that without effective processes and strategies for working with these tools, they quickly become a burden and a blocker.
In this course you will learn how to address these problems and more (in a vendor-neutral way), with topics including:
What to expect from these tools
Customising and optimising these tools effectively
Building tool processes which fit your business
Automating workflows using CI/CD without slowing it down
Showing the value and improvements you are making
Faster and easier triage through smart filtering
How to focus on fixing what matters and cut down noise
Techniques for various alternative forms of remediation
Building similar processes for penetration testing activities
Comparison of the different tool types covered
To bring the course to life and let you apply what you learn, you will work in teams on table-top exercises where you design processes to cover specific scenarios, explain and justify your decisions to simulated stakeholders and practice prioritising your remediation efforts.
For these exercises, you will work based on specially designed process templates (which we will provide) that you can use afterwards to apply these improvements within your own organisation.
Be ready to work in a group, take part in discussions and present your findings. You will leave the course with clear strategies and ideas on how to get less stress and more value from these tools.
Feedback so far
We ran a one-day version of the course focusing on SCA and SAST virtually at OWASP Global AppSec EU 2022 and it went very well. Feedback included everyone feeling they had achieved their desired learning outcomes, 100% satisfaction with the instructor (me 🥰) and a 100% Net Promoter Score®. There was also significant positive feedback on the hands-on exercises.
Attendee comments included:
“On target good advice on taking the next steps in SCA and SAST.”
“For me it was the perfect input to structure the ideas we already have in our sast introduction journey.”
We are excited to expand with more content and broader exercises in upcoming longer versions!
Audio/Visual information about the course
For those of you who prefer to hear information rather than read it, here are some useful resources.
Elevator pitch for the course – ~2 minutes
In this short video, I give a quick explanation of the course and the ideas around it. Transcript in the original LinkedIn post.
Discussion of the background to the course – ~40 minutes
In this interview with the Application Security Podcast, I talk through the background to the course including where the idea came from and the key takeaways and ideas I want people to get from the course.
Sample of the course material – ~55 mins
This is an example of some of the course content, albeit condensed into a less interactive format. The course itself has more discussion and exercises interspersed.
How can I attend this training course?
I am honoured to be listed in the legendary Jim Manico’s training catalogue. Jim’s catalogue is primarily aimed at organisations arranging training for their employees and has a variety of top-class taught training courses. I strongly recommend that anyone looking for the best application and cloud security training takes a close look at what is on offer.
The full training catalogue can be found on the Manicode website and the extracts for my Tools course are below. (I also have an ASVS course available which you can see in the catalogue as well 😀!)
To find out more and how to arrange, you can get in touch with Jim via the Manicode website or get in touch with us directly via info <at> bouncesecurity.com.
I recently had to set up a new laptop and one of the things I wanted was the ability to have both my work and personal GitHub accounts set up in one Linux environment (more specifically WSL). I also wanted to ensure that at least my personal commits were signed using a GPG key.
I discovered quite a few complications in this process so I wanted to include some documentation on how I achieved this. If you are an ssh or git expert then some of this might be obvious but otherwise hopefully it will be helpful!
Git your configuration on
The first step was to get my git configuration set up correctly.
Let me Google that for you
My primary resources for how to get the multiple users set up were a combination of the following two links, which were really useful, but I got caught in a few issues on the way (not necessarily the fault of the posts, though).
The first thing I liked from the GitGuardian link was having two separate paths for Work and Personal projects using two separate GitHub identities.
Based on their instructions, I created “Work” and “Personal” folders within my WSL home folder (actually soft-links to other locations), created the relevant ssh keys and then the configuration files. Obviously I also had to copy my ssh public keys into the relevant page of the GitHub UI for each account.
Git configuration files
Here are the git configurations I ended up with. I will add some more information about parts of them below but they are mostly based on the links above. Note that the heading within which each configuration line sits is important.
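The full files aren’t reproduced here, but a minimal sketch of the pattern follows. It uses git’s conditional includes (“includeIf”) to load a different identity file depending on the directory, as per the GitGuardian approach. The work file name and the exact values are my illustrative assumptions:

```ini
# ~/.gitconfig – main configuration file
[includeIf "gitdir:~/Work/"]
    path = ~/.work.gitconfig
[includeIf "gitdir:~/Personal/"]
    path = ~/.pers.gitconfig
```

```ini
# ~/.pers.gitconfig – personal identity (illustrative values)
[user]
    name = tghosth
    email = jZZZZZZZZ6@hotmail.com
    signingkey = <YOUR_GPG_KEY_ID>
[commit]
    gpgsign = true
[core]
    # Use the personal ssh key for fetch/push operations
    sshCommand = ssh -i ~/.ssh/jZZZZZZZZ6_key
```

The work file would follow the same shape with the work identity and key.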
There were a couple of issues at this stage that had me scratching my head for a while.
Wrong file names
I had used dots in my file names for the personal and work configuration files whereas the main configuration file from the GitGuardian link used hyphens. This took me longer than I care to admit to figure out… This was certainly a PEBKAC issue.
Messed up double quotes
I was getting parsing errors for a very long time on my main configuration file. I tried all sorts of things, including not using soft-links but using the full paths from the root instead. After much faffing, I realised that when I copied the configuration files from the GitGuardian site, they had used “curly double quotes” instead of regular double quotes and this was tripping up git 🤦‍♂️.
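As a reconstructed illustration (not the actual file contents), the difference is almost invisible:

```ini
# Broken – curly quotes pasted in from the web page; git cannot parse these:
[includeIf “gitdir:~/Personal/”]
    path = ~/.pers.gitconfig

# Fixed – plain ASCII double quotes:
[includeIf "gitdir:~/Personal/"]
    path = ~/.pers.gitconfig
```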
I wasn’t previously familiar with using ssh authentication with GitHub so this caused me some challenges as well. I will paste in an example of ssh git configuration file first and then walk through this aspect.
It is possible that there are other/better ways to do this so please feel free to tell me if you have ideas 😀.
SSH Configuration file
Here is the ssh configuration I ended up with. I will explain some of the key aspects further down.
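The file itself isn’t reproduced here, but a sketch of the pattern is two host aliases for the same GitHub server, each pointing at a different key. The “ghpers” alias and personal key path match the ones referenced later in the post; the work alias and key path are my illustrative assumptions:

```
# ~/.ssh/config

# Personal GitHub account – clone with git@ghpers:...
Host ghpers
    HostName github.com
    User git
    IdentityFile ~/.ssh/jZZZZZZZZ6_key
    IdentitiesOnly yes

# Work GitHub account (hypothetical alias and key path)
Host ghwork
    HostName github.com
    User git
    IdentityFile ~/.ssh/work_key
    IdentitiesOnly yes
```

The `IdentitiesOnly yes` line stops ssh from offering every key it knows about, which matters when two keys authenticate to the same server as different accounts.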
Making ssh work with git
I was used to using HTTPS for cloning repositories and personal access token authentication to push, so this was also a bit of a learning curve. My main (eventual) discovery was that despite the use of the “sshCommand” parameter in the previous git configuration files, this is only used for “git fetch” and “git push” operations (not clone) and only when the repository’s origin is set using the SSH syntax rather than HTTPS.
After some experimentation, I found a few possible ways to clone the repository in a way that would make this all work. In the examples below I have used my personal identity but I could also have used my work identity and cloned to the relevant Work directory.
Option 1 – Without explicitly choosing an account
It is possible to start by cloning the repository using the regular HTTPS clone mechanism within the “Personal” directory. I can copy the clone command straight out of the GitHub UI:
git clone https://github.com/tghosth/testclone
I now have the repository cloned locally but I then need to tell git to use the SSH mechanism instead of the HTTPS mechanism. I can do this as follows:
git remote set-url origin git@github.com:tghosth/testclone.git
Note that at no point did I need to specify the specific identity to use so maybe this could even be automated after a clone operation with some sort of hook…
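As a sketch of what that automation could look like (my illustration here, not part of the original setup), a small shell function could rewrite the HTTPS URL into the SSH form before calling `git remote set-url`:

```shell
# Hypothetical helper: convert a GitHub HTTPS clone URL into the
# equivalent SSH form, so that the directory-based git configuration
# can supply the right key on subsequent fetch/push operations.
https_to_ssh() {
  url="$1"
  path="${url#https://github.com/}"   # strip the HTTPS prefix
  path="${path%.git}"                 # strip any trailing .git
  echo "git@github.com:${path}.git"
}

# Usage after an HTTPS clone (e.g. from an alias or a hook):
#   git remote set-url origin "$(https_to_ssh "$(git remote get-url origin)")"
```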
Either way, if I now do a push, it asks me for the correct key passphrase and works successfully
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ git push -v
Pushing to git@github.com:tghosth/testclone.git
Enter passphrase for key '/home/josh/.ssh/jZZZZZZZZ6_key':
= [up to date] main -> main
updating local tracking ref 'refs/remotes/origin/main'
My main concern about this approach is that I am not sure how well it will work if there are multiple remotes or branches. The other disadvantage is that it is a two-step process, and it will also not work smoothly for private repositories (the initial HTTPS clone would require separate authentication).
Option 2 – Doing a clean ssh clone choosing the relevant account
The other option is a one-step process, but I need to modify the original clone command. When I copy the clone command for an ssh clone, it will look like this:
git clone git@github.com:tghosth/testclone.git
However, before I use it, I need to change it to tell the clone command which identity I want to use, as otherwise it will return errors. I can use the value from the Host field of the ssh configuration file above for this, so the command will change to the following:
git clone git@ghpers:tghosth/testclone.git
You can see above that “ghpers” was the Host I gave to my personal key in the configuration file.
I can then run this and git will know which SSH identity to use for the clone operation. Once I start doing fetches and pushes, it will be using the identity configured in the relevant git configuration file for this folder tree (.pers.gitconfig).
I like this method because it is a single command. Whilst I have to manually change the clone command rather than just copying it from the GitHub UI, I only have to do that once and then everything works. It will also work smoothly for private repositories.
Option 3 – Using ssh-agent
I actually figured this option out whilst writing this blog post 🙃. The freecodecamp link sort of alludes to this but not explicitly in the context of easily cloning the repository in the first place.
The ssh-agent program temporarily keeps ssh private keys in memory and one advantage is that you only have to enter the passphrase once per session rather than on every use individually. Without ssh-agent, I would need to enter the passphrase for every single git clone, git fetch and git push.
However, another advantage for our use case is that when the key is held in ssh-agent and I do a git clone via ssh, the ssh operation will automatically use that key without needing to be told.
You can see this in the terminal fragment below. I start ssh-agent running in my current terminal, (see this explanation of why it needs to be done using eval). I then add the identity (my personal identity in this case) I want to use to the agent.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ eval `ssh-agent -s`
Agent pid 841
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add -l
The agent has no identities.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add ~/.ssh/jZZZZZZZZ6_key
Enter passphrase for /home/josh/.ssh/jZZZZZZZZ6_key:
Identity added: /home/josh/.ssh/jZZZZZZZZ6_key (jZZZZZZZZ6@hotmail.com)
I can then run git clone in my Personal directory without changing the ssh path I copied from the GitHub UI. Note that I used a private repo in this example just to check it would work. It automatically uses my “personal” identity held in ssh-agent (as otherwise the clone would have failed).
I can then do a push operation and it will be using the identity configured in the relevant git config file for this folder tree (.pers.gitconfig). It doesn’t need a passphrase because the key is active in ssh-agent.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git push -v
Pushing to git@github.com:tghosth/testclonepriv.git
= [up to date] main -> main
updating local tracking ref 'refs/remotes/origin/main'
This option is nice because it also solves having to enter the passphrase every time. Obviously there are security implications to using ssh-agent, but for a single-user local Linux machine it seems like a reasonable solution. If you are jumping between work and personal identities frequently it might get fiddly, but on the other hand it matters most for the initial clone operation.
GPG Signing Commits
This was more straightforward overall and GitHub has some good documentation for how to get it set up. At the time of writing, GitHub does not support commit signing using an SSH key so you have to set up a GPG key separately. You will notice that in my “.pers.gitconfig” file above I have user.signingkey and commit.gpgsign configured. (I am not currently using this for my work identity.)
Using the documentation, I was able to set this functionality up quite easily but once I had it set up, it kept failing with the following error:
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git commit --allow-empty -m "test sign"
error: gpg failed to sign the data
fatal: failed to write commit object
After a painfully long time, I finally found a hint in a blog post somewhere that I needed to run the following command in my terminal first:
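Based on the symptoms described, this was almost certainly the common fix of setting the GPG_TTY environment variable, which tells gpg’s pinentry which terminal to prompt on (reconstructed here as my best guess rather than quoted from the original):

```shell
# Make gpg's pinentry prompt appear in the current terminal.
# Usually added to ~/.bashrc so it applies to every new session.
export GPG_TTY=$(tty)
```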
With that command run, the commit would pop up a GPG window in the terminal prompting me for my GPG passphrase (obviously different to my SSH passphrase) and would then create and sign the commit.
I can use “git log” to show the successful signature.
josh@LAPTOP-ZZZZZZZ:~/Personal/testclonepriv$ git log --show-signature
commit 169a1d725d2ZZZZZZZZZZZZZZZZZZZZZc565ff3d5 (HEAD -> main)
gpg: Signature made Wed Jan 26 09:42:20 2022 IST
gpg: using RSA key 487BBZZZZZZZZZZZZZZZZZZZZZZZZZFB6E4682A9
gpg: Good signature from "Josh Grossman (tghosth) <jZZZZZZZZ6@hotmail.com>" [ultimate]
Author: tghosth <jZZZZZZZZ6@hotmail.com>
Date: Wed Jan 26 09:42:20 2022 +0200
Thanks for reading, I hope this is a useful summary and makes it easier for you to set up this functionality. If you have comments or feedback, the easiest option is to reach out to me on Twitter at @JoshCGrossman!
I know lots of people still have questions about OWASP and the AppSecEU 2018 debacle. Other than being a member, I have no formal standing in OWASP, locally or globally, so nothing below represents anything official, but I thought I would prepare some answers based purely on publicly available information.
What happened after the initial backlash?
The surprise announcement was followed by an angry rebuttal and a lot of outcry but after a few days things went quiet. Really quiet. The OWASP board email list has historically been relatively busy with consistent traffic. In the past 10 years, the latest that traffic has restarted on that list after the holiday period is January 4th, and only once has there not been a board meeting by January 14th. In 2018 there was complete board silence until January 18th, when a number of OWASP leaders started querying what was going on. A formal follow-up statement about the decision only came on January 23rd. It appears that there were some discussions being held behind the scenes, culminating in a recorded conference call between OWASP board representatives and the UK and Israel OWASP leadership on January 22nd.
Why did AppSecEU get moved to the UK?
The follow-up statements seem to indicate that the root cause of the move was that recent operational challenges at the OWASP foundation, due at least in part to understaffing, meant that the foundation felt it was not in a position to provide the required support for the event, especially given that AppSecEU 2017 and AppSecUSA 2017 apparently did not provide the expected financial benefits.
The impression is that an AppSecEU in the UK is a safe choice whilst the foundation tries to address its internal issues.
As the follow-up statement put it: “We would like to acknowledge the effort of the organizing team, while realizing the required level of support from the foundation was not achieved.”
What about the supposed lack of preparedness from the OWASP Israel committee?
On the initial board call in December, a big deal was made of the fact that, despite the conference only (!) being six months away, various preparations had not been made, including there being no signed contract with the venue.
In fact, on the call on January 22nd, the new Executive Director praised the third party which the Israeli organising committee had engaged to assist with the conference logistics and, more importantly, stated that the foundation would cover the costs of having to withdraw from the contract which had in fact been signed with the venue.
So what is next for OWASP and Israel?
On the call on January 22nd, the board expressed strong support for a global OWASP event to take place in Israel in 2019, once the foundation had had a year to address its operational challenges. This seems to be how others have interpreted it as well.
“I would say so. The plan is next year to be in Israel. So the board decided a swap, as per my understanding.”
Given that going forward the Executive Director is keen to start planning OWASP global events up to a year in advance, it remains to be seen over the next few months whether these words are translated into actions.
Additionally, the Israeli chapter have now released their response to the final decision and they are understandably still unhappy about the outcome but also positive about the intentions of the new board to try and repair the relationship and champion an event in Israel for 2019.
I think it is clear to everyone that the initial communication around this decision was not good enough, but it is particularly disappointing that the basis for this decision (e.g. the lack of a signed contract and the “support” of the Israeli chapter in the decision) was demonstrably incorrect, and that the initial communication and board discussion made it out that the root cause was a lack of preparedness and ability to deliver on the part of the Israeli chapter.
It is encouraging that this has been walked back to a certain extent; however, it is clear that it will take more than that to address the hurt felt by the Israeli chapter leadership.
The support for the Israeli chapter over Twitter and the board discussion of a global event in Israel in 2019 is also encouraging and I hope that the OWASP board will proactively reach out to the Israeli chapter leadership to make sure that this comes to fruition.
For some clients where we perform security testing, the client requests that we report on all findings on a daily basis.
Now, I am 100% behind reporting progress in terms of what has been tested (assuming there are multiple elements) and, more importantly, reporting problems in progressing as soon as possible. However, there are still some clients who expect this plus all findings to be reported.
I wanted to jot down some thoughts on some pros and cons to this approach.
A1: Feeling of progress
The client feels like we are working, progressing and finding stuff. (Although status reporting without findings should also mostly accomplish this).
A2: Immediate feedback and fix
The client receives immediate feedback on findings and can start to look at how to fix them even before we finish testing.
They may even be able to fix the finding and allow us to retest before the end of testing. I am always a little wary of the client making changes to an application in the middle of testing, but if they are going to fix something and break something else in the process, that is going to happen regardless of whether it happens during the test or after it.
A3: Enforces reporting as you go
There is a tendency for consultants to save all the reporting for the end of the project. Hopefully they took enough screenshots along the way, but even so, suddenly you are at the end of the project and you have 20 findings to write up. Having a daily report ensures that findings are written up as they are found, whilst they are still fresh in mind.
D1: Time consuming
Whilst we would have to write up all findings anyway, it is still more time consuming to have to prepare a report daily. The report has to go through a QA process every day instead of just once, and if it is necessary to combine reports from multiple people it can get even more complicated, especially if we are using a complex reporting template.
D2: Difficult to update already reported findings
Sometimes we will find something and only afterwards find another angle or another element to the issue which means that the finding needs to be updated. This leads to more duplicated effort with the finding being reviewed multiple times and the client having to read and understand the finding multiple times.
D3: Less time to consider findings in detail
Sometimes it takes time to consider the real impact of a finding. For example, what is the real risk from this finding? Can it only be performed by an administrator? Will it only be relevant in certain circumstances? Having to rush the finding out in a daily report loses that thinking time and can lead to an inaccurate initial risk rating.
D4: Getting the report ready in time
Every day becomes a deadline day with a race to get the report ready in time. It can disrupt the testing rhythm and mean that consultants have to break from testing to prepare the daily report therefore losing focus and momentum.
D5: Expectation of linear progress
Testing doesn’t progress in a linear fashion. A consultant might spend a lot of time trying to progress on a particular test on one day, or on another day find a bunch of quick, lower-risk findings. A daily report creates an expectation of news every day and a feeling that no news means a lack of progress.
D6: Increased likelihood of mistakes
With the increased pressure of daily output, the likelihood of mistakes is also increased as report preparers are under pressure to deliver the daily report by the deadline and reviewers are under pressure to quickly release the report to the client.
D7: It might not even get to the client!
If there are a few people in the review process and just one of them is delayed in looking at the report or has a query, the report may not make it to the client in time to be relevant before the next day’s report is released anyway!
D8: One size doesn’t fit all
Once you get into the habit of expecting daily reports or you create that expectation with the client, suddenly it is expected for any project regardless of whether it makes sense. This can mean that ongoing discussion with the client is discouraged because “we’re doing a daily report anyway”, or alternatively a project which requires in-depth thought and research is constantly disturbed by unhelpful daily reports.
I agree that it is a bad idea to do a load of testing and then have the client only see output weeks later, especially where there are particularly serious findings that immediately expose the client to serious risk.
However, the need to provide a continual stream of updates leads to time inefficiency, lower quality findings and disturbs the progression of the test.
As such, whilst the reporting format should be discussed at the start of the project with the client, the aim should be to agree on the following points by communicating the reasons discussed in this post:
If this is a large project where there are multiple parts which are being tested one after the other in a short time-frame then it is worth reporting on progress over these parts on a daily basis.
Problems with testing should always be reported as soon as possible, with a daily status update on these issues to make sure they are not forgotten.
Critical threats which immediately put the client at severe risk should always be reported as soon as possible.
If the application is currently under development or there is specific pressure to deliver key findings as fast as possible, then high risk findings or medium risk findings can be delivered during the course of the test but should not be restricted to a strictly daily frequency.
If this is a short project (up to a week) without lots of different elements, or if this is a long project (several months), then daily status reporting is not appropriate.
Reporting of all findings on a strictly daily basis will never be appropriate.
I was recently involved in an application security testing project for a large client covering around 20 applications, with multiple consultants working simultaneously in just three weeks of testing. By discussing with the client up front and agreeing on points 1, 2 and 3 above, we kept the client fully in the loop whilst not burdening ourselves with reporting every tiny detail every day.
I will probably update this post as I think of more advantages/disadvantages but feel free to send me feedback in the comments or via Twitter.