
Getting multiple GitHub accounts on one Linux/WSL machine



I recently had to set up a new laptop and one of the things I wanted was the ability to have both my work and personal GitHub accounts set up in one Linux environment (more specifically, WSL). I also wanted to ensure that at least my personal commits were signed using a GPG key.

I discovered quite a few complications in this process so I wanted to include some documentation on how I achieved this. If you are an ssh or git expert then some of this might be obvious but otherwise hopefully it will be helpful!

Git your configuration on

The first step was to get my git configuration set up correctly.

Let me Google that for you

My primary resources for getting the multiple users set up were a combination of two links, a GitGuardian post and a freeCodeCamp post, which were really useful, but I got caught in a few issues along the way (not necessarily the fault of the posts, though).

The first thing I liked from the GitGuardian link was having two separate paths for Work and Personal projects using two separate GitHub identities.

Based on their instructions, I created “Work” and “Personal” folders within my WSL home folder (actually soft-links to other locations), created the relevant ssh keys, and then created the relevant configuration files. Obviously, I also had to copy my ssh public keys into the relevant page of the GitHub UI for each account.
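As a sketch, the key generation step might look something like the following. The file names are illustrative, and in this demo the keys go into a scratch directory with an empty passphrase so it can run non-interactively; for real use you would target ~/.ssh and set a proper passphrase (as I did).

```shell
# Illustrative only: one key pair per GitHub identity.
# A scratch directory stands in for ~/.ssh, and -N '' skips the
# passphrase prompt for the demo (my real keys have passphrases).
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$keydir/personal_key" -N '' -C "personal GitHub account"
ssh-keygen -q -t ed25519 -f "$keydir/work_key" -N '' -C "work GitHub account"
ls "$keydir"   # personal_key  personal_key.pub  work_key  work_key.pub
```

The .pub files are what you paste into the GitHub UI; the private keys stay on your machine.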

Git configuration files

Here are the git configurations I ended up with. I will add some more information about parts of them below but they are mostly based on the links above. Note that the heading within which each configuration line sits is important.

# ~/.gitconfig
[includeIf "gitdir:~/Personal/"]
    path = ~/Personal/.pers.gitconfig
[includeIf "gitdir:~/Work/"]
    path = ~/Work/.work.gitconfig
[core]
    excludesfile = ~/.gitignore

# ~/Work/.work.gitconfig
[user]
    email = #redacted
    name = Josh Grossman
    user = joshacmesecurity #redacted
[core]
    sshCommand = ssh -i ~/.ssh/joshacme_key #redacted

# ~/Personal/.pers.gitconfig
[user]
    email = #redacted
    name = tghosth
    user = tghosth
    signingkey = 5517ZZZZZZZZZZZZ2A9 #redacted
[commit]
    gpgsign = true
[core]
    sshCommand = ssh -i ~/.ssh/jZZZZZZZ6_key #redacted
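A quick way to sanity-check that the conditional includes are doing what you expect is to ask git which identity it resolves inside each tree. This sketch recreates the layout under a throwaway HOME so it won't touch your real config (the names match the files used in this post):

```shell
# Recreate the conditional-include layout under a throwaway HOME
# and confirm git picks up the right identity per directory tree.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/Personal" "$HOME/Work"

cat > "$HOME/.gitconfig" <<'EOF'
[includeIf "gitdir:~/Personal/"]
    path = ~/Personal/.pers.gitconfig
[includeIf "gitdir:~/Work/"]
    path = ~/Work/.work.gitconfig
EOF

printf '[user]\n\tname = tghosth\n' > "$HOME/Personal/.pers.gitconfig"
printf '[user]\n\tname = Josh Grossman\n' > "$HOME/Work/.work.gitconfig"

# "gitdir:" conditions only apply inside an actual repository
git init -q "$HOME/Personal/demo"
git -C "$HOME/Personal/demo" config user.name
```

If everything is wired up correctly, the final command prints the personal identity's name; run the same check inside a repository under Work and you should see the work name instead.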

Git Gotchas

There were a couple of issues at this stage that had me scratching my head for a while.

Wrong file names

I had used dots in my file names for the personal and work configuration files whereas the main configuration file from the GitGuardian link used hyphens. This took me longer than I care to admit to figure out… This was certainly a PEBKAC issue.

Messed up double quotes

I was getting parsing errors for a very long time on my main configuration file. I tried all sorts of things including not using soft-links but using the full paths from the root instead. After much faffing, I realised that when I copied the sample configuration files from the GitGuardian site, they used “curly double quotes” instead of regular double quotes and this was tripping up git 🤦‍♂️.
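If you hit similar parsing errors, a quick grep for curly quotes would have saved me a lot of time. A sketch of the idea, using a scratch file that reproduces the problem:

```shell
# Scratch file reproducing the bug: curly quotes instead of straight ones
printf '[includeIf “gitdir:~/Personal/”]\n' > /tmp/bad.gitconfig

# Flag any line containing curly double quotes (prints line number and content)
grep -n '[“”]' /tmp/bad.gitconfig
```

Running the same grep against your real ~/.gitconfig will print nothing (and exit non-zero) if the file is clean.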

SSH authentication


I wasn’t previously familiar with using ssh authentication with GitHub so this caused me some challenges as well. I will paste in an example ssh configuration file first and then walk through this aspect.

It is possible that there are other/better ways to do this so please feel free to tell me if you have ideas 😀.

SSH Configuration file

Here is the ssh configuration I ended up with. I will explain some of the key aspects further down.

Host ghpers
  HostName github.com
  AddKeysToAgent yes
  IdentityFile ~/.ssh/jZZZZZZZ6_key

Host ghwork
  HostName github.com
  AddKeysToAgent yes
  IdentityFile ~/.ssh/joshacme_key
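One way to check what an alias like this resolves to, without actually connecting anywhere, is ssh's -G flag, which prints the final configuration for a given host. A sketch against a scratch copy of the personal entry (note the HostName line, which is what tells ssh where the alias actually points):

```shell
# Write a scratch copy of the personal Host entry
cat > /tmp/gh_ssh_config <<'EOF'
Host ghpers
  HostName github.com
  AddKeysToAgent yes
  IdentityFile ~/.ssh/jZZZZZZZ6_key
EOF

# -G prints the configuration ssh would use, without connecting;
# -F points it at our scratch config instead of ~/.ssh/config
ssh -G -F /tmp/gh_ssh_config ghpers | grep -E '^(hostname|identityfile) '
```

You should see github.com as the hostname and the personal key as the identity file, confirming the alias does what you intended.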

Making ssh work with git

I was used to using HTTPS for cloning repositories and personal access token authentication to push, so this was also a bit of a learning curve. My main (eventual) discovery was that despite the use of the “sshCommand” parameter in the previous git configuration files, this is only used for “git fetch” and “git push” operations (not clone) and only when the repository’s origin is set using the SSH syntax rather than the HTTPS syntax.

After some experimentation, I found a few possible ways to clone the repository in a way that would make this all work. In the examples below I have used my personal identity but I could also have used my work identity and cloned to the relevant Work directory.

Option 1 – Without explicitly choosing an account

It is possible to start by cloning the repository using the regular HTTPS clone mechanism within the “Personal” directory. I can copy the clone command straight out of the GitHub UI:

cd ~/Personal
git clone https://github.com/tghosth/testclone.git

I now have the repository cloned locally but I now need to tell git to use the SSH mechanism instead of the HTTPS mechanism. I can do this as follows:

cd ~/Personal/testclone
git remote set-url origin git@github.com:tghosth/testclone.git

Note that at no point did I need to specify the specific identity to use so maybe this could even be automated after a clone operation with some sort of hook…
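As a sketch of that automation idea, a tiny (hypothetical) helper function could do the HTTPS-to-SSH rewrite for you; the repository URL below is just the test repo from this post:

```shell
# Hypothetical helper: flip origin from the HTTPS syntax to the SSH syntax
https_to_ssh() {
  old=$(git remote get-url origin)
  git remote set-url origin "$(printf '%s\n' "$old" | sed -E 's#^https://github\.com/#git@github.com:#')"
}

# Demo against a throwaway repository with a fake origin
demo=$(mktemp -d)
git init -q "$demo"
git -C "$demo" remote add origin https://github.com/tghosth/testclone.git
(cd "$demo" && https_to_ssh)
git -C "$demo" remote get-url origin   # prints: git@github.com:tghosth/testclone.git
```

Hooking something like this into a post-clone step is left as an exercise; the point is that no identity needs to be named anywhere.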

Either way, if I now do a push, it asks me for the correct key passphrase and works successfully:

josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ git push -v
Pushing to git@github.com:tghosth/testclone.git
Enter passphrase for key '/home/josh/.ssh/jZZZZZZZZ6_key':
 = [up to date]      main -> main
updating local tracking ref 'refs/remotes/origin/main'
Everything up-to-date

My main concern about this approach is how well it will cope if there are multiple remotes or branches or something. The other disadvantage is that it is a two-step process, and it will not work smoothly for private repositories.

Option 2 – Doing a clean ssh clone choosing the relevant account

The other option is a one-step process, but I need to change the original clone command. When I copy the clone command for an ssh clone, it will look like this:

cd ~/Personal
git clone git@github.com:tghosth/testclone.git

However, before I use it, I need to change it to tell the clone command which identity I want to use, as otherwise it will return errors. I can use the value in the Host field of the ssh configuration file above for this, so the command changes to the following:

cd ~/Personal
git clone git@ghpers:tghosth/testclone.git

You can see above that “ghpers” was the Host I gave to my personal key in the configuration file.

I can then run this and git will know which SSH identity to use for the clone operation. Once I start doing fetches and pushes, it will be using the identity configured in the relevant git configuration file for this folder tree (.pers.gitconfig).

I like this method because it is a single command. Whilst I have to manually change the clone command rather than just copying it from the GitHub UI, I only have to do that once and then everything works. It will also work smoothly for private repositories.

Option 3 – Using ssh-agent

I actually figured this option out whilst writing this blog post 🙃. The freeCodeCamp link sort of alludes to this, but not explicitly as a way of easily cloning the repository in the first place.

The ssh-agent program temporarily keeps ssh private keys in memory, and one advantage is that you only have to enter the passphrase once per session rather than on every use. Without ssh-agent, I would need to enter the passphrase for every single git clone, git fetch and git push.

However, another advantage for our use case is that when the key is held in ssh-agent and I do a git clone via ssh, the ssh operation will automatically use that key without needing to be told.

You can see this in the terminal fragment below. I start ssh-agent running in my current terminal (it has to be started via eval so that the environment variables it prints get set in the current shell). I then add the identity I want to use (my personal identity in this case) to the agent.

josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ eval `ssh-agent -s`
Agent pid 841
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add -l
The agent has no identities.
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclone$ ssh-add ~/.ssh/jZZZZZZZZ6_key
Enter passphrase for /home/josh/.ssh/jZZZZZZZZ6_key:
Identity added: /home/josh/.ssh/jZZZZZZZZ6_key

I can then run git clone in my Personal directory without changing the ssh path I copied from the GitHub UI. Note that I used a private repo in this example just to check it would work. It automatically uses my “personal” identity held in ssh-agent (as otherwise the clone would have failed).

josh@LAPTOP-ZZZZZZZZ:~/Personal$ git clone git@github.com:tghosth/testclonepriv.git
Cloning into 'testclonepriv'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 4 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (4/4), 4.50 KiB | 1.50 MiB/s, done.
josh@LAPTOP-ZZZZZZZZ:~/Personal$ cd testclonepriv/

I can then do a push operation and it will be using the identity configured in the relevant git config file for this folder tree (.pers.gitconfig). It doesn’t need a passphrase because the key is active in ssh-agent.

josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git push -v
Pushing to git@github.com:tghosth/testclonepriv.git
 = [up to date]      main -> main
updating local tracking ref 'refs/remotes/origin/main'
Everything up-to-date

This option is nice because it also solves having to enter the passphrase every time. Obviously there are implications of using ssh-agent but for a single user local Linux machine it seems like a reasonable solution. If you are jumping between work and personal frequently, it might get fiddly but on the other hand it is most important for the initial clone operation.

GPG Signing Commits

This was more straightforward overall and GitHub has some good documentation for how to get it set up. At the time of writing, GitHub does not support commit signing using an SSH key so you have to set up a GPG key separately. You will notice that in my “.pers.gitconfig” file above I have user.signingkey and commit.gpgsign configured. (I am not currently using this for my work identity.)
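For reference, the same two settings can also be applied from the command line rather than editing the file by hand. In this sketch a scratch file stands in for the real config (drop the --file option to target your actual configuration), and the key ID is the redacted placeholder from above:

```shell
# Same settings as in .pers.gitconfig, applied via the git CLI.
# A scratch file stands in for the real config; the key ID is a placeholder.
cfg=$(mktemp)
git config --file "$cfg" user.signingkey 5517ZZZZZZZZZZZZ2A9
git config --file "$cfg" commit.gpgsign true
git config --file "$cfg" --list
```

With commit.gpgsign set to true, every commit in the relevant tree is signed without needing the -S flag.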

Using the documentation, I was able to set this functionality up quite easily but once I had it set up, it kept failing with the following error:

josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git commit --allow-empty -m "test sign"
error: gpg failed to sign the data
fatal: failed to write commit object

After a painfully long time, I finally found a hint in a blog post somewhere that I needed to run the following command in my terminal first:

export GPG_TTY=$(tty)

With that command run, the commit would pop up a GPG window in the terminal prompting me for my GPG passphrase (obviously different to my SSH passphrase) and would then create and sign the commit.

josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ export GPG_TTY=$(tty)
josh@LAPTOP-ZZZZZZZZ:~/Personal/testclonepriv$ git commit --allow-empty -m "test sign"
[main 169a1d7] test sign
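To avoid having to remember to run the export in every new terminal session, it can be added once to your shell profile (assuming bash here; adjust the file for your shell):

```shell
# Persist the GPG_TTY fix for all future bash sessions
echo 'export GPG_TTY=$(tty)' >> ~/.bashrc
```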

I can use “git log” to show the successful signature.

josh@LAPTOP-ZZZZZZZ:~/Personal/testclonepriv$ git log --show-signature
commit 169a1d725d2ZZZZZZZZZZZZZZZZZZZZZc565ff3d5 (HEAD -> main)
gpg: Signature made Wed Jan 26 09:42:20 2022 IST
gpg:                using RSA key 487BBZZZZZZZZZZZZZZZZZZZZZZZZZFB6E4682A9
gpg: Good signature from "Josh Grossman (tghosth) <>" [ultimate]
Author: tghosth <>
Date:   Wed Jan 26 09:42:20 2022 +0200

    test sign


In closing

Thanks for reading, I hope this is a useful summary and makes it easier for you to set up this functionality. If you have comments or feedback, the easiest option is to reach out to me on Twitter at @JoshCGrossman!

Security through Non-testability


Signature based detection

A while ago, I saw the following tweet from James Kettle, Head of Research at PortSwigger (the makers of Burp Suite).

The implication from this tweet is that Web Application Firewalls (WAFs) are blocking requests containing the string “burpcollaborator.net” because it is used by Burp Suite when trying to discover vulnerabilities.

As James says in his tweet, there is a trivially simple workaround for this by replacing the “burpcollaborator.net” string with the server’s IP instead, although maybe, in an effort to keep up with the “arms race”, WAF developers will start to block input containing that IP address as well.

Sadly, this is often a standard tactic for WAFs (and indeed other forms of malicious content protection): blocking simple signatures that can be easily bypassed with a minor change to the signature.

Time is money

Whilst these sorts of protections will not stop an even slightly motivated attacker, they do incur a time cost to bypass. Where this becomes an issue is when a client wants us to perform a security test with these protections in place. This is an issue I come across often during application security testing and something that I discuss in my talk, “How to get the best AppSec test of your life“.

Every time a client tells us that they have a WAF in place, I explain that our preferred testing approach is for us to test without being blocked by the WAF and then, and only if absolutely necessary, validate findings against the WAF-protected site at the end of the engagement. On a client engagement, we are (usually) being paid to test the client application and not the WAF. We could spend a lot of time and effort specifically trying to bypass the WAF for each attack but that is inefficient for the client.

(Illustration of a typical WAF)

Another potential issue is that in an attack or other IT incident, a company may be forced to disable their WAF. If their site has not been tested without a WAF, it may therefore still be vulnerable.

Most of the time, clients will accept this approach without issue once the rationale has been explained. Where they don’t accept this, we will usually agree to test anyway but include a disclaimer in the report that the WAF remained active for the duration of the test.

On one memorable occasion, a client decided that I had to verify a finding with their WAF enabled and I had several rounds of cat and mouse with their WAF vendor as I would bypass the WAF and the WAF vendor would deploy a bug fix or a configuration change to address it. This only ended when I sent a payload that crashed the in-line, cloud-based WAF rendering the client’s site inaccessible for several minutes every time I sent the payload. The WAF vendor then claimed victory since “they had blocked the attack”!


Another example is clients whose mobile applications encrypt traffic in transit in addition to the standard TLS encryption. Usually, the client either cannot disable this functionality when we test their applications or is not prepared to do so. Again, such a measure incurs a time cost to bypass, either by building a tool to perform the decryption and allow us to view/edit the traffic, or by testing the mobile application and its supporting APIs separately.

Either way, clients generally don’t want to pay the cost associated with this but also expect a comprehensive test.

In conclusion

If you are paying for an application security test, you want the money to be spent in the most efficient way possible with the maximum amount of time and effort allocated to testing your application. Making your tester’s life as easy as possible is the best way of achieving that.

If you leave these sort of time wasting security measures in place, you are going to end up spending money testing these measures rather than your application.


If you are really advanced and really confident in your application, you may want to have someone look at vulnerabilities that only occur when your WAF or other security technology is enabled. We have seen examples like this for CDN and caching technologies but if anyone has any WAF specific examples (I am sure I have seen this but cannot remember where), please let me know via Twitter 🙂

Setting up an OWASP Juice Shop CTF


Last updated: 02-August-2020


I recently used the very excellent OWASP Juice Shop application developed by the very excellent Björn Kimminich to run an internal Capture the Flag event (CTF) for my department. It went really well and got really good feedback so I thought I would jot down some practical notes on how I did it.

One important point before you start: you should note the disclaimer that there are plenty of solutions for the challenges on the Internet.

Someone asked how I addressed that in this case.

(I’ll explain the PDF below)

Anyway, let’s get into the details of how I did the CTF.

RTFM (Read The Full Manual)

First of all, there are some great instructions about how to use Juice Shop in CTF mode in the accompanying ebook, see this section specifically. In this blog post, I want to talk about some of the more specific choices I made on top of those instructions.

Obviously, your mileage will vary but hopefully the information below will help you with some of the practicalities of setting this up in the simplest way possible.

The target applications

I originally thought about getting people to download the docker image onto their own laptops and work on that, but in the end I decided to go with the Heroku option from the Juice Shop repository, as it appeared that as long as you didn’t hammer the server (which shouldn’t be necessary for this app) you could host it there for free! (Although you do need to supply them with credit card details just in case.)

The only thing to do is make sure you set the correct environment variable to put the application into CTF mode, see the screenshot below.

(Screenshot: Heroku environment variable configuration)

I had split my participants into teams (I did this myself to make sure that the teams were balanced) and I set up multiple application instances such that each team was sharing one application instance so that they would see their shared progress but not interfere with other teams. I also made sure each instance had a unique name to stop teams messing with each other.

Spinning up the CTF platform

I had previously experimented with the CTFd platform when I first planned this event a year or so ago, so I was confident that I would use it as the scoring system and host it myself on an AWS EC2 instance.

When I headed over to their GitHub repository I could see there were a number of different deployment methods and I decided on the “docker-compose” method because I like the simplicity of Docker. Things got a bit messy as I stumbled into a known issue with performance (which has now been fixed) and I also realised that there was no obvious way of using TLS which I decided I wanted as well.

The guys on the CTFd slack channel were really helpful (thanks especially to kchung and nategraf) and eventually I used a fork which nategraf had made which had the performance issue fixed and also had a different version of the “docker-compose” script which included an nginx reverse proxy to manage the TLS termination.

I used an EC2 t2.medium instance for the scoreboard server (mostly because of the original performance problems) but you could probably get away with a much smaller instance. I chose Ubuntu 16.04 as the operating system.

UPDATE: For the following section up to the “sudo certbot” below command, I created an ugly shell-script with the commands included.

I installed Docker based on the instructions here, up to and including running sudo apt-get install docker-ce docker-ce-cli. I then added the local user to the “docker” group using sudo usermod -aG docker ubuntu (you may need to log out and back in after this) and then used the Linux instructions from here to install “docker-compose” (don’t make the mistake I initially made of installing it via apt-get!).

If you just want to use the scoreboard without TLS then you can just clone the CTFd repository from here, run docker-compose up (using -d if you want it to run in the background) from within the cloned directory, and you are away.

Using TLS with the CTF platform

If you do want a (hopefully) simple way to use TLS: the fork I initially used no longer exists, so I have created a deployment repository which uses the same docker-compose file that was in the original fork (including the nginx reverse proxy) but will also pull the latest version of CTFd, plus an additional docker-compose file to configure the nginx instance for TLS. You can clone that repository from here. You will then need to copy/clone the original CTFd repository from here into that same directory. (The ugly shell-script I mentioned does that automatically.)

Once you have cloned that, you will need to get yourself a TLS certificate and private key. I used the EFF’s “certbot” which generates certificates using Let’s Encrypt to produce my certificate. I installed using the instructions here.

If you used my ugly shell-script, this is where it leaves you and you need to continue following instructions.

I used a subdomain of my personal domain and pointed the subdomain to the EC2 server’s IP address by adding an A record to my DNS settings.

Then, while no other web servers were running, I ran the command sudo certbot certonly --standalone -d followed by my chosen subdomain, which automatically created the certificate and private key I needed for that domain (make sure port 80 or 443 is open!).

(Screenshot: certbot output)

I then renamed the “fullchain” file to ctfd.crt and the “privkey” file to ctfd.key and saved them inside the “ssl” directory which you will have if you cloned my deploy repository above. (The nginx.conf file I used for the TLS version of the deployment looks for these files.)

You then just need to make sure that the hostname in the “docker-compose-production.yml” file matches the hostname of your server, and you can then run docker-compose -f docker-compose.yml -f docker-compose-production.yml up -d from within your cloned directory (or use the file I supply) and it should start listening on port 443 with your shiny new SSL certificate!

(Screenshot: CTFd scoreboard served over TLS)

Loading the Juice Shop challenges

This part was easy: I followed the instructions from here to run the tool to export the challenges from Juice Shop, and steps 4 and 5 from here to import the challenges into CTFd.

Setting the stage

I wanted to provide some brief instructions for the teams and also set some ground rules. For most of them, this was their first CTF and I deliberately made the instructions brief but made myself available to answer questions throughout the CTF. I only had four teams so that was a manageable workload.

I gave the teams the following instructions:

  • Each team has its own Heroku-hosted instance of the vulnerable application. Your scope is limited to that URL, port 443.
  • Before the CTF starts, you need to register your team details in the scoreboard app (one account per team).
  • Once the CTF starts, you can use the “Challenges” screen to enter your flags. You should search for the challenge name on the challenges screen.
  • If you miss your flag for some reason, you can go to the scoreboard screen of the vulnerable application and click on the green button to see it again.
  • The clock will start at 16:15 and stop at 18:45 at which point you will not be able to record any additional flags.
  • Be organised and plan your efforts! (Divide and Conquer!)

I also set down the following ground rules:

  • You may not attack or tamper with the scoreboard application in any way whatsoever.
  • You may not try and DoS/DDoS your vulnerable application or indeed anything else related to the challenge.
  • You may not tamper with another team’s instance, another team’s traffic or anything else related to another team or the organisers.
  • You may not use Burp Scanner – it probably won’t help you much and even if it does trigger a flag you won’t understand why it worked.
  • You may not search the Internet or ask anyone other than the organisers for anything related to the specific application, the specific challenges or the application’s source code. You may only search for general information about attacks. You have a PDF containing lots of hints about the challenges.
  • You may not tamper with the database table related to your challenge progress.
  • If you aren’t sure about anything, ask 🙂
  • You may have points deducted if you break the rules!

Giving some help

I mention above a PDF with hints. Like I said above, they were not allowed to search the Internet for Juice Shop specific clues, but I still wanted them to benefit from hints to help them out. Björn prepared an ebook with all the hints in it, but it contained the answers as well. In order to save my competitors from temptation, I created a fork with all the answers removed, which you can find here.

Other notes

During the course of the CTF, I projected the CTFd scoreboard onto the big screen and overlaid a countdown timer as well so people knew how long they had to go. I just used a timer from here although it was a little ugly…

I froze the scoreboard for the last 15 minutes to add to the suspense and cranked up some epic music to keep people in the mood.

Final Thoughts

I’ll leave you with the main guidance I gave to the teams before they started:

  • Have fun – that is the main goal of tonight
  • Learn stuff – that is the other main goal of tonight
  • Don’t get stressed about the time; it is easy to get overwhelmed
  • Team Leaders:
    • Divide up tasks
    • Decide priorities
    • Time management – avoid rabbit-holes
    • Escalate questions
    • Help those with less experience

Everyone had a great time and I got really good feedback so if you have the opportunity to run something like this, I strongly suggest you take it.

If you have any other questions or feedback let me know, my Twitter handle is above.

Updates: 18-March-2018

Team instances

Someone asked about team members sharing an instance. I deliberately organised the CTF with teams of 3-4 people. The primary reason was that our department covers a wide spectrum of skill-sets so I still wanted everyone to take part, enjoy and learn something. I therefore carefully balanced the teams based on abilities. (It also meant I could split my direct reports across different teams so no one could accuse me of favouritism 😉)

My logic in a team sharing an instance was to allow progress to be shared and prevent duplicated effort although I think more than four people in a team would not have been manageable. Overall I think that aspect worked well.

Another thought is that if each team member had their own instance, it is more likely that they would all see the solution to each challenge rather than one person completing it and just telling the others. However, this would have slowed things down which in the time we had available probably wouldn’t have been worth it.

Instance resets

One thing I didn’t do beforehand was practice resetting an instance and restoring progress which caused issues when one team created too much stored XSS and another team somehow accidentally changed the admin password without realising it!

Resetting an instance is possible by saving the continue code from the cookie, restarting the instance (that is easy in Heroku) and then sending a PUT request to the app looking like this:

PUT /rest/continue-code/apply/<<CONTINUECODE>> HTTP/1.1