Quick note before we start: if you haven't already checked out the Streamlining Simulated Attack Infrastructure (Part 1) post, please have a look at it. There we discuss the inner workings of the infrastructure, our design choices and the overall workflow of the solution.
Today we're diving into the modularity of the infrastructure and how to create and deploy an Evilginx2 module. Building modular infrastructure allows us to divide our codebase into smaller pieces and integrate individual tools into our environment without the codebase turning into spaghetti code.
In order to keep things organised, we also developed our own 'helper' DNS modules, which are modules for tools that require multiple DNS records. By creating smaller functional modules we can reduce the amount of code required for DNS records across the repository while taking a "Don't Repeat Yourself" (DRY) approach.
In order to design our module, we first need to understand how to deploy our tool (in this case Evilginx2). It's important to identify the required resources before developing your modules.
Evilginx2 requires the following resources:
- An EC2 Instance
- A Public IP address
- DNS Records
- Firewall Rules
Using the AWS Terraform Provider we can easily generate a module with all these requirements. To do so we create a sub-directory within our 'modules' folder and name it after our tool (Evilginx2).
Three files are then created within this directory:
- main.tf - Contains all the resources required for the module, as well as the provider information;
- output.tf - Terraform lets you define what a module outputs after a plan is applied. This file contains the module's output values, for the benefit of the operator;
- variables.tf - This file contains the variables which are designed to be dynamic and change per project. These variables are carefully chosen to reduce the amount of configuration an operator has to do when deploying.
Terraform providers offer easy-to-use building blocks to create our own custom modules. In our case, the four resources required above can be broken down into three building blocks:
- VPC Security Group
- An EC2 Instance + Public IP address
- DNS helper module
Creating a new Virtual Private Cloud (VPC) ensures that all project-related infrastructure components are isolated into their own VPC. This is done by referencing every component to the newly created VPC ID.
An example of the module main.tf can be found below:
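A minimal sketch of such a main.tf is shown here; the variable names, the DNS helper module path, and the specific security group rules are illustrative assumptions, not the exact in-house configuration:

```hcl
# modules/evilginx/main.tf (sketch -- names and rules are illustrative)

# Security group scoped to the project VPC
resource "aws_security_group" "evilginx" {
  name   = "evilginx-${var.project_name}"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# EC2 instance with a public IP; installer.sh runs via Cloud-Init
resource "aws_instance" "evilginx" {
  ami                         = var.ami_id
  instance_type               = var.instance_type
  subnet_id                   = var.subnet_id
  vpc_security_group_ids      = [aws_security_group.evilginx.id]
  associate_public_ip_address = true
  user_data                   = file("./modules/evilginx/installer.sh")
}

# Hypothetical DNS helper module creating the required records
module "dns" {
  source    = "../dns_helper"
  domain    = var.domain
  public_ip = aws_instance.evilginx.public_ip
}
```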
Now, our astute readers might be thinking: "All this script does is spin up AWS resources, but it doesn't install Evilginx2". Well, all the magic happens in this line:
```hcl
user_data = file("./modules/evilginx/installer.sh")
```
Here we use the power of Cloud-Init to configure our instance. For all intents and purposes, it is just a shell script which is run after the EC2 instance is created. It contains the necessary steps to install the Evilginx2 tool, as well as some custom secret sauce we've developed in-house. (Sorry, our secrets must remain our secrets :P)
One cool feature of Cloud-Init is that the execution of the script is also logged. This log can be accessed by SSH'ing into the newly created EC2 instance and tailing the file, like so:
```shell
$ tail -f /var/log/cloud-init-output.log
```
The output.tf file is pretty straightforward. All we want the module to output is the public IP address of the newly created EC2 instance and any associated DNS records:
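A sketch of such an output.tf is shown below; it assumes the hypothetical `aws_instance.evilginx` resource and `dns` helper module names used earlier, which may differ from the original:

```hcl
# modules/evilginx/output.tf (sketch -- names are illustrative)

output "public_ip" {
  description = "Public IP address of the Evilginx2 EC2 instance"
  value       = aws_instance.evilginx.public_ip
}

output "dns_records" {
  description = "DNS records created for this instance"
  value       = module.dns.records
}
```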
This is what the operator is mainly interested in.
The variables file contains all the dynamic values which can be used by the module. The way we've designed the infrastructure allows us to re-use the variables created by the global repository, as well as more specific variables for the module.
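As an illustration, a variables.tf along these lines mixes repository-wide values with module-specific ones (all names and defaults here are assumptions):

```hcl
# modules/evilginx/variables.tf (sketch -- names and defaults are illustrative)

# Re-used from the global repository
variable "project_name" {
  description = "Project identifier shared across the repository"
  type        = string
}

variable "vpc_id" {
  description = "ID of the project VPC the instance is placed in"
  type        = string
}

# Module-specific
variable "domain" {
  description = "Domain used for the Evilginx2 deployment"
  type        = string
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.small"
}
```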
Once the module has been created, we need to instantiate the module within our main repository. To do this we can use the module stanza, and place it within our repository's main.tf file:
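A minimal sketch of that module stanza might look like this (the argument names and the `aws_vpc.project` reference are illustrative assumptions):

```hcl
# Repository main.tf (sketch -- names are illustrative)
module "evilginx" {
  source       = "./modules/evilginx"
  project_name = var.project_name
  vpc_id       = aws_vpc.project.id
  domain       = var.evilginx_domain
}
```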
The stanza will also require the repository's variables.tf file to be updated to include the necessary variables. By adjusting this file we can limit the operator's setup to editing a single file (terraform.tfvars).
As much as any of us hate writing documentation, we try to at least make sure every module created has the bare minimum of documentation. The bare minimum basically consists of single and multi-instance configuration examples. This allows any of our operators to quickly copy and paste a working configuration to deploy as quickly as possible. To manage documentation we utilise Gitlab's Wiki feature.
Our operator's workflow consists of modifying the terraform.tfvars file to customise which modules will be created. This can be accomplished by copying and pasting the snippets found in the wiki for each module.
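For instance, a terraform.tfvars snippet an operator might paste could look like the following (both variable names and values are hypothetical):

```hcl
# terraform.tfvars (sketch -- names and values are illustrative)
project_name    = "example-project"
evilginx_domain = "login.example.com"
```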
We've dived a little deeper into the creation of modules for our automated infrastructure, and hopefully shown how simple it is to design modules and maintain the infrastructure in a way that ensures documentation is written and easy for our operators to use.