Quick note before we start: if you haven't already checked out the Streamlining Simulated Attack Infrastructure (Part 1) post, please have a look at it. There we discuss the inner workings of the infrastructure, our design choices and the overall workflow of the solution.

Today we're diving into the modularity of the infrastructure and how to create and deploy an Evilginx2 module. Building modular infrastructure allows us to divide our codebase into smaller pieces and integrate individual tools into our environment one at a time, without the whole thing turning into spaghetti code.

To keep things organised, we also developed our own 'helper' DNS modules: shared modules for tools that require multiple DNS records. By creating these smaller functional modules we reduce the amount of DNS-record code duplicated across the repository, in keeping with a "Don't Repeat Yourself" (DRY) approach.
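To give a flavour of what such a helper looks like, below is a minimal sketch of a DNS module, assuming it takes a domain, a list of records and the instance's public IP address as inputs and turns them into Route 53 A records; the inputs, TTL and naming here are illustrative rather than our exact implementation.

# Minimal sketch of a DNS helper module (illustrative inputs and TTL)
variable "domain"       { type = string }
variable "record"       { type = list(map(any)) }
variable "public_ip"    { type = string }
variable "project_name" { type = string } # kept for parity with the calling module

# Look up the hosted zone for the project domain
data "aws_route53_zone" "selected" {
    name = var.domain
}

# Create one record per entry, all pointing at the instance's public IP
resource "aws_route53_record" "this" {
    for_each = { for r in var.record : "${r.type}-${r.name}" => r }

    zone_id = data.aws_route53_zone.selected.zone_id
    name    = each.value.name == "" ? var.domain : "${each.value.name}.${var.domain}"
    type    = each.value.type
    ttl     = 300
    records = [var.public_ip]
}

output "records" {
    value = [for r in aws_route53_record.this : r.fqdn]
}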

Required Resources

To design our module, we first need to understand how to deploy our tool (in this case Evilginx2). It's important to identify the required resources before starting to develop the module.

Evilginx2 requires the following resources:

  • An EC2 Instance
  • A Public IP address
  • DNS Records
  • Firewall Rules

Module Creation

Using the AWS Terraform Provider we can easily generate a module with all these requirements. To do so we create a sub-directory within our 'modules' folder and name it after our tool (Evilginx2).

Three files are then created within this directory (the resulting layout is sketched after this list):

  • main.tf - Contains all the resources required for the module, as well as the provider information;
  • output.tf - Terraform lets a module declare what it outputs once a plan is applied. This file contains the module's output values, for the benefit of the operator;
  • variables.tf - This file contains the variables that are designed to be dynamic and change per project. They are deliberately kept to a minimum to reduce the amount of configuration an operator has to provide when deploying.
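After adding these files (together with installer.sh, the Cloud-Init script discussed later in this post), the module directory looks roughly like this:

modules/
└── evilginx/
    ├── main.tf
    ├── output.tf
    ├── variables.tf
    └── installer.sh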

Terraform providers offer easy-to-use building blocks for creating our own custom modules. In our case, the four (4) resources required above can be broken down into three (3) components:

  • VPC Security Group
  • An EC2 Instance + Public IP address
  • DNS helper module

Creating a new Virtual Private Cloud (VPC) per project ensures that all project-related infrastructure components are isolated within their own VPC. This is done by referencing the newly created VPC ID in every component.
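The VPC and subnet themselves come from a separate network module (referenced later in the stub as module.network). As a rough, hypothetical sketch, with CIDR ranges chosen purely for illustration, such a module could look like this:

# Hypothetical network module sketch; CIDR ranges are placeholder values
variable "project_name" { type = string }

resource "aws_vpc" "project" {
    cidr_block = "10.0.0.0/16"

    tags = {
        Name = "Project-${var.project_name}"
    }
}

resource "aws_subnet" "project" {
    vpc_id                  = aws_vpc.project.id
    cidr_block              = "10.0.1.0/24"
    map_public_ip_on_launch = true

    tags = {
        Name = "Project-${var.project_name}"
    }
}

# Every other module references these IDs to land inside the project VPC
output "vpc_id" { value = aws_vpc.project.id }
output "subnet_id" { value = aws_subnet.project.id }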

main.tf

An example of the module main.tf can be found below:

# Terraform Provider information
terraform {
    required_providers {
        aws = {
            source = "hashicorp/aws"
            version = "4.45.0"
        }
    }
}

# VPC Security Groups are created before the instance to ensure the instance can reference the security group
resource "aws_security_group" "evilginx" {
    name        = "evilginx"
    description = "Evilginx Rules"
    vpc_id      = var.vpc

    ingress {
        description      = "SSH from Trusted"
        from_port        = 22
        to_port          = 22
        protocol         = "tcp"
        prefix_list_ids = ["pl-123456789"]
    }

    ingress {
        description      = "HTTP from Internet"
        from_port        = 80
        to_port          = 80
        protocol         = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        description      = "HTTPS from Internet"
        from_port        = 443
        to_port          = 443
        protocol         = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    ingress {
        description      = "DNS from Internet"
        from_port        = 53
        to_port          = 53
        protocol         = "udp"
        cidr_blocks = ["0.0.0.0/0"]
    }

    egress {
        from_port        = 0
        to_port          = 0
        protocol         = "-1"
        cidr_blocks      = ["0.0.0.0/0"]
    }

    tags = {
        Name = "${var.project_name}"
    }
}

# An EC2 instance is created and a public IP address is set. The previously created security group is also referenced at the time of creation.
resource "aws_instance" "main" {
    ami           = "ami-0567f647e75c7bc05" # ubuntu 20.04
    instance_type = "t2.medium"
    associate_public_ip_address = true
    subnet_id = var.subnet
    vpc_security_group_ids = [aws_security_group.evilginx.id]
    key_name = "An SSH Keyfile"
    user_data = file("./modules/evilginx/installer.sh")

    tags = {
        launchedBy = "terraform"
        notes = var.domain
        type = "Ubuntu20.04 - t2.medium"
        project = var.project_name
        createdBy = "Terraform"
        Name = "Project-${var.project_name}-Evilginx"
    }
}

# The custom DNS module is called to create all the necessary DNS records for the module, and point to the public IP address
module "dns" {
    source = "../../modules/dns"
    domain = var.domain
    record = var.record
    public_ip = aws_instance.main.public_ip
    project_name = var.project_name
}

Evilginx2 main.tf file (./modules/evilginx/main.tf)

Now, our astute readers might be thinking: "All this script does is spin up AWS resources, but it doesn't install Evilginx2". Well, all the magic happens in this line:

user_data = file("./modules/evilginx/installer.sh")

Here we use the power of Cloud-Init to configure our instance. For all intents and purposes, it is just a shell script that runs when the EC2 instance first boots. It contains the necessary steps to install the Evilginx2 tool, as well as some custom secret sauce we've developed in-house. (Sorry, our secrets must remain our secrets :P)

One cool feature of Cloud-Init is that the execution of the script is also logged. This log can be accessed by SSH'ing into the newly created EC2 instance and tailing the file, like so:

$ tail -f /var/log/cloud-init-output.log

output.tf

This file is pretty straightforward. All we want to output from the module is the public IP address of the newly created EC2 instance, and any associated DNS records:

output "public_ip" {
    value = aws_instance.main.public_ip
}

output "dns" {
    value = module.dns
}
Evilginx2 output.tf (./modules/evilginx/output.tf)

This is what the operator is mainly interested in.
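Because the Evilginx2 module is instantiated once per domain with for_each (see the stub below), the repository's root module can also collect these values per domain. A hypothetical root-level output for that might look like:

# Hypothetical root-level output collecting every Evilginx2 instance's public IP, keyed by domain
output "evilginx" {
    value = { for domain, instance in module.evilginx : domain => instance.public_ip }
}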

variables.tf

The variables file contains all the dynamic values used by the module. The way we've designed the infrastructure allows us to re-use the variables defined globally by the repository, as well as more specific variables for the module.

# GLOBAL
variable "project_name" { type = string }
variable "vpc" { type = string }
variable "subnet" { type = string }

# MODULE SPECIFIC
variable "domain" { type = string }
variable "record" { type = list(map(any)) }
Evilginx2 variables.tf (./modules/evilginx/variables.tf)

Stub

Once the module has been created, we need to instantiate it within our main repository. To do this we can use a module block and place it within our repository's main.tf file:

module "evilginx" {
    for_each = var.evilginx
    source = "./modules/evilginx"
    project_name = var.project_name
    vpc = module.network.vpc_id
    subnet = module.network.subnet_id

    domain = each.key
    record = each.value
}
Evilginx2 stub used in the repository main.tf

The stub also requires the repository's variables.tf file to be updated to include the necessary variable declarations. By handling this there, we can limit the operator's setup to editing a single file (terraform.tfvars).
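As an illustration, the repository-level declaration for the evilginx variable might look like the sketch below; the exact type constraint is an assumption based on how the records are passed through to the module:

# Hypothetical repository-level declaration (variables.tf); maps each domain to its DNS records
variable "evilginx" {
    type    = map(list(map(any)))
    default = {} # an empty map deploys nothing
}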

Documentation

As much as we all hate writing documentation, we try to at least make sure every module created has the bare minimum of documentation: single- and multi-instance configuration examples. This allows any of our operators to quickly copy and paste a working configuration and deploy as quickly as possible. To manage documentation we utilise GitLab's wiki feature.

Evilginx2 module wiki page
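For example, a hypothetical multi-instance snippet on the wiki could look like the following, with placeholder domains and records:

# Hypothetical multi-instance example; each key deploys its own Evilginx2 instance
evilginx = {
    "customdomain01.tld" = [
        { name = "", type = "A" },
    ]
    "customdomain04.tld" = [
        { name = "login", type = "A" },
    ]
}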

Usage

Our operators' workflow consists of modifying the terraform.tfvars file to customise which modules will be created. This can be accomplished by copying and pasting the snippets found in the wiki for each module.

project_name = "Project Example"
certificate = {} # empty variables do not deploy anything
mailinabox = {}
gophish = {
    "customdomain03.tld" = {}
}
evilginx = {
    "customdomain01.tld" = [
        { name = "", type = "A" },
    ]
}
cobaltstrike = {
    domain = "customdomain02.tld",
    records = [
        { name = "subdomain", type = "A"},
    ]
}
Example terraform.tfvars

In the above example, the operator has customised the infrastructure to build a GoPhish instance, an Evilginx2 instance and a Cobalt Strike instance under the project name "Project Example".

Wrapping Up

We've dived a little deeper into the creation of modules for our automated infrastructure, and hopefully shown how simple it is to design modules and maintain the infrastructure in a way that ensures documentation gets written and keeps things easy to use for our operators.