Red Ira is a cloud automation framework for Red Teams, built on the work done in Red Baron. It currently supports AWS only. The accompanying blog post can be found here.
The design philosophy differs from Red Baron in the following:

- depends
- patterns

The following components were leveraged for development and are stable for this release:
The Ansible playbooks are currently built for and tested on the latest Debian Buster AMIs.
```sh
apt-get -y install ansible terraform python3-pip
ansible-galaxy install -r ./data/playbooks/requirements.yml
```
SES requires verification before the relays can be used for phishing. See the instructions in the README.
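As a sketch, verification can be kicked off from the AWS CLI; the domain and address below are placeholders, and verification still has to be completed (DNS TXT record or confirmation email) before SES will send:

```sh
# Hypothetical identities; replace with the actual sending domain/address.
aws ses verify-domain-identity --domain example.com
aws ses verify-email-identity --email-address ops@example.com
```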
Since this framework will isolate back-end operator actions from the internet, some manual setup is required in AWS before the framework can be deployed.
In the root directory, copy the environment variables template file:

```sh
cp ./environment_variables.auto.tfvars.json.template ./environment_variables.auto.tfvars.json
```
Then add the following properties:
Pick an existing VPC or create a new one. Record the `vpc_id`:

```json
"vpc_id" : "vpc-aabbccdd",
```
Create or select an existing subnet which will be used for the back-end infrastructure, and which is intended to be accessed by operators only. Record the `private_subnet_id`:

```json
"private_subnet_id" : "subnet-aabbccdd",
```
Create a subnet in which publicly facing redirectors & assets will be placed. Record the `public_subnet_id`:

```json
"public_subnet_id" : "subnet-aabbccdd",
```
These are the security group(s) which redirectors and publicly facing assets will inherit. They only need to define inbound traffic from internal sources to the public subnet, since rules for public sources are resolved by Terraform at runtime depending on the chosen deployment.
Create one or more SGs that allow incoming traffic from the private subnet you just created, and optionally from anywhere else in the VPC that should be able to reach the publicly facing assets.
The following example grants the required private subnet full access for managing redirectors, and additionally allows SSH from the Terraform controller's subnet prefix:
Type | Protocol | Port Range | Source | Description |
---|---|---|---|---|
All traffic | All | All | 172.14.130.0/24 | Private Subnet (required) |
SSH | TCP | 22 | 172.1.111.0/24 | Internal SSH access (from Terraform controller's subnet) |
Record these SG(s) as `base-public-security_groups`:

```json
"base-public-security_groups" : ["sg-12345678912345678"]
```
This will be the security group that allows operators to access the back-end infrastructure (the private subnet).
The public SG created above is required, as it allows all inbound traffic from the redirectors/public assets. The VPN server entry in the example below shows how operators might be allowed to connect to the internal red team assets from a VPN server.
Type | Protocol | Port Range | Source | Description |
---|---|---|---|---|
All traffic | All | All | sg-12345678912345678 | Public SG (required) |
All traffic | All | All | 172.12.130.101/32 | VPN Server |
Record this SG as `base-internal-security_groups`:

```json
"base-internal-security_groups" : ["sg-12345678912345678"],
```
Export AWS credentials for Terraform to authenticate with:

```sh
export AWS_ACCESS_KEY_ID="<key_id>"
export AWS_SECRET_ACCESS_KEY="<secret_key>"
export AWS_DEFAULT_REGION="us-east-1"
```
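Before continuing, it's worth confirming the exported credentials actually resolve:

```sh
# Should print the account ID and ARN for the supplied keys.
aws sts get-caller-identity
```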
Reference the AWS Deployment README to select the desired deployment.
Clean the root of any previous deployment.

:warning: Be careful not to clear out someone's pre-existing environment!

```sh
rm -f ./aws_*
```
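One way to respect the warning above is to list what the glob matches before deleting anything:

```sh
# Dry run: show what would be removed.
ls -l ./aws_*
```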
Copy the desired deployment folder from the AWS Deployments folder to the root:

```sh
cp ./deployments/aws/complete/* ./
```
Rename the `.auto.tfvars.json.template` file for that deployment to `.auto.tfvars.json`, and fill in the JSON variables with target values:

```sh
mv ./aws_complete.auto.tfvars.json.template ./aws_complete.auto.tfvars.json
```
If working with Cobalt Strike, retrieve an Oracle JDK gzipped tarball from the Oracle website and copy it to the `./data/oracle` folder. This is necessary due to Oracle's licensing restrictions.

```sh
cp jdk-8u261-linux-x64.tar.gz ./data/oracle/
```
If a custom C2 profile is desired, copy the file to `./data/c2_profiles` and fill in the filename in the `c2-profile` variable of the respective `.auto.tfvars.json` deployment config file you created. Otherwise, the default CS profile will be used (not recommended).

```sh
cp <profile filename> ./data/c2_profiles/
```
From the root directory:

```sh
terraform init
terraform plan -out <plan file>
terraform apply <plan file>
```
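When tearing down later, the standard Terraform flow applies; review the plan output before applying (and see the destroy-provisioner caveat noted at the end of this document):

```sh
# Preview the teardown, then execute it.
terraform plan -destroy -out destroy.plan
terraform apply destroy.plan
```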
GitHub is used to sync code, as well as to track issues, feature requests, and pull requests.
Pull requests are always welcome. The following procedure should be adhered to:

- Create your branch from `master`.

Use GitHub issues to track public bugs. Please ensure your description is clear and includes sufficient instructions to reproduce the issue.
By contributing to Red Ira, you agree that your contributions will be licensed under its GPLv3 license.
If the locals are changed, the paths in the destroy provisioners will need to be updated, due to a limitation in Terraform: https://github.com/hashicorp/terraform/issues/23675
Terraform doesn't support variables in module source paths, meaning that core modules must remain in place and deployments must be copied to the root folder, or else the module sources will not resolve properly. If HashiCorp implements this in the future, dynamic path resolution could be accomplished by modifying base variables.