iknowjason / PurpleCloud

A little tool to play with Azure Identity - Azure Active Directory lab creation tool
https://www.purplecloud.network
MIT License

Hunting ELK machine is not configured with Kibana or Elastic Stack #16

Closed · Johndpete316 closed this issue 2 years ago

Johndpete316 commented 2 years ago

Issue

When running ad.py with --helk enabled, the HELK machine does not appear to be configured properly. When I attempt to access the Kibana UI, either via the internal IP from one of the machines on the network (tested with DC1) or via the external IP address, I get a "connection refused" error. After running into this, I remoted into the velocihelk machine and confirmed that it is not configured with Kibana or any of the tools expected from the Hunting ELK repo / Elastic Stack. I ended up destroying that range and running the following process again just to confirm the issue.

$ python3 ad.py --admin admin --password Password1 --endpoints 1 --location eastus --helk

helk.tf


resource "azurerm_public_ip" "vh-external" {
  name                    = "vh-public-ip-${random_string.suffix.id}"
  location                = var.location
  resource_group_name     = "${var.resource_group_name}-${random_string.suffix.id}"
  allocation_method       = "Static"
  idle_timeout_in_minutes = 30

  depends_on = [azurerm_resource_group.network]
}

resource "azurerm_network_interface" "vh-nic-int" {
  name                    = "vh-nic-int-${random_string.suffix.id}"
  location                = var.location
  resource_group_name     = "${var.resource_group_name}-${random_string.suffix.id}"
  internal_dns_name_label = local.virtual_machine_name_helk

  ip_configuration {
    name                          = "primary"
    subnet_id                     = azurerm_subnet.siem_subnet-subnet.id
    private_ip_address_allocation = "Static"
    private_ip_address            = "10.100.30.4" 
    public_ip_address_id          = azurerm_public_ip.vh-external.id
  }

  depends_on = [azurerm_resource_group.network]
}

locals {
  virtual_machine_name_helk = "velocihelk"
}

# Create (and display) an SSH key
resource "tls_private_key" "example_ssh" {
    algorithm = "RSA"
    rsa_bits = 4096
}

# Enable if you want to see the SSH key - It is written to a file
output "tls_private_key" { 
  value = tls_private_key.example_ssh.private_key_pem
  sensitive = true
}

data "template_file" "linux-vm-cloud-init" {
  template = file("${path.module}/files/helk.sh.tpl")

  vars = {
    helk_ip = "10.100.30.4"
  }
}

resource "azurerm_linux_virtual_machine" "vh_vm" {
  name                            = local.virtual_machine_name_helk
  location                        = var.location
  resource_group_name             = "${var.resource_group_name}-${random_string.suffix.id}"
  network_interface_ids           = [azurerm_network_interface.vh-nic-int.id]
  size                            = "Standard_D2s_v3"
  admin_username                  = "helk"
  disable_password_authentication = true

  custom_data = base64encode(data.template_file.linux-vm-cloud-init.rendered)

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_disk {
    name                 = "${local.virtual_machine_name_helk}-disk1"
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
    disk_size_gb         = 100
  }

  admin_ssh_key {
    username   = "helk"
    public_key = tls_private_key.example_ssh.public_key_openssh
  }

  tags = {
    environment = "Velociraptor HELK Prod"
  }

  depends_on = [azurerm_resource_group.network]
}

# write public IP address of Linux host to file
resource "local_file" "hosts_cfg_velociraptor" {
    content = templatefile("${path.module}/templates/hosts.tpl",
        {
        ip = azurerm_public_ip.vh-external.ip_address
        huser = "helk"
        }
    )
    filename = "${path.module}/hosts.cfg"

}

# write ssh key to file
resource "local_file" "ssh_key" {
    content = tls_private_key.example_ssh.private_key_pem
    filename = "${path.module}/ssh_key.pem"
    file_permission = "0700"
}

resource "null_resource" "helk-scp-velociraptor-config" {

  provisioner "remote-exec" {
      inline = ["echo 'Hello World'"]

  connection {
    host     = azurerm_public_ip.vh-external.ip_address
    type     = "ssh"
    user     = "helk"
    private_key = tls_private_key.example_ssh.private_key_pem
    timeout  = "3m"
  }
}

provisioner "local-exec" {
  command = "scp -o StrictHostKeyChecking=no -i ${path.module}/ssh_key.pem helk@${azurerm_public_ip.vh-external.ip_address}:/home/helk/config.yaml ${path.module}/files/Velociraptor.config.yaml"
}
  depends_on = [azurerm_linux_virtual_machine.vh_vm]
}
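
Side note on the snippet above: the template_file data source is deprecated in recent Terraform in favor of the built-in templatefile() function (which this same file already uses for hosts.tpl). A sketch of the equivalent cloud-init rendering, assuming the same template and variable:

# Render the HELK install template directly, without the template provider
custom_data = base64encode(templatefile("${path.module}/files/helk.sh.tpl", {
  helk_ip = "10.100.30.4"
}))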
iknowjason commented 2 years ago

Confirmed, I'm seeing this too. There is not enough available memory to run HELK once the VM sizes were switched over: Standard_D2s_v3 has 8 GB of RAM, and the ~7.4 GB actually available falls just under the installer's 8000 MB minimum for build option 4. I'm going to bump the HELK instance up to 14 GB of memory instead of 8 GB.

[HELK-INSTALLATION-INFO] HELK hosted on a Linux box
[HELK-INSTALLATION-INFO] Available Memory: 7462 MBs
[HELK-INSTALLATION-INFO] You're using ubuntu version bionic

*****************************************************
*      HELK - Docker Compose Build Choices          *
*****************************************************

1. KAFKA + KSQL + ELK + NGNIX
2. KAFKA + KSQL + ELK + NGNIX + ELASTALERT
3. KAFKA + KSQL + ELK + NGNIX + SPARK + JUPYTER
4. KAFKA + KSQL + ELK + NGNIX + SPARK + JUPYTER + ELASTALERT

[HELK-INSTALLATION-INFO] HELK build set to 4
[HELK-INSTALLATION-INFO] Your available memory for HELK build option 4 is not enough.
[HELK-INSTALLATION-INFO] Minimum required for this build option is 8000 MBs.
[HELK-INSTALLATION-INFO] Please Select option 1 or re-run the script after assigning the correct amount of memory
iknowjason commented 2 years ago

I bumped the HELK instance size to Standard_D4s_v3 and confirmed that all services are working now and the installation completes fine. This brings memory up to 16 GB. I've updated the cost estimate page as well.
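For reference, the change amounts to a single argument in the azurerm_linux_virtual_machine resource shown in helk.tf above (a sketch; everything else in the resource stays the same):

resource "azurerm_linux_virtual_machine" "vh_vm" {
  # ...
  # Standard_D4s_v3 = 4 vCPUs / 16 GB RAM, comfortably above the
  # installer's 8000 MB minimum (was Standard_D2s_v3 with 8 GB)
  size = "Standard_D4s_v3"
  # ...
}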

Kibana now listening:

helk@velocihelk:~$ sudo netstat -tulpn | grep 443
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      8269/docker-proxy   
tcp6       0      0 :::443                  :::*                    LISTEN      8277/docker-proxy   
helk@velocihelk:~$ 
iknowjason commented 2 years ago

I've pushed the commit up @Johndpete316

I'll keep this open; let me know if you can confirm that it is now working for you, and we can close it.

Thanks for reporting this!

Johndpete316 commented 2 years ago

Tested with a fresh environment and the installation worked fine.

Thanks again for the quick response, huge lifesaver!

iknowjason commented 2 years ago

No problem - Good Luck!