LordNoteworthy / al-khaser

Public malware techniques used in the wild: Virtual Machine, Emulation, Debuggers, Sandbox detection.
GNU General Public License v2.0
5.78k stars 1.16k forks

Versioning and CI #123

Open LordNoteworthy opened 6 years ago

LordNoteworthy commented 6 years ago

We need a better software versioning scheme for the tool. At the moment it has this scheme: a.xy, where xy is incremented when a new malware trick is added or a bug gets fixed.

I suggest moving to something like this: a.b.c (major.minor.patch):

- Major change: 1.0.0 → 2.0.0 (an API change/break, or a huge enhancement to the framework).
- Minor change: 1.0.0 → 1.1.0 (a new malware trick is added).
- Patch: 1.0.0 → 1.0.1 (bug fixes or code refactoring).

For continuous integration, it would be nice to integrate Travis CI or AppVeyor with GitHub, so we can build and run some tests whenever somebody makes a commit or opens a PR. I did a quick search and AppVeyor looks suitable for our needs: they offer free Windows VMs with different Visual Studio versions where we can do our builds/tests.

Open to any suggestions.

gsuberland commented 6 years ago

I agree on the versioning front, and honestly I think we might be about due for a 1.0.0 release, pending some of the larger changes that are in the pipeline. I think we should use GitHub's projects feature to map out what we'd like to see in a 1.0.0 release and work towards it. I've set one up here.

+1 on CI. Despite never having used it myself, I've seen it be very useful in other projects. It'd be particularly interesting to see if we could somehow run Al-Khaser on each build and diff the outputs to catch regressions. Even easier now that the TLS check is non-interactive.
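As a starting point, something like this could work as the diff step (just a sketch: `al-khaser.exe` and `baseline.txt` are placeholder names, it assumes the console output is deterministic enough to diff, and the baseline would need regenerating whenever the output legitimately changes):

```python
# Minimal regression-diff sketch: run the freshly built binary, capture its
# console output, and diff it against a known-good baseline checked into the repo.
# "al-khaser.exe" and "baseline.txt" are placeholder names for this illustration.
import difflib
import subprocess
import sys

def main() -> int:
    result = subprocess.run(
        ["al-khaser.exe"],
        capture_output=True,
        text=True,
        timeout=600,  # don't let a hung check stall the CI job forever
    )
    current = result.stdout.splitlines(keepends=True)

    with open("baseline.txt", encoding="utf-8") as f:
        baseline = f.readlines()

    diff = list(difflib.unified_diff(baseline, current,
                                     fromfile="baseline.txt",
                                     tofile="current run"))
    if diff:
        sys.stdout.writelines(diff)
        return 1  # non-zero exit marks the CI build as failed
    print("Output matches baseline - no regressions detected.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The awkward part is that the expected output differs per environment, so we'd probably end up with one baseline per platform we test on.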

LordNoteworthy commented 5 years ago

Hi @gsuberland ,

We now have a simple appveyor.yml which does nothing but check that compilation succeeds.

Doing proper tests would require us to set up the different hypervisors (Xen, KVM, QEMU, VirtualBox, VMware, Hyper-V) for the anti-VM checks, and probably none of the CI providers offer that possibility. We would have to run our own infrastructure, which would take some time, so I suggest implementing those tests in the release after 1.0.0.

We can also do some mocking, to make things easier.
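For example (a rough sketch only; the directory layout, log format, and check names below are invented for illustration), the CI-side checks could run against canned logs recorded from each hypervisor, so we can exercise the pipeline before any real infrastructure exists:

```python
# Sketch of the mocking idea: stand in for real hypervisor runs with canned
# (pre-recorded) result files, so the CI-side checking logic can be developed
# and tested before any real infrastructure exists.
# The "mock_logs/<platform>.log" layout, the "check name: DETECTED" log format,
# and the check names are all invented for this illustration.
from pathlib import Path

# Checks we expect to fire on each platform (illustrative, not exhaustive).
EXPECTED = {
    "vbox":   {"Check VirtualBox registry keys", "Check VBoxGuest.sys driver"},
    "vmware": {"Check VMware registry keys"},
}

def parse_log(path: Path) -> set:
    """Return the set of checks marked DETECTED in a canned log file."""
    detected = set()
    for line in path.read_text(encoding="utf-8").splitlines():
        name, _, status = line.partition(":")
        if status.strip() == "DETECTED":
            detected.add(name.strip())
    return detected

def run_mocked_suite(log_dir: Path) -> bool:
    ok = True
    for platform, expected in EXPECTED.items():
        detected = parse_log(log_dir / f"{platform}.log")
        missing = expected - detected
        if missing:
            print(f"[{platform}] regression, no longer detected: {missing}")
            ok = False
        else:
            print(f"[{platform}] all expected checks detected")
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if run_mocked_suite(Path("mock_logs")) else 1)
```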

Let me know what you think.

Thanks.

LordNoteworthy commented 5 years ago

ping @gsuberland :)

gsuberland commented 5 years ago

I agree that this should be something we do in a post-1.0 release. I have little experience with CI, but the little I do have makes me certain that the off-the-shelf tooling wouldn't allow for testing on different VM/hypervisor platforms.

Spinning up our own infra for this seems like a reasonable approach, although it comes with some challenges.

The first challenge is that we can't utilise VPSes or cloud servers because they're already virtualised. Dedicated servers tend to run a hypervisor for management purposes, and true bare-metal cloud is expensive. We'd either need to run something local (i.e. at home) or colo a box, so that we have complete control over the stack. I've got a server rack at home with some space in it, and a static IP address with a 24h response SLA on the line, so running it here is an option. The costs of colo also aren't insurmountable (£35/mo for 1U in the UK), which would offer better uptime guarantees, but I'm not sure those costs are justified if I can run things locally. I've got a UPS, so power outages shouldn't be an issue.

The next challenge is that we need multiple physical machines to do this. Hyper-V and Xen need whole boxes to themselves. KVM probably does too as of newer versions of Ubuntu/Debian, since you can't run other virtualisation platforms alongside it. VirtualBox, VMware, and QEMU can run alongside each other off the same host OS, although there are stability issues in running them in parallel (this can be resolved by only booting VMs for one platform at a time). All in all we're talking about four physical machines. Probably five if we want a jumpbox as a single entry point for getting to the others, and for staging the tests. For the sake of size and cost that probably means 5x Intel SBCs in a small 2U chassis, each with at least 4GB of RAM. The AAEON UP Square board looks like the best option here - Celeron N3350 2.4GHz (has VT-x and VT-d support), 4GB DDR4, 32GB eMMC, M.2 slot (in case we need expanded storage), and it meets the requirements for Windows Server 2019. The total cost of five of them is approximately £800; I expect total project costs would be around £1000 once the power supplies, cooling, chassis, etc. are counted.

I'd be interested to hear your thoughts on whether there's an alternative approach.

gsuberland commented 5 years ago

I put the question out on Twitter and someone suggested PXE booting. This might be a decent middle ground where I'd only need two of the SBCs (one for control and hosting the PXE server, one as the test platform). It'd reduce costs significantly.

LordNoteworthy commented 5 years ago

Hey @gsuberland, thanks a lot for your detailed response. Much appreciated.

Some cloud providers like Vultr provide bare-metal servers billed by the hour. We could potentially spin up new servers via their APIs (in parallel, using Ansible for example) from the CI job and provision them to install the hypervisors.
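For example, something along these lines from the CI job (a sketch only; the endpoint, parameters, and response shape are from memory of their v2 API, and the region/plan/OS values are placeholders, so everything should be checked against the Vultr docs):

```python
# Rough sketch of spinning up a Vultr bare-metal box from a CI job.
# The endpoint, parameter names, and response shape follow my recollection of
# Vultr's v2 API and should be verified against their documentation; the
# region/plan/os values below are placeholders.
import os
import requests

API = "https://api.vultr.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['VULTR_API_KEY']}"}

def create_baremetal(label: str) -> str:
    """Request a bare-metal instance and return its id."""
    resp = requests.post(
        f"{API}/bare-metals",
        headers=HEADERS,
        json={
            "region": "ewr",        # placeholder region
            "plan": "vbm-4c-32gb",  # placeholder bare-metal plan id
            "os_id": 501,           # placeholder OS id
            "label": label,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"bare_metal": {"id": ...}} - verify against the docs.
    return resp.json()["bare_metal"]["id"]

if __name__ == "__main__":
    instance_id = create_baremetal("al-khaser-ci-vbox")
    print(f"Requested bare-metal instance {instance_id}; "
          "hand off to Ansible once it reports active.")
```

We would of course need to tear the servers down again at the end of the job to keep the hourly billing under control.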

Running 5x Intel SBCs locally also seems like a good idea to me. PXE booting is indeed something we could make use of. Let me think it over for a day or two.

gsuberland commented 5 years ago

Another idea: get a cheap box with vPro support (e.g. Lenovo Thinkstation M70e) and put hard disks in it with a different virtualisation platform on each, then use a combination of an on-system agent process and Intel AMT to automate it.

There's an official C# library called Intel HLAPI which can be used to interface with AMT. You can use it to change the boot device and perform graceful reboots.

This limits us to sequential testing, but the total hardware cost is almost trivial - £30 for a second hand Thinkstation, then another £40 for 5x 120GB 2.5" HDDs (eBay), and £10 for a SATA controller.

I'm busy for the next couple of weeks (prepping for an event) but I'll get this stuff together next month and start building the infrastructure.

LordNoteworthy commented 5 years ago

Hello @gsuberland

Sounds good, let's do that. Let's talk privately so I can help with finances.

I can also help by creating a small agent running in the VM which receives the binary, runs it, and reads the log. We can then forward the log to the CI (in some way) so the checks can be done on the CI side once the logs from all the different hypervisors have been received.
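Something like this, as a rough sketch of the shape only (in Python here just to illustrate the idea; the port and URL path are arbitrary):

```python
# Rough sketch of the in-VM agent idea (illustration only): accept the binary
# over HTTP, run it, and return the captured output so the CI side can collect
# one log per hypervisor. The port is an arbitrary choice for this sketch.
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded binary from the request body.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)

        # Write it to a temporary .exe and execute it, capturing the output.
        with tempfile.NamedTemporaryFile(suffix=".exe", delete=False) as f:
            f.write(payload)
            exe_path = f.name
        result = subprocess.run([exe_path], capture_output=True, timeout=600)

        # Send the captured log back to the caller (the CI controller).
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    # Listen on all interfaces so the controller outside the VM can reach us.
    HTTPServer(("0.0.0.0", 8000), AgentHandler).serve_forever()
```

The CI job would then just POST the freshly built binary to the agent in each hypervisor's VM and collect the responses.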

Appreciate your support, thank you.

gsuberland commented 5 years ago

It's ok, I don't mind writing the agent, and it'll make sense for me to do it in C# because then I can use HLAPI to access the AMT NVRAM for passing data between the controller and the system running the CI.

LordNoteworthy commented 5 years ago

All right, go ahead, I will try to finish the documentation. Thank you.

gsuberland commented 5 years ago

Apologies for the delay. Hardware has now been ordered for this and I'm going to start work on setting it all up.

LordNoteworthy commented 5 years ago

Awesome!! Good to hear, thanks a lot! Drop me a mail at my ProtonMail address to share expenses.