N0MoreSecr3ts / wraith

Uncover forgotten secrets and bring them back to life, haunting security and operations teams.
MIT License

Support for internal Gitlab CE #59

Open mattyjones opened 4 years ago

mattyjones commented 4 years ago

@codeEmitter ^ does your code support this? Thoughts?

mattyjones commented 4 years ago

@jeffa17

funkwhatyouheard commented 3 years ago

@mattyjones think I've got this one worked out, pretty small code change. Looks like GitHub Enterprise support was broken out into a separate cmd file; do you want to do the same thing for GitLab Enterprise/CE?

funkwhatyouheard commented 3 years ago

@mattyjones Side note: I think something may have broken in the latest push to master. I'm getting 406s now whenever I try to hit GitHub Enterprise orgs/repos. It looks like a lot of the API init code got commented out in session.go; is it actually creating the client correctly?

mattyjones commented 3 years ago

@funkwhatyouheard I will look into this. I have an unstable branch that fixes a lot of things as well, but it is not yet 100% functional. I will devote some time to it this week since I am still on vacation and not really going anywhere, LOL.

funkwhatyouheard commented 3 years ago

@mattyjones No rush! Even if vacation is at home, make sure it's actually a vacation :)

codeEmitter commented 3 years ago

No, but it's something I need to build in the next couple of weeks. Happy to pair on it with whoever may be interested, or I can take a swipe at it solo. Either one.

codeEmitter commented 3 years ago

We have some specific goals in mind for secrets hunting that will probably have me diving into code here soon.

funkwhatyouheard commented 3 years ago

@codeEmitter Are you referring to breaking out the GitLab CE/Enterprise support into a separate cmd file? I'd be happy to help with it. I am curious about the thought process behind breaking these out, though. It just feels like a lot of duplication for this and GitHub if the only real difference is how the client is initialized (and auth for GitHub). If you and @mattyjones have specific goals in mind it may make total sense, but I'm out of the loop on those :) Just trying to minimize the code footprint.

mattyjones commented 3 years ago

@codeEmitter @funkwhatyouheard

The reason for breaking them out is to make the core functionality as generic as possible. I agree there may seem to be some duplication at times between the git-specific pieces, but all the GitHub code is in a single file, github.go, and the GitLab code is in gitlab.go. There is some overlap here, but not as much as you may think. These are also divergent products; I would rather break them up now than have something break later and be forced to split them out then. There is a common git.go file with all the basic, shared functionality.

This also comes down to adding additional targets such as Bitbucket. Using this method we don't need to refactor a ton of existing code to plug in Bitbucket and risk borking something else, nor do we have to know the entire codebase. We simply create a new command for Bitbucket, add a client and anything specific in bitbucket.go, and never put any of the other functionality at risk. Even when we do get full test coverage that would catch issues like these, we may still need to do some heavy plumbing, instead of just using the already exposed bits directly.

There are also thoughts about Pastebin, SourceForge, Perforce, etc. Again, any work we can do now to break things apart where it makes sense will pay a lot of dividends later on down the line. In the end, wraith itself will be a fancy router: give it a target with flags, a source (or sources) where that target may be found, and the desired output.

A practical application of this was a wrapper I wrote around wraith for some tests. Because each target is its own command, instead of scanning GitLab, GitHub, etc. sequentially, I was able to throw something like this together fairly quickly.

main() {
  # make sure wraith and the signature files are in place before scanning
  check_wraith
  check_signatures

  # fire off each target in the background and hold on to the PIDs
  run_wraith_github > /dev/null &  local PIDIOS=$!
  run_wraith_gitlab > /dev/null &  local PIDMIX=$!

  # block until both scans finish
  wait $PIDIOS
  wait $PIDMIX

  cleanup
}

Now I can just keep adding targets that run independently of each other. GitLab may take 2m, GitHub may take 10m, Bitbucket may take 6m; I can be reviewing the output of one while I wait for the others to complete. Sure, we could do fancy threading and call multiple sources from wraith using flags, but you will still hit I/O issues without some serious monkey fucking. To me, having separate commands, each with its own PID and I/O stream, just makes things easier. This way you can also do text processing and deduping of the output with sed, awk, uniq, etc., all within a single automated wrapper, which is exponentially easier than doing it within wraith or any other language, with the possible exception of Perl.
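
As a minimal sketch of that post-processing, assuming each run_wraith_* call dumps its findings to a flat text file (the file names here are illustrative, not wraith's actual output format):

# merge the per-target findings and strip exact duplicates
sort github_findings.txt gitlab_findings.txt | uniq > all_findings.txt

# quick line counts as a sanity check on the merge
wc -l github_findings.txt gitlab_findings.txt all_findings.txt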

OT:

For reference, here are the shell scripts I originally wrote for this scenario. I had a crap ton of web servers to enumerate and recon, so I wrote an automated Nmap wrapper that dropped anything with 80/443 open into a text file:

1.2.3.4 80
5.6.7.8 80
110.0.0.1 443

That file could then be piped directly into a script that ran both gobuster and nikto independently across all the IPs in real time by using tail -f. I hate waiting, so whenever I can automate and pipe, I will.
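
The handoff might look something like this sketch; the web_targets.txt name, the "ip port" line format (matching the sample above), and the hardcoded php extension are illustrative assumptions, not part of the original scripts:

# follow the target list as the scanner appends "ip port" lines,
# launching the web recon script for each new entry
tail -f web_targets.txt | while read -r ip port; do
  ./auto_web_server "$ip" php "$port"
done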

auto_nmap

#!/bin/bash

set -e

run_baseline() {
  # TCP and UDP baseline scans in parallel; wait for both to finish
  nmap -A -Pn -sC -sV -v -oN "$prefix"_tcp_baseline "$ip_addr" &  local PIDIOS=$!
  nmap -A -Pn -sC -sV -sU -v -oN "$prefix"_udp_baseline "$ip_addr" &  local PIDMIX=$!
  wait $PIDIOS
  wait $PIDMIX
}

run_full() {
  # same pattern, but scan the full port range (-p-)
  nmap -A -Pn -sC -sV -p- -v -oN "$prefix"_tcp_full "$ip_addr" > /dev/null &  local PIDIOS=$!
  nmap -A -Pn -sC -sV -p- -sU -v -oN "$prefix"_udp_full "$ip_addr" > /dev/null &  local PIDMIX=$!
  wait $PIDIOS
  wait $PIDMIX
}

main() {
  # with an input file, scan each address in it; otherwise scan the single IP
  if [ -n "$input" ]; then
    while IFS= read -r ip_addr; do
      # use the last octet as a filename prefix for the output files
      prefix=$(echo "$ip_addr" | awk -F. '{print $4}')

      if [ "$scan_type" = 'baseline' ]; then
        run_baseline
      fi

      if [ "$scan_type" = 'full' ]; then
        run_full
      fi
    done < "$input"
  else
    if [ "$scan_type" = 'baseline' ]; then
      run_baseline
    fi

    if [ "$scan_type" = 'full' ]; then
      run_full
    fi
  fi
}

scan_type="$1"
ip_addr=""
input=""

# the second argument is either a single IPv4 address or a file of addresses
if [[ $2 =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
  ip_addr="$2"
  prefix=$(echo "$ip_addr" | awk -F. '{print $4}')
else
  input="$2"
fi

main
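
Invocation would be along these lines (the target file name is illustrative); note the -sU scans generally need root:

./auto_nmap baseline 10.0.0.5
./auto_nmap full targets.txt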

auto_web_server

#!/bin/bash

set -e

ip_addr="$1"
ext="$2"
port="$3"

prefix=$(echo "$ip_addr" | awk -F. '{print $4}')

check_gb() {
  # bail out early if gobuster is not installed
  if command -v gobuster > /dev/null; then
    return 0
  else
    echo "Please install gobuster"
    return 1
  fi
}

check_nikto() {
  # bail out early if nikto is not installed
  if command -v nikto > /dev/null; then
    return 0
  else
    echo "Please install nikto"
    return 1
  fi
}

run_gobuster() {
  gobuster dir -q -t 40 -v -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u "http://$ip_addr" -x "$ext" -k -o raw_gobuster
  # keep everything that isn't a 404
  grep -v 404 raw_gobuster > "$prefix"_gobuster
}

run_nikto() {
  # the awk post-processing below treats the output as CSV, so ask nikto for csv
  nikto -host "$ip_addr" -nossl -port "$port" -output raw_nikto -Format csv -Tuning x
  # fold trailing fields back together so embedded commas don't split columns
  awk 'BEGIN{FS=OFS=","} {$(NF-1)=$(NF-1) " " $NF; NF--} 2' raw_nikto > int_nikto
  awk 'BEGIN{FS=OFS=","} {$(NF-2)=$(NF-1) " " $NF; NF--} 2' int_nikto > "$prefix"_nikto
}

cleanup() {
  # drop the intermediate files, leaving only the prefixed results
  local file_list=(raw_gobuster raw_nikto int_nikto)

  for f in "${file_list[@]}"; do
    if [ -f "$f" ]; then
      rm "$f"
    fi
  done
}

main() {
  check_gb
  check_nikto

  # run gobuster and nikto against the host in parallel, wait for both
  run_gobuster > /dev/null &  local PIDIOS=$!
  run_nikto > /dev/null &  local PIDMIX=$!
  wait $PIDIOS
  wait $PIDMIX

  cleanup
}

main
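
For completeness, a sample invocation (the extension and port values are illustrative):

./auto_web_server 10.0.0.5 php 80
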
funkwhatyouheard commented 3 years ago

@mattyjones Totally understand, and I agree with the split between GitHub and GitLab. I meant more the split between GitHub and GitHub Enterprise, or GitLab and GitLab Enterprise/CE; the different versions within the same product family share a lot of the same code. Cool scripts though, reminds me I need to brush up on my bash. :)

mattyjones commented 3 years ago

@funkwhatyouheard You could be right; let's sleep on it. The clients for GitHub and GHE are different, as are some of the URLs for getting at stuff. Keep that in mind.

codeEmitter commented 3 years ago

I'll be back with you all soon. Getting through a back injury atm.