👽Sliver Red Team Infrastructure

Overall concept: securely set up Red Team C2 infrastructure the right way. OPSEC!!

Before going any further, you must consider how important it is to practice good Operational Security ("OPSEC"). Creating secure channels, storing sensitive files in the right locations, and redirecting C2 traffic so as not to burn your C2 are all aspects of practicing good OPSEC.

This is reflected in what we are trying to accomplish in the picture below.

Essentially, we want to create a nebula VPN mesh network that allows our three instances to communicate securely while redirecting C2 traffic to our listening post via socat. The nebula integration lets us practice safe OPSEC locally across our Red Team infrastructure before deploying it out in the wild.

Big shout out to Husky Hacks for putting this blog post together. I pretty much followed every step but decided to host my Red Team infrastructure locally in my home lab so that I could capture the traffic with Security Onion.

My go-to choice of C2 is "Sliver". If you haven't had a chance to play around with it, I highly advise you to check it out. Link below. https://github.com/BishopFox/sliver

These notes, along with the various screenshots, were very rough at the time of writing. I'm terrible at documenting, so I have to quickly note everything down once I'm done testing.

To mimic the basic setup of one ubuntu listening post, one ubuntu lighthouse, and one C2 server, you'll need a minimum of three VMs. As you can see below, my kali instance is on the right and my two ubuntu servers are on the left. After you've set up your virtual machines, it's time to pull down all the packages you'll need. A great resource for how to set up sliver as a service can be found in the link below. https://dominicbreuker.com/post/learning_sliver_c2_01_installation/

Also, I highly recommend using tmux so you're not fumbling around 100 different ssh sessions. If you're unfamiliar with tmux, TryHackMe has a great room you can spin up for free. https://tryhackme.com/room/rptmux

kali - listening post - lighthouse

Let's create an SSH key pair that we'll use to set up socat between our C2 and listening post. This ensures a secure connection to our listening post when we forward traffic back to our C2. Then download nebula and socat. Repeat the nebula and socat steps on your other two instances; the ssh-keygen step is only needed here.

┌──(root㉿kali)-[~]
└─# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/certs/id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/certs/id_rsa
Your public key has been saved in /root/certs/id_rsa.pub
The key fingerprint is:
SHA256:dOrLjj8YWttsCLt3KRmAT5S0mJSxzNU0pVZejmDZkrE root@kali
The key's randomart image is:
+---[RSA 3072]----+
|  o+o+**o .      |
| +.=oo=B.+       |
|  *o. E.+ o      |
|  . o. . o       |
|   o .  S        |
|    o +.         |
|     = X..       |
|    o BoB.       |
|    .o.*=.       |
+----[SHA256]-----+

┌──(root㉿kali)-[~]
└─# mkdir nebula && cd nebula

┌──(root㉿kali)-[~/nebula]
└─# wget https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz -O nebula.tar.gz

┌──(root㉿kali)-[~/nebula]
└─# tar -xvf nebula.tar.gz

After pulling down socat, let's cat our id_rsa.pub key and append it to the authorized_keys file on our listening post, as seen below.

cat the public key into your authorized keys: cat id_rsa.pub >> authorized_keys
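For reference, the manual copy can be wrapped in a small POSIX shell helper that also sets the permissions sshd insists on. The function name and paths here are illustrative, not from the original setup:

```shell
# install_pubkey: append a public key to an authorized_keys file with the
# permissions sshd requires (700 on the directory, 600 on the file).
install_pubkey() {
    pubkey_file="$1"   # e.g. the id_rsa.pub copied over from kali
    ssh_dir="$2"       # e.g. /root/.ssh on the listening post

    mkdir -p "$ssh_dir"
    chmod 700 "$ssh_dir"                        # sshd rejects lax dir perms
    cat "$pubkey_file" >> "$ssh_dir/authorized_keys"
    chmod 600 "$ssh_dir/authorized_keys"        # key file must be private
}
```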

Next, from our kali instance, let's create the nebula certs and configurations that will be used to stand up a mesh-like VPN network between the clients in our C2 infrastructure.

┌──(root㉿kali)-[~/nebula]
└─# pwd
/root/nebula

┌──(root㉿kali)-[~/nebula]
└─# ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             DOWN
eth1             UP             45.99.15.200/24 fe80::84c8:77ff:fed9:933d/64

┌──(root㉿kali)-[~/nebula]
└─# mkdir certs && mv nebula-cert certs/

┌──(root㉿kali)-[~/nebula]
└─# cd certs/

┌──(root㉿kali)-[~/nebula/certs]
└─# ./nebula-cert ca -name "cyberlabz, LLC"

┌──(root㉿kali)-[~/nebula/certs]
└─# ./nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"

┌──(root㉿kali)-[~/nebula/certs]
└─# ./nebula-cert sign -name "listeningpost" -ip "192.168.100.2/24" -groups "listening_posts"

┌──(root㉿kali)-[~/nebula/certs]
└─# ./nebula-cert sign -name "teamserver" -ip "192.168.100.3/24" -groups "teamservers"

┌──(root㉿kali)-[~/nebula/certs]
└─# ls
ca.crt  lighthouse.crt  listeningpost.crt  nebula-cert     teamserver.key
ca.key  lighthouse.key  listeningpost.key  teamserver.crt
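Before copying the certs around, it's worth sanity-checking what was signed into each one. nebula-cert can print a cert's details (name, IP, groups); an illustrative transcript:

```
┌──(root㉿kali)-[~/nebula/certs]
└─# ./nebula-cert print -path listeningpost.crt
```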

Here is the original source for how we want to set up our yaml files for the nebula network. However, Husky's blog post makes it super simple, so we'll just follow along with that for now.

lighthouse-conf.yml

pki:
  ca: /home/hackerman/nebula/certs/ca.crt
  cert: /home/hackerman/nebula/certs/lighthouse.crt
  key: /home/hackerman/nebula/certs/lighthouse.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: true

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
    
    - port: 4789
      proto: any
      host: any

    - port: 22
      proto: any
      cidr: 192.168.100.0/24

listeningpost-conf.yml

pki:
  ca: /home/hackerman/nebula/certs/ca.crt
  cert: /home/hackerman/nebula/certs/listeningpost.crt
  key: /home/hackerman/nebula/certs/listeningpost.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

    - port: 80
      proto: any
      host: any

    - port: 443
      proto: any
      host: any

    - port: 4789
      proto: any
      host: any

    - port: 22
      proto: any
      cidr: 192.168.100.0/24

teamserver-conf.yml

pki:
  ca: /root/nebula/certs/ca.crt
  cert: /root/nebula/certs/teamserver.crt
  key: /root/nebula/certs/teamserver.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any

    - port: 80
      proto: any
      host: any

    - port: 443
      proto: any
      host: any

    - port: 4789
      proto: any
      host: any

    - port: 22
      proto: any
      cidr: 192.168.100.0/24
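One OPSEC refinement worth noting: we signed the listeningpost cert into the listening_posts group, but the firewall rules above never use it. If you want the teamserver to accept 80/443 only from listening posts rather than from any host, nebula's firewall supports group matching. A sketch of what those inbound rules could look like instead (untested, adjust to taste):

```yaml
  inbound:
    - port: 80
      proto: any
      group: listening_posts

    - port: 443
      proto: any
      group: listening_posts
```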

Below you can see that we have everything pointing at our lighthouse IP in the static_host_map. The lighthouse is what tells our listening post how to reach our C2 server over the VPN-like connection.

After everything has been copied over to its respective location, we can begin firing up our nebula connections. It's best to go ahead and create your tmux sessions now so that you can switch between multiple panes.

First, fire up your lighthouse so that the listening post will know how to connect to your C2 server. Next, fire up your listening post, and lastly fire up your teamserver as seen below. tmux will come in handy after this. Simply press Ctrl-b then c to create a new window in each session.
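That start order looks like this on each box (paths assume the config files sit next to the nebula binary; the lighthouse goes first):

```
# on the lighthouse
sudo ./nebula -config lighthouse-conf.yml

# on the listening post
sudo ./nebula -config listeningpost-conf.yml

# on the kali teamserver
sudo ./nebula -config teamserver-conf.yml
```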

newly created nebula interfaces

Let's get this show on the road and create some socat connections. See the syntax and images below for reference. One thing to note: when issuing these commands the terminal will appear to hang, but trust me, it hasn't. This is why it's good to have everything running in tmux so we can manage multiple panes in one session.

# Run on the teamserver: reverse tunnel that opens port 8443 on the listening
# post and forwards it back to the teamserver's local HTTPS listener on 443
sudo ssh -N -R 8443:localhost:443 -i /root/certs/id_rsa [email protected]

# Run on the listening post: relay traffic hitting its public IP on 443
# into the SSH tunnel via 127.0.0.1:8443
sudo socat tcp-listen:443,reuseaddr,fork,bind=45.99.15.221 tcp:127.0.0.1:8443

Next, let's create some certificates and spin up sliver. We'll point sliver at our listening post IP and set the certs for HTTPS communication.

┌──(root㉿kali)-[~/ssl]
└─# openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout micro.updates.key.pem -days 365 -out micro.updates.cert.pem

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:AU
State or Province Name (full name) [Some-State]:overthere
Locality Name (eg, city) []:overhere
Organization Name (eg, company) [Internet Widgits Pty Ltd]:me
Organizational Unit Name (eg, section) []:you
Common Name (e.g. server FQDN or YOUR name) []:micro.updates.info
Email Address []:[email protected]

┌──(root㉿kali)-[~/ssl]
└─# ll
total 8
-rw-r--r-- 1 root root 1424 Jul  4 22:51 micro.updates.cert.pem
-rw------- 1 root root 1704 Jul  4 22:50 micro.updates.key.pem

Next, on our kali teamserver, let's spin up sliver and point our HTTPS listener at our newly created certs.
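Inside the sliver console that sequence looks roughly like the following. Double-check the flags against your sliver version's https and generate help, and substitute your own listening post public IP and paths; these values are illustrative:

```
sliver > https --domain micro.updates.info --cert /root/ssl/micro.updates.cert.pem --key /root/ssl/micro.updates.key.pem

sliver > generate --http https://45.99.15.221 --save /root/implants/
```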
