This is a collection of notes and tutorials on various topics. It’s mostly tech-related, but may include other topics. The intended audience is mostly myself.
- hyperreal.coffee : My personal website
- @hyperreal@tilde.zone : Find me in the Fediverse
- XMPP: hyperreal@tilde.team
Install mod_evasive
sudo apt install libapache2-mod-evasive
Add mod_evasive to VirtualHost
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 60
DOSEmailNotify hyperreal@moonshadow.dev
</IfModule>
Restart apache2:
sudo systemctl restart apache2.service
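To verify mod_evasive is working, a quick burst of requests should start returning 403 once DOSPageCount is exceeded within DOSPageInterval; a minimal check, assuming the site answers on localhost:
# Repeatedly request one page; after the first couple of hits,
# mod_evasive should answer 403 Forbidden for the rest.
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
done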
Get lowest memfree for given analysis date
atopsar -r /var/log/atop/atop_20240703 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}' | head -11
atopsar
: atop's system activity report.
-r /var/log/atop/atop_20240703
: Log file to use.
-m
: Memory- and swap-occupation.
-R 1
: Summarize 1 sample into one sample. The log file contains samples of 10 minutes, so this will summarize each sample. -R 6 will summarize one sample per 60 minutes.
awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}'
: For input records (NR) numbered less than 7, print the record ($0) and go to the next record. Every remaining record is printed through the "sort -k 3,4" pipe, whose output is flushed when awk finishes. This sorts the body of the output while leaving the first six header lines of the atopsar output unsorted.
head -11
: Get the top 11 lines of output.
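The header-preserving sort trick generalizes to any command whose output begins with a fixed number of header lines. A small sketch (sort_body is a name I made up):
# Print the first n lines unchanged, pipe the rest through sort
# with whatever sort flags are given after n.
sort_body() {
    local n="$1"; shift
    awk -v n="$n" -v cmd="sort $*" 'NR<=n{print;next}{print | cmd}'
}
ps aux | sort_body 1 -k 4,4 -rn | head   # top memory users, header intact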
Get top 3 memory processes for given analysis date
atopsar -G -r /var/log/atop/atop_20240710
Identify top-five most frequently executed process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | grep -oP "(?<=\()[[:alnum:]]{1,}(?=\))" | sort | uniq -c | sort -k1rn | head -5
Count the number of times a particular process has been detected during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | egrep "docker" | awk '{print $5}' | uniq -c -w5
Generate a chart of the number of instances of a particular process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | egrep "docker" | awk '{print $5}' | uniq -c -w8 | \
gnuplot -e "set terminal dumb 80 20; unset key; set style data labels; set xdata time; set xlabel 'Time'; set ylabel 'docker'; set timefmt '%H:%M:%S'; plot '-' using 2:1:ytic(1) with histeps"
Generate a PNG chart of the number of instances of a particular process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | egrep "docker" | awk '{print $5}' | uniq -c -w8 | \
gnuplot -e "set title 'Process Count'; set offset 1,1,1,1; set autoscale xy; set mxtics; set mytics; \
set style line 12 lc rgb '#ddccdd' lt 1 lw 1.5; set style line 13 lc rgb '#ddccdd' lt 1 lw 0.5; set grid xtics mxtics ytics mytics \
back ls 12, ls 13; set terminal png size 1920,1280 enhanced font '/usr/share/fonts/liberation/LiberationSans-Regular.ttf,10'; \
set output 'plot_$(date +'%Y-%m-%d_%H:%M:%S')_${RANDOM}.png'; set style data labels; set xdata time; set xlabel 'Time' font \
'/usr/share/fonts/liberation/LiberationSans-Regular.ttf,8'; set ylabel 'Count' font \
'/usr/share/fonts/liberation/LiberationSans-Regular.ttf,8'; set timefmt '%H:%M:%S'; plot '-' using 2:1 with histeps"
Identify top-ten most frequently executed binaries from /sbin or /usr/sbin during logging period
for i in $(atop -r /var/log/atop/atop_20241123 -P PRG | grep -oP "(?<=\()[[:alnum:]]{1,}(?=\))" | sort | uniq -c | sort -k1rn | head -10); do
which "${i}" 2>/dev/null | grep sbin;
done
Identify disks with over 90% activity during logging period
atopsar -r /var/log/atop/atop_20241123 -d | egrep '^[0-9].*|(9[0-9]|[0-9]{3,})%'
Identify processes responsible for most disk I/O during logging period
atopsar -r /var/log/atop/atop_20241123 -D | sed 's/\%//g' | awk -v k=50 '$4 > k || $8 > k || $12 > k' | sed -r 's/([0-9]{1,})/%/5;s/([0-9]{1,})/%/7;s/([0-9]{1,})/%/9'
Identify periods of heavy swap activity during logging period
atopsar -r /var/log/atop/atop_20241123 -s | awk -v k=1000 '$2 > k || $3 > k || $4 > k'
Identify logical volumes with high activity or high average queue during logging period
atopsar -r /var/log/atop/atop_20241123 -l -S | sed 's/\%//g' | awk -v k=50 -v j=100 '$3 > k || $8 > j' | sed -r 's/([0-9]{1,})/%/4'
Identify processes consuming more than half of all available CPUs during logging period
(( k = $(grep -c proc /proc/cpuinfo) / 2 * 100 ))
atopsar -r /var/log/atop/atop_20241123 -P | sed 's/\%//g' | awk -v k=$k '$4 > k || $8 > k || $12 > k' | sed -r 's/([0-9]{1,})/%/5;s/([0-9]{1,})/%/7;s/([0-9]{1,})/%/9'
Identify time of peak memory utilization during logging period
atopsar -r /var/log/atop/atop_20241123 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,3"}' | head -15
Heredoc
cat << EOF > file.txt
The current working directory is $PWD.
You are logged in as $(whoami).
EOF
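Quoting the delimiter prevents variable and command expansion, which is handy when writing literal scripts or config files:
cat << 'EOF' > file.txt
$PWD and $(whoami) are written literally, not expanded.
EOF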
Plain-print the difference between two files
Suppose we have two files: packages.fedora and packages.
packages.fedora:
autossh
bash-completion
bat
bc
borgmatic
bzip2
cmake
curl
diff-so-fancy
diffutils
dnf-plugins-core
packages:
bash-completion
bc
bzip2
curl
diffutils
dnf-plugins-core
To plain-print the lines that exist in packages.fedora but do not exist in packages:
comm -23 <(sort packages.fedora) <(sort packages)
Output:
autossh
bat
borgmatic
cmake
diff-so-fancy
- The comm command compares two sorted files line by line.
- The -23 flag is shorthand for -2 and -3:
  - -2: suppress column 2 (lines unique to packages)
  - -3: suppress column 3 (lines that appear in both files)
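The complementary queries follow from the same flags:
comm -13 <(sort packages.fedora) <(sort packages)   # lines only in packages
comm -12 <(sort packages.fedora) <(sort packages)   # lines common to both files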
Split large text file into smaller files with equal number of lines
split -l 60 bigfile.txt prefix-
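The pieces are named prefix-aa, prefix-ab, and so on, so concatenating them in glob order reassembles the original:
cat prefix-* > bigfile.reassembled.txt
diff bigfile.txt bigfile.reassembled.txt   # no output means identical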
Loop through lines of file
while read line; do
echo "$line";
done </path/to/file.txt
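If the file may contain leading whitespace or backslashes, IFS= read -r preserves each line exactly as written:
while IFS= read -r line; do
    echo "$line"
done < /path/to/file.txt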
Use grep to find URLs from HTML file
cat urls.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*"
grep -E
: extended regular expressions (egrep).
grep -o
: print only the matching part of each line.
(http|https)
: either http OR https.
a-zA-Z0-9
: match all lowercase letters, uppercase letters, and digits.
.
: match period.
/
: match slash.
?
: match question mark.
=
: match equals sign.
_
: match underscore.
%
: match percent.
:
: match colon.
-
: match dash.
*
: repeat the […] group any number of times.
Use Awk to print the first line of ps aux output followed by each grepped line
To find all cron processes with ps aux:
ps aux | awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk"
ps aux
: equivalent to ps -aux. -a displays info about other users' processes in addition to the current user's. -u displays info associated with the keywords user, pid, %cpu, %mem, vsz, rss, tt, state, start, time, and command. -x includes processes which do not have a controlling terminal. See man 1 ps.
awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk"
: For input records (NR) numbered less than 2, print the record ($0) and go to the next record. Every remaining record is printed through the "grep cron" pipe. This prints the first line of the ps aux output, which consists of the column labels, and filters out everything except what you grep for (e.g. "cron" processes).
grep -v "awk"
: avoids printing the line containing this pipeline itself.
On the host machine
IMPORTANT Run these commands as root
Add a system user for btrbk:
useradd -c "Btrbk user" -m -r -s /bin/bash -U btrbk
Setup sudo for btrbk:
echo "btrbk ALL=NOPASSWD:/usr/sbin/btrfs,/usr/bin/readlink,/usr/bin/test" | tee -a /etc/sudoers.d/btrbk
Create a subvolume for each client:
mount /dev/sda1 /mnt/storage
btrfs subvolume create /mnt/storage/client_hostname
On each client machine
Create a dedicated SSH key:
mkdir -p /etc/btrbk/ssh
ssh-keygen -t ed25519 -f /etc/btrbk/ssh/id_ed25519
Add each client's SSH public key to /home/btrbk/.ssh/authorized_keys on the NAS machine:
ssh-copy-id -i /etc/btrbk/ssh/id_ed25519 btrbk@nas.local
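To confirm the key and the sudo rules work end to end, a quick check from the client (assuming /mnt/storage is mounted on the NAS):
ssh -i /etc/btrbk/ssh/id_ed25519 btrbk@nas.local 'sudo -n btrfs subvolume list /mnt/storage'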
Create /etc/btrbk/btrbk.conf on each client:
transaction_log /var/log/btrbk.log
snapshot_preserve_min latest
target_preserve 24h 7d 1m 1y
target_preserve_min 7d
ssh_user btrbk
ssh_identity /etc/btrbk/ssh/id_ed25519
backend btrfs-progs-sudo
snapshot_dir /btrbk_snapshots
target ssh://nas.local/mnt/storage/<client hostname>
subvolume /
subvolume /home
snapshot_create ondemand
Create directory to store btrbk snapshots on each client machine:
mkdir /btrbk_snapshots
Create /etc/systemd/system/btrbk.service:
[Unit]
Description=Daily btrbk backup
[Service]
Type=simple
ExecStart=/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run
Create /etc/systemd/system/btrbk.timer:
[Unit]
Description=Daily btrbk backup
[Timer]
OnCalendar=*-*-* 23:00:00
Persistent=true
[Install]
WantedBy=timers.target
Alternatively, create a shell script to be placed under /etc/cron.daily:
#!/usr/bin/env bash
set -e
/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run >/dev/null
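Before enabling the timer or cron job, btrbk's dryrun subcommand shows what would be snapshotted and transferred without touching anything:
sudo btrbk -c /etc/btrbk/btrbk.conf dryrun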
Create systemd.mount unit for Btrfs on external HDD
NOTE internet_archive is used here as an example.
Get the UUID of the Btrfs partition.
sudo blkid -s UUID -o value /dev/sda1
d3b5b724-a57a-49a5-ad1d-13ccf3acc52f
Edit /etc/systemd/system/mnt-internet_archive.mount.
[Unit]
Description=internet_archive Btrfs subvolume
DefaultDependencies=yes
[Mount]
What=/dev/disk/by-uuid/d3b5b724-a57a-49a5-ad1d-13ccf3acc52f
Where=/mnt/internet_archive
Type=btrfs
Options=subvol=@internet_archive,compress=zstd:1
[Install]
WantedBy=multi-user.target
DefaultDependencies=yes
: The mount unit automatically acquires Before=umount.target and Conflicts=umount.target. Local filesystems automatically gain After=local-fs-pre.target and Before=local-fs.target. Network mounts, such as NFS, automatically acquire After=remote-fs-pre.target network.target network-online.target and Before=remote-fs.target.
Options=subvol=@internet_archive,compress=zstd:1
: Use the subvolume @internet_archive and use zstd compression level 1.
Note that the name of the unit file, e.g. mnt-internet_archive.mount, must correspond to the Where=/mnt/internet_archive directive, such that the filesystem path separator / in the Where directive is replaced by a dash in the unit file name.
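systemd-escape computes the correct unit name from the path, which avoids guessing:
systemd-escape -p --suffix=mount /mnt/internet_archive
# mnt-internet_archive.mount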
Reload the daemons and enable the mount unit.
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-internet_archive.mount
Setup encrypted external drive for backups
Prepare the external drive
sudo cryptsetup --type luks2 -y -v luksFormat /dev/sda1
sudo cryptsetup -v luksOpen /dev/sda1 cryptbackup
sudo mkfs.btrfs /dev/mapper/cryptbackup
sudo mkdir /srv/backup
sudo mount -o noatime,compress=zstd:1 /dev/mapper/cryptbackup /srv/backup
sudo restorecon -Rv /srv/backup
Setup /etc/crypttab
sudo blkid -s UUID -o value /dev/sda1 | sudo tee -a /etc/crypttab
Add the following line to /etc/crypttab:
cryptbackup UUID=<UUID of /dev/sda1> none discard
Setup /etc/fstab
sudo blkid -s UUID -o value /dev/mapper/cryptbackup | sudo tee -a /etc/fstab
Add the following line to /etc/fstab:
UUID=<UUID of /dev/mapper/cryptbackup> /srv/backup btrfs compress=zstd:1,nofail 0 0
Reload the daemons:
sudo systemctl daemon-reload
Mount the filesystems:
sudo mount -av
Btrfs-backup script
#!/usr/bin/env bash
LOGFILE="/var/log/btrfs-backup.log"
SNAP_DATE=$(date '+%Y-%m-%d_%H%M%S')
# Check if device is mounted
if ! grep "/srv/backup" /etc/mtab >/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup device is not mounted." | tee -a "$LOGFILE"
notify-send -i computer-fail "Backup device is not mounted"
exit 1
fi
create_snapshot() {
if ! btrfs subvolume snapshot -r "$1" "${1}/.snapshots/$2-$SNAP_DATE" >/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error creating snapshot of $1" | tee -a "$LOGFILE"
notify-send -i computer-fail "Error creating snapshot of $1"
exit 1
else
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Create snapshot of $1: OK" | tee -a "$LOGFILE"
fi
}
send_snapshot() {
mkdir -p "/srv/backup/$SNAP_DATE"
if ! btrfs send -q "${1}/.snapshots/$2-$SNAP_DATE" | btrfs receive -q "/srv/backup/$SNAP_DATE"; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error sending snapshot of $1 to /srv/backup" | tee -a "$LOGFILE"
notify-send -i computer-fail "Error sending snapshot of $1 to /srv/backup"
exit 1
else
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Send snapshot of $1 to /srv/backup: OK" | tee -a "$LOGFILE"
fi
}
# Create root and home snapshots
create_snapshot "/" "root"
create_snapshot "/home" "home"
# Send root and home snapshots
send_snapshot "/" "root"
send_snapshot "/home" "home"
Move/copy the script to /etc/cron.daily/btrfs-backup.
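The script accumulates read-only snapshots under /.snapshots and /home/.snapshots indefinitely. A minimal pruning sketch (the 30-day cutoff is my own choice):
#!/usr/bin/env bash
# Delete snapshots older than 30 days, matching the
# name-YYYY-mm-dd_HHMMSS layout the backup script creates.
for dir in /.snapshots /home/.snapshots; do
    find "$dir" -maxdepth 1 -mindepth 1 -type d -mtime +30 \
        -exec btrfs subvolume delete {} \;
done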
IP whitelist
irc.hyperreal.coffee {
@me {
client_ip 1.2.3.4
}
handle @me {
reverse_proxy localhost:9000
}
respond "You are attempting to access protected resources!" 403
}
Reverse proxy for qBittorrent over Tailscale
I shall explain precisely what these directives do as soon as I find out precisely what they do. I shall look into it soon. It would be good to know something about how web servers, headers, and the HTTP protocol work, and what all this "origin", "referer", and "cross-origin" stuff means.
hostname.tailnet.ts.net:8888 {
reverse_proxy localhost:8080 {
header_up Host localhost:8080
header_up X-Forwarded-Host {host}:{hostport}
header_up -Origin
header_up -Referer
}
}
I'm just playing with some ideas here regarding a carpal tunnel syndrome-friendly way to do everyday computing.
Given the limits that nature places on the number of possible ways of manipulating machines, at the current time it seems voice dictation is the only feasible alternative to typing and pointing and clicking. Is it possible to do what I usually do at my computer using 100% voice dictation?
I wouldn't use it for gaming, of course, but for things like web browsing, coding, writing/typing, and system administration tasks. I would need software, preferably FOSS, that responds to voice commands.
Web browsing
Voice commands for web browsing would have to include something like the following:
- "Scroll N pixels down the page"
- "Refresh the page"
- "Go to tab 6"
- "Download the file at link 8"
- "Go to <www.duckduckgo.com>"
- "Open up the Bitwarden menu"
- "Enter writing mode and compose a new Mastodon post"
- "Enter writing mode and compose a reply to Mastodon timeline item 23"
- "Play the video on Mastodon timeline item 28"
- "Go to bookmark 16"
- "Copy the URL to the system clipboard"
So there would have to be a way to enumerate web page and browser elements. This enumeration concept would also apply to many other apps.
Coding and command line usage
Voice commands that are mapped to:
- shell commands and aliases
- code snippets
- "Create a Go function named helloWorld"
- "helloWorld takes a string parameter named foo"
- Okay, I've realized coding is probably not feasible using 100% voice dictation.
Install Cgit with Caddy
Dependencies
Install the xcaddy package from its releases page.
Install caddy-cgi.
xcaddy build --with github.com/aksdb/caddy-cgi/v2
Install remaining dependencies.
sudo apt install gitolite3 cgit python-is-python3 python3-pygments python3-markdown docutils-common groff
Configuration
Make a git user.
sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git
Configure gitolite for the git user in ~/.gitolite.rc.
UMASK => 0027,
GIT_CONFIG_KEYS => 'gitweb.description gitweb.owner gitweb.homepage gitweb.category',
Add caddy user to the git group.
sudo usermod -aG git caddy
Configure cgit in /etc/cgitrc.
#
# cgit config
# see cgitrc(5) for details
css=/cgit/cgit.css
logo=/cgit/cgit.png
favicon=/cgit/favicon.ico
enable-index-links=1
enable-commit-graph=1
enable-log-filecount=1
enable-log-linecount=1
enable-git-config=1
branch-sort=age
repository-sort=name
clone-url=https://git.hyperreal.coffee/$CGIT_REPO_URL git://git.hyperreal.coffee/$CGIT_REPO_URL ssh://git@git.hyperreal.coffee:$CGIT_REPO_URL
root-title=hyperreal.coffee Git repositories
root-desc=Source code and configs for my projects
##
## List of common mimetypes
##
mimetype.gif=image/gif
mimetype.html=text/html
mimetype.jpg=image/jpeg
mimetype.jpeg=image/jpeg
mimetype.pdf=application/pdf
mimetype.png=image/png
mimetype.svg=image/svg+xml
# Enable syntax highlighting
source-filter=/usr/lib/cgit/filters/syntax-highlighting.py
# Format markdown, rst, manpages, text files, html files, and org files.
about-filter=/usr/lib/cgit/filters/about-formatting.sh
##
### Search for these files in the root of the default branch of repositories
### for coming up with the about page:
##
readme=:README.md
readme=:README.org
robots=noindex, nofollow
section=personal-config
repo.url=doom-emacs-config
repo.path=/home/git/repositories/doom-emacs-config.git
repo.desc=My Doom Emacs config
Org-mode README
IMPORTANT Note: I haven't gotten this to work yet. :-(
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html
sudo cp -v org2html /usr/lib/cgit/filters/html-converters/
sudo chmod +x /usr/lib/cgit/filters/html-converters/org2html
Download blob-formatting.sh.
sudo cp -v blob-formatting.sh /usr/lib/cgit/filters/
Catppuccin Mocha palette for org2html.css
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html/css
Change the color variables to Catppuccin Mocha hex codes.
$red: #f38ba8;
$green: #a6e3a1;
$orange: #fab387;
$gray: #585b70;
$yellow: #f9e2af;
$cyan: #89dceb;
$teal: #94e2d5;
$black: #11111b;
$white: #cdd6f4;
$cream: #f2cdcd;
Install sass.
sudo apt install -y sass
Generate org2html.css from the scss files, and copy the result to the cgit css directory.
sass org2html.scss:org2html.css
sudo cp -v org2html.css /usr/share/cgit/css/
Requirements
- UEFI
- LVM on LUKS with unencrypted /boot
Disk partitioning
Use cfdisk to create the following partition layout.
Partition Type | Size |
---|---|
EFI | +600M |
boot | +900M |
Linux | Remaining space |
Format the unencrypted partitions:
mkfs.vfat /dev/nvme0n1p1
mkfs.ext4 /dev/nvme0n1p2
Create LUKS on the remaining partition:
cryptsetup luksFormat /dev/nvme0n1p3
cryptsetup luksOpen /dev/nvme0n1p3 crypt
Create an LVM2 volume group for /dev/nvme0n1p3, which is now mapped at /dev/mapper/crypt.
vgcreate chimera /dev/mapper/crypt
Create logical volumes in the volume group.
lvcreate --name swap -L 8G chimera
lvcreate --name root -l 100%FREE chimera
Create the filesystems for the logical volumes.
mkfs.ext4 /dev/chimera/root
mkswap /dev/chimera/swap
Create mount points for the chroot and mount the filesystems.
mkdir /media/root
mount /dev/chimera/root /media/root
mkdir /media/root/boot
mount /dev/nvme0n1p2 /media/root/boot
mkdir /media/root/boot/efi
mount /dev/nvme0n1p1 /media/root/boot/efi
Installation
Chimera-bootstrap and chroot
chimera-bootstrap /media/root
chimera-chroot /media/root
Update the system.
apk update
apk upgrade --available
Install kernel, cryptsetup, and lvm2 packages.
apk add linux-stable cryptsetup-scripts lvm2
Fstab
genfstab / >> /etc/fstab
Crypttab
echo "crypt /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/nvme0n1p3) none luks" > /etc/crypttab
Initramfs refresh
update-initramfs -c -k all
GRUB
apk add grub-x86_64-efi
grub-install --efi-directory=/boot/efi --target=x86_64-efi
Post-installation
passwd root
apk add zsh bash
useradd -c "Jeffrey Serio" -m -s /usr/bin/zsh -U jas
passwd jas
Add the following lines to /etc/doas.conf:
# Give jas access
permit nopass jas
Set hostname, timezone, and hwclock.
echo "falinesti" > /etc/hostname
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
echo localtime > /etc/hwclock
Xorg and Xfce4
apk add xserver-xorg xfce4
Reboot the machine.
Post-reboot
Log in as jas, run startxfce4, and connect to the internet via NetworkManager.
Ensure wireplumber and pipewire-pulse are enabled.
dinitctl enable wireplumber
dinitctl start wireplumber
dinitctl enable pipewire-pulse
dinitctl start pipewire-pulse
Install CPU microcode.
doas apk add ucode-intel
doas update-initramfs -c -k all
Install other packages
doas apk add chrony
doas dinitctl enable chrony
doas apk add
Install Debian with LUKS2 Btrfs and GRUB via Debootstrap
Source: https://gist.github.com/meeas/b574e4bede396783b1898c90afa20a30
- Use a Debian Live ISO
- Single LUKS2 encrypted partition
- Single Btrfs filesystem with @, @home, @swap, and other subvolumes
- Encrypted swapfile in Btrfs subvolume
- GRUB (EFI) bootloader
- Optional removal of crypto keys from RAM during laptop suspend
- Optional configurations for laptops
Pre-installation setup
Boot into the live ISO, open a terminal, and become root. Install needed packages.
sudo -i
apt update
apt install -y debootstrap cryptsetup arch-install-scripts
Create partitions.
cfdisk /dev/nvme0n1
- GPT partition table
- /dev/nvme0n1p1: 512M, EFI System partition (EF00)
- /dev/nvme0n1p2: remaining space, Linux filesystem
mkfs.fat -F 32 -n EFI /dev/nvme0n1p1
cryptsetup -y -v --type luks2 luksFormat --label Debian /dev/nvme0n1p2
cryptsetup luksOpen /dev/nvme0n1p2 cryptroot
mkfs.btrfs /dev/mapper/cryptroot
Make Btrfs subvolumes.
mount /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@swap
umount -lR /mnt
Re-mount subvolumes as partitions.
mount -t btrfs -o defaults,subvol=@,compress=zstd:1 /dev/mapper/cryptroot /mnt
mkdir -p /mnt/{boot,home}
mkdir /mnt/boot/efi
mount /dev/nvme0n1p1 /mnt/boot/efi
mount -t btrfs -o defaults,subvol=@home,compress=zstd:1 /dev/mapper/cryptroot /mnt/home
Setup swapfile.
mkdir -p /mnt/swap
mount -t btrfs -o subvol=@swap /dev/mapper/cryptroot /mnt/swap
touch /mnt/swap/swapfile
chmod 600 /mnt/swap/swapfile
chattr +C /mnt/swap/swapfile
btrfs property set /mnt/swap/swapfile compression none
dd if=/dev/zero of=/mnt/swap/swapfile bs=1M count=16384
mkswap /mnt/swap/swapfile
swapon /mnt/swap/swapfile
Base installation
Create a nested subvolume for /var/log under the @ subvolume. This will be automounted with @ so there is no need to add it to /etc/fstab. Nested subvolumes are not included in snapshots of the parent subvolume. Creating a nested subvolume for /var/log will ensure the log files remain untouched when we restore the rootfs from a snapshot.
mkdir -p /mnt/var
btrfs subvolume create /mnt/var/log
debootstrap --arch amd64 <suite> /mnt
Bind the pseudo-filesystems for chroot.
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
mount -t proc proc /mnt/proc
Generate fstab from the currently mounted filesystems.
genfstab -U /mnt >> /mnt/etc/fstab
Chroot into the new system.
cp -v /etc/resolv.conf /mnt/etc/
chroot /mnt
Configure the new installation
Set the timezone, locale, keyboard configuration, and console.
apt install -y locales
dpkg-reconfigure tzdata locales keyboard-configuration console-setup
Set the hostname.
echo 'hostname' > /etc/hostname
echo '127.0.1.1 hostname.localdomain hostname' >> /etc/hosts
Configure APT sources in /etc/apt/sources.list.
deb https://deb.debian.org/debian <suite> main contrib non-free non-free-firmware
deb https://deb.debian.org/debian <suite>-updates main contrib non-free non-free-firmware
deb https://deb.debian.org/debian <suite>-backports main contrib non-free non-free-firmware
deb https://deb.debian.org/debian-security <suite>-security main contrib non-free non-free-firmware
Install essential packages.
apt update -t <suite>-backports
apt dist-upgrade -t <suite>-backports
apt install -y neovim linux-image-amd64 linux-headers-amd64 firmware-linux firmware-linux-nonfree sudo command-not-found systemd-timesyncd systemd-resolved cryptsetup cryptsetup-initramfs efibootmgr btrfs-progs grub-efi
Install desktop environment.
apt install task-gnome-desktop task-desktop task-ssh-server
If installing on a laptop:
sudo apt install -y task-laptop powertop
Create users and groups.
passwd root
adduser jas
echo "jas ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers.d/jas
chmod 640 /etc/sudoers.d/jas
usermod -aG systemd-journal jas
Setting up the bootloader
Optional package for extra protection of suspended laptops.
apt install cryptsetup-suspend
Setup encryption parameters.
blkid -s UUID -o value /dev/nvme0n1p2
Edit /etc/crypttab.
cryptroot UUID=<uuid> none luks
Setup bootloader.
grub-install --target=x86_64-efi --efi-directory=/boot/efi --recheck --bootloader-id="Debian"
Edit /etc/default/grub.
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""
GRUB_ENABLE_CRYPTODISK=y
GRUB_TERMINAL=console
Update grub.
update-grub
Exit chroot and reboot.
exit
umount -lR /mnt
reboot
Emergency recovery from live ISO
sudo -i
cryptsetup luksOpen /dev/nvme0n1p2 cryptroot
mount -t btrfs -o defaults,subvol=@,compress=zstd:1 /dev/mapper/cryptroot /mnt
mount /dev/nvme0n1p1 /mnt/boot/efi
mount -t btrfs -o defaults,subvol=@home,compress=zstd:1 /dev/mapper/cryptroot /mnt/home
mount -t btrfs -o subvol=@swap /dev/mapper/cryptroot /mnt/swap
swapon /mnt/swap/swapfile
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
mount -t proc proc /mnt/proc
chroot /mnt
Debian Root on ZFS
Sources:
Configure live environment
Add the contrib repo to /etc/apt/sources.list.
deb http://deb.debian.org/debian bookworm main contrib
Install required utilities.
apt update
apt install -y debootstrap gdisk dkms linux-headers-amd64
apt install -y zfsutils-linux
Generate /etc/hostid.
zgenhostid -f 0x00bab10c
Define disk variables
For single NVMe:
export BOOT_DISK="/dev/nvme0n1"
export BOOT_PART="1"
export BOOT_DEVICE="${BOOT_DISK}p${BOOT_PART}"
export POOL_DISK="/dev/nvme0n1"
export POOL_PART="2"
export POOL_DEVICE="${POOL_DISK}p${POOL_PART}"
Disk preparation
Wipe partitions.
zpool labelclear -f "$POOL_DISK"
wipefs -af "$POOL_DISK"
wipefs -af "$BOOT_DISK"
sgdisk --zap-all "$POOL_DISK"
sgdisk --zap-all "$BOOT_DISK"
Create EFI boot partition.
sgdisk -n "${BOOT_PART}:1m:+512m" -t "${BOOT_PART}:ef00" "$BOOT_DISK"
Create zpool partition.
sgdisk -n "${POOL_PART}:0:-10m" -t "${POOL_PART}:bf00" "$POOL_DISK"
ZFS pool creation
IMPORTANT Note the change in case for the
-o autotrim=
and-o compatibility=
options.
Unencrypted:
zpool create -f -o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-o autotrim=on \
-o compatibility=openzfs-2.1-linux \
-m none zroot "$POOL_DEVICE"
Encrypted:
echo "passphrase" > /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
zpool create -f -o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O encryption=aes-256-gcm \
-O keylocation=file:///etc/zfs/zroot.key \
-O keyformat=passphrase \
-o autotrim=on \
-o compatibility=openzfs-2.1-linux \
-m none zroot "$POOL_DEVICE"
Create initial file systems.
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/debian
zfs create -o mountpoint=/home zroot/home
zpool set bootfs=zroot/ROOT/debian zroot
Export then re-import with a temporary mountpoint of /mnt.
zpool export zroot
zpool import -N -R /mnt zroot
# for encrypted
zfs load-key -L prompt zroot
zfs mount zroot/ROOT/debian
zfs mount zroot/home
Verify that everything mounted correctly.
mount | grep mnt
zroot/ROOT/debian on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
Update device symlinks.
udevadm trigger
Install Debian
debootstrap bookworm /mnt
Copy files to new install.
cp -v /etc/hostid /mnt/etc/hostid
cp -v /etc/resolv.conf /mnt/etc/
# for encrypted
mkdir /mnt/etc/zfs
cp -v /etc/zfs/zroot.key /mnt/etc/zfs
Chroot into the new OS.
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
chroot /mnt /bin/bash
Basic Debian configuration
echo 'hostname' > /etc/hostname
passwd
Edit /etc/apt/sources.list.
deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main contrib non-free non-free-firmware
deb http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware
deb http://deb.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware
Disable APT downloading translations in /etc/apt/apt.conf.d/99translations.
Acquire::Languages "none";
Install additional base packages.
apt update
apt install locales keyboard-configuration console-setup
Configure locales, tzdata, keyboard-configuration, and console-setup.
dpkg-reconfigure locales tzdata keyboard-configuration console-setup
ZFS configuration
Install required packages.
apt install linux-headers-amd64 linux-image-amd64 zfs-initramfs dosfstools
echo "REMAKE_INITRD=yes" > /etc/dkms/zfs.conf
Enable ZFS systemd services.
systemctl enable zfs.target
systemctl enable zfs-import-cache
systemctl enable zfs-mount
systemctl enable zfs-import.target
Configure initramfs-tools.
echo "UMASK=0077" > /etc/initramfs-tools/conf.d/umask.conf
Rebuild the initramfs.
update-initramfs -c -k all
Install and configure ZFSBootMenu
Unencrypted:
zfs set org.zfsbootmenu:commandline="quiet" zroot/ROOT
Encrypted:
zfs set org.zfsbootmenu:commandline="quiet" zroot/ROOT
zfs set org.zfsbootmenu:keysource="zroot/ROOT/debian" zroot
Create a vfat filesystem.
mkfs.vfat -F32 "$BOOT_DEVICE"
Create an fstab entry and mount.
cat << EOF >> /etc/fstab
$(blkid | grep "$BOOT_DEVICE" | cut -d ' ' -f 2) /boot/efi vfat defaults 0 0
EOF
mkdir -p /boot/efi
mount /boot/efi
Install ZFSBootMenu.
apt install -y curl
mkdir -p /boot/efi/EFI/ZBM
curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp -v /boot/efi/EFI/ZBM/VMLINUZ.EFI /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI
Configure EFI boot entries.
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
apt install -y efibootmgr
efibootmgr -c -d "$BOOT_DISK" -p "$BOOT_PART" \
-L "ZFSBootMenu (Backup)" \
-l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'
efibootmgr -c -d "$BOOT_DISK" -p "$BOOT_PART" \
-L "ZFSBootMenu" \
-l '\EFI\ZBM\VMLINUZ.EFI'
Prepare for first boot
Add user.
adduser jas
apt install -y sudo
echo 'Defaults secure_path="/usr/bin:/usr/sbin:/bin:/sbin:/usr/local/bin:/home/linuxbrew/.linuxbrew/bin"' | tee -a /etc/sudoers.d/jas
echo 'jas ALL=(ALL) NOPASSWD: ALL' | tee -a /etc/sudoers.d/jas
chmod 640 /etc/sudoers.d/jas
Exit the chroot, unmount everything.
exit
umount -n -lR /mnt
zpool export zroot
systemctl reboot
Systemd-logind
Install libpam-systemd:
sudo apt install -y libpam-systemd
Unmask and enable systemd-logind:
sudo systemctl unmask systemd-logind
sudo systemctl enable systemd-logind
sudo systemctl reboot
Nix-shell
Install libgourou in nix-shell.
nix-shell -p libgourou
Docker
docker run \
-v "${PATH_TO_ADOBE_CREDS}:/home/libgourou/.adept" \
-v "$(pwd):/home/libgourou/files" \
--rm \
bcliang/docker-libgourou \
<name_of_adept_metafile.acsm>
Extract PDF or EPUB from ACSM file
Register the device with Adobe username and password.
adept_activate -u user -p password
Download the ACSM file. Make sure the ACSM file is in the current working directory.
acsmdownloader -f Dragon_Age_The_Missing_1.acsm
The downloaded file requires a password to open. Remove the DRM from the files.
find . -type f -name "Dragon_Age_The_Missing*.pdf" -exec adept_remove {} \;
Configure fail2ban on Linux with firewalld
sudo cp -v /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nvim /etc/fail2ban/jail.local
bantime = 1h
findtime = 1h
maxretry = 5
sudo cp -v /etc/fail2ban/jail.d/00-firewalld.conf /etc/fail2ban/jail.d/00-firewalld.local
sudo nvim /etc/fail2ban/jail.d/sshd.local
[sshd]
enabled = true
bantime = 1d
maxretry = 3
sudo systemctl restart fail2ban.service
sudo fail2ban-client status
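fail2ban-client can also inspect a single jail and unban an address:
sudo fail2ban-client status sshd
sudo fail2ban-client set sshd unbanip 203.0.113.7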
Configure fail2ban on FreeBSD with PF
sudo pkg install -y py311-fail2ban
Edit /usr/local/etc/fail2ban/jail.local.
[DEFAULT]
bantime = 86400
findtime = 3600
maxretry = 3
banaction = pf
[sshd]
enabled = true
Enable and start fail2ban.
sudo sysrc fail2ban_enable="YES"
sudo service fail2ban start
# If not enabled already:
sudo sysrc pf_enable="YES"
sudo service pf start
Configure /etc/pf.conf.
table <fail2ban> persist
set skip on lo0
block in all
block in quick from <fail2ban>
...
Check and reload PF rules.
sudo pfctl -nf /etc/pf.conf
sudo pfctl -f /etc/pf.conf
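To see which addresses fail2ban has actually banned, list the PF table defined in pf.conf:
sudo pfctl -t fail2ban -T show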
Access USB serial device in container
Create a udev rule on the host for all USB-serial devices. Set OWNER to your regular (UID 1000) user.
cat << EOF | sudo tee /etc/udev/rules.d/50-usb-serial.rules
SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", OWNER="jas"
EOF
Reload udev.
sudo udevadm control --reload-rules
sudo udevadm trigger
The serial device should now be owned by your user.
ls -l /dev/ttyUSB0
crw-rw----. 1 jas dialout 188, 0 Mar 15 11:09 /dev/ttyUSB0
You can now run minicom inside the toolbox container.
distrobox enter default
minicom -D /dev/ttyUSB0
Allow connections only from tailnet
Create a new zone for the tailscale0 interface.
sudo firewall-cmd --permanent --new-zone=tailnet
sudo firewall-cmd --permanent --zone=tailnet --add-interface=tailscale0
sudo firewall-cmd --reload
Add services and ports to the tailnet zone.
sudo firewall-cmd --permanent --zone=tailnet --add-service={http,https,ssh}
sudo firewall-cmd --permanent --zone=tailnet --add-port=9100/tcp
sudo firewall-cmd --reload
Ensure the public zone does not have any interfaces or sources.
sudo firewall-cmd --permanent --zone=public --remove-interface=eth0
sudo firewall-cmd --reload
The firewall should now only allow traffic coming from the tailnet interface, tailscale0.
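Verify the zone assignment and the resulting ruleset:
sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=tailnet --list-all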
USB 3.1 Type-C to RJ45 Gigabit Ethernet adapter
The Amazon Basics Aluminum USB 3.1 Type-C to RJ45 Gigabit Ethernet Adapter works well with FreeBSD 14.1-RELEASE. It uses the AX88179 chipset from ASIX Electronics Corp.
Install the ports tree
Source: Chapter 4. Installing Applications: Packages and Ports | FreeBSD Documentation Portal
Ensure the FreeBSD source code is checked out
sudo git clone -o freebsd -b releng/14.1 https://git.FreeBSD.org/src.git /usr/src
Check out the ports tree
sudo git clone --depth 1 https://git.FreeBSD.org/ports.git -b 2024Q3 /usr/ports
To switch to a different quarterly branch:
sudo git -C /usr/ports switch 2024Q4
drm-61-kmod
Install from the ports tree.
cd /usr/ports/graphics/drm-61-kmod
sudo make install clean
Alternatively, for Alderlake GPUs:
sudo pkg install drm-kmod
Edit /etc/rc.conf:
kld_list="i915kms"
Add user to the video group:
sudo pw groupmod video -m jas
Mount filesystems in single-user mode
When booted into single-user mode, run:
fsck
mount -u /
mount -a -t zfs
zfs mount -a
You should now be able to edit files, add/remove packages, etc.
Mount encrypted zroot in LiveCD
Boot into the LiveCD environment.
mkdir /tmp/mnt
geli attach /dev/nda0p4
zpool import -f -R /tmp/mnt zroot
zfs mount zroot/ROOT/default
The root directory of the zroot, zroot/ROOT/default, is labeled to not be automounted when imported, hence the need for the last command.
Setup Podman (FreeBSD >= 14)
The following is a condensed version of the guide found at CloudSpinx: Install Podman and run Containers in FreeBSD 14.
sudo pkg install podman-suite
sudo mount -t fdescfs fdesc /dev/fd
Add the following line to /etc/fstab:
fdesc /dev/fd fdescfs rw 0 0
Enable the Podman service.
sudo sysrc podman_enable="YES"
Container networking requires a NAT to allow the container network's packets to reach the host's network. Copy the sample pf.conf for Podman.
sudo cp -v /usr/local/etc/containers/pf.conf.sample /etc/pf.conf
Change v4egress_if and v6egress_if to the host's main network interface in /etc/pf.conf.
v4egress_if="igc0"
v6egress_if="igc0"
Enable and start PF.
sudo sysrc pf_enable="YES"
sudo service pf start
FreeBSD >= 13.3 has support for rerouting connections from the host to services inside the container. To enable this, load the PF kernel module, then use sysctl to activate PF support for this rerouting.
echo 'pf_load="YES"' | sudo tee -a /boot/loader.conf
sudo kldload pf
sudo sysctl net.pf.filter_local=1
echo 'net.pf.filter_local=1' | sudo tee -a /etc/sysctl.conf.local
sudo service pf restart
The rerouting rules will only work if the destination address is localhost. Ensure the following exists in /etc/pf.conf.
nat-anchor "cni-rdr/*"
Container images and related state are stored in /var/db/containers. Create a ZFS dataset for this with the mountpoint set to that directory.
sudo zfs create -o mountpoint=/var/db/containers zroot/containers
If the system is not using ZFS, change storage.conf to use the vfs storage driver.
sudo sed -I .bak -e 's/driver = "zfs"/driver = "vfs"/' /usr/local/etc/containers/storage.conf
If there are any errors caused by the /var/db/containers/storage database, remove it.
sudo rm -rfv /var/db/containers/storage
IMPORTANT Note: Podman can only be run with root privileges on FreeBSD at this time.
Enable the Linux service.
sudo sysrc linux_enable="YES"
sudo service linux start
To run Linux containers, add the --os=linux argument to Podman commands.
sudo podman run --os=linux ubuntu /usr/bin/cat "/etc/os-release"
Everything else should work as expected.
Install Linux VM in Bhyve
Based on How to install Linux VM on FreeBSD using bhyve and ZFS, but condensed and collated for my use-case.
Setting up the network interfaces
Make the tap device UP by default in /etc/sysctl.conf.
echo "net.link.tap.up_on_open=1" >> /etc/sysctl.conf
sysctl net.link.tap.up_on_open=1
Load the kernel modules needed for bhyve.
kldload vmm
kldload nmdm
Make sure the modules are loaded at boot time.
echo 'vmm_load="YES"' >> /boot/loader.conf
echo 'nmdm_load="YES"' >> /boot/loader.conf
echo 'if_tap_load="YES"' >> /boot/loader.conf
echo 'if_bridge_load="YES"' >> /boot/loader.conf
Create the bridge and tap device. If you already have a bridge created, use that instead. We'll assume this is the case, and the bridge is called igb0bridge.
ifconfig bridge create
If a bridge is already created and the main network interface igc0 is attached to it, the following command is not necessary.
ifconfig igb0bridge addm igc0
Create a tap interface and attach it to the igb0bridge.
ifconfig tap0 create
ifconfig igb0bridge addm tap0
If there wasn't a bridge already being used for jails, then /etc/rc.conf should contain the following:
cloned_interfaces="igb0bridge tap0"
ifconfig_igb0bridge="addm igc0 addm tap0 up"
If there was already a bridge used for jails, then /etc/rc.conf should contain the following:
cloned_interfaces="igb0bridge tap0"
ifconfig_igb0bridge="inet 10.0.0.8/24 addm igc0 addm tap0 up"
Setting up the ZFS volumes for Linux bhyve VM
zfs create -V128G -o volmode=dev zroot/debianvm
Downloading Debian installer iso
cd /tmp/
DEBIAN_VERSION=12.10.0
wget "https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-${DEBIAN_VERSION}-amd64-netinst.iso"
Installing Debian in VM
Install the grub-bhyve binary to allow booting of non-FreeBSD guest OSes.
pkg install grub2-bhyve bhyve-firmware
Install Debian by running bhyve with the netinstall iso image and the zvol attached.
bhyve -c 4 -m 8G -w -H \
-s 0,hostbridge \
-s 3,ahci-cd,/tmp/debian-12.10.0-amd64-netinst.iso \
-s 4,virtio-blk,/dev/zvol/zroot/debianvm \
-s 5,virtio-net,tap0 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
-s 30,xhci,tablet \
-s 31,lpc \
-l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
debianvm
Arguments | Description |
---|---|
-c 4 | number of virtual CPUs |
-m 8G | RAM size for VM |
-w | ignore unimplemented MSRs |
-H | Yield the virtual CPU thread when a HLT instruction is detected |
-s 3,ahci-cd,/tmp/debian-12.10.0-amd64-netinst.iso | Configure an AHCI-CD device in virtual PCI slot 3 to hold the netinstall iso cdrom. |
-s 4,virtio-blk,/dev/zvol/zroot/debianvm | Configure a virtio block device in virtual PCI slot 4 to install the OS onto. |
-s 5,virtio-net,tap0 | Configure a virtual network interface in virtual PCI slot 5 and attach the tap0 interface. |
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait | Configure a virtual framebuffer device in virtual PCI slot 29 to enable connection from a remote VNC viewer on port 5900. The framebuffer resolution is 800x600. The wait argument instructs bhyve to only boot upon the initiation of a VNC connection. |
-s 30,xhci,tablet | Provides precise cursor synchronization when using VNC. |
-s 31,lpc | Configure a virtual LPC device on virtual PCI slot 31. |
-l com1,stdio | Configure the TTY-class device com1 with stdio. |
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd | the OS loader to use (UEFI needed for non-FreeBSD OSes). |
When the command runs, use a remote VNC viewer to connect and start the netinstall iso.
IMPORTANT The following step is required to boot from UEFI.
Run the Debian installer with desired configuration. When you reach the "Finish the installation" stage, select "Go Back", then select "Execute a shell". Once in the shell, run the following commands:
mkdir /target/boot/efi/EFI/BOOT/
cp -v /target/boot/efi/EFI/debian/grubx64.efi /target/boot/efi/EFI/BOOT/bootx64.efi
exit
Now continue with "Finish the installation".
Booting Debian bhyve VM
The instance of the virtual machine needs to be destroyed before it can be started again.
bhyvectl --destroy --vm=debianvm
Boot the Debian VM.
bhyve -c 4 -m 8G -w -H \
-s 0,hostbridge \
-s 4,virtio-blk,/dev/zvol/zroot/debianvm \
-s 5,virtio-net,tap0 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=1024,h=768 \
-s 30,xhci,tablet \
-s 31,lpc \
-l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
debianvm
Starting the Debian VM on boot with a shell script
#!/bin/sh
# Name: startdebianvm
# Purpose: Simple script to start my Debian 10 VM using bhyve on FreeBSD
# Author: Vivek Gite {https://www.cyberciti.biz} under GPL v2.x+
# -------------------------------------------------------------------------
# Lazy failsafe (not needed but I will leave them here)
ifconfig tap0 create
ifconfig em0bridge addm tap0
if ! kldstat | grep -w vmm.ko
then
kldload -v vmm
fi
if ! kldstat | grep -w nmdm.ko
then
kldload -v nmdm
fi
bhyve -c 1 -m 1G -w -H \
-s 0,hostbridge \
-s 4,virtio-blk,/dev/zvol/zroot/debianvm \
-s 5,virtio-net,tap0 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=1024,h=768 \
-s 30,xhci,tablet \
-s 31,lpc -l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
debianvm
Create a crontab entry:
@reboot /path/to/startdebianvm
Installing a Linux jail
Create the ZFS datasets for the base jail and Linux jail.
sudo zfs create naspool/jails/debian
sudo zfs create naspool/jails/14.2-RELEASE
Download the base userland system for FreeBSD.
fetch https://download.freebsd.org/ftp/releases/amd64/amd64/14.2-RELEASE/base.txz
Extract the base userland into the base jail's directory.
sudo tar -xf base.txz -C /jails/14.2-RELEASE --unlink
Copy DNS and timezone files.
sudo cp -v /etc/resolv.conf /jails/14.2-RELEASE/etc/resolv.conf
sudo cp -v /etc/localtime /jails/14.2-RELEASE/etc/localtime
Update the base jail to the latest patch level.
sudo freebsd-update -b /jails/14.2-RELEASE/ fetch install
Create a ZFS snapshot from the base jail.
sudo zfs snapshot naspool/jails/14.2-RELEASE@base
Clone the base jail to create a thin jail for the Linux distribution.
sudo zfs clone naspool/jails/14.2-RELEASE@base naspool/jails/debian
Enable the Linux ABI.
sudo sysrc linux_enable="YES"
sudo service linux start
Run the jail command with a quick configuration.
sudo jail -cm \
name=debian \
host.hostname="debian" \
path="/jails/debian" \
interface="igc0" \
ip4.addr="10.0.0.21" \
exec.start="/bin/sh /etc/rc" \
exec.stop="/bin/sh /etc/rc.shutdown" \
mount.devfs \
devfs_ruleset=11 \
allow.mount \
allow.mount.devfs \
allow.mount.fdescfs \
allow.mount.procfs \
allow.mount.linprocfs \
allow.mount.linsysfs \
allow.mount.tmpfs \
enforce_statfs=1
Access the jail.
sudo jexec -u root debian
Install the debootstrap program and prepare the Debian environment.
pkg install debootstrap
debootstrap bookworm /compat/debian
When the process finishes, stop the jail from the host system.
sudo service jail onestop debian
Add an entry in /etc/jail.conf for the Debian Linux jail.
debian {
# STARTUP/LOGGING
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.consolelog = "/var/log/jail_console_${name}.log";
# PERMISSIONS
allow.raw_sockets;
exec.clean;
mount.devfs;
devfs_ruleset = 11;
# HOSTNAME/PATH
host.hostname = "${name}";
path = "/jails/${name}";
# NETWORK
ip4.addr = 10.0.0.21;
interface = igc0;
# MOUNT
mount += "devfs $path/compat/debian/dev devfs rw 0 0";
mount += "tmpfs $path/compat/debian/dev/shm tmpfs rw,size=1g,mode=1777 0 0";
mount += "fdescfs $path/compat/debian/dev/fd fdescfs rw,linrdlnk 0 0";
mount += "linprocfs $path/compat/debian/proc linprocfs rw 0 0";
mount += "linsysfs $path/compat/debian/sys linsysfs rw 0 0";
mount += "/tmp $path/compat/debian/tmp nullfs rw 0 0";
mount += "/home $path/compat/debian/home nullfs rw 0 0";
}
Start the jail.
sudo service jail start debian
The Debian environment can be accessed using the following command:
sudo jexec debian chroot /compat/debian /bin/bash
Setup GitLab runner with Podman
- Install GitLab Runner
- Create a new runner from the GitLab UI.
- Use the authentication token from the GitLab UI to register a new runner on the machine hosting the runner. Select the Docker executor.
sudo systemctl enable --now gitlab-runner.service
sudo gitlab-runner register --url https://git.hyperreal.coffee --token <TOKEN>
- Add the following lines to /etc/gitlab-runner/config.toml for Podman:
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1"]
[runners.docker]
host = "unix://run/podman/podman.sock"
tls_verify = false
image = "git.hyperreal.coffee:5050/fedora-atomic/containers/fedora:latest"
privileged = true
volumes = ["/build-repo", "/cache", "/source-repo"]
- Restart the gitlab-runner:
sudo gitlab-runner restart
We should now be ready to use the Podman runner.
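The unix:///run/podman/podman.sock host only exists while the Podman API service is running; on systemd hosts it is provided by podman.socket. A quick smoke test of the Docker-compatible endpoint:
sudo systemctl enable --now podman.socket
sudo curl --unix-socket /run/podman/podman.sock http://d/_ping   # should print OK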
Install and deploy the Grafana server
On Fedora/RHEL systems:
sudo dnf install -y grafana grafana-selinux chkconfig
On Debian systems:
sudo apt-get install -y apt-transport-https software-properties-common
sudo wget -q -O /usr/share/keyrings/grafana.key https://apt.grafana.com/gpg.key
echo "deb [signed-by=/usr/share/keyrings/grafana.key] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
Reload the systemd daemon, then enable and start grafana-server.service:
sudo systemctl daemon-reload
sudo systemctl enable --now grafana-server.service
sudo systemctl status grafana-server.service
Configure Grafana SELinux policy
IMPORTANT This is not necessary on AlmaLinux 9, Rocky Linux 9, RHEL 9.
For some reason the grafana-selinux package does not provide what Grafana needs to cooperate with SELinux. It's therefore necessary to use a third-party repository at https://github.com/georou/grafana-selinux to compile and install a proper SELinux policy module for Grafana.
# Clone the repo
git clone https://github.com/georou/grafana-selinux.git
cd grafana-selinux
# Copy relevant .if interface file to /usr/share/selinux/devel/include to expose them when building and for future modules.
# May need to use full path for grafana.if if not working.
install -Dp -m 0664 -o root -g root grafana.if /usr/share/selinux/devel/include/myapplications/grafana.if
# Compile and install the selinux module.
sudo dnf install -y selinux-policy-devel setools-console policycoreutils-devel
sudo make -f /usr/share/selinux/devel/Makefile grafana.pp
sudo semodule -i grafana.pp
# Add grafana ports
semanage port -a -t grafana_port_t -p tcp 3000
# Restore all the correct context labels
restorecon -RvF /usr/sbin/grafana-* \
/etc/grafana \
/var/log/grafana \
/var/lib/grafana \
/usr/share/grafana/bin
# Start grafana
systemctl start grafana-server.service
# Ensure it's working in the proper confinement
ps -eZ | grep grafana
Login to the Grafana panel.
- username: admin
- password: password (change this after)
Add Prometheus data source
- Bar menu
- Data sources
- Add new data source
- Choose Prometheus data source
- Name: Prometheus
- URL: http://localhost:9090
- Save & test
Ensure the data source is working before continuing.
If you're running Grafana on an SELinux host, set an SELinux boolean to allow Grafana to access the Prometheus port:
sudo setsebool -P grafana_can_tcp_connect_prometheus_port=1
Add Loki data source
Since Loki is running on hyperreal.coffee:3100, the firewall's internal zone on that host needs to allow connections to port 3100 from my IP address.
sudo firewall-cmd --zone=internal --permanent --add-port=3100/tcp
sudo firewall-cmd --reload
In the Grafana panel:
- Bar menu
- Data sources
- Add new data source
- Choose Loki data source
- Name: Loki
- URL: http://hyperreal.coffee:3100
- Save & test
Ensure the data source is working before continuing.
Add Node Exporter dashboard
- Visit the Grafana Dashboard Library.
- Search for "Node Exporter Full".
- Copy the ID for Node Exporter Full.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the Node Exporter Full ID into the field, and press the Load button.
Add Caddy dashboard
- Visit Caddy Monitoring on the Grafana Dashboard Library.
- Copy the ID to clipboard.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the Caddy Monitoring ID into the field, and press the Load button.
Add qBittorrent dashboard
- Visit qBittorrent Dashboard on Grafana Dashboard Library.
- Copy the ID to clipboard.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the qBittorrent Dashboard ID into the field, and press the Load button.
Use HTTPS with Tailscale
sudo tailscale cert HOSTNAME.TAILNET.ts.net
sudo mkdir /etc/tailscale-ssl-certs
sudo mv *.key /etc/tailscale-ssl-certs/
sudo mv *.crt /etc/tailscale-ssl-certs/
sudo cp -v /etc/tailscale-ssl-certs/*.key /etc/grafana/grafana.key
sudo cp -v /etc/tailscale-ssl-certs/*.crt /etc/grafana/grafana.crt
sudo chown root:grafana /etc/grafana/grafana.key
sudo chown root:grafana /etc/grafana/grafana.crt
sudo chmod 644 /etc/grafana/grafana.key
sudo chmod 644 /etc/grafana/grafana.crt
Edit /etc/grafana/grafana.ini:
[server]
protocol = https
http_addr =
http_port = 3000
domain = HOSTNAME.TAILNET.ts.net
enforce_domain = false
root_url = https://HOSTNAME.TAILNET.ts.net:3000
cert_file = /etc/grafana/grafana.crt
cert_key = /etc/grafana/grafana.key
Home Assistant
- Install via the software manager on DietPi.
- Install the Android app via Google Play or Aurora Store.
DietPi installation
- Configuration and data are located in /mnt/dietpi_userdata/homeassistant.
- Main configuration file: /mnt/dietpi_userdata/homeassistant/configuration.yaml.
Dawarich
- Documentation: https://dawarich.app/docs/intro
- Ensure docker and docker-compose are installed.
docker-compose.yml:
networks:
dawarich:
services:
dawarich_redis:
image: redis:7.0-alpine
container_name: dawarich_redis
command: redis-server
networks:
- dawarich
volumes:
- dawarich_shared:/data
restart: always
healthcheck:
test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]
interval: 10s
retries: 5
start_period: 30s
timeout: 10s
dawarich_db:
image: imresamu/postgis:17-3.5-alpine
shm_size: 1G
container_name: dawarich_db
volumes:
- dawarich_db_data:/var/lib/postgresql/data
- dawarich_shared:/var/shared
# - ./postgresql.conf:/etc/postgresql/postgresql.conf # Optional, uncomment if you want to use a custom config
networks:
- dawarich
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: dawarich_development
restart: always
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U postgres -d dawarich_development" ]
interval: 10s
retries: 5
start_period: 30s
timeout: 10s
# command: postgres -c config_file=/etc/postgresql/postgresql.conf # Use custom config, uncomment if you want to use a custom config
dawarich_app:
image: freikin/dawarich:latest
container_name: dawarich_app
volumes:
- dawarich_public:/var/app/public
- dawarich_watched:/var/app/tmp/imports/watched
- dawarich_storage:/var/app/storage
networks:
- dawarich
ports:
- 3000:3000
- 9394:9394 # Prometheus exporter, uncomment if needed
stdin_open: true
tty: true
entrypoint: web-entrypoint.sh
command: ['bin/dev', 'server', '-p', '3000', '-b', '::']
restart: on-failure
environment:
RAILS_ENV: development
REDIS_URL: redis://dawarich_redis:6379/0
DATABASE_HOST: dawarich_db
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: password
DATABASE_NAME: dawarich_development
MIN_MINUTES_SPENT_IN_CITY: 60
APPLICATION_HOSTS: localhost
TIME_ZONE: America/Chicago
APPLICATION_PROTOCOL: http
PROMETHEUS_EXPORTER_ENABLED: "true"
PROMETHEUS_EXPORTER_HOST: 0.0.0.0
PROMETHEUS_EXPORTER_PORT: 9394
SELF_HOSTED: "true"
STORE_GEODATA: "true"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "5"
healthcheck:
test: [ "CMD-SHELL", "wget -qO - http://127.0.0.1:3000/api/v1/health | grep -q '\"status\"\\s*:\\s*\"ok\"'" ]
interval: 10s
retries: 30
start_period: 30s
timeout: 10s
depends_on:
dawarich_db:
condition: service_healthy
dawarich_redis:
condition: service_healthy
deploy:
resources:
limits:
cpus: '0.50' # Limit CPU usage to 50% of one core
memory: '4G' # Limit memory usage to 4GB
dawarich_sidekiq:
image: freikin/dawarich:latest
container_name: dawarich_sidekiq
volumes:
- dawarich_public:/var/app/public
- dawarich_watched:/var/app/tmp/imports/watched
- dawarich_storage:/var/app/storage
networks:
- dawarich
stdin_open: true
tty: true
entrypoint: sidekiq-entrypoint.sh
command: ['sidekiq']
restart: on-failure
environment:
RAILS_ENV: development
REDIS_URL: redis://dawarich_redis:6379/0
DATABASE_HOST: dawarich_db
DATABASE_USERNAME: postgres
DATABASE_PASSWORD: password
DATABASE_NAME: dawarich_development
APPLICATION_HOSTS: localhost
BACKGROUND_PROCESSING_CONCURRENCY: 10
APPLICATION_PROTOCOL: http
PROMETHEUS_EXPORTER_ENABLED: "true"
PROMETHEUS_EXPORTER_HOST: dawarich_app
PROMETHEUS_EXPORTER_PORT: 9394
SELF_HOSTED: "true"
STORE_GEODATA: "true"
logging:
driver: "json-file"
options:
max-size: "100m"
max-file: "5"
healthcheck:
test: [ "CMD-SHELL", "bundle exec sidekiqmon processes | grep $${HOSTNAME}" ]
interval: 10s
retries: 30
start_period: 30s
timeout: 10s
depends_on:
dawarich_db:
condition: service_healthy
dawarich_redis:
condition: service_healthy
dawarich_app:
condition: service_healthy
deploy:
resources:
limits:
cpus: '0.50' # Limit CPU usage to 50% of one core
memory: '4G' # Limit memory usage to 4GB
volumes:
dawarich_db_data:
dawarich_shared:
dawarich_public:
dawarich_watched:
dawarich_storage:
Edit /mnt/dietpi_userdata/homeassistant/configuration.yaml to add the Dawarich configuration:
# Loads default set of integrations. Do not remove.
default_config:
# Load frontend themes from the themes folder
frontend:
themes: !include_dir_merge_named themes
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
# dawarich
rest_command:
push_position_to_dawarich:
url: http://localhost:3000/api/v1/overland/batches?api_key=<APIKEY>
method: POST
content_type: 'application/json'
payload: >
{
"locations": [
{
"type": "Feature",
"geometry":{
"type": "Point",
"coordinates":[
{{ longitude }},
{{ latitude }}
]
},
"properties":{
"api_key": "<APIKEY>",
"timestamp": "{{ now().isoformat() }}",
"altitude": {{ altitude }},
"speed": {{ speed }},
"horizontal_accuracy": 0,
"vertical_accuracy": {{ vertical_accuracy }},
"motion": [],
"pauses": false,
"activity": "{{ activity }}",
"desired_accuracy": 0,
"deferred": 0,
"significant_change": "unknown",
"locations_in_payload": 1,
"device_id": "{{device_id}}",
"wifi": "unknown",
"battery_state": "unknown",
"battery_level": {{ battery_level }}
}
}
]
}
alias: Push pixel6 position to dawarich
description: ""
trigger:
- platform: state
entity_id:
- device_tracker.pixel_6
attribute: longitude
- platform: state
entity_id:
- device_tracker.pixel_6
attribute: latitude
condition: []
action:
- service: rest_command.push_position_to_dawarich
data:
latitude: "{{ state_attr('device_tracker.pixel_6','latitude') }}"
longitude: "{{ state_attr('device_tracker.pixel_6','longitude') }}"
speed: "{{ state_attr('device_tracker.pixel_6','speed') }}"
altitude: "{{ state_attr('device_tracker.pixel_6','altitude') }}"
vertical_accuracy: "{{ state_attr('device_tracker.pixel_6','vertical_accuracy') }}"
activity: "{{ states('sensor.pixel_6_detected_activity') }}"
device_id: pixel7
battery_level: "{{ states('sensor.pixel_6_battery_level') }}"
mode: single
- Go to the Home Assistant dashboard > Settings > Add Integration > Dawarich.
- Enter the following information for the Dawarich integration:
- Host: localhost
- Port: 3000
- Name: Dawarich
- Device Tracker: Select Pixel 6, or whatever device you want to track.
- Use SSL: uncheck
- Verify SSL: uncheck
PicoTTS
On DietPi, install dependencies.
sudo apt install -y libtool build-essential automake autoconf libpopt-dev pkg-config
git clone https://github.com/ihuguet/picotts
cd picotts/pico
./autogen.sh
./configure
make
sudo make install
Test.
pico2wave -l en-US -w test.wav "Hello. How may I assist?"
Copy the test.wav file to a desktop machine and play it in a media player.
Configure PicoTTS in /mnt/dietpi_userdata/homeassistant/configuration.yaml:
tts:
- platform: picotts
language: "en-US"
Org Mode to Hugo
Text formatting
Org Mode | Result |
---|---|
`*Bold text*` | Bold text |
`/Italic text/` | Italic text |
`_Underline_` | Underline text |
`=Verbatim=` | Verbatim text |
`+Strike-through+` | Strike-through text |
Adding images
`#+ATTR_HTML:` :width 100% :height 100% :class border-2 :alt Description :title Image title
`[[./path/to/image.jpg]]`
Adding metadata
`#+TITLE:` Your title
`#+DATE:` 2024-10-22
`#+TAGS[]:` hugo org-mode writing
`#+DRAFT:` false
`#+AUTHOR:` hyperreal
`#+SLUG:` your-title
`#+DESCRIPTION:` Description
`#+CATEGORIES:` blogging
`#+IMAGES[]:` /images/image.jpg
`#+WEIGHT:` 10
`#+LASTMOD:` 2024-10-23
`#+KEYWORDS[]:` hugo org-mode tutorial
`#+LAYOUT:` post
`#+SERIES:` Techne
`#+SUMMARY:` Summary
`#+TYPE:` Tutorial
`* Main content`
IMPORTANT Note: tags must not contain spaces. Use underscores or en-dashes.
Install Python command line client
pipx install internetarchive
Use Python client to download torrent files from a given collection
Ensure qBittorrent's "Automatically add torrents from" > Monitored Folder is set to /mnt/torrent_files and the Override save path is set to Default save path.
Get itemlist from collection
ia search --itemlist "collection:bbsmagazine" | tee bbsmagazine.txt
Download torrent files from each item using parallel
cat bbsmagazine.txt | parallel 'ia download --format "Archive BitTorrent" --destdir=/mnt/torrent_files {}'
Move .torrent files from their directories to /mnt/torrent_files
find /mnt/torrent_files -type f -name "*.torrent" -exec mv {} /mnt/torrent_files \;
IMPORTANT Note: .torrent files will be removed from /mnt/torrent_files by qBittorrent once they are added to the instance.
Remove empty directories
find /mnt/torrent_files -maxdepth 1 -mindepth 1 -type d -delete
Disable core dumps in Linux
limits.conf and sysctl
Edit /etc/security/limits.conf and append the following lines:
* hard core 0
* soft core 0
Edit /etc/sysctl.d/9999-disable-core-dump.conf:
fs.suid_dumpable=0
kernel.core_pattern=|/bin/false
sudo sysctl -p /etc/sysctl.d/9999-disable-core-dump.conf
`/bin/false` exits with a failure status code. The default value for `kernel.core_pattern` is `core` on a Debian server and `|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h` on a Fedora desktop. The command named in `kernel.core_pattern` is executed when a process crashes; in the case of `|/bin/false`, nothing happens, and the core dump is discarded.
`fs.suid_dumpable=0` means any process that has changed privilege levels or is execute-only will not be dumped. Other values include `1` (debug mode: all processes dump core when possible, the current user owns the dump, and no security checks are applied) and `2` (suidsafe mode: any program that would normally not be dumped is dumped regardless, but only if `kernel.core_pattern` is set to a valid program).
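To verify, check the shell's core-file limit and the live sysctl values; the ulimit change only shows up in sessions started after editing limits.conf:
# Should print 0 in a fresh login session
ulimit -c
# Should show fs.suid_dumpable = 0 and the |/bin/false pattern
sudo sysctl fs.suid_dumpable kernel.core_pattern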
Systemd
sudo mkdir /etc/systemd/coredump.conf.d/
sudo nvim /etc/systemd/coredump.conf.d/custom.conf
[Coredump]
Storage=none
ProcessSizeMax=0
`Storage=none` and `ProcessSizeMax=0` disable all core dump handling except for a log entry under systemd.
sudo systemctl daemon-reload
Edit /etc/systemd/system.conf. Make sure DefaultLimitCORE is commented out:
#DefaultLimitCORE=infinity
sudo systemctl daemon-reexec
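To confirm systemd is no longer storing dumps, list what systemd-coredump knows about; crashes that occur after the change should show a COREFILE value of "none":
coredumpctl list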
Configure SPF and DKIM for SMTP postfix-relay
Source: https://github.com/wader/postfix-relay#spf
- Add remote forwarding for rsyslog.
- Make the DKIM keys persist indefinitely in a volume at `./volumes/postfix-dkim:/etc/opendkim/keys`. `./volumes` is relative to the parent directory of the `docker-compose.yml` file for the Lemmy instance, e.g. `/docker/lemmy/volumes`.
Edit docker-compose.yml:
postfix:
image: mwader/postfix-relay
environment:
- POSTFIX_myhostname=lemmy.hyperreal.coffee
- OPENDKIM_DOMAINS=lemmy.hyperreal.coffee
- RSYSLOG_TO_FILE=yes
- RSYSLOG_TIMESTAMP=yes
- RSYSLOG_REMOTE_HOST=<ip addr of remote logging server>
- RSYSLOG_REMOTE_PORT=514
- RSYSLOG_REMOTE_TEMPLATE=RSYSLOG_ForwardFormat
volumes:
- ./volumes/postfix-dkim:/etc/opendkim/keys
- ./volumes/logs:/var/log
restart: "always"
logging: *default-logging
docker-compose up -d
On the domain registrar, add the following TXT records:
Type | Name | Content |
---|---|---|
TXT | lemmy | "v=spf1 a mx ip4:<ip addr of server> -all" |
TXT | mail._domainkey.lemmy | "v=DKIM1; h=sha256; k=rsa; p=<pubkey>" |
The content of mail._domainkey.lemmy is obtained from the log output of the wader/postfix-relay Docker container.
docker logs lemmy-postfix-1
To test this, allow a few hours for the DNS changes to propagate, then log out of the Lemmy instance and send a password reset request. If the reset confirmation email doesn't go to the spam folder, it works. The email service provider will be able to determine the email is from an authentic email address.
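The DNS records can also be checked directly once they have propagated:
dig +short TXT lemmy.hyperreal.coffee
dig +short TXT mail._domainkey.lemmy.hyperreal.coffee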
Rsyslog forwarding to Promtail and Loki
IMPORTANT Running Loki and Promtail on the same host as Prometheus makes managing the firewall and network routes easier.
This is roughly what our network looks like:
Main Monitoring Node
- Runs Prometheus, Promtail, Loki, and rsyslog.
- Traffic must be allowed through the firewall on TCP port 514. If using Tailscale, ensure the ACLs are set up correctly.
- It has an rsyslog ruleset that catches all forwarded logs through TCP port 514 and relays them to Promtail on TCP port 1514.
- Promtail pushes the logs it receives via TCP port 1514 to Loki, which listens on TCP port 3100.
Regular Node 1
- It has an rsyslog ruleset that forwards logs to the Main Monitoring Node on TCP port 514.
- Is allowed to access TCP port 514 on the Main Monitoring Node.
Regular Node 2
- It has an rsyslog ruleset that forwards logs to the Main Monitoring Node on TCP port 514.
- Is allowed to access TCP port 514 on the Main Monitoring Node.
Install Rsyslog, Promtail, and Loki on the Main Monitoring Node
# Debian-based hosts
sudo apt install -y promtail loki rsyslog
# Fedora-based hosts
sudo dnf install -y promtail loki rsyslog
Edit /etc/promtail/config.yml.
server:
http_listen_port: 9081
grpc_listen_port: 0
positions:
filename: /var/tmp/promtail-syslog-positions.yml
clients:
- url: http://localhost:3100/loki/api/v1/push
scrape_configs:
- job_name: syslog
syslog:
listen_address: 0.0.0.0:1514
labels:
job: syslog
relabel_configs:
- source_labels: [__syslog_message_hostname]
target_label: hostname
- source_labels: [__syslog_message_severity]
target_label: level
- source_labels: [__syslog_message_app_name]
target_label: application
- source_labels: [__syslog_message_facility]
target_label: facility
- source_labels: [__syslog_connection_hostname]
target_label: connection_hostname
Edit /etc/loki/config.yml.
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
common:
instance_addr: 127.0.0.1
path_prefix: /tmp/loki
storage:
filesystem:
chunks_directory: /tmp/loki/chunks
rules_directory: /tmp/loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
query_range:
results_cache:
cache:
embedded_cache:
enabled: true
max_size_mb: 100
schema_config:
configs:
- from: 2020-10-24
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
ruler:
alertmanager_url: http://localhost:9093
Edit /etc/rsyslog.d/00-promtail-relay.conf.
# https://www.rsyslog.com/doc/v8-stable/concepts/multi_ruleset.html#split-local-and-remote-logging
ruleset(name="remote"){
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html
# https://grafana.com/docs/loki/latest/clients/promtail/scraping/#rsyslog-output-configuration
action(type="omfwd" Target="localhost" Port="1514" Protocol="tcp" Template="RSYSLOG_SyslogProtocol23Format" TCP_Framing="octet-counted")
}
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/imudp.html
module(load="imudp")
input(type="imudp" port="514" ruleset="remote")
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/imtcp.html
module(load="imtcp")
input(type="imtcp" port="514" ruleset="remote")
Ensure the firewall allows TCP traffic to port 514.
sudo firewall-cmd --permanent --zone=tailnet --add-port=514/tcp
sudo firewall-cmd --reload
Restart and/or enable the services.
sudo systemctl enable --now promtail.service
sudo systemctl enable --now loki.service
sudo systemctl enable --now rsyslog.service
Install and configure Rsyslog on Regular Node 1 and Regular Node 2
# Debian
sudo apt install -y rsyslog
# Fedora
sudo dnf install -y rsyslog
Enable and start the rsyslog service.
sudo systemctl enable --now rsyslog
Edit /etc/rsyslog.conf.
###############
#### RULES ####
###############
# Forward to Main Monitoring Node
*.* action(type="omfwd" target="<IP addr of Main Monitoring Node>" port="514" protocol="tcp"
action.resumeRetryCount="100"
queue.type="linkedList" queue.size="10000")
Restart the rsyslog service.
sudo systemctl restart rsyslog.service
In the Grafana UI, you should now be able to add Loki as a data source. Then go to Home > Explore > loki and start querying logs from Regular Node 1 and Regular Node 2.
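For an end-to-end check without Grafana, emit a test message on a regular node and query Loki's HTTP API on the monitoring node. The LogQL selector matches the job label set in the Promtail scrape config above:
# On Regular Node 1 or 2: rsyslog forwards this to the monitoring node
logger "rsyslog-to-loki relay test"
# On the Main Monitoring Node
curl -G -s http://localhost:3100/loki/api/v1/query --data-urlencode 'query={job="syslog"}'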
Add disk to LVM volume
Create a new physical volume on the new disk:
sudo pvcreate /dev/vdb
sudo lvmdiskscan -l
Add the newly created physical volume (/dev/vdb) to an existing volume group:
sudo vgextend almalinux /dev/vdb
Extend /dev/almalinux/root to use all remaining free space (here, a total of 1000GB):
sudo lvm lvextend -l +100%FREE /dev/almalinux/root
Grow the filesystem of the root volume:
# ext4
sudo resize2fs -p /dev/mapper/almalinux-root
# xfs
sudo xfs_growfs /
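Verify the new layout afterwards:
sudo pvs    # /dev/vdb should be listed as a physical volume
sudo vgs    # the almalinux volume group should show the added capacity
sudo lvs    # the root logical volume should have grown
df -h /     # the filesystem should reflect the new size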
Full-text search with Elasticsearch
Install Elasticsearch
sudo apt install -y openjdk-17-jre-headless
sudo wget -O /usr/share/keyrings/elasticsearch.asc https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "deb [signed-by=/usr/share/keyrings/elasticsearch.asc] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install -y elasticsearch
Edit /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: true
discovery.type: single-node
Create passwords for built-in users
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch
In a separate shell:
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Copy the generated password for the elastic user.
Create custom role for Mastodon to connect
As the mastodon user on the host:
curl -X POST -u elastic:admin_password "localhost:9200/_security/role/mastodon_full_access?pretty" -H 'Content-Type: application/json' -d'
{
"cluster": ["monitor"],
"indices": [{
"names": ["*"],
"privileges": ["read", "monitor", "write", "manage"]
}]
}
'
Create a user for Mastodon and assign it the custom role
curl -X POST -u elastic:admin_password "localhost:9200/_security/user/mastodon?pretty" -H 'Content-Type: application/json' -d'
{
"password": "l0ng-r4nd0m-p@ssw0rd",
"roles": ["mastodon_full_access"]
}
'
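Optionally verify the new credentials before wiring up Mastodon. The cluster health endpoint only needs the monitor privilege granted above:
curl -u mastodon:l0ng-r4nd0m-p@ssw0rd "localhost:9200/_cluster/health?pretty"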
Edit .env.production
ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200
ES_PRESET=single_node_cluster
ES_USER=mastodon
ES_PASS=l0ng-r4nd0m-p@ssw0rd
Populate the indices
systemctl restart mastodon-sidekiq
systemctl reload mastodon-web
su - mastodon
cd live
RAILS_ENV=production bin/tootctl search deploy
S3-compatible object storage with MinIO
- Install MinIO
- Set the region for this instance to homelab
- Create 'mastodata' bucket
- Setup Tailscale
MinIO API endpoint: tailnet_ip_addr:9000
Caddy reverse proxy config
IMPORTANT Ensure DNS resolves for assets.hyperreal.coffee
assets.hyperreal.coffee {
rewrite * /mastodata{path}
reverse_proxy http://<tailnet_ip_addr>:9000 {
header_up Host {upstream_hostport}
}
}
fedi.hyperreal.coffee {
@local {
file
not path /
}
@local_media {
path_regexp /system/(.*)
}
redir @local_media https://assets.hyperreal.coffee/{http.regexp.1} permanent
...remainder of config
}
Set custom policy on mastodata bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mastodata/*"
}
]
}
Create mastodon-readwrite policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::mastodata/*"
}
]
}
Setup .env.production
S3_ENABLED=true
S3_BUCKET=mastodata
AWS_ACCESS_KEY_ID=<access key>
AWS_SECRET_ACCESS_KEY=<secret access key>
S3_REGION=homelab
S3_PROTOCOL=http
S3_ENDPOINT=http://<tailnet_ip_addr>:9000
S3_FORCE_SINGLE_REQUEST=true
S3_ALIAS_HOST=assets.hyperreal.coffee
Restart Caddy and Mastodon services
sudo systemctl restart caddy.service mastodon-web.service mastodon-streaming.service mastodon-sidekiq.service
Prometheus metrics with statsd_exporter
On the host running Mastodon, download the latest statsd_exporter binary from the GitHub releases page.
tar xzvf statsd_exporter*.tar.gz
cd statsd_exporter*/
sudo cp -v statsd_exporter /usr/local/bin/
Install the statsd mapping file from IPng Networks:
curl -OL https://ipng.ch/assets/mastodon/statsd-mapping.yaml
sudo cp -v statsd-mapping.yaml /etc/prometheus/
Create /etc/default/statsd_exporter.
ARGS="--statsd.mapping-config=/etc/prometheus/statsd-mapping.yaml"
Create /etc/systemd/system/statsd_exporter.service.
[Unit]
Description=Statsd exporter
After=network.target
[Service]
Restart=always
User=prometheus
EnvironmentFile=/etc/default/statsd_exporter
ExecStart=/usr/local/bin/statsd_exporter $ARGS
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=20s
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Ensure port 9102 is open in Firewalld's internal zone.
sudo firewall-cmd --permanent --zone=internal --add-port=9102/tcp
sudo firewall-cmd --reload
Edit /home/mastodon/live/.env.production.
STATSD_ADDR=localhost:9125
Start and restart the daemons.
sudo systemctl daemon-reload
sudo systemctl start statsd_exporter.service
sudo systemctl restart mastodon-sidekiq.service mastodon-streaming.service mastodon-web.service
If using Tailscale, ensure the host running Prometheus can access port 9102 on the host running Mastodon.
On the host running Prometheus, add the statsd config.
- job_name: "stats_exporter"
static_configs:
- targets: ["hyperreal:9102"]
Restart Prometheus.
sudo systemctl restart prometheus.service
To import the Grafana dashboard, use ID 17492.
Source: How to set up monitoring for your Mastodon instance with Prometheus and Grafana
Bucket replication to remote MinIO instance
Use mcli to create aliases for the local and remote instances.
mcli alias set nas-local http://localhost:9000 username password
mcli alias set nas-remote http://ip.addr:9000 username password
Add a replication rule on the source bucket so that nas-local replicates all operations to nas-remote (for an active-active setup, a matching rule is needed in the reverse direction as well).
mcli replicate add nas-local/sourcebucket --remote-bucket nas-remote/targetbucket --priority 1
Show replication status.
mcli replicate status nas-local/sourcebucket
FreeBSD setup
Create a ZFS dataset to store MinIO data:
sudo zfs create naspool/minio_data
Install the MinIO package:
sudo pkg install -y minio
Configure the MinIO daemon settings in /etc/rc.conf:
minio_enable="YES"
minio_disks="/naspool/minio_data"
Set the required permissions on /naspool/minio_data:
sudo chown -R minio:minio /naspool/minio_data
sudo chmod u+rxw /naspool/minio_data
Start the MinIO daemon:
sudo service minio start
Check the logs for any important info:
sudo grep "minio" /var/log/messages
Browse to MinIO web console at http://100.64.0.2:9000.
Disable IPv6 on Debian
Edit /etc/sysctl.conf.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Apply the changes.
sudo sysctl -p
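Verify that IPv6 is off:
# Should print 1
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
# Should print nothing
ip a | grep inet6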
Disable IPv6 on Fedora
sudo grubby --args=ipv6.disable=1 --update-kernel=ALL
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Rename network interface when using systemd-networkd
Create a udev rule at /etc/udev/rules.d/70-my-net-names.rules:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="your-mac-address", NAME="wlan0"
Using 70-my-net-names.rules as the filename ensures the rule is ordered before /usr/lib/udev/rules.d/80-net-setup-link.rules.
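To find the MAC address for ATTR{address} and apply the rule without a reboot, something like the following should work (a reboot is the surer path if the interface is already up):
ip link
sudo udevadm control --reload
sudo udevadm trigger --subsystem-match=net --action=add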
Connecting to WiFi network using systemd-networkd and wpa_supplicant
Create a file at /etc/wpa_supplicant/wpa_supplicant-wlan0.conf. Use wpa_passphrase to hash the passphrase.
wpa_passphrase your-ssid your-ssid-passphrase | sudo tee -a /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
Edit /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
:
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
update_config=1
network={
ssid="your-ssid"
psk="your-hashed-ssid-passphrase"
key_mgmt=WPA-PSK
proto=WPA2
scan_ssid=1
}
Create a file at /etc/systemd/network/25-wlan.network:
[Match]
Name=wlan0
[Network]
DHCP=ipv4
Enable and start the network services:
sudo systemctl enable --now wpa_supplicant@wlan0.service
sudo systemctl restart systemd-networkd.service
sudo systemctl restart wpa_supplicant@wlan0.service
Check the interface status:
ip a
Use tailnet DNS and prevent DNS leaks
After the above WiFi interface is set up, disable IPv6 as per the above sections, and enable the Tailscale service.
sudo systemctl enable --now tailscaled.service
sudo tailscale up
Edit /etc/systemd/network/25-wlan.network again, and add the following contents:
[Match]
Name=wlan0
[Network]
DHCP=ipv4
DNS=100.100.100.100
DNSSEC=allow-downgrade
[DHCPv4]
UseDNS=no
This will tell the wlan0 interface to use Tailscale's MagicDNS, along with DNSSEC if it is available, and not to get the nameservers from the DHCPv4 connection.
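Assuming systemd-resolved is in use, confirm the link-level DNS settings took effect:
# wlan0 should list 100.100.100.100 as its DNS server
resolvectl dns wlan0
resolvectl status wlan0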
Migrating Nextcloud to a new machine
Install dependencies:
sudo apt update
sudo apt dist-upgrade
sudo apt install apache2 mariadb-server libapache2-mod-php php-gd php-mysql php-curl php-mbstring php-intl php-gmp php-bcmath php-xml php-imagick php-zip php-apcu redis-server
Setup the database:
sudo mysql
CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'password';
CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost';
FLUSH PRIVILEGES;
quit;
On the original machine, put Nextcloud into maintenance mode.
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
WARNING Wait 6-7 minutes for the sync clients to register that the server is in maintenance mode before proceeding.
Stop the web server that runs Nextcloud.
sudo systemctl stop apache2.service
Copy over the Nextcloud directory to the new machine:
rsync -aAX /var/www/nextcloud root@new-machine:/var/www
Copy the PHP configurations to the new machine:
rsync -aAX /etc/php/8.2/apache2/ root@new-machine:/etc/php/8.2/apache2
rsync -aAX /etc/php/8.2/cli/ root@new-machine:/etc/php/8.2/cli
WARNING The PHP version on the new machine must match that from the original machine.
On the new machine, ensure /etc/php/8.2/mods-available/apcu.ini is configured correctly:
extension=apcu.so
apc.enable_cli=1
On the new machine, ensure permissions are set correctly on /var/www/nextcloud:
sudo chown -R www-data:www-data /var/www/nextcloud
On the original machine, dump the database:
mysqldump --single-transaction --default-character-set=utf8mb4 -h localhost -u nextcloud -p nextcloud > nextcloud-sqlbkp.bak
Copy the database backup to the new machine:
rsync -aAX nextcloud-sqlbkp.bak root@new-machine:/root
On the new machine, import the database backup:
mysql -h localhost -u nextcloud -p -e "DROP DATABASE nextcloud"
mysql -h localhost -u nextcloud -p -e "CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci"
mysql -h localhost -u nextcloud -p nextcloud < /root/nextcloud-sqlbkp.bak
On the new machine, ensure redis-server service is started:
sudo systemctl enable --now redis-server.service
On the new machine, run the following command to update the data-fingerprint:
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:data-fingerprint
WARNING Ensure DNS records are changed to the new machine and the web server is running before taking Nextcloud out of maintenance mode.
On the new machine, take Nextcloud out of maintenance mode:
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --off
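A quick sanity check that the migrated instance is healthy:
cd /var/www/nextcloud
sudo -u www-data php occ status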
Setup NFS server on Debian
sudo apt install -y nfs-kernel-server nfs-common
Configure NFSv4 in /etc/default/nfs-common:
NEED_STATD="no"
NEED_IDMAPD="yes"
Configure NFSv4 in /etc/default/nfs-kernel-server. Disable NFSv2 and NFSv3:
RPCNFSDOPTS="-N 2 -N 3"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
sudo systemctl restart nfs-server
Configure FirewallD:
sudo firewall-cmd --zone=public --permanent --add-service=nfs
sudo firewall-cmd --reload
Setup pseudo filesystem and exports:
sudo mkdir /shared
sudo chown -R nobody:nogroup /shared
Add the exported directory to /etc/exports:
/shared <ip address of client>(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)
Export the shares defined in /etc/exports:
sudo exportfs -a
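Verify the export list:
sudo exportfs -v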
Setup NFS client on Debian
sudo apt install -y nfs-common
Create shared directory:
sudo mkdir -p /mnt/shared
Mount NFS exports:
sudo mount.nfs4 <ip address of server>:/ /mnt/shared
IMPORTANT Note that `<server ip>:/` is relative to the exported directory. So `/mnt/shared` on the client is `/shared` on the server. If you try to mount with `mount -t nfs <server ip>:/shared /mnt/shared`, you will get a "no such file or directory" error.
/etc/fstab entry:
<ip address of server>:/ /mnt/shared nfs4 soft,intr,rsize=8192,wsize=8192
sudo systemctl daemon-reload
sudo mount -av
Setup NFS server on FreeBSD
Edit /etc/rc.conf.
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
rpcbind_enable="YES"
mountd_flags="-r"
mountd_enable="YES"
Edit /etc/exports.
/data1 -alldirs -mapall=user1 host1 host2 host3
/data2 -alldirs -maproot=user2 host2
Start the services.
sudo service rpcbind start
sudo service nfsd start
sudo service mountd start
After making changes to the exports file, signal mountd to reread it for the changes to take effect:
kill -HUP `cat /var/run/mountd.pid`
Setup NFS client on FreeBSD
Edit /etc/rc.conf.
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Mount NFS share on client with systemd
Create a file at /etc/systemd/system/mnt-backup.mount.
[Unit]
Description=borgbackup NFS share from FreeBSD
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target
[Mount]
What=10.0.0.119:/coffeeNAS/borgbackup/repositories
Where=/mnt/backup
Type=nfs
Options=defaults,vers=3
[Install]
WantedBy=multi-user.target
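Note that systemd requires the unit filename to match the mount point (mnt-backup.mount for /mnt/backup). Enable the mount:
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-backup.mount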
Nmap cheat sheet
Source: https://www.stationx.net/nmap-cheat-sheet/
Target specification
Example | Description |
---|---|
nmap 192.168.1.1 | Scan a single IP |
nmap 192.168.1.1 192.168.2.1 | Scan specific IPs |
nmap 192.168.1.1-254 | Scan a range |
nmap scanme.nmap.org | Scan a domain |
nmap 192.168.1.0/24 | Scan using CIDR notation |
nmap -iL targets.txt | Scan targets from a file |
nmap -iR 100 | Scan 100 random hosts |
nmap --exclude 192.168.1.1 | Exclude listed hosts |
Nmap scan techniques
Example | Description |
---|---|
nmap 192.168.1.1 -sS | TCP SYN port scan (default) |
nmap 192.168.1.1 -sT | TCP connect port scan |
nmap 192.168.1.1 -sU | UDP port scan |
nmap 192.168.1.1 -sA | TCP ACK port scan |
nmap 192.168.1.1 -sW | TCP Window port scan |
nmap 192.168.1.1 -sM | TCP Maimon port scan |
Host discovery
Example | Description |
---|---|
nmap 192.168.1.1-3 -sL | No scan. List targets only. |
nmap 192.168.1.1/24 -sn | Disable port scanning. Host discovery only. |
nmap 192.168.1.1-5 -Pn | Disable host discovery. Port scan only. |
nmap 192.168.1.1-5 -PS22-25,80 | TCP SYN discovery on ports 22-25, 80. (Port 80 by default) |
nmap 192.168.1.1-5 -PA22-25,80 | TCP ACK discovery on ports 22-25, 80. (Port 80 by default) |
nmap 192.168.1.1-5 -PU53 | UDP discovery on port 53. (Port 40125 by default) |
nmap 192.168.1.1-1/24 -PR | ARP discovery on local network |
nmap 192.168.1.1 -n | Never do DNS resolution |
Port specification
Example | Description |
---|---|
nmap 192.168.1.1 -p 21 | Port scan for port 21 |
nmap 192.168.1.1 -p 21-100 | Port scan for range 21-100 |
nmap 192.168.1.1 -p U:53,T:21-25,80 | Port scan multiple TCP and UDP ports |
nmap 192.168.1.1 -p- | Port scan all ports |
nmap 192.168.1.1 -p http,https | Port scan from service name |
nmap 192.168.1.1 -F | Fast port scan (100 ports) |
nmap 192.168.1.1 --top-ports 2000 | Port scan the top 2000 ports |
nmap 192.168.1.1 -p-65535 | Leaving off initial port in range makes the scan start at port 1. |
nmap 192.168.1.1 -p0- | Leaving off the end port in range makes the scan go through to port 65535. |
Service and version detection
Example | Description |
---|---|
nmap 192.168.1.1 -sV | Attempts to determine version of the service running on port. |
nmap 192.168.1.1 -sV --version-intensity 8 | Intensity level 0-9. Higher number increases possibility of correctness. |
nmap 192.168.1.1 -sV --version-light | Enable light mode. Lower possibility of correctness. Faster. |
nmap 192.168.1.1 -sV --version-all | Enable intensity level 9. Higher possibility of correctness. Slower. |
nmap 192.168.1.1 -A | Enables OS detection, version detection, script scanning, and traceroute. |
OS detection
Example | Description |
---|---|
nmap 192.168.1.1 -O | Remote OS detection using TCP/IP stack fingerprinting |
nmap 192.168.1.1 -O --osscan-limit | If at least one open and one closed TCP port are not found it will not try OS detection against host. |
nmap 192.168.1.1 -O --osscan-guess | Makes Nmap guess more aggressively. |
nmap 192.168.1.1 -O --max-os-tries 1 | Set the maximum number of OS detection tries |
nmap 192.168.1.1 -A | Enables OS detection, version detection, script scanning, and traceroute. |
Timing and performance
Example | Description |
---|---|
nmap 192.168.1.1 -T0 | Paranoid (0) IDS evasion |
nmap 192.168.1.1 -T1 | Sneaky (1) IDS evasion |
nmap 192.168.1.1 -T2 | Polite (2) slows down the scan to use less bandwidth and use less target machine resources. |
nmap 192.168.1.1 -T3 | Normal (3) which is default speed |
nmap 192.168.1.1 -T4 | Aggressive (4) speeds scans. Assumes you are on a reasonably fast and reliable network. |
nmap 192.168.1.1 -T5 | Insane (5) speeds scan. Assumes you are on an extraordinarily fast network. |
Timing and performances switches
Example | Description |
---|---|
--host-timeout 1s; --host-timeout 4m; --host-timeout 2h; | Give up on target after this long. |
--min-rtt-timeout/--max-rtt-timeout/--initial-rtt-timeout 4m; | Specifies probe round trip time. |
--min-hostgroup/--max-hostgroup 50 | Parallel host scan group sizes |
--min-parallelism/--max-parallelism 10 | Probe parallelization |
--max-retries 3 | Specify the max number of port scan probe retransmissions. |
--min-rate 100 | Send packets no slower than 100 per second |
--max-rate 100 | Send packets no faster than 100 per second |
NSE scripts
Example | Description |
---|---|
nmap 192.168.1.1 -sC | Scan with default NSE scripts. Useful and safe. |
nmap 192.168.1.1 --script default | Scan with default NSE scripts. |
nmap 192.168.1.1 --script=banner | Scan with single script. Example banner. |
nmap 192.168.1.1 --script=http* | Scan with a wildcard. Example http. |
nmap 192.168.1.1 --script=http,banner | Scan with two scripts. http and banner. |
nmap 192.168.1.1 --script "not intrusive" | Scan default, but remove intrusive scripts |
nmap --script snmp-sysdescr --script-args snmpcommunity=admin 192.168.1.1 | NSE script with arguments |
Useful NSE script examples
Example | Description |
---|---|
nmap -Pn --script=http-sitemap-generator scanme.nmap.org | http site map generator |
nmap -n -Pn -p 80 --open -sV -vvv --script banner,http-title -iR 1000 | Fast search for random web servers |
nmap -Pn --script=dns-brute domain.com | Brute forces DNS hostnames guessing subdomains |
nmap -n -Pn -vv -O -sV --script smb-enum*,smb-ls,smb-mbenum,smb-os-discovery,smb-s*,smb-vuln*,smbv2* -vv 192.168.1.1 | Safe SMB scripts to run |
nmap --script whois* domain.com | Whois query |
nmap -p80 --script http-unsafe-output-escaping scanme.nmap.org | Detect cross site scripting vulnerabilities |
nmap -p80 --script http-sql-injection scanme.nmap.org | Check for SQL injections |
Firewall/IDS Evasion and spoofing
Example | Description |
---|---|
nmap 192.168.1.1 -f | Requested scan (including ping scans) use tiny fragmented IP packets. Harder for packet filters. |
nmap 192.168.1.1 --mtu 32 | Set your own offset size |
nmap -D 192.168.1.101,192.168.1.102,192.168.1.103 | Send scans from spoofed IPs |
nmap -D decoy-ip1,decoy-ip2,your-own-ip | Same as above |
nmap -S www.microsoft.com www.facebook.com | Scan Facebook from Microsoft (-e eth0 -Pn may be required) |
nmap -g 53 192.168.1.1 | Use given source port number |
nmap --proxies http://192.168.1.1:8080,http://192.168.1.2:8080 192.168.1.1 | Relay connections through HTTP/SOCKS4 proxies |
nmap --data-length 200 192.168.1.1 | Appends random data to sent packets |
Output
Example | Description |
---|---|
nmap 192.168.1.1 -oN normal.file | Normal output to the file normal.file |
nmap 192.168.1.1 -oX xml.file | XML output to the file xml.file |
nmap 192.168.1.1 -oG grep.file | Grepable output to the file grep.file |
nmap 192.168.1.1 -oA results | Output in the three major formats at once |
nmap 192.168.1.1 -oG - | Grepable output to screen. -oN, -oX also usable |
nmap 192.168.1.1 -oN file.txt --append-output | Append a scan to a previous scan file |
nmap 192.168.1.1 -v | Increase the verbosity level (use -vv or more) |
nmap 192.168.1.1 -d | Increase debugging level (use -dd or more) |
nmap 192.168.1.1 --reason | Display the reason a port is in a particular state, same output as -vv |
nmap 192.168.1.1 --open | Only show open (or possibly open) ports |
nmap 192.168.1.1 -T4 --packet-trace | Show all packets sent and received |
nmap --iflist | Shows the host interfaces and routes |
nmap --resume results.file | Resume a scan from results.file |
Helpful Nmap output examples
Example | Description |
---|---|
nmap -p80 -sV -oG - --open 192.168.1.1/24 | grep open | Scan for web servers and grep to show which IPs are running web servers |
nmap -iR 10 -n -oX out.xml | grep "Nmap" | cut -d " " -f5 > live-hosts.txt | Generate a list of the IPs of live hosts |
nmap -iR 10 -n -oX out2.xml | grep "Nmap" | cut -d " " -f5 >> live-hosts.txt | Append IP to the list of live hosts |
ndiff scan.xml scan2.xml | Compare the output of two scan results |
xsltproc nmap.xml -o nmap.html | Convert nmap xml files to html files |
grep "open" results.nmap | sed -r 's/ +/ /g' | sort | uniq -c | sort -rn | less | Reverse sorted list of how often ports turn up |
Other useful Nmap commands
Example | Description |
---|---|
nmap -iR 10 -PS22-25,80,113,1050,35000 -v -sn | Discovery only on ports X, no port scan |
nmap 192.168.1.1-1/24 -PR -sn -vv | ARP discovery only on local network, no port scan |
nmap -iR 10 -sn --traceroute | Traceroute to random targets, no port scan |
nmap 192.168.1.1-50 -sL --dns-server 192.168.1.1 | Query the internal DNS for hosts, list targets only |
nmap 192.168.1.1 --packet-trace | Show the details of the packets that are sent and received during a scan and capture the traffic. |
Disable blinky LEDs
Edit /etc/udev/rules.d/led_control.rules:
SUBSYSTEM=="leds", KERNEL=="blue_led", ACTION=="add", ATTR{trigger}="none"
SUBSYSTEM=="leds", KERNEL=="green_led", ACTION=="add", ATTR{trigger}="none"
SUBSYSTEM=="leds", KERNEL=="mmc0::", ACTION=="add", ATTR{trigger}="none"
Reboot the system.
Fix GUI issues with KDE Plasma dark theme
mkdir ~/.config-pt
cd ~/.config
cp -rf dconf gtk-3.0 gtk-4.0 xsettingsd ~/.config-pt
- Right-click on Menu button.
- Click Edit Applications.
- Select Packet Tracer.
- Add XDG_CONFIG_HOME=/home/jas/.config-pt to Environment variables.
- Save.
Source. Thanks, u/AtomHeartSon!
Pulling files from remote server with rsync
To transfer just the files:
ssh user@remote -- find /path/to/parent/directory -type f | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
To transfer the entire directory:
echo "/path/to/parent/directory" | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
Pushing files to remote server with rsync
To transfer just the files:
find /path/to/local/directory -type f | parallel -v -j16 -X rsync -aAXP /path/to/local/directory/{} user@remote:/path/to/dest/dir
Running the same command on multiple remote hosts
parallel --tag --nonall -S remote0,remote1,remote2 uptime
Install Pixelfed on Debian (Bookworm)
Prerequisites
Install dependencies.
apt install -y php-bcmath php-curl exif php-gd php8.2-common php-intl php-json php-mbstring libcurl4-openssl-dev php-redis php-tokenizer php-xml php-zip php-pgsql php-fpm composer
Set the following upload limits for PHP processes in /etc/php/8.2/fpm/php.ini.
post_max_size = 2G
file_uploads = On
upload_max_filesize = 2G
max_file_uploads = 20
max_execution_time = 1000
Create the PostgreSQL database:
sudo -u postgres psql
CREATE USER pixelfed CREATEDB;
CREATE DATABASE pixelfed;
GRANT ALL PRIVILEGES ON DATABASE pixelfed TO pixelfed;
\q
Create dedicated pixelfed user.
useradd -rU -s /bin/bash pixelfed
Configure PHP-FPM pool and socket.
cd /etc/php/8.2/fpm/pool.d/
cp www.conf pixelfed.conf
Edit /etc/php/8.2/fpm/pool.d/pixelfed.conf.
; use the username of the app-user as the pool name, e.g. pixelfed
[pixelfed]
user = pixelfed
group = pixelfed
; to use a tcp socket, e.g. if running php-fpm on a different machine than your app:
; (note that the port 9001 is used, since php-fpm defaults to running on port 9000;)
; (however, the port can be whatever you want)
; listen = 127.0.0.1:9001;
; but it's better to use a socket if you're running locally on the same machine:
listen = /run/php-fpm/pixelfed.sock
listen.owner = caddy
listen.group = caddy
listen.mode = 0660
; [...]
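Restart PHP-FPM so the new pool is loaded:
sudo systemctl restart php8.2-fpm.service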
Installation
Setup Pixelfed files
Download the source from GitHub.
cd /usr/share/caddy
git clone -b dev https://github.com/pixelfed/pixelfed.git pixelfed
Set correct permissions.
cd pixelfed
chown -R pixelfed:pixelfed .
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
Become the pixelfed user.
su - pixelfed
Initialize PHP dependencies.
composer update
composer install --no-ansi --no-interaction --optimize-autoloader
Configure environment variables
cp .env.example .env
Edit /usr/share/caddy/pixelfed/.env.
APP_NAME="hyperreal's Pixelfed"
APP_DEBUG="false"
APP_URL="https://pixelfed.hyperreal.coffee"
APP_DOMAIN="pixelfed.hyperreal.coffee"
ADMIN_DOMAIN="pixelfed.hyperreal.coffee"
SESSION_DOMAIN="pixelfed.hyperreal.coffee"
DB_CONNECTION=pgsql
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
DB_PASSWORD=<password>
REDIS_HOST=localhost
REDIS_PORT=6379
MAIL_FROM_ADDRESS=onboarding@resend.dev
MAIL_FROM_NAME=Pixelfed
MAIL_ENCRYPTION=tls
MAIL_DRIVER=smtp
MAIL_HOST=smtp.resend.com
MAIL_PORT=465
MAIL_USERNAME=resend
MAIL_PASSWORD=<resend API key>
ACTIVITY_PUB=true
AP_REMOTE_FOLLOW=true
Setting up services
These commands should only be run one time.
php artisan key:generate
Link the storage/ directory to the application.
php artisan storage:link
Run database migrations.
php artisan migrate --force
IMPORTANT If the above command fails due to insufficient privileges, then the pixelfed database user needs permission to create tables in the public schema. When we created the database, we ran `GRANT ALL PRIVILEGES ON DATABASE pixelfed TO pixelfed;` in the psql shell. This granted the pixelfed database user privileges on the database itself, not on things within the database. To fix this, the pixelfed database user needs to own the database and everything within it, so go back to the psql shell and run `ALTER DATABASE pixelfed OWNER TO pixelfed;`.
To enable ActivityPub federation:
php artisan instance:actor
To have routes cached, run the following commands now, and whenever the source code changes or if you change routes.
php artisan route:cache
php artisan view:cache
Run this command whenever you change the .env file for the changes to take effect.
php artisan config:cache
Use Laravel Horizon for job queueing.
php artisan horizon:install
php artisan horizon:publish
Create a systemd service unit for Pixelfed task queueing.
[Unit]
Description=Pixelfed task queueing via Laravel Horizon
After=network.target
Requires=postgresql
Requires=php8.2-fpm
Requires=redis-server
Requires=caddy
[Service]
Type=simple
ExecStart=/usr/bin/php /usr/share/caddy/pixelfed/artisan horizon
User=pixelfed
Restart=on-failure
[Install]
WantedBy=multi-user.target
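Assuming the unit above is saved as /etc/systemd/system/pixelfed.service (these notes don't name the file), enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now pixelfed.service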
Use Cron to schedule periodic tasks. As the pixelfed user, run crontab -e.
* * * * * /usr/bin/php /usr/share/caddy/pixelfed/artisan schedule:run >> /dev/null 2>&1
Create a Caddyfile that translates HTTP web requests to PHP workers.
pixelfed.hyperreal.coffee {
root * /usr/share/caddy/pixelfed/public
header {
X-Frame-Options "SAMEORIGIN"
X-XSS-Protection "1; mode=block"
X-Content-Type-Options "nosniff"
}
php_fastcgi unix//run/php-fpm/pixelfed.sock
file_server
}
Updating Pixelfed
sudo su - pixelfed
cd /usr/share/caddy/pixelfed
git reset --hard
git pull origin dev
composer install
php artisan config:cache
php artisan route:cache
php artisan migrate --force
Change password for user
sudo -u user_name psql db_name
ALTER USER user_name WITH PASSWORD 'new_password';
Update password auth method to SCRAM
Edit /etc/postgresql/16/main/postgresql.conf
:
password_encryption = scram-sha-256
Restart postgresql.service:
sudo systemctl restart postgresql.service
At this point, any services using the old MD5 auth method will fail to connect to their PostgreSQL databases.
Update the settings in /etc/postgresql/16/main/pg_hba.conf:
TYPE DATABASE USER ADDRESS METHOD
local all mastodon scram-sha-256
local all synapse_user scram-sha-256
Enter a psql shell and determine who needs to upgrade their auth method:
SELECT rolname, rolpassword ~ '^SCRAM-SHA-256\$' AS has_upgraded FROM pg_authid WHERE rolcanlogin;
For each role where has_upgraded is false, reset its password so it is re-hashed with SCRAM:
\password username
Restart postgresql.service and all services using a PostgreSQL database:
sudo systemctl restart postgresql.service
sudo systemctl restart mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service
sudo systemctl restart matrix-synapse.service
Install and configure Node Exporter on each client using Ansible
Install the prometheus.prometheus collection from Ansible Galaxy.
ansible-galaxy collection install prometheus.prometheus
Ensure you have an inventory file with the clients you want to set up Node Exporter on.
---
prometheus-clients:
hosts:
host0:
ansible_user: user0
ansible_host: host0 ip address or hostname
ansible_python_interpreter: /usr/bin/python3
host1:
...
host2:
...
Create node_exporter-setup.yml.
---
- hosts: prometheus-clients
tasks:
- name: Import the node_exporter role
import_role:
name: prometheus.prometheus.node_exporter
The default values for the node_exporter role variables should be fine.
Run ansible-playbook.
ansible-playbook -i inventory.yml node_exporter-setup.yml
Node Exporter should now be installed, started, and enabled on each host in the prometheus-clients group in the inventory.
To confirm that statistics are being collected on each host, navigate to http://host_url:9100. A page entitled Node Exporter should be displayed containing a link for Metrics. Click the link and confirm that statistics are being collected.
Note that each node_exporter host must be accessible through the firewall on port 9100. Firewalld can be configured for the internal zone on each host.
sudo firewall-cmd --zone=internal --permanent --add-source=<my_ip_addr>
sudo firewall-cmd --zone=internal --permanent --add-port=9100/tcp
NOTE I have to configure the internal zone on Firewalld to allow traffic from my IP address on ports HTTP, HTTPS, SSH, and 1965 in order to access, for example, my web services on the node_exporter host.
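Equivalently, from a host allowed through the firewall, confirm metrics are being served (host0 here stands for any client from the inventory):
curl -s http://host0:9100/metrics | head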
Install Node Exporter on FreeBSD
As of FreeBSD 14.1-RELEASE, the version of Node Exporter available, v1.6.1, is outdated. To install the latest version, ensure the ports tree is checked out before running the commands below.
sudo cp -v /usr/ports/sysutils/node_exporter/files/node_exporter.in /usr/local/etc/rc.d/node_exporter
sudo chmod +x /usr/local/etc/rc.d/node_exporter
sudo chown root:wheel /usr/local/etc/rc.d/node_exporter
sudo pkg install gmake go
Download the latest release's source code from https://github.com/prometheus/node_exporter. Unpack the tarball.
tar xvf v1.8.2.tar.gz
cd node_exporter-1.8.2
gmake build
sudo mv node_exporter /usr/local/bin/
sudo chown root:wheel /usr/local/bin/node_exporter
sudo sysrc node_exporter_enable="YES"
sudo service node_exporter start
Configure Prometheus to monitor the client nodes
Edit /etc/prometheus/prometheus.yml. My Prometheus configuration looks like this:
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "remote_collector"
scrape_interval: 10s
static_configs:
- targets: ["hyperreal.coffee:9100", "box.moonshadow.dev:9100", "10.0.0.26:9100", "bttracker.nirn.quest:9100"]
The remote_collector job scrapes metrics from each of the hosts running node_exporter. Ensure that port 9100 is open in the firewall, and if it is a public-facing node, ensure that port 9100 can only be accessed from my IP address.
Configure Prometheus to monitor qBittorrent client nodes
For each qBittorrent instance you want to monitor, set up a Docker or Podman container with https://github.com/caseyscarborough/qbittorrent-exporter. The containers will run on the machine running Prometheus so they are accessible at localhost. Let's say I have three qBittorrent instances I want to monitor.
podman run \
--name=qbittorrent-exporter-0 \
-e QBITTORRENT_USERNAME=username0 \
-e QBITTORRENT_PASSWORD=password0 \
-e QBITTORRENT_BASE_URL=http://localhost:8080 \
-p 17871:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
podman run \
--name=qbittorrent-exporter-1 \
-e QBITTORRENT_USERNAME=username1 \
-e QBITTORRENT_PASSWORD=password1 \
-e QBITTORRENT_BASE_URL=https://qbittorrent1.tld \
-p 17872:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
podman run \
--name=qbittorrent-exporter-2 \
-e QBITTORRENT_USERNAME=username2 \
-e QBITTORRENT_PASSWORD=password2 \
-e QBITTORRENT_BASE_URL=https://qbittorrent2.tld \
-p 17873:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
Using systemd quadlets
[Unit]
Description=qbittorrent-exporter
After=network-online.target
[Container]
Image=docker.io/caseyscarborough/qbittorrent-exporter:latest
ContainerName=qbittorrent-exporter
Environment=QBITTORRENT_USERNAME=username
Environment=QBITTORRENT_PASSWORD=password
Environment=QBITTORRENT_BASE_URL=http://localhost:8080
PublishPort=17871:17871
[Install]
WantedBy=multi-user.target default.target
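For the quadlet, save the unit as e.g. /etc/containers/systemd/qbittorrent-exporter.container (the filename determines the generated service name), then:
sudo systemctl daemon-reload
sudo systemctl start qbittorrent-exporter.service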
Now add this to the scrape_configs section of /etc/prometheus/prometheus.yml to configure Prometheus to scrape these metrics.
- job_name: "qbittorrent"
static_configs:
- targets: ["localhost:17871", "localhost:17872", "localhost:17873"]
Monitor Caddy with Prometheus and Loki
Caddy: metrics activation
Add the metrics global option and ensure the admin endpoint is enabled.
{
admin 0.0.0.0:2019
servers {
metrics
}
}
Restart Caddy:
sudo systemctl restart caddy
sudo systemctl status caddy
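Confirm the admin endpoint is serving metrics:
curl -s http://localhost:2019/metrics | head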
Caddy: logs activation
I have my Caddy configuration modularized, with /etc/caddy/Caddyfile being the central file. It looks something like this:
{
admin 0.0.0.0:2019
servers {
metrics
}
}
## hyperreal.coffee
import /etc/caddy/anonoverflow.caddy
import /etc/caddy/breezewiki.caddy
import /etc/caddy/cdn.caddy
...
Each file that is imported is a virtual host that has its own separate configuration and corresponds to a subdomain of hyperreal.coffee. I have logging disabled on most of them except the ones for which troubleshooting with logs would be convenient, such as the one for my Mastodon instance. For /etc/caddy/fedi.caddy, I've added these lines to enable logging:
fedi.hyperreal.coffee {
log {
output file /var/log/caddy/fedi.log {
roll_size 100MiB
roll_keep 5
roll_keep_for 100d
}
format json
level INFO
}
}
Restart caddy.
sudo systemctl restart caddy
sudo systemctl status caddy
Ensure port 2019 can only be accessed by my IP address, using Firewalld's internal zone:
sudo firewall-cmd --zone=internal --permanent --add-port=2019/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --info-zone=internal
Add the Caddy configuration to the scrape_configs section of /etc/prometheus/prometheus.yml:
- job_name: "caddy"
static_configs:
- targets: ["hyperreal.coffee:2019"]
Restart Prometheus on the monitor host:
sudo systemctl restart prometheus.service
Loki and Promtail setup
On the node running Caddy, install the loki and promtail packages:
sudo apt install -y loki promtail
Edit the Promtail configuration file at /etc/promtail/config.yml:
- job_name: caddy
static_configs:
- targets:
- localhost
labels:
job: caddy
__path__: /var/log/caddy/*.log
agent: caddy-promtail
pipeline_stages:
- json:
expressions:
duration: duration
status: status
- labels:
duration:
status:
The entire Promtail configuration should look like this:
# This minimal config scrape only single log file.
# Primarily used in rpm/deb packaging where promtail service can be started during system init process.
# And too much scraping during init process can overload the complete system.
# https://github.com/grafana/loki/issues/11398
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://localhost:3100/loki/api/v1/push
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
#NOTE: Need to be modified to scrape any additional logs of the system.
__path__: /var/log/messages
- job_name: caddy
static_configs:
- targets:
- localhost
labels:
job: caddy
__path__: /var/log/caddy/*log
agent: caddy-promtail
pipeline_stages:
- json:
expressions:
duration: duration
status: status
- labels:
duration:
status:
Restart Promtail and Loki services:
sudo systemctl restart promtail
sudo systemctl restart loki
To ensure that the promtail user has permissions to read caddy logs:
sudo usermod -aG caddy promtail
sudo chmod g+r /var/log/caddy/*.log
The Prometheus dashboard should now show the Caddy target with a state of "UP".
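Beyond the dashboard, the duration and status labels extracted by the pipeline above can be queried directly. One way, from the monitor host (LogQL via Loki's HTTP API):
# Recent Caddy requests that returned a 5xx status
curl -G -s http://localhost:3100/loki/api/v1/query_range \
  --data-urlencode 'query={job="caddy"} | json | status >= 500'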
Monitor Tor node
Edit /etc/tor/torrc to add the Metrics info. x.x.x.x is the IP address where Prometheus is running.
## Prometheus exporter
MetricsPort 0.0.0.0:9035 prometheus
MetricsPortPolicy accept x.x.x.x
Configure FirewallD to allow inbound traffic to port 9035 on the internal zone. Ensure the internal zone's source is the IP address of the server where Prometheus is running. Ensure port 443 is accessible from the Internet on FirewallD's public zone.
sudo firewall-cmd --zone=internal --permanent --add-source=x.x.x.x
sudo firewall-cmd --zone=internal --permanent --add-port=9035/tcp
sudo firewall-cmd --zone=public --permanent --add-service=https
sudo firewall-cmd --reload
Edit /etc/prometheus/prometheus.yml to add the Tor config. y.y.y.y is the IP address where Tor is running.
scrape_configs:
- job_name: "tor-relay"
static_configs:
- targets: ["y.y.y.y:9035"]
Restart Prometheus.
sudo systemctl restart prometheus.service
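Before importing the dashboard, confirm the relay is exporting metrics. Run this from the Prometheus host, since MetricsPortPolicy only accepts that address:
curl -s http://y.y.y.y:9035/metrics | head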
Go to Grafana and import tor_stats.json as a new dashboard, using the Prometheus datasource.
Monitor Synapse homeserver
On the server running Synapse, edit /etc/matrix-synapse/homeserver.yaml to enable metrics.
enable_metrics: true
Add a new listener to /etc/matrix-synapse/homeserver.yaml for Prometheus metrics.
listeners:
- port: 9400
type: metrics
bind_addresses: ['0.0.0.0']
On the server running Prometheus, add a target for Synapse.
- job_name: "synapse"
scrape_interval: 1m
metrics_path: "/_synapse/metrics"
static_configs:
- targets: ["hyperreal:9400"]
Also add the Synapse recording rules.
rule_files:
- /etc/prometheus/synapse-v2.rules
On the server running Prometheus, download the Synapse recording rules.
sudo wget https://files.hyperreal.coffee/prometheus/synapse-v2.rules -O /etc/prometheus/synapse-v2.rules
Restart Prometheus.
Use synapse.json for Grafana dashboard.
Monitor Elasticsearch
On the host running Elasticsearch, download the latest binary from the GitHub releases.
tar xvf elasticsearch_exporter*.tar.gz
cd elasticsearch_exporter*/
sudo cp -v elasticsearch_exporter /usr/local/bin/
Create /etc/systemd/system/elasticsearch_exporter.service.
[Unit]
Description=elasticsearch exporter
After=network.target
[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/elasticsearch_exporter --es.uri=http://localhost:9200
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=20s
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Reload the daemons and enable/start elasticsearch_exporter.
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch_exporter.service
Ensure port 9114 is allowed in Firewalld's internal zone.
sudo firewall-cmd --permanent --zone=internal --add-port=9114/tcp
sudo firewall-cmd --reload
If using Tailscale, ensure the host running Prometheus can access port 9114 on the host running Elasticsearch.
On the host running Prometheus, download the elasticsearch.rules.
wget https://raw.githubusercontent.com/prometheus-community/elasticsearch_exporter/refs/heads/master/examples/prometheus/elasticsearch.rules.yml
sudo mv elasticsearch.rules.yml /etc/prometheus/
Edit /etc/prometheus/prometheus.yml to add the elasticsearch_exporter config.
rule_files:
- "/etc/prometheus/elasticsearch.rules.yml"
...
...
- job_name: "elasticsearch_exporter"
static_configs:
- targets: ["hyperreal:9114"]
Restart Prometheus.
sudo systemctl restart prometheus.service
For a Grafana dashboard, copy the contents of the file located here: https://files.hyperreal.coffee/grafana/elasticsearch.json.
Use HTTPS with Tailscale
If this step has been done already, skip it.
sudo tailscale cert HOSTNAME.TAILNET.ts.net
sudo mkdir /etc/tailscale-ssl-certs
sudo mv HOSTNAME.TAILNET.ts.net.crt HOSTNAME.TAILNET.ts.net.key /etc/tailscale-ssl-certs/
sudo chown -R root:root /etc/tailscale-ssl-certs
Ensure the prometheus.service systemd file contains the --web.config.file flag.
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle \
--web.config.file /etc/prometheus/web.yml \
--log.level=info
[Install]
WantedBy=multi-user.target
Create the file /etc/prometheus/web.yml.
tls_server_config:
cert_file: /etc/prometheus/prometheus.crt
key_file: /etc/prometheus/prometheus.key
Copy the cert and key to /etc/prometheus.
sudo cp -v /etc/tailscale-ssl-certs/HOSTNAME.TAILNET.ts.net.crt /etc/prometheus/prometheus.crt
sudo cp -v /etc/tailscale-ssl-certs/HOSTNAME.TAILNET.ts.net.key /etc/prometheus/prometheus.key
Ensure the permissions are correct on the web config, cert, and key.
sudo chown prometheus:prometheus /etc/prometheus/web.yml
sudo chown prometheus:prometheus /etc/prometheus/prometheus.crt
sudo chown prometheus:prometheus /etc/prometheus/prometheus.key
sudo chmod 644 /etc/prometheus/prometheus.crt
sudo chmod 644 /etc/prometheus/prometheus.key
Reload the daemons and restart Prometheus.
sudo systemctl daemon-reload
sudo systemctl restart prometheus.service
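Prometheus should now answer over HTTPS with the Tailscale certificate. Its health endpoint makes a quick check:
curl -s https://HOSTNAME.TAILNET.ts.net:9090/-/healthy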
Use labels instead of IP addresses for targets
- job_name: "remote_collector"
scrape_interval: 10s
static_configs:
- targets: ["100.64.0.9:9100"]
labels:
instance: desktop
- targets: ["100.64.0.7:9100"]
labels:
instance: headscale
- targets: ["100.64.0.10:9100"]
labels:
instance: hyperreal-remote
...
Mount qcow2 image
Enable NBD on the host:
sudo modprobe nbd max_part=8
Connect qcow2 image as a network block device:
sudo qemu-nbd --connect=/dev/nbd0 /path/to/image.qcow2
Find the VM's partitions:
sudo fdisk /dev/nbd0 -l
Mount the partition from the VM:
sudo mount /dev/nbd0p3 /mnt/point
To unmount:
sudo umount /mnt/point
sudo qemu-nbd --disconnect /dev/nbd0
sudo rmmod nbd
Resize qcow2 image
Install guestfs-tools (required for virt-resize command):
sudo dnf install -y guestfs-tools
sudo apt install -y guestfs-tools libguestfs-tools
To resize qcow2 images, you'll have to create a new qcow2 image with the size you want, then use virt-resize on the old qcow2 image to the new one. You'll need to know the root partition within the old qcow2 image.
Create a new qcow2 image with the size you want:
qemu-img create -f qcow2 -o preallocation=metadata newdisk.qcow2 100G
Now resize the old one to the new one:
virt-resize --expand /dev/vda3 olddisk.qcow2 newdisk.qcow2
Once you boot into the new qcow2 image, you'll probably have to adjust the size of the logical volume if it has LVM:
sudo lvresize -l +100%FREE /dev/mapper/sysvg-root
Then resize the XFS root partition within the logical volume:
sudo xfs_growfs /dev/mapper/sysvg-root
Take snapshot of VM
sudo virsh domblklist vm1
Target Source
-----------------------------------------------
vda /var/lib/libvirt/images/vm1.img
sudo virsh snapshot-create-as \
--domain vm1 \
--name guest-state1 \
--diskspec vda,file=/var/lib/libvirt/images/overlay1.qcow2 \
--disk-only \
--atomic \
--quiesce
Ensure qemu-guest-agent is installed inside the VM. Otherwise omit the --quiesce flag, but when you restore the VM it will be as if the system had crashed. Not that big of a deal, since the VM's OS should flush required data and maintain consistency of its filesystems.
sudo rsync -avhW --progress /var/lib/libvirt/images/vm1.img /var/lib/libvirt/images/vm1-copy.img
sudo virsh blockcommit vm1 vda --active --verbose --pivot
Full disk backup of VM
Start the guest VM:
sudo virsh start vm1
Enumerate the disk(s) in use:
sudo virsh domblklist vm1
Target Source
-------------------------------------------------
vda /var/lib/libvirt/images/vm1.qcow2
Begin the backup:
sudo virsh backup-begin vm1
Backup started
Check the job status. "None" means the job has likely completed.
sudo virsh domjobinfo vm1
Job type: None
Check the completed job status:
sudo virsh domjobinfo vm1 --completed
Job type: Completed
Operation: Backup
Time elapsed: 182 ms
File processed: 39.250 MiB
File remaining: 0.000 B
File total: 39.250 MiB
Now we see the copy of the backup:
sudo ls -lash /var/lib/libvirt/images/vm1.qcow2*
15M -rw-r--r--. 1 qemu qemu 15M May 10 12:22 vm1.qcow2
21M -rw-------. 1 root root 21M May 10 12:23 vm1.qcow2.1620642185
Mount RAID1 mirror
The RAID1 mirror consists of /dev/sda1 and /dev/sdb1.
Assemble the RAID array:
sudo mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1
Mount the RAID device:
sudo mount /dev/md0 /mnt
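Confirm the array is assembled and healthy:
cat /proc/mdstat
sudo mdadm --detail /dev/md0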
Configure msmtp for mdmonitor.service (Ubuntu 24.04)
sudo apt install msmtp msmtp-mta
Edit /etc/msmtprc.
# Resend account
account resend
host smtp.resend.com
from admin@hyperreal.coffee
port 2587
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
user resend
password <API key goes here>
syslog LOG_MAIL
Edit /etc/mdadm/mdadm.conf.
MAILADDR hyperreal@moonshadow.dev
MAILFROM admin@hyperreal.coffee
PROGRAM msmtp
ARRAY ...
ARRAY ...
Rename sendmail and symlink msmtp to sendmail.
sudo mv /usr/sbin/sendmail /usr/sbin/sendmail.bak
sudo ln -s /usr/bin/msmtp /usr/sbin/sendmail
Send a test email.
sudo mdadm --monitor --scan --test --oneshot
Restart mdmonitor.service.
sudo systemctl restart mdmonitor.service
Installation
- Download Resident Evil Classic Triple Pack PC from archive.org. This contains the Sourcenext versions of all three games.
- Install all three games using their installers.
- Download the following files:
- Biohazard PC CD-ROM Mediakite patch version 1.01
- Resident Evil Classic REbirth
- Resident Evil 2 Classic REbirth
- Resident Evil 3 Classic REbirth
- Biohazard Mediakite
- Resident Evil HD mod by TeamX
- Resident Evil 2 HD mod by TeamX
- Resident Evil 3 HD mod by TeamX
- Resident Evil Seamless HD Project v1.1
- Resident Evil 2 Seamless HD Project v2.0
- Resident Evil 3: Nemesis Seamless HD Project v2.0
- Open the Biohazard Mediakite disc image with 7zip and drag the JPN folder from the disc into C:\Program Files (x86)\Games Retro\Resident Evil Classic
Resident Evil Director's Cut
Extract the following files to %ProgramFiles(x86)%\Games Retro\Resident Evil Classic:
- Biohazard.exe from Mediakite v1.01
- ddraw.dll from Resident Evil Classic REbirth
- All from Resident Evil HD mod by TeamX
- All from Resident Evil Seamless HD Project v1.1
Resident Evil 2
Extract the following files to %ProgramFiles(x86)%\Games Retro\BIOHAZARD 2 PC:
- ddraw.dll from Resident Evil 2 Classic REbirth
- All from Resident Evil 2 HD mod by TeamX
- All from Resident Evil 2 Seamless HD Project v2.0
Resident Evil 3: Nemesis
Extract the following files to %ProgramFiles(x86)%\Games Retro\BIOHAZARD 3 PC:
- ddraw.dll from Resident Evil 3 Classic REbirth
- All from Resident Evil 3 HD mod by TeamX
- All from Resident Evil 3: Nemesis Seamless HD Project v2.0
Testing
Test each game by launching it with the following config changes:
- Resolution 1280x960
- RGB88 colors
- Disable texture filtering
Bluetooth: protocol not available
sudo apt install pulseaudio-module-bluetooth
Add to /lib/systemd/system/bthelper@.service:
ExecStartPre=/bin/sleep 4
sudo systemctl start sys-subsystem-bluetooth-devices-hci0.device
sudo hciconfig hci0 down
sudo killall pulseaudio
systemctl --user enable --now pulseaudio.service
sudo systemctl restart bluetooth.service
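Note that edits under /lib/systemd/system get overwritten by package upgrades. A drop-in override achieves the same fix and survives upgrades; a sketch:
sudo systemctl edit bthelper@.service
# then add in the editor:
[Service]
ExecStartPre=/bin/sleep 4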
Ubuntu router on Orange Pi 5 Plus
- Ubuntu 24.04
- Orange Pi 5 Plus
- ISP router in bridge mode
- Ethernet from ISP router -> Orange Pi 5 Plus WAN port
- Ethernet from Orange Pi 5 Plus LAN port to switch
Install packages
sudo apt install neovim firewalld fail2ban atop htop python3-dev nmap tcpdump rsync rsyslog iptraf-ng iftop sysstat conntrack logwatch unattended-upgrades byobu
Install Tailscale.
curl -fsSL https://tailscale.com/install.sh | sh
Register router as Tailnet node.
sudo systemctl enable --now tailscaled.service
sudo tailscale up
Netplan with DHCP WAN
sudo nvim /etc/netplan/01-netcfg.yaml
network:
version: 2
renderer: networkd
ethernets:
eth0: # WAN interface (connected to internet)
dhcp4: true
dhcp6: false
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
eth1: # LAN interface (connected to local network)
dhcp4: false
dhcp6: false
addresses:
- 10.0.2.1/24
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
Bridged LAN+Wifi AP
network:
version: 2
renderer: networkd
ethernets:
eth0:
dhcp4: true
dhcp6: false
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
eth1:
dhcp4: false
dhcp6: false
# eth1 is enslaved to br0 below; the LAN address lives on the bridge
wifis:
wlan0:
access-points:
coffeenet:
auth:
key-management: psk
password: "password"
bridges:
br0:
interfaces:
- eth1
- wlan0
addresses:
- 10.0.2.1/24
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
Netplan with static IP
network:
version: 2
renderer: networkd
ethernets:
eth0: # WAN interface (connected to internet)
addresses:
- WAN public IP/prefix
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
routes:
- to: default
via: WAN default gateway
metric: 100
eth1:
dhcp4: false
dhcp6: false
addresses:
- 10.0.2.1/24
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
Bridged LAN+Wifi AP
network:
version: 2
renderer: networkd
ethernets:
eth0:
dhcp4: false
dhcp6: false
addresses:
- WAN public IP
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
routes:
- to: default
via: WAN default gateway
metric: 100
eth1:
dhcp4: false
dhcp6: false
# eth1 is enslaved to br0 below; the LAN address lives on the bridge
wifis:
wlan0:
access-points:
coffeenet:
auth:
key-management: psk
password: "password"
bridges:
br0:
interfaces:
- eth1
- wlan0
addresses:
- 10.0.2.1/24
nameservers:
addresses:
- 9.9.9.9
- 149.112.112.112
Apply the netplan settings:
sudo netplan apply
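If you're configuring the router over SSH, netplan try is safer than a blind apply: it rolls the change back automatically unless you confirm it within a timeout.
sudo netplan try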
IP forwarding
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
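This only enables IPv4 forwarding. If the router should forward IPv6 as well, the equivalent sysctl is:
echo "net.ipv6.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p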
Firewalld
sudo firewall-cmd --permanent --zone=home --add-interface=br0
sudo firewall-cmd --permanent --zone=home --add-service={ssh,dns,http,https,dhcp}
sudo firewall-cmd --permanent --zone=home --add-forward
sudo firewall-cmd --permanent --zone=external --add-interface=eth0
sudo firewall-cmd --permanent --zone=external --add-service=dhcpv6-client
sudo firewall-cmd --permanent --zone=external --add-forward
Create /etc/firewalld/policies/masquerade.xml to allow traffic to flow from LAN to WAN.
<?xml version="1.0" encoding="utf-8"?>
<policy target="ACCEPT">
<masquerade/>
<ingress-zone name="home"/>
<egress-zone name="external"/>
</policy>
Reload the firewall configuration:
sudo firewall-cmd --reload
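firewalld names the policy after the file (masquerade.xml -> masquerade), so you can confirm it loaded with:
sudo firewall-cmd --info-policy=masquerade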
Source: Simple RSS, Atom and JSON feed for your blog
A reference for those of us goblins who like to write out our RSS and Atom XML files by hand. ;)
RSS
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Example website title</title>
<link>https://example.com</link>
<description>Example website description.</description>
<atom:link href="https://example.com/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Post one</title>
<link>https://example.com/posts-one</link>
<description>Post one content.</description>
<guid isPermaLink="true">https://example.com/posts-one</guid>
<pubDate>Mon, 22 May 2023 13:00:00 -0600</pubDate>
</item>
<item>
<title>Post two</title>
<link>https://example.com/posts-two</link>
<description>Post two content.</description>
<guid isPermaLink="true">https://example.com/posts-two</guid>
<pubDate>Mon, 15 May 2023 13:00:00 -0600</pubDate>
</item>
</channel>
</rss>
Atom
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<id>http://example.com/</id>
<title>Example website title</title>
<updated>2023-05-22T13:00:00.000Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="https://example.com/atom.xml" rel="self" type="application/atom+xml" />
<subtitle>Example website description.</subtitle>
<entry>
<id>https://example.com/posts-one</id>
<title>Post one</title>
<link href="https://example.com/posts-one"/>
<updated>2023-05-22T13:00:00.000Z</updated>
<summary type="html">Post one summary.</summary>
<content type="html">Post one content.</content>
</entry>
<entry>
<id>https://example.com/posts-two</id>
<title>Post two</title>
<link href="https://example.com/posts-two"/>
<updated>2023-05-15T13:00:00.000Z</updated>
<summary type="html">Post two summary.</summary>
<content type="html">Post two content.</content>
</entry>
</feed>
JSON
{
"version": "https://jsonfeed.org/version/1.1",
"title": "Example website title",
"home_page_url": "https://example.com",
"feed_url": "https://example.com/feed.json",
"description": "Example website description.",
"items": [
{
"id": "https://example.com/posts-one",
"url": "https://example.com/posts-one",
"title": "Post one content.",
"content_text": "Post one content.",
"date_published": "2023-05-22T13:00:00.000Z"
},
{
"id": "https://example.com/posts-two",
"url": "https://example.com/posts-two",
"title": "Post two content.",
"content_text": "Post two content.",
"date_published": "2023-05-15T13:00:00.000Z"
}
]
}
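Before publishing, a quick well-formedness check catches hand-editing typos. Assuming the feeds are saved as rss.xml, atom.xml, and feed.json:
xmllint --noout rss.xml atom.xml
python3 -c 'import json; json.load(open("feed.json"))'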
Resources
- The RSS 2.0 Specification
- The Atom Syndication Format Specification
- The JSON Feed Version 1.1 Specification
- RSS and Atom Feed validator
- JSON Feed validator
Install systemd-boot on Debian
sudo mkdir /boot/efi/loader
printf "default systemd\ntimeout 5\neditor 1\n" | sudo tee /boot/efi/loader/loader.conf
sudo mkdir -p /boot/efi/loader/entries
sudo apt install -y systemd-boot
sudo bootctl install --path=/boot/efi
Check efibootmgr:
sudo efibootmgr
Output:
BootOrder: 0000,0001
Boot0000* Linux Boot Manager
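systemd-boot looks for boot entries under /boot/efi/loader/entries. On Debian the systemd-boot package ships kernel hooks that generate these, but a hand-written entry is a useful fallback. A minimal sketch for /boot/efi/loader/entries/debian.conf; the kernel/initrd paths (which must live on the ESP) and the root UUID are assumptions:
title   Debian
linux   /vmlinuz
initrd  /initrd.img
options root=UUID=<root filesystem UUID> rw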
Mount NFS share
Create a unit file at /etc/systemd/system/mnt-backup.mount. The name of the unit file must match the Where= directive, e.g. Where=/mnt/backup -> mnt-backup.mount.
[Unit]
Description=borgbackup NFS share from TrueNAS (10.0.0.81)
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target
[Mount]
What=10.0.0.81:/mnt/coffeeNAS/backup
Where=/mnt/backup
Type=nfs
Options=defaults
[Install]
WantedBy=multi-user.target
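Reload systemd and enable the mount (the unit name matches the file created above):
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-backup.mount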
Setup a FreeBSD thick VNET jail for torrenting Anna's Archive
Setup the VNET bridge
Create the bridge.
ifconfig bridge create
Attach the bridge to the main network interface, igc0 in this case. For some reason, the resulting bridge device is named igb0bridge rather than bridge0.
ifconfig igb0bridge addm igc0
To make this persist across reboots, add the following to /etc/rc.conf.
defaultrouter="10.0.0.1"
cloned_interfaces="igb0bridge"
ifconfig_igb0bridge="inet 10.0.0.8/24 addm igc0 up"
Create the classic (thick) jail
Create the ZFS dataset for the jails. We'll use basejail as a template for subsequent jails.
zfs create -o mountpoint=/jails naspool/jails
zfs create naspool/jails/basejail
Use the bsdinstall utility to bootstrap the base system into the basejail.
export DISTRIBUTIONS="base.txz"
export BSDINSTALL_DISTSITE=https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/
bsdinstall jail /jails/basejail
Run freebsd-update to update the base jail.
freebsd-update -b /jails/basejail fetch install
freebsd-update -b /jails/basejail IDS
We now snapshot the basejail and create a clone of this snapshot for the aa-torrenting jail that we will use for Anna's Archive.
zfs snapshot naspool/jails/basejail@`freebsd-version`
zfs clone naspool/jails/basejail@`freebsd-version` naspool/jails/aa-torrenting
We now use the following configuration for /etc/jail.conf.
aa-torrenting {
exec.consolelog = "/var/log/jail_console_${name}.log";
allow.raw_sockets;
exec.clean;
mount.devfs;
devfs_ruleset = 11;
path = "/jails/${name}";
host.hostname = "${name}";
vnet;
vnet.interface = "${epair}b";
$id = "127";
$ip = "10.0.0.${id}/24";
$gateway = "10.0.0.1";
$bridge = "igb0bridge";
$epair = "epair${id}";
exec.prestart = "/sbin/ifconfig ${epair} create up";
exec.prestart += "/sbin/ifconfig ${epair}a up descr jail:${name}";
exec.prestart += "/sbin/ifconfig ${bridge} addm ${epair}a up";
exec.start += "/sbin/ifconfig ${epair}b ${ip} up";
exec.start += "/sbin/route add default ${gateway}";
exec.start += "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.poststop = "/sbin/ifconfig ${bridge} deletem ${epair}a";
exec.poststop += "/sbin/ifconfig ${epair}a destroy";
}
Now we create the devfs ruleset to enable access to devices under /dev inside the jail. Add the following to /etc/devfs.rules.
[devfsrules_jail_vnet=11]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add include $devfsrules_jail
add path 'tun*' unhide
add path 'bpf*' unhide
Enable the jail utility in /etc/rc.conf.
sysrc jail_enable="YES"
sysrc jail_parallel_start="YES"
Start the jail service for aa-torrenting.
service jail start aa-torrenting
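Verify that the jail is running and that its epair interface joined the bridge:
jls
ifconfig igb0bridge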
Setting up Wireguard inside the jail
With the /dev/tun* devfs rule in place, we can now install Wireguard inside the jail.
jexec -u root aa-torrenting
pkg install wireguard-tools wireguard-go
Download a Wireguard configuration for ProtonVPN and save it to /usr/local/etc/wireguard/wg0.conf.
Enable Wireguard to run when the jail boots up.
sysrc wireguard_enable="YES"
sysrc wireguard_interfaces="wg0"
Start the Wireguard daemon and make sure you are connected to it properly.
service wireguard start
curl ipinfo.io
The curl command should display the IP address of the Wireguard server defined in /usr/local/etc/wireguard/wg0.conf.
Setting up qBittorrent inside the jail
Install the qbittorrent-nox package.
pkg install -y qbittorrent-nox
Before running the daemon from /usr/local/etc/rc.d/qbittorrent, we must run qbittorrent-nox from the shell so we can see the default password generated for the web UI. For some reason it is not shown in any logs, and the qbittorrent-nox manpage wrongly claims the default password is "adminadmin"; experience shows otherwise.
pkg install -y sudo
sudo -u qbittorrent qbittorrent-nox --profile=/var/db/qbittorrent/conf --save-path=/var/db/qbittorrent/Downloads --confirm-legal-notice
Copy the password displayed after running the command. Log in to the qBittorrent web UI at http://10.0.0.127:8080 with the login admin and the password you copied. In the web UI, open the options menu and go to the Web UI tab. Change the login password to your own, then save the options to close the menu.
Now press CTRL-c to stop the qbittorrent-nox process. Make the following changes to the aa-torrenting jail's /etc/rc.conf.
sysrc qbittorrent_enable="YES"
sysrc qbittorrent_flags="--confirm-legal-notice"
Start the qBittorrent daemon.
service qbittorrent start
Go back to the web UI at http://10.0.0.127:8080. Open the options menu and go to the Advanced tab, which is the very last tab. Change the network interface to wg0.
Finding the forwarded port that the ProtonVPN server is using
Install the libnatpmp package.
pkg install libnatpmp
Make sure that port forwarding is allowed on the server you're connected to; it should be if you enabled it while creating the Wireguard configuration on the ProtonVPN website. Run the natpmpc command against the ProtonVPN Wireguard gateway.
natpmpc -g 10.2.0.1
If the output looks like the following, you're good.
initnatpmp() returned 0 (SUCCESS)
using gateway : 10.2.0.1
sendpublicaddressrequest returned 2 (SUCCESS)
readnatpmpresponseorretry returned 0 (OK)
Public IP address : 62.112.9.165
epoch = 58081
closenatpmp() returned 0 (SUCCESS)
Now create the UDP and TCP port mappings, then loop natpmpc so that the mappings don't expire.
while true ; do date ; natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo -e "ERROR with natpmpc command \a" ; break ; } ; sleep 45 ; done
The port allocated for this server is shown on the line that says "Mapped public port XXXXX protocol UDP to local port 0 lifetime 60". Port forwarding is now active. Copy this port number and, in the qBittorrent web UI options menu, go to the Connections tab and enter it into the "Port used for incoming connections" box. Make sure to uncheck the "Use UPnP / NAT-PMP port forwarding from my router" box.
The mappings expire after 60 seconds, so if the loop terminates, you'll need to re-run it; the same goes for each new port forwarding session.
P2P NAT port forwarding script with supervisord
Install supervisord:
sudo pkg install -y py311-supervisor
Enable the supervisord service:
sudo sysrc supervisord_enable="YES"
Edit /usr/local/etc/supervisord.conf
, and add the following to the bottom of the file:
[program:natpmpcd]
command=/usr/local/bin/natpmpcd
autostart=true
Add the following contents to a file at /usr/local/bin/natpmpcd
:
#!/bin/sh
# Request an initial UDP mapping and record the forwarded public port.
port=$(/usr/local/bin/natpmpc -a 1 0 udp 60 -g 10.2.0.1 | grep "Mapped public port" | awk '{print $4}')
echo "$port" | tee /usr/local/etc/natvpn_port.txt
# Renew the UDP and TCP mappings every 45 seconds; they expire after 60.
while true; do
    date
    if ! { /usr/local/bin/natpmpc -a 1 0 udp 60 -g 10.2.0.1 && /usr/local/bin/natpmpc -a 1 0 tcp 60 -g 10.2.0.1; }; then
        echo "error Failure natpmpc $(date)"
        break
    fi
    sleep 45
done
Ensure the script is executable:
chmod +x /usr/local/bin/natpmpcd
supervisord will start the script automatically. Ensure the supervisord service is started:
sudo service supervisord start
The script writes the forwarded port number to /usr/local/etc/natvpn_port.txt.
cat /usr/local/etc/natvpn_port.txt
48565
Install on encrypted Btrfs
Source: Void Linux Installation Guide
First, update xbps.
xbps-install -Syu xbps
Partition disk
Install gptfdisk.
xbps-install -Sy gptfdisk
Run gdisk.
gdisk /dev/nvme1n1
Create the following partitions:
Partition | Size
---|---
EFI | +600M
boot | +900M
root | Remaining space
Create the filesystems.
mkfs.vfat -nBOOT -F32 /dev/nvme1n1p1
mkfs.ext4 -L grub /dev/nvme1n1p2
cryptsetup luksFormat --type=luks -s 512 /dev/nvme1n1p3
cryptsetup open /dev/nvme1n1p3 cryptroot
mkfs.btrfs -L void /dev/mapper/cryptroot
Mount partitions and create Btrfs subvolumes.
mount -o defaults,compress=zstd:1 /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/home
umount /mnt
mount -o defaults,compress=zstd:1,subvol=root /dev/mapper/cryptroot /mnt
mkdir /mnt/home
mount -o defaults,compress=zstd:1,subvol=home /dev/mapper/cryptroot /mnt/home
Create Btrfs subvolumes for parts of the filesystem to exclude from snapshots. Nested subvolumes are not included in snapshots.
mkdir -p /mnt/var/cache
btrfs subvolume create /mnt/var/cache/xbps
btrfs subvolume create /mnt/var/tmp
btrfs subvolume create /mnt/srv
btrfs subvolume create /mnt/var/swap
Mount EFI and boot partitions.
mkdir /mnt/efi
mount -o rw,noatime /dev/nvme1n1p1 /mnt/efi
mkdir /mnt/boot
mount -o rw,noatime /dev/nvme1n1p2 /mnt/boot
Base system installation
If using x86_64:
REPO=https://mirrors.hyperreal.coffee/voidlinux/current
ARCH=x86_64
If using musl:
REPO=https://mirrors.hyperreal.coffee/voidlinux/current/musl
ARCH=x86_64-musl
Install the base system.
XBPS_ARCH=$ARCH xbps-install -S -R "$REPO" -r /mnt base-system base-devel btrfs-progs cryptsetup vim sudo dosfstools mtools void-repo-nonfree
Chroot
Mount the pseudo filesystems for the chroot.
for dir in dev proc sys run; do mount --rbind /$dir /mnt/$dir; mount --make-rslave /mnt/$dir; done
Copy DNS configuration.
cp -v /etc/resolv.conf /mnt/etc/
Chroot.
PS1='(chroot) # ' chroot /mnt/ /bin/bash
Set hostname.
echo "hostname" > /etc/hostname
Set timezone.
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
Synchronize the hardware clock.
hwclock --systohc
If using glibc, uncomment en_US.UTF-8 in /etc/default/libc-locales. Then run:
xbps-reconfigure -f glibc-locales
Set root password.
passwd root
Configure /etc/fstab.
UEFI_UUID=$(blkid -s UUID -o value /dev/nvme1n1p1)
GRUB_UUID=$(blkid -s UUID -o value /dev/nvme1n1p2)
ROOT_UUID=$(blkid -s UUID -o value /dev/mapper/cryptroot)
cat << EOF > /etc/fstab
UUID=$ROOT_UUID / btrfs defaults,compress=zstd:1,subvol=root 0 1
UUID=$UEFI_UUID /efi vfat defaults,noatime 0 2
UUID=$GRUB_UUID /boot ext4 defaults,noatime 0 2
UUID=$ROOT_UUID /home btrfs defaults,compress=zstd:1,subvol=home 0 2
tmpfs /tmp tmpfs defaults,nosuid,nodev 0 0
EOF
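Sanity-check the new fstab before rebooting:
findmnt --verify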
Setup Dracut. A "hostonly" install means that Dracut will generate a lean initramfs containing only what this machine needs.
echo "hostonly=yes" >> /etc/dracut.conf
If you have an Intel CPU:
xbps-install -Syu intel-ucode
Install GRUB.
xbps-install -Syu grub-x86_64-efi os-prober
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id="Void Linux"
If you are dual-booting with another OS:
echo "GRUB_DISABLE_OS_PROBER=0" >> /etc/default/grub
Setup encrypted swapfile.
truncate -s 0 /var/swap/swapfile
chattr +C /var/swap/swapfile
chmod 600 /var/swap/swapfile
dd if=/dev/zero of=/var/swap/swapfile bs=1G count=16 status=progress
mkswap /var/swap/swapfile
swapon /var/swap/swapfile
RESUME_OFFSET=$(btrfs inspect-internal map-swapfile -r /var/swap/swapfile)
cat << EOF >> /etc/default/grub
GRUB_CMDLINE_LINUX="resume=UUID=$ROOT_UUID resume_offset=$RESUME_OFFSET"
EOF
Regenerate configurations.
xbps-reconfigure -fa
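xbps-reconfigure -fa re-runs the kernel hooks. If the resume= options don't show up in the GRUB menu afterwards, regenerating grub.cfg directly also works:
grub-mkconfig -o /boot/grub/grub.cfg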
Install Xorg and Xfce.
xbps-install -Syu xorg xfce4
If you have a recent Nvidia GPU:
xbps-install -Syu nvidia
Add user.
useradd -c "Jeffrey Serio" -m -s /usr/bin/zsh -U jas
passwd jas
echo "jas ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers.d/jas
Enable system services.
for svc in "NetworkManager" "crond" "dbus" "lightdm" "ntpd" "snapperd" "sshd"; do
ln -sf /etc/sv/$svc /var/service;
done
Disable bitmap fonts.
ln -sf /usr/share/fontconfig/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d/
xbps-reconfigure -f fontconfig
Setup package repository.
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current" | tee /etc/xbps.d/00-repository-main.conf
# For musl
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current/musl" | tee /etc/xbps.d/00-repository-main.conf
Setup Pipewire for audio.
mkdir -p /etc/pipewire/pipewire.conf.d
ln -sf /usr/share/examples/wireplumber/10-wireplumber.conf /etc/pipewire/pipewire.conf.d/
ln -sf /usr/share/applications/pipewire.desktop /etc/xdg/autostart/
Generate configurations.
xbps-reconfigure -fa
Exit chroot, unmount disks, and reboot.
exit
umount -lR /mnt
reboot
Repair Windows boot files
- Download Windows 11 ISO from Microsoft and write to USB.
- Boot into Windows setup utility.
- Select Repair your computer -> Troubleshoot -> Advanced options -> Command Prompt
This procedure assumes the following:
- main disk is disk 0
- EFI partition is part 1
- Windows OS drive letter is c:
The following commands will format the old EFI partition, mount it at s:, and copy the boot files to it:
diskpart
> list disk
> sel disk 0
> list part
> sel part 1
> format fs=fat32 quick label=System
> list vol
> exit
mountvol S: /S
bcdboot c:\windows /s s: /f UEFI /v
exit