Techne: A practical knowledge base
This is a collection of notes and tutorials on various topics. It's mostly tech-related, but may include other topics. The intended audience for notes is mostly myself, while tutorials are written with others in mind.
Archive Proton Mail with offlineimap and Podman
Problem
- I want to keep an archive of my email on my local machine for storage and record-keeping.
- I use Proton Mail as an email provider, so I have to use the Proton Mail Bridge app. I have the Flatpak version installed and set to run when I log in to my desktop.
- I run Fedora Silverblue (actually it's my own custom variant called Vauxite, but that's not relevant here), so I have to run offlineimap from a Podman container on the host.
- Ideally, I'd have offlineimap run daily at, say, 10:00 PM. On a non-atomic OS, I'd just use the offlineimap systemd service and timer. Thankfully, Podman has the ability to generate a systemd unit file for containers.
Solution
Build OCI image for offlineimap
The first thing I did was create a Containerfile.
FROM fedora:latest
LABEL maintainer "Jeffrey Serio <hyperreal@fedoraproject.org>"
RUN printf "fastestmirror=True\ndeltarpm=True\nmax_parallel_downloads=10\n" | tee -a /etc/dnf/dnf.conf \
&& dnf install -y offlineimap python3-distro \
&& dnf clean all \
&& mkdir /{mail,metadata}
CMD ["/usr/bin/offlineimap", "-o", "-u", "basic", "-c", ".offlineimaprc"]
I then built the container image locally:
podman build -t localhost/offlineimap:latest .
Create container with OCI image
Once that was done, I created a container for it. I mapped the offlineimap metadata directory, mail directory, and offlineimaprc as volumes for the container.
podman create -it --name offlineimap \
-v ~/mail:/mail:Z \
-v ~/.offlineimap-metadata:/metadata:Z \
-v ~/.offlineimaprc:/.offlineimaprc:Z \
localhost/offlineimap:latest
Generate the systemd unit file for container
With the container created, I can now run the podman command to generate the systemd unit file.
- --new: sets the container to be created and removed before and after each run with the --rm flag.
- --name: name of the container to generate the systemd unit file from.
- --files: outputs the files to the current working directory instead of stdout.
podman generate systemd --new --name offlineimap --files
The file looks like this:
# container-offlineimap.service
# autogenerated by Podman 4.1.1
# Tue Aug 16 12:45:40 CDT 2022
[Unit]
Description=Podman container-offlineimap.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-d \
--replace \
-it \
--name offlineimap \
--network host \
-v /var/home/jas/mail:/mail:Z \
-v /var/home/jas/.offlineimap-metadata:/metadata:Z \
-v /var/home/jas/.offlineimaprc:/.offlineimaprc:Z localhost/offlineimap
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Create and enable systemd timer
Now I have to create a systemd timer for this service:
[Unit]
Description=Run container-offlineimap.service daily at 22:00:00
[Timer]
OnCalendar=*-*-* 22:00:00
Persistent=true
[Install]
WantedBy=timers.target
I move these files to ~/.config/systemd/user, and enable the timer in user mode:
systemctl --user enable container-offlineimap.timer
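To confirm the schedule, or to trigger a one-off run without waiting until 10:00 PM, the units can be checked directly (the unit names below assume the generated files above):
systemctl --user list-timers container-offlineimap.timer
systemctl --user start container-offlineimap.service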
So now, at 10:00 PM every night, the systemd timer will trigger the container-offlineimap.service, which will pull mail from the remote Proton Mail server and store it in the /var/home/jas/mail directory. Yay!
Last updated: 2024-05-19
Change default character limit on Mastodon
Step 1
On your Mastodon server, stop the Mastodon systemd services, then become the mastodon user:
systemctl stop mastodon-web mastodon-streaming mastodon-sidekiq
su - mastodon
Step 2
For the sake of this post, we'll assume we want to change the default character limit from 500 to 2000 characters.
Mastodon v4.3.0+:
Edit live/app/javascript/mastodon/features/compose/containers/compose_form_container.js.
Find the line containing the default character limit. There is only one place in which this number is coded in the file. Use your text editor's search function to find the following line:
maxChars: state.getIn(['server', 'server', 'configuration', 'statuses', 'max_characters'], 500)
Replace 500 with 2000.
Earlier Mastodon versions:
Edit live/app/javascript/mastodon/features/compose/components/compose_form.js.
Find the line containing the default character limit. There are two places in which this number is coded in the file.
You can find it by using your text editor's search function and searching for the string "length(fulltext) > 500". Change the number to 2000.
return !(isSubmitting || isUploading || isChangingUpload || length(fulltext) > 2000 || (isOnlyWhitespace && !anyMedia));
Search with your text editor for the string "CharacterCounter max={500}". Change the number to 2000.
<div className='character-counter__wrapper'>
<CharacterCounter max={2000} text={this.getFulltextForCharacterCounting()} />
</div>
Step 3
Edit live/app/validators/status_length_validator.rb. There is only one place in which this number is coded in the file.
Search with your text editor for the string "MAX_CHARS". Change it to 2000.
class StatusLengthValidator < ActiveModel::Validator
MAX_CHARS = 2000
Step 4
Edit live/app/serializers/rest/instance_serializer.rb.
Mastodon v4.3.0+:
Search for the string "attributes :domain". Add :max_toot_chars after :api_versions.
attributes :domain, :title, :version, :source_url, :description,
:usage, :thumbnail, :icon, :languages, :configuration,
:registrations, :api_versions, :max_toot_chars
Earlier Mastodon versions:
Search for the string "attributes :domain". Add :max_toot_chars after :registrations.
attributes :domain, :title, :version, :source_url, :description,
:usage, :thumbnail, :languages, :configuration,
:registrations, :max_toot_chars
Search for the string "private". Place the following code block above it, followed by a blank newline:
def max_toot_chars
2000
end
Step 5
Recompile Mastodon assets:
cd live
export NODE_OPTIONS=--openssl-legacy-provider # Fedora hosts only
RAILS_ENV=production bundle exec rails assets:precompile
Step 6
As the root user, restart the Mastodon systemd services:
systemctl start mastodon-web mastodon-streaming mastodon-sidekiq
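As a quick sanity check once the services are back up, the new limit should be visible in the instance API. A sketch, assuming your instance is at example.com (depending on your Mastodon version the attribute is exposed via /api/v1/instance or /api/v2/instance):
curl -s https://example.com/api/v1/instance | grep -o '"max_toot_chars":[0-9]*'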
That's pretty much all there is to it. May your toots now be sufficiently nuanced!
Install Fedora on RockPro64 with boot on eMMC and rootfs on SSD
This guide goes through the steps of installing Fedora Server on a RockPro64 with the boot and EFI partitions on an eMMC module and rootfs on a SATA or NVMe SSD.
This guide describes installing a vanilla Fedora Server image, but it could also be used for moving an already existing Fedora rootfs to an SSD. If there is already a Fedora installation on the eMMC module, then you can start at step 3.
Requirements
- RockPro64
- Pine64 eMMC module (of at least 16GB)
- microSD card (of at least 16GB) with Armbian flashed to it, though any RockPro64-compatible Linux distro should work as long as it's not Fedora with LVM2.
- SATA or NVMe SSD (of any size reasonable for your use-case)
- One of the following: a SATA PCI-e interface card or an NVMe PCI-e interface card
Make sure all of the above components are ready for use; i.e., the eMMC module is installed, the SATA or NVMe PCI-e interface card is installed with the SSD attached, and the microSD card has a usable Linux distro.
Plug your Armbian microSD card into the slot on your RockPro64, and boot it up. This will be used as a maintenance/rescue disk.
1. Downloading and verifying the Fedora image
On the booted Armbian, download a fresh Fedora aarch64 raw image from the Fedora website (I'm using Fedora Server in this example):
wget https://download.fedoraproject.org/pub/fedora/linux/releases/37/Server/aarch64/images/Fedora-Server-37-1.7.aarch64.raw.xz
Download Fedora's GPG keys and checksum files for the raw image you're using:
curl -O https://getfedora.org/static/fedora.gpg
wget https://getfedora.org/static/checksums/37/images/Fedora-Server-37-1.7-aarch64-CHECKSUM
Verify the CHECKSUM file is valid:
Note: On Armbian or other Linux distros, you may need to install the package that provides the gpgv command. On Armbian, this package is called gpgv.
gpgv --keyring ./fedora.gpg *-CHECKSUM
The CHECKSUM file should have a good signature from one of the Fedora keys.
Check that the downloaded image's checksum matches:
sha256sum -c *-CHECKSUM
If the output says OK, then it's ready to use.
2. Flashing the Fedora aarch64 image to the eMMC module
If you have an eMMC-to-USB adapter, you can flash Fedora aarch64 onto the eMMC module from your main PC via a USB port: run arm-image-installer from your main PC, then place the eMMC module onto your RockPro64 before step 3.
Run lsblk to ensure the eMMC module and SSD are detected.
Clone the Fedora arm-image-installer git repository:
git clone https://pagure.io/arm-image-installer.git
cd arm-image-installer
Create a directory for the arm-image-installer and copy the files from the git repository:
sudo mkdir /usr/share/arm-image-installer
sudo cp -rf boards.d socs.d /usr/share/arm-image-installer/
sudo cp -fv arm-image-installer rpi-uboot-update spi-flashing-disk update-uboot /usr/bin/
With the arm-image-installer files in place, it's time to flash the Fedora aarch64 raw image to the eMMC module.
We'll assume the microSD card we're booted from is /dev/mmcblk0 and the eMMC module is /dev/mmcblk1:
sudo arm-image-installer \
--image=Fedora-Server-37-1.7-aarch64.raw.xz \
--target=rockpro64-rk3399 \
--media=/dev/mmcblk1 \
--norootpass \
--resizefs \
--showboot \
--relabel
You can also pass the --addkey flag to add your SSH public key to the root user's authorized_keys file on the flashed image.
When the arm-image-installer finishes, your eMMC module should have Fedora with the following filesystem layout:
- /boot mounted on /dev/mmcblk1p2
- /boot/efi mounted on /dev/mmcblk1p1
- / as an LVM2 member on /dev/mmcblk1p3
The LVM2 member is a physical volume on /dev/mmcblk1p3. This physical volume belongs to a volume group named fedora, which contains a logical volume named root, formatted as XFS. You can check this with the following commands:
sudo pvs
PV VG Fmt Attr PSize PFree
/dev/mmcblk1p3 fedora lvm2 a-- <15.73g 0
sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root fedora -wi-ao---- <15.73g
Now we're ready to move the rootfs on the logical volume to the SSD.
3. Move the rootfs on the logical volume to the SSD
Since I have a SATA SSD, mine will be named /dev/sda. If you have an NVMe SSD, the name will be something like /dev/nvme0n1.
Ensure the volume group is active:
sudo vgchange -ay
1 logical volume(s) in volume group "fedora" now active
Use fdisk on the SSD to create a GPT label and partition:
sudo fdisk /dev/sda
At the fdisk prompt:
- enter g to create a GPT label
- enter n to create a new partition
- enter w to write the changes to the SSD
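Before creating the physical volume, it doesn't hurt to confirm the new partition exists. Assuming the SATA SSD at /dev/sda as above:
sudo lsblk -o NAME,SIZE,TYPE /dev/sda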
Create a physical volume on the newly created SSD partition:
sudo pvcreate /dev/sda1
Extend the fedora volume group to the new physical volume:
sudo vgextend fedora /dev/sda1
Move the allocated physical extents from the old physical volume to the new one. Note that you don't need to specify the destination physical volume, as pvmove will use the "normal" allocation policy for the fedora volume group. According to the Red Hat documentation, the "normal" allocation policy does not place parallel stripes on the same physical volume, and since the only other physical volume in the volume group is /dev/sda1, pvmove infers the destination is /dev/sda1.
sudo pvmove /dev/mmcblk1p3
This command will take a while, depending on how much data is being moved.
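pvmove prints its progress periodically. If you want to watch it from another terminal, a sketch using lvs (the hidden pvmove volume shows up while the move is running):
sudo lvs -a -o +devices fedora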
When the pvmove command completes, remove the old physical volume from the volume group:
sudo vgreduce fedora /dev/mmcblk1p3
4. Mount the logical volume rootfs, chroot, and update GRUB
The logical volume should now be /dev/fedora/root or /dev/mapper/fedora-root.
Mount the logical volume rootfs and the proc, sys, and dev filesystems:
sudo mount /dev/fedora/root /mnt
sudo mount -t proc proc /mnt/proc
sudo mount --rbind /sys /mnt/sys
sudo mount --rbind /dev /mnt/dev
Mount the boot and EFI partitions:
sudo mount /dev/mmcblk1p2 /mnt/boot
sudo mount /dev/mmcblk1p1 /mnt/boot/efi
Chroot into the new rootfs:
sudo chroot /mnt /bin/bash
Update the GRUB bootloader:
grub2-mkconfig -o /boot/grub2/grub.cfg
Note that the updated GRUB might detect the Linux distro you're booting from on the microSD card. This can be updated again when you boot into the new rootfs.
Once GRUB finishes updating, exit the chroot:
exit
Unmount the boot, EFI, proc, dev, and sys filesystems:
sudo umount -R /mnt/
We should now be able to boot into the new rootfs on the SSD.
sudo systemctl poweroff
With the RockPro64 powered off, remove the microSD card, then power it back on.
"Here goes...something..." --me
If "something" turns out to be "booted into the new rootfs nicely", you should be at the Fedora setup menu, or if you already had a Fedora installation, you should be at the tty login prompt. Yay, it works! You're now using the eMMC module for the boot and EFI partitions and the SSD for the root partition. Enjoy the better disk read and write performance.
Postscript
While booted on the new root partition, you can remove the old physical volume, which should now be on /dev/mmcblk0p3 (the eMMC device name may have changed now that the microSD card is removed).
sudo pvremove /dev/mmcblk0p3
Now we can resize the logical volume containing the root filesystem to its maximum space in the volume group, and then grow the XFS filesystem:
sudo lvextend -l +100%FREE /dev/fedora/root
sudo xfs_growfs /dev/fedora/root
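A quick check that the root filesystem now spans the whole SSD:
df -h /
sudo vgs fedora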
Restoring root and home Btrfs subvolumes to a new Fedora install
Premise
Suppose we have a Fedora installation on a Btrfs filesystem, with two subvolumes called "root" and "home". If something happened to our system and it could not boot properly, we might want to reinstall Fedora, but we may not want to reinstall all the packages and re-setup the configurations we had. In most cases, restoring the "home" subvolume is seamless, but the root subvolume relies on the presence and contents of certain files in order to be bootable. It's possible to reinstall Fedora, restore the "root" and "home" subvolumes from snapshots, and boot back into the system as if nothing had happened since the time the snapshots were taken. I'll describe this process below. We'll have to do these tasks from a Fedora live environment.
Requirements
- external HD with the capacity to hold both the "root" and "home" snapshots
- Fedora live CD
For the purpose of this guide, we'll assume the following:
- internal Fedora disk:
/dev/nvme0n1p3
to be mounted at/mnt/internal
- external HD contains a Btrfs partition
/dev/sda1
to be mounted at/mnt/external
Create read-only snapshots of "root" and "home"
If our disk is encrypted, we'll need to run the following command as root to unlock it:
cryptsetup luksOpen /dev/nvme0n1p3 fedora
This will map the decrypted volume to /dev/mapper/fedora.
The decrypted volume contains the top-level of the Btrfs volume originally created by the Anaconda installer. The top-level contains the "root" and "home" subvolumes. We'll mount this top-level volume at /mnt/internal:
mount /dev/mapper/fedora /mnt/internal
Now create read-only snapshots of the "root" and "home" subvolumes. They must be read-only in order to send them to an external HD. The snapshot destination must also be on the same Btrfs filesystem, or else we'll get an "Invalid cross-device link" error.
btrfs subvolume snapshot -r /mnt/internal/root /mnt/internal/root-backup-ro
btrfs subvolume snapshot -r /mnt/internal/home /mnt/internal/home-backup-ro
Send the read-only snapshots to the external HD.
btrfs send /mnt/internal/root-backup-ro | btrfs receive /mnt/external/
btrfs send /mnt/internal/home-backup-ro | btrfs receive /mnt/external/
This might take a while depending on the size of the snapshots, the CPU, and the read/write performance of the disks.
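If the pv utility is installed in the live environment, it can be inserted into the pipe for a rough progress readout; a sketch for the root snapshot:
btrfs send /mnt/internal/root-backup-ro | pv | btrfs receive /mnt/external/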
After the snapshots are sent, unmount the devices.
umount -lR /mnt/internal
umount -lR /mnt/external
Run the Anaconda installer to install a fresh Fedora system
This is straightforward. Just run the installer and continue to the next step. DO NOT REBOOT after the installation finishes.
Send the read-only snapshots back to the internal disk
After the installation, go back to the terminal and mount the internal and external disks again. If we've encrypted the internal disk, it should still be decrypted and mapped to /dev/mapper/luks-foo-bar-bas.
With encryption
mount /dev/mapper/luks-foo-bar-bas /mnt/internal
Unencrypted
mount /dev/nvme0n1p3 /mnt/internal
Mount the external HD again.
mount /dev/sda1 /mnt/external
Send the read-only snapshots back to the internal disk.
btrfs send /mnt/external/root-backup-ro | btrfs receive /mnt/internal/
btrfs send /mnt/external/home-backup-ro | btrfs receive /mnt/internal/
Create read-write subvolumes from the read-only snapshots
Create read-write snapshots from the read-only ones. These new read-write snapshots are regular subvolumes.
btrfs subvolume snapshot /mnt/internal/root-backup-ro /mnt/internal/root-backup-rw
btrfs subvolume snapshot /mnt/internal/home-backup-ro /mnt/internal/home-backup-rw
We can now delete the read-only snapshots from both the internal and external disks, if we want to.
btrfs subvolume delete /mnt/internal/root-backup-ro
btrfs subvolume delete /mnt/internal/home-backup-ro
btrfs subvolume delete /mnt/external/root-backup-ro
btrfs subvolume delete /mnt/external/home-backup-ro
The /mnt/internal directory should now contain the following subvolumes:
- root
- home
- root-backup-rw
- home-backup-rw
Copy important files from root to root-backup-rw
The following files need to be present on the root-backup-rw subvolume in order to boot into it:
- /etc/crypttab: If we've encrypted our internal disk, this file contains the mount options for the encrypted partition. The Anaconda installer generates a unique UUID for the encrypted partition, so this file needs to be in place when it's referenced during the boot process. The /etc/crypttab that currently exists on the root-backup-rw subvolume has the UUID that was generated in the previous installation, so the boot process will fail if it contains the old contents.
- /etc/fstab: This file contains the unique UUIDs for each mount point. As with crypttab, the boot process will fail if the contents are different from what the Anaconda installer generated in the current installation.
- /etc/kernel/cmdline: This file also contains the above information and will cause the boot process to fail if it doesn't match the current installation.
cp -v /mnt/internal/root/etc/crypttab /mnt/internal/root-backup-rw/etc/crypttab
cp -v /mnt/internal/root/etc/fstab /mnt/internal/root-backup-rw/etc/fstab
cp -v /mnt/internal/root/etc/kernel/cmdline /mnt/internal/root-backup-rw/etc/kernel/cmdline
Delete root and home subvolumes (optional)
Since all our data is on root-backup-rw and home-backup-rw, we no longer need the root and home subvolumes and can safely delete them.
btrfs subvolume delete /mnt/internal/root
btrfs subvolume delete /mnt/internal/home
Rename root-backup-rw and home-backup-rw
The subvolumes are referenced in /etc/fstab and elsewhere as root and home. If we chose not to delete the root and home subvolumes in the previous step, we have to rename them out of the way first (e.g., to root-old and home-old) before running the mv commands below. From /mnt/internal, run:
mv root-backup-rw root
mv home-backup-rw home
Reinstall the kernel and bootloader
The kernel that's installed by the Anaconda installer is likely older than the kernel on the snapshot root filesystem. We'll have to chroot and reinstall the kernel and bootloader so that we can boot into the right kernel.
First, unmount the top-level Btrfs volume of the internal disk, then mount the root subvolume, and prepare the mountpoints for chroot.
umount -lR /mnt/internal
mount -o subvol=root,compress=zstd:1 /dev/mapper/luks-foo-bar-bas /mnt/internal
for dir in /dev /proc /run /sys; do mount --rbind $dir /mnt/internal$dir; done
mount -o subvol=home,compress=zstd:1 /dev/mapper/luks-foo-bar-bas /mnt/internal/home
mount /dev/nvme0n1p2 /mnt/internal/boot
mount /dev/nvme0n1p1 /mnt/internal/boot/efi
cp -v /etc/resolv.conf /mnt/internal/etc/
PS1='chroot# ' chroot /mnt/internal /bin/bash
While in the chroot, ensure we're connected to the internet, then reinstall the kernel and grub.
ping -c 3 www.google.com
dnf reinstall -y kernel* grub*
exit
Our system should have no problems booting now.
That's about it
Unmount the filesystems, reboot, and hope for the best.
umount -lR /mnt/internal
umount -lR /mnt/external
systemctl reboot
For more information on working with Btrfs subvolumes and snapshots, have a look at Working with Btrfs - Snapshots by Andreas Hartmann. It's been super helpful to me throughout this adventure.
Setup a LAN container registry with Podman and self-signed certs
Prerequisites
- RHEL-compatible or Fedora-based Linux distribution
- a LAN (presumably with access to other machines on the LAN)
- Podman
- OpenSSL
Install the required packages:
sudo dnf install '@container-management' openssl-devel openssl
Self-signed certificate
The benefit of having at least a self-signed certificate is that you can encrypt the traffic to your container registry over your LAN. You know, in the event there is an unknown entity hiding in your house or office, snooping on your local HTTP traffic. A self-signed certificate is fine for LAN-wide access, because presumably you trust yourself; however, if you want something that is accessible from the public Internet then you'd want a certificate signed by a Certificate Authority, because other people on the public Internet using it don't know if they should trust you to encrypt their HTTP traffic.
Create directories to hold the self-signed certificate and htpasswd authorization.
mkdir -p ~/registry/{auth,certs}
Create a subjectAltName configuration file (san.cnf). This will contain information and settings for the self-signed certificate. The information I have in the [req_distinguished_name] and [alt_names] sections is an example; you should enter information that is relevant to your use case.
Change into the ~/registry/certs directory and create a file named san.cnf with the following contents:
[req]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
countryName = US
stateOrProvinceName = Illinois
localityName = Chicago
commonName = 127.0.0.1: Self-signed certificate
[req_ext]
subjectAltName = @alt_names
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1 = 10.0.0.128
DNS = something.local
Make sure to change the IP.1 and DNS values to the LAN IP address and hostname of your registry server machine.
Make sure you're in the ~/registry/certs directory. Now generate the key and self-signed certificate:
openssl req -new -nodes -sha256 -keyout something.local.key -x509 -days 365 -out something.local.crt --config san.cnf
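To double-check that the subjectAltName entries made it into the certificate, inspect it with openssl:
openssl x509 -in something.local.crt -noout -text | grep -A 1 'Subject Alternative Name'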
htpasswd authentication
Now we need to generate an htpasswd file so that you can authenticate to the registry server. We'll use the registry:2.7.0 container image with an entrypoint to do this. First, change into the auth subdirectory.
Note: you must use the 2.7.0 tag for the registry image, because it seems to be the only one that has the htpasswd command available for the entrypoint flag.
cd ../auth
podman run --rm --entrypoint htpasswd registry:2.7.0 -Bbn USERNAME PASSWORD > htpasswd
The USERNAME and PASSWORD you choose will be used with the podman login command to authenticate you to the registry server.
Deploy the registry server
Create a directory on the host to store the registry data:
sudo mkdir -p /var/lib/registry
To deploy the registry server, run:
Note: The port mapping of 443:443 is for if you have no other web server running on the machine running the registry server. If you do have another web server, then change the port mapping to 5000:5000.
sudo podman run \
--privileged \
-d \
--name registry \
-p 443:443 \
-v /var/lib/registry:/var/lib/registry:Z \
-v "$HOME/registry/auth:/auth:Z" \
-v "$HOME/registry/certs:/certs:Z" \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/something.local.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/something.local.key \
registry:latest
The registry container should run without incident. You can run sudo podman logs registry and sudo podman ps to ensure everything is as it should be.
If you get an error that tells you that 443 is a privileged port, you can choose a port higher than 1024, or you can add the following line to /etc/sysctl.conf:
net.ipv4.ip_unprivileged_port_start=443
Then, to load the new value from /etc/sysctl.conf, run:
sudo sysctl -p /etc/sysctl.conf
Accessing the registry from another machine
Since we're using a self-signed certificate, we're going to be our own Certificate Authority (CA). We can use ~/registry/certs/something.local.crt as our CA root certificate, so we'll need to copy the contents of that file to the clipboard or copy the entire file to the machine from which you want to access the registry.
Run these commands on the other machine
Copy something.local.crt via SSH:
scp -v youruser@something.local:/home/youruser/registry/certs/something.local.crt .
Now we need to create a directory to store the CA in a place where the Docker daemon or Podman will look for it.
sudo mkdir -p /etc/containers/certs.d/something.local:443
If you're running Docker on the other machine, then change /etc/containers to /etc/docker. If you're using a port other than 443, then make sure to use that port in the name of the CA directory.
Now copy or move the CA file to the newly created directory, and make sure the resulting filename is ca.crt:
sudo mv something.local.crt /etc/containers/certs.d/something.local:443/ca.crt
Now you can try to log in to the registry with the USERNAME and PASSWORD you created earlier:
podman login -u USERNAME -p PASSWORD something.local
If this works, you should see "Login succeeded!" printed to the console. You can now push and pull images to and from your self-hosted container image registry.
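As a quick end-to-end test, you can tag and push a small image to the new registry; the image name here is just an example:
podman pull docker.io/library/alpine:latest
podman tag docker.io/library/alpine:latest something.local/alpine:latest
podman push something.local/alpine:latest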
OPTIONAL: Using the container registry for ostree images
If you're running an immutable ostree version of Fedora such as Silverblue or Kinoite, you can use your self-hosted registry to store customized container images to rebase to. Just make sure that the registry is on a LAN server machine that is always up and running. To rebase from a container image in your registry, we have to make sure that rpm-ostree knows you're authenticated to the registry. Run the podman login command with the following flags:
podman login -u USERNAME -p PASSWORD something.local -v
The -v flag is especially important, as it will show the path of the auth.json file that contains the authentication info. The file should be /run/user/1000/containers/auth.json. We simply need to copy that file to /etc/ostree.
sudo cp -v /run/user/1000/containers/auth.json /etc/ostree/
Now rpm-ostree knows you're authenticated. Assuming you've built and pushed your custom ostree container image to your self-hosted registry, you can rebase to it with the following command:
rpm-ostree rebase --experimental ostree-unverified-registry:something.local/custom_silverblue:latest
Setup ArchiveBox on RHEL-compatible distros
From their website: ArchiveBox "is a powerful, self-hosted internet archiving solution to collect, save, and view sites you want to preserve offline." It offers a command-line tool, web service, and desktop apps for Linux, macOS, and Windows.
There are several ways to install ArchiveBox. The developers recommend installing with docker-compose, but this gets a little cumbersome when we're running a RHEL-compatible Linux distro that has strong opinions on container management and prefers Podman over the "old way" (aka Docker). I've personally found it easier to install ArchiveBox with Python's pipx tool and have my web server reverse proxy the ArchiveBox server.
Prerequisites
- Preferably a filesystem with compression and deduplication capabilities, such as BTRFS or ZFS, but any journaling filesystem will work fine if we have another way to backup the archives.
- Minimum of 500MB of RAM, but 2GB or more is recommended for chrome-based archiving methods.
Installing dependencies
To get started, we'll install pipx and the Python development package:
sudo dnf install python3-pip python3-devel pipx
Next, we'll install required dependencies, some of which may already be available on our system:
sudo dnf install wget curl git libatomic zlib-ng-devel openssl-devel openldap-devel libgsasl-devel python3-ldap python3-msgpack python3-mutagen python3-regex python3-pycryptodomex procps-ng ldns-utils ffmpeg-free ripgrep
We'll need a recent version of NodeJS. On AlmaLinux, Rocky Linux, RHEL, or CentOS Stream, we can install version 20 by enabling its module with DNF.
sudo dnf module install nodejs:20
On Fedora, we can install the latest NodeJS version from the repositories.
sudo dnf install nodejs
Then, we'll install optional dependencies. If we want to use chrome-based archiving methods, such as fetching PDFs, screenshots, and the DOM of web pages, we'll need to install the Chromium package. If we want to archive YouTube videos, we'll need the yt-dlp package.
sudo dnf install yt-dlp chromium
Now we'll install ArchiveBox with pipx:
pipx install "archivebox[ldap,sonic]"
Initializing the ArchiveBox database
Create a data directory in the home directory of the user that will run ArchiveBox:
mkdir data
cd data
Run the initialization:
archivebox init --setup
The setup wizard will prompt us to enter a username, email address, and password. This will allow us to login to our ArchiveBox web dashboard.
Now we need to create a systemd service for the ArchiveBox server. Create the file at ~/.config/systemd/user/archivebox.service.
[Unit]
Description=Archivebox server
After=network.target network-online.target
Requires=network-online.target
[Service]
Type=simple
Restart=always
ExecStart=bash -c '$HOME/.local/bin/archivebox server 0.0.0.0:8000'
WorkingDirectory=%h/data
[Install]
WantedBy=default.target
Reload the daemons:
systemctl --user daemon-reload
Enable and start the archivebox.service:
systemctl --user enable --now archivebox.service
If we're running a web server already, we can reverse proxy the archivebox server on port 8000. I use Caddy, so this is what I have in my Caddyfile:
archive.hyperreal.coffee {
reverse_proxy 0.0.0.0:8000
}
If we're not already running a web server, then we might need to open port 8000 in our firewalld's default zone:
sudo firewall-cmd --zone=public --permanent --add-port=8000/tcp
sudo firewall-cmd --reload
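To confirm the port is now open in the default zone:
sudo firewall-cmd --zone=public --list-ports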
We should now be able to access our ArchiveBox instance from our web server domain or from our localhost by pointing our web browser at http://localhost:8000.
Here are a few examples of what we can do with the ArchiveBox command-line tool:
archivebox add "https://techne.hyperreal.coffee"
archivebox add < ~/Downloads/bookmarks.html
curl https://example.com/some/rss/feed.xml | archivebox add
We can specify the depth if we want to archive all URLs within the web page of the given URL:
archivebox add --depth=1 https://example.com/some/feed.RSS
We can run archivebox on a cron schedule:
archivebox schedule --every=day --depth=1 http://techrights.org/feed/
'Dassit! Enjoy ArchiveBox :-)
Setup a Mastodon instance on Fedora Server
I'll go through the steps to set up a Mastodon instance on Fedora Server. This guide is based on the original Install from source guide in the Mastodon documentation, but includes Fedora-specific tweaks such as packages, file path differences, and SELinux policies. I'll first go through prerequisites and basic setup, and then branch off in the style of a choose-your-own-adventure. Section 2 is for installing a new Mastodon instance; Section 3 is for migrating from an existing Mastodon instance; Section 4 is for setting up Mastodon with Nginx and Certbot; Section 5 is for setting up Mastodon with Caddy; Section 6 covers SELinux policy modules that need to be enabled for some critical services and executables to work.
This guide presumes the following:
- Vanilla Fedora 37 Server install with SELinux in enforcing mode, fail2ban, and firewalld with the HTTP/S ports open.
- Mastodon version 4.0.2
- You have a domain name for hosting the Mastodon instance.
- You know how to configure SMTP, should you want it.
I'll come back and update the guide as necessary for new releases of the software.
Prerequisites and basic setup
Become the root user and install the following packages:
dnf install postgresql-server postgresql-contrib ImageMagick ImageMagick-devel ffmpeg-free ffmpeg-free-devel libpq libpq-devel libxml2-devel libxslt-devel file git-core '@c-development' '@development-tools' protobuf-devel pkgconf-pkg-config nodejs bison openssl-devel libyaml-devel readline-devel zlib-devel ncurses-devel libffi-devel gdbm-devel redis libidn-devel libicu-devel jemalloc-devel perl-FindBin
Install corepack and set the yarn version:
npm install -g corepack
corepack enable
yarn set version classic
Add the mastodon user, then switch to it:
adduser -m -U mastodon
su - mastodon
As the mastodon user, install rbenv and rbenv-build:
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
cd ~/.rbenv
src/configure
make -C src
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec bash
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
Install the required Ruby version:
RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.6
rbenv global 3.0.6
Install bundler:
gem install bundler --no-document
Return to the root user:
exit
Setup postgresql:
postgresql-setup --initdb --unit postgresql
systemctl enable --now postgresql
Become the postgresql user and run psql:
su - postgres
psql
In the psql prompt, create the database role:
CREATE USER mastodon CREATEDB;
\q
Go back to the root user:
exit
Become the mastodon user:
su - mastodon
Check out the Mastodon code:
git clone https://github.com/mastodon/mastodon.git live
cd live
git checkout $(git tag -l | grep -v 'rc[0-9]*$' | sort -V | tail -n 1)
Install Ruby and JavaScript dependencies:
Note that the version of NodeJS installed on Fedora uses OpenSSL 3, but Mastodon requires a version of Node with OpenSSL 1.1. We can remedy this by setting export NODE_OPTIONS=--openssl-legacy-provider.
bundle config deployment 'true'
bundle config without 'development test'
bundle install -j$(getconf _NPROCESSORS_ONLN)
export NODE_OPTIONS=--openssl-legacy-provider
yarn install --pure-lockfile
This is the end of the prerequisites and basic setup. Choose one of the options below to continue.
Install a new Mastodon instance
After running the steps from the previous section, we can now run the interactive setup wizard to setup a new Mastodon instance:
cd /home/mastodon/live
export NODE_OPTIONS=--openssl-legacy-provider
RAILS_ENV=production bundle exec rake mastodon:setup
This will:
- Create a configuration file
- Run asset precompilation
- Create the database schema
Choose one of the options below to continue.
Migrate from an existing Mastodon instance
After running the steps from the prerequisites and basic setup section, we can now start migrating the data from an existing Mastodon instance.
Run these commands on the old server machine
Stop the mastodon systemd services:
systemctl stop mastodon-web mastodon-sidekiq mastodon-streaming
Become the mastodon user:
su - mastodon
Dump the postgresql database to /home/mastodon/mastodon_production.dump:
pg_dump -Fc mastodon_production -f mastodon_production.dump
Copy the following files from the old server machine to the same paths on the new server machine using rsync or whatever method you think best:
- The /home/mastodon/live/public/system directory, which contains user-uploaded images and videos. This is not required if you're using S3.
- /home/mastodon/live/.env.production, which contains the server config and secrets.
- /home/mastodon/mastodon_production.dump
- Your web server configuration
Run these commands on the new server machine
Ensure the Redis server is started:
systemctl enable --now redis
Become the mastodon user:
su - mastodon
Create an empty database:
createdb -T template0 mastodon_production
Import the postgresql database:
pg_restore -Fc -U mastodon -n public --no-owner --role=mastodon -d mastodon_production mastodon_production.dump
Precompile Mastodon's assets:
cd live
export NODE_OPTIONS=--openssl-legacy-provider
RAILS_ENV=production bundle exec rails assets:precompile
Rebuild the home timelines for each user:
RAILS_ENV=production ./bin/tootctl feeds build
Go back to root user:
exit
As root, start the Mastodon systemd services:
systemctl enable --now mastodon-web mastodon-sidekiq mastodon-streaming
You can now update your DNS settings to point to the new server machine, rerun Certbot to update LetsEncrypt, etc. If you still need to set up a web server, you can choose one of the options below; otherwise, continue with the SELinux section.
Setup with Nginx and Certbot
The Mastodon repository provides an Nginx configuration. On Fedora, the Nginx configuration path is /etc/nginx/conf.d.
Become root on your Fedora Server and install Nginx and Certbot:
dnf install nginx certbot python3-certbot-nginx
Copy the Nginx configuration:
cp -v /home/mastodon/live/dist/nginx.conf /etc/nginx/conf.d/mastodon.conf
Edit /etc/nginx/conf.d/mastodon.conf and change example.com in the server_name directive to your Mastodon domain. You can make any other adjustments you need.
Ensure the syntax of the Nginx configuration is okay:
nginx -t
To acquire an SSL certificate, ensure the HTTP ports are open in your firewall:
firewall-cmd --zone=FedoraServer --permanent --add-service=http
firewall-cmd --zone=FedoraServer --permanent --add-service=https
firewall-cmd --reload
Now run Certbot to obtain the certificate (change example.com to your domain):
certbot --nginx -d example.com
Enable and start Nginx:
systemctl enable --now nginx.service
You can now go to the SELinux section.
Setup with Caddy
Add the Caddy repository and install Caddy:
dnf install 'dnf-command(copr)'
dnf copr enable @caddy/caddy
dnf install caddy
Create or edit the Caddyfile at /etc/caddy/Caddyfile
:
example.com {
@local {
file
not path /
}
@local_media {
path_regexp /system/(.*)
}
@streaming {
path /api/v1/streaming/*
}
@cache_control {
path_regexp ^/(emoji|packs|/system/accounts/avatars|/system/media_attachments/files)
}
root * /home/mastodon/live/public
log {
output file /var/log/caddy/mastodon.log
}
encode zstd gzip
handle_errors {
rewrite 500.html
file_server
}
header {
Strict-Transport-Security "max-age=31536000"
}
header /sw.js Cache-Control "public, max-age=0"
header @cache_control Cache-Control "public, max-age=31536000, immutable"
handle @local {
file_server
}
reverse_proxy @streaming {
to http://localhost:4000
transport http {
keepalive 5s
keepalive_idle_conns 10
}
}
reverse_proxy {
to http://localhost:3000
header_up X-Forwarded-Port 443
header_up X-Forwarded-Proto https
transport http {
keepalive 5s
keepalive_idle_conns 10
}
}
}
To allow Caddy to access files in the user home directory, the executable bit needs to be set on the parent directories of the files being served:
chmod +x /home/mastodon/live/public
chmod +x /home/mastodon/live
chmod +x /home/mastodon
chmod +x /home
You can now go to the SELinux section.
SELinux
At this point, a web server should be running, but if SELinux is in enforcing mode, you will get a 502 Bad Gateway error if you try to browse to your Mastodon domain. The problem is that SELinux is not allowing the web server daemon to access files in /home/mastodon/live.
This can be verified by running:
ausearch -m AVC -ts recent
Nginx
This can be fixed by setting the following SELinux booleans:
setsebool -P httpd_read_user_content=1
setsebool -P httpd_enable_homedirs=1
Caddy
You'll need to set an SELinux policy to allow Caddy to write to /var/log/caddy:
module caddy 1.0;
require {
type httpd_log_t;
type httpd_t;
class file write;
}
#============= httpd_t ==============
allow httpd_t httpd_log_t:file write;
Save this to a file named caddy.te. Now check, compile, and import the module:
checkmodule -M -m -o caddy.mod caddy.te
semodule_package -o caddy.pp -m caddy.mod
semodule -i caddy.pp
Set the SELinux booleans for httpd:
setsebool -P httpd_read_user_content=1
setsebool -P httpd_enable_homedirs=1
Restart Caddy.
bundle
SELinux also denies the /home/mastodon/.rbenv/shims/bundle executable. This can be verified by looking at journalctl -xeu mastodon-web.service and ausearch -m AVC -ts recent.
You'll need the following SELinux policy for bundle to work:
module bundle 1.0;
require {
type init_t;
type user_home_t;
class file { execute execute_no_trans open read };
}
#============= init_t ==============
allow init_t user_home_t:file { execute execute_no_trans open read };
Save this to a file named bundle.te. Now check, compile, and import the module:
checkmodule -M -m -o bundle.mod bundle.te
semodule_package -o bundle.pp -m bundle.mod
semodule -i bundle.pp
Restart the Mastodon systemd services:
systemctl restart mastodon-web mastodon-streaming mastodon-sidekiq
Your Mastodon instance should now be up and running!
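As a final check, Mastodon serves a simple health endpoint through the web server; it should return OK (replace example.com with your domain):
curl -s https://example.com/health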
Closing
If you have any questions, want to report any errors, or have any suggestions for improving this article, you can find me at the following places. You can also open up an issue in the GitHub interface.
- Mastodon: https://fedi.hyperreal.coffee/hyperreal
- IRC: hyperreal on Tilde.chat and Libera.chat
- Email: hyperreal AT fedoraproject DOT org
Apache
Install mod_evasive
sudo apt install libapache2-mod-evasive
Add mod_evasive to VirtualHost
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 60
DOSEmailNotify hyperreal@moonshadow.dev
</IfModule>
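On Debian/Ubuntu the package normally enables the module on install; if it isn't enabled yet, you can enable it before restarting (this assumes the module name used by the package):
sudo a2enmod evasive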
Restart apache2:
sudo systemctl restart apache2.service
Atop
Get lowest memfree for given analysis date
atopsar -r /var/log/atop/atop_20240703 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}' | head -11
- atopsar: atop's system activity report.
- -r /var/log/atop/atop_20240703: log file to use.
- -m: memory and swap occupation.
- -R 1: summarize 1 sample into one sample. The log file contains samples taken every 10 minutes, so this keeps each sample; -R 6 would summarize into one sample per 60 minutes.
- awk 'NR<7{print $0;next}{print $0| "sort -k 3,4"}': for number of input records (NR) less than 7, print the input record ($0), go to the next input record, and repeat the {print $0} pattern; the remaining records fall through to the second rule, {print $0| "sort -k 3,4"}, which prints them after piping them through the "sort -k 3,4" command. This avoids sorting the first 7 header lines of the atopsar output.
- head -11: get the top 11 lines of output.
Get top 3 memory processes for given analysis date
atopsar -G -r /var/log/atop/atop_20240710
Identify top-five most frequently executed process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | grep -oP "(?<=\()[[:alnum:]]{1,}(?=\))" | sort | uniq -c | sort -k1rn | head -5
Count the number of times a particular process has been detected during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | egrep "docker" | awk '{print $5}' | uniq -c -w5
Generate a chart of the number of instances of a particular process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | egrep "docker" | awk '{print $5}' | uniq -c -w8 | \
gnuplot -e "set terminal dumb 80 20; unset key; set style data labels; set xdata time; set xlabel 'Time'; set ylabel 'docker'; set timefmt '%H:%M:%S'; plot '-' using 2:1:ytic(1) with histeps"
Generate a PNG chart of the number of instances of a particular process during logging period
atop -r /var/log/atop/atop_20241123 -P PRG | awk '{print $5}' | uniq -c -w8 | \
gnuplot -e "set title 'Process Count'; set offset 1,1,1,1; set autoscale xy; set mxtics; set mytics; \
set style line 12 lc rgb '#ddccdd' lt 1 lw 1.5; set style line 13 lc rgb '#ddccdd' lt 1 lw 0.5; set grid xtics mxtics ytics mytics \
back ls 12, ls 13; set terminal png size 1920,1280 enhanced font '/usr/share/fonts/liberation/LiberationSans-Regular.ttf,10'; \
set output 'plot_$(date +'%Y-%m-%d_%H:%M:%S')_${RANDOM}.png'; set style data labels; set xdata time; set xlabel 'Time' font \
'/usr/share/fonts/liberation/LiberationSans-Regular.ttf,8'; set ylabel 'Count' font \
'/usr/share/fonts/liberation/LiberationSans-Regular.ttf,8'; set timefmt '%H:%M:%S'; plot '-' using 2:1 with histeps"
Identify top-ten most frequently executed binaries from /sbin or /usr/sbin during logging period
for i in $(atop -r /var/log/atop/atop_20241123 -P PRG | grep -oP "(?<=\()[[:alnum:]]{1,}(?=\))" | sort | uniq -c | sort -k1rn | head -10); do
which "${i}" 2>/dev/null | grep sbin;
done
Identify disks with over 90% activity during logging period
atopsar -r /var/log/atop/atop_20241123 -d | egrep '^[0-9].*|(9[0-9]|[0-9]{3,})%'
Identify processes responsible for most disk I/O during logging period
atopsar -r /var/log/atop/atop_20241123 -D | sed 's/\%//g' | awk -v k=50 '$4 > k || $8 > k || $12 > k' | sed -r 's/([0-9]{1,})/%/5;s/([0-9]{1,})/%/7;s/([0-9]{1,})/%/9'
Identify periods of heavy swap activity during logging period
atopsar -r /var/log/atop/atop_20241123 -s | awk -v k=1000 '$2 > k || $3 > k || $4 > k'
Identify logical volumes with high activity or high average queue during logging period
atopsar -r /var/log/atop/atop_20241123 -l -S | sed 's/\%//g' | awk -v k=50 -v j=100 '$3 > k || $8 > j' | sed -r 's/([0-9]{1,})/%/4'
Identify processes consuming more than half of all available CPUs during logging period
(( k = $(grep -c proc /proc/cpuinfo) / 2 * 100 ))
atopsar -r /var/log/atop/atop_20241123 -P | sed 's/\%//g' | awk -v k=$k '$4 > k || $8 > k || $12 > k' | sed -r 's/([0-9]{1,})/%/5;s/([0-9]{1,})/%/7;s/([0-9]{1,})/%/9'
Identify time of peak memory utilization during logging period
atopsar -r /var/log/atop/atop_20241123 -m -R 1 | awk 'NR<7{print $0;next}{print $0| "sort -k 3,3"}' | head -15
Bash
Split large text file into smaller files with equal number of lines
split -l 60 bigfile.txt prefix-
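The pieces can later be reassembled in order, since the generated suffixes (aa, ab, ac, ...) sort lexicographically; the output filename here is arbitrary:
cat prefix-* > bigfile-restored.txt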
Loop through lines of file
while read line; do
echo "$line";
done </path/to/file.txt
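A slightly more defensive variant of the same loop, for lines with leading whitespace or backslashes (a sketch):
while IFS= read -r line; do
    echo "$line";
done </path/to/file.txt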
Use grep to find URLs from HTML file
cat urls.html | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*"
- grep -E: egrep
- grep -o: only output what has been grepped
- (http|https): either http OR https
- a-zA-Z0-9: match all lowercase, uppercase, and digits
- . : match period
- / : match slash
- ? : match ?
- = : match =
- _ : match underscore
- % : match percent
- : : match colon
- - : match dash
- * : repeat the [...] group any number of times
Use Awk to print the first line of ps aux output followed by each grepped line
To find all cron processes with ps aux:
ps aux | awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk"
- ps aux: equivalent to ps -aux. -a displays info about other users' processes besides the current user's. -u displays info associated with the keywords user, pid, %cpu, %mem, vsz, rss, tt, state, start, time, and command. -x includes processes which do not have a controlling terminal. See man 1 ps.
- awk 'NR<2{print $0;next}{print $0 | "grep cron"}' | grep -v "awk": for number of input records (NR) less than 2, print the input record ($0), go to the next input record, and repeat the {print $0} pattern; the remaining records fall through to the second rule, {print $0 | "grep cron"}, which prints them after piping them through the "grep cron" command. This allows printing the first line of the ps aux output, which consists of the column labels, while filtering out everything besides what you want to grep for (e.g. "cron" processes).
- grep -v "awk": avoids printing the line containing this command.
Btrbk
On the host machine
Run these commands as root
Add a system user for btrbk:
useradd -c "Btrbk user" -m -r -s /bin/bash -U btrbk
Setup sudo for btrbk:
echo "btrbk ALL=NOPASSWD:/usr/sbin/btrfs,/usr/bin/readlink,/usr/bin/test" | tee -a /etc/sudoers.d/btrbk
Create a subvolume for each client:
mount /dev/sda1 /mnt/storage
btrfs subvolume create /mnt/storage/client_hostname
On each client machine
Create a dedicated SSH key:
mkdir -p /etc/btrbk/ssh
ssh-keygen -t ed25519 -f /etc/btrbk/ssh/id_ed25519
Add each client's SSH public key to /home/btrbk/.ssh/authorized_keys on the NAS machine:
ssh-copy-id -i /etc/btrbk/ssh/id_ed25519 btrbk@nas.local
Create /etc/btrbk/btrbk.conf on each client:
transaction_log /var/log/btrbk.log
snapshot_preserve_min latest
target_preserve 24h 7d 1m 1y
target_preserve_min 7d
ssh_user btrbk
ssh_identity /etc/btrbk/ssh/id_ed25519
backend btrfs-progs-sudo
snapshot_dir /btrbk_snapshots
target ssh://nas.local/mnt/storage/<client hostname>
subvolume /
subvolume /home
snapshot_create ondemand
Create directory to store btrbk snapshots on each client machine:
mkdir /btrbk_snapshots
Create /etc/systemd/system/btrbk.service:
[Unit]
Description=Daily btrbk backup
[Service]
Type=simple
ExecStart=/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run
Create /etc/systemd/system/btrbk.timer:
[Unit]
Description=Daily btrbk backup
[Timer]
OnCalendar=*-*-* 23:00:00
Persistent=true
[Install]
WantedBy=timers.target
Alternatively, create a shell script to be placed under /etc/cron.daily:
#!/usr/bin/env bash
set -e
/usr/bin/btrbk -q -c /etc/btrbk/btrbk.conf run >/dev/null
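Before relying on the timer or cron job, reload systemd, enable the timer, and exercise the configuration with btrbk's dry-run mode on a client (paths assume the config above):
sudo systemctl daemon-reload
sudo systemctl enable --now btrbk.timer
sudo btrbk -c /etc/btrbk/btrbk.conf dryrun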
Btrfs
Create systemd.mount unit for Btrfs on external HDD
internet_archive is used here as an example.
Get the UUID of the Btrfs partition.
sudo blkid -s UUID -o value /dev/sda1
d3b5b724-a57a-49a5-ad1d-13ccf3acc52f
Edit /etc/systemd/system/mnt-internet_archive.mount.
[Unit]
Description=internet_archive Btrfs subvolume
DefaultDependencies=yes
[Mount]
What=/dev/disk/by-uuid/d3b5b724-a57a-49a5-ad1d-13ccf3acc52f
Where=/mnt/internet_archive
Type=btrfs
Options=subvol=@internet_archive,compress=zstd:1
[Install]
WantedBy=multi-user.target
- DefaultDependencies=yes: the mount unit automatically acquires Before=umount.target and Conflicts=umount.target. Local filesystems automatically gain After=local-fs-pre.target and Before=local-fs.target. Network mounts, such as NFS, automatically acquire After=remote-fs-pre.target network.target network-online.target and Before=remote-fs.target.
- Options=subvol=@internet_archive,compress=zstd:1: use the subvolume @internet_archive and use zstd compression level 1.
Note that the name of the unit file, e.g. mnt-internet_archive.mount, must correspond to the Where=/mnt/internet_archive directive, such that the filesystem path separator / in the Where directive is replaced by a dash (-) in the unit file name.
Reload the daemons and enable the mount unit.
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-internet_archive.mount
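To verify the unit mounted the subvolume where expected:
findmnt /mnt/internet_archive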
Setup encrypted external drive for backups
Prepare the external drive
sudo cryptsetup --type luks2 -y -v luksFormat /dev/sda1
sudo cryptsetup -v luksOpen /dev/sda1 cryptbackup
sudo mkfs.btrfs /dev/mapper/cryptbackup
sudo mkdir /srv/backup
sudo mount -o noatime,compress=zstd:1 /dev/mapper/cryptbackup /srv/backup
sudo restorecon -Rv /srv/backup
Setup /etc/crypttab
sudo blkid -s UUID -o value /dev/sda1 | sudo tee -a /etc/crypttab
Add the following line to /etc/crypttab:
cryptbackup UUID=<UUID of /dev/sda1> none discard
Setup /etc/fstab
sudo blkid -s UUID -o value /dev/mapper/cryptbackup | sudo tee -a /etc/fstab
Add the following line to /etc/fstab:
UUID=<UUID of /dev/mapper/cryptbackup> /srv/backup btrfs compress=zstd:1,nofail 0 0
Reload the daemons:
sudo systemctl daemon-reload
Mount the filesystems:
sudo mount -av
btrfs-backup script
#!/usr/bin/env bash
LOGFILE="/var/log/btrfs-backup.log"
SNAP_DATE=$(date '+%Y-%m-%d_%H%M%S')
# Check if device is mounted
if ! grep "/srv/backup" /etc/mtab >/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup device is not mounted." | tee -a "$LOGFILE"
notify-send -i computer-fail "Backup device is not mounted"
exit 1
fi
create_snapshot() {
if ! btrfs subvolume snapshot -r "$1" "${1}/.snapshots/$2-$SNAP_DATE" >/dev/null; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error creating snapshot of $1" | tee -a "$LOGFILE"
notify-send -i computer-fail "Error creating snapshot of $1"
exit 1
else
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Create snapshot of $1: OK" | tee -a "$LOGFILE"
fi
}
send_snapshot() {
mkdir -p "/srv/backup/$SNAP_DATE"
if ! btrfs send -q "${1}/.snapshots/$2-$SNAP_DATE" | btrfs receive -q "/srv/backup/$SNAP_DATE"; then
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Error sending snapshot of $1 to /srv/backup" | tee -a "$LOGFILE"
notify-send -i computer-fail "Error sending snapshot of $1 to /srv/backup"
exit 1
else
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Send snapshot of $1 to /srv/backup: OK" | tee -a "$LOGFILE"
fi
}
# Create root and home snapshots
create_snapshot "/" "root"
create_snapshot "/home" "home"
# Send root and home snapshots
send_snapshot "/" "root"
send_snapshot "/home" "home"
Move/copy the script to /etc/cron.daily/btrfs-backup and make sure it is executable.
Caddy
IP whitelist
irc.hyperreal.coffee {
@me {
client_ip 1.2.3.4
}
handle @me {
reverse_proxy localhost:9000
}
respond "You are attempting to access protected resources!" 403
}
Reverse proxy for qBittorrent over Tailscale
I shall explain precisely what these directives do, as soon as I find out precisely what it is. I shall look into it, soon. It would be good to know something about how web servers, headers, and the HTTP protocol work, and what all this "origin", "referer", and "cross-origin" stuff means.
hostname.tailnet.ts.net:8888 {
reverse_proxy localhost:8080 {
header_up Host localhost:8080
header_up X-Forwarded-Host {host}:{hostport}
header_up -Origin
header_up -Referer
}
}
Cgit
Install Cgit with Caddy
Dependencies
Install the xcaddy package from its releases page. Then build Caddy with the caddy-cgi plugin:
xcaddy build --with github.com/aksdb/caddy-cgi/v2
Install remaining dependencies.
sudo apt install gitolite3 cgit python-is-python3 python3-pygments python3-markdown docutils-common groff
Configuration
Make a git user.
sudo adduser --system --shell /bin/bash --group --disabled-password --home /home/git git
Configure gitolite for the git user in ~/.gitolite.rc.
UMASK => 0027,
GIT_CONFIG_KEYS => 'gitweb.description gitweb.owner gitweb.homepage gitweb.category',
Add caddy user to the git group.
sudo usermod -aG git caddy
Configure cgit in /etc/cgitrc.
#
# cgit config
# see cgitrc(5) for details
css=/cgit/cgit.css
logo=/cgit/cgit.png
favicon=/cgit/favicon.ico
enable-index-links=1
enable-commit-graph=1
enable-log-filecount=1
enable-log-linecount=1
enable-git-config=1
branch-sort=age
repository-sort=name
clone-url=https://git.hyperreal.coffee/$CGIT_REPO_URL git://git.hyperreal.coffee/$CGIT_REPO_URL ssh://git@git.hyperreal.coffee:$CGIT_REPO_URL
root-title=hyperreal.coffee Git repositories
root-desc=Source code and configs for my projects
##
## List of common mimetypes
##
mimetype.gif=image/gif
mimetype.html=text/html
mimetype.jpg=image/jpeg
mimetype.jpeg=image/jpeg
mimetype.pdf=application/pdf
mimetype.png=image/png
mimetype.svg=image/svg+xml
# Enable syntax highlighting
source-filter=/usr/lib/cgit/filters/syntax-highlighting.py
# Format markdown, rst, manpages, text files, html files, and org files.
about-filter=/usr/lib/cgit/filters/about-formatting.sh
##
### Search for these files in the root of the default branch of repositories
### for coming up with the about page:
##
readme=:README.md
readme=:README.org
robots=noindex, nofollow
section=personal-config
repo.url=doom-emacs-config
repo.path=/home/git/repositories/doom-emacs-config.git
repo.desc=My Doom Emacs config
org-mode README
Note: I haven't gotten this to work yet. :-(
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html
sudo cp -v org2html /usr/lib/cgit/filters/html-converters/
sudo chmod +x /usr/lib/cgit/filters/html-converters/org2html
Download blob-formatting.sh.
sudo cp -v blob-formatting.sh /usr/lib/cgit/filters/
Catppuccin Mocha palette for org2html.css
git clone https://github.com/amartos/cgit-org2html.git
cd cgit-org2html/css
Change the color variables to Catppuccin Mocha hex codes.
$red: #f38ba8;
$green: #a6e3a1;
$orange: #fab387;
$gray: #585b70;
$yellow: #f9e2af;
$cyan: #89dceb;
$teal: #94e2d5;
$black: #11111b;
$white: #cdd6f4;
$cream: #f2cdcd;
Install sass.
sudo apt install -y sass
Generate org2html.css from the scss files, and copy the result to the cgit css directory.
sass org2html.scss:org2html.css
sudo cp -v org2html.css /usr/share/cgit/css/
Chimera Linux
Requirements
- UEFI
- LVM on LUKS with unencrypted /boot
Disk partitioning
Use cfdisk to create the following partition layout.
Partition Type | Size |
---|---|
EFI | +600M |
boot | +900M |
Linux | Remaining space |
Format the unencrypted partitions:
mkfs.vfat /dev/nvme0n1p1
mkfs.ext4 /dev/nvme0n1p2
Create LUKS on the remaining partition:
cryptsetup luksFormat /dev/nvme0n1p3
cryptsetup luksOpen /dev/nvme0n1p3 crypt
Create an LVM2 volume group for /dev/nvme0n1p3, which is now mapped at /dev/mapper/crypt.
vgcreate chimera /dev/mapper/crypt
Create logical volumes in the volume group.
lvcreate --name swap -L 8G chimera
lvcreate --name root -l 100%FREE chimera
Create the filesystems for the logical volumes.
mkfs.ext4 /dev/chimera/root
mkswap /dev/chimera/swap
Create mount points for the chroot and mount the filesystems.
mkdir /media/root
mount /dev/chimera/root /media/root
mkdir /media/root/boot
mount /dev/nvme0n1p2 /media/root/boot
mkdir /media/root/boot/efi
mount /dev/nvme0n1p1 /media/root/boot/efi
Installation
chimera-bootstrap and chroot
chimera-bootstrap /media/root
chimera-chroot /media/root
Update the system.
apk update
apk upgrade --available
Install kernel, cryptsetup, and lvm2 packages.
apk add linux-stable cryptsetup-scripts lvm2
fstab
genfstab / >> /etc/fstab
crypttab
echo "crypt /dev/disk/by-uuid/$(blkid -s UUID -o value /dev/nvme0n1p3) none luks" > /etc/crypttab
Initramfs refresh
update-initramfs -c -k all
GRUB
apk add grub-x86_64-efi
grub-install --efi-directory=/boot/efi --target=x86_64-efi
Post-installation
passwd root
apk add zsh bash
useradd -c "Jeffrey Serio" -m -s /usr/bin/zsh -U jas
passwd jas
Add the following lines to /etc/doas.conf:
# Give jas access
permit nopass jas
Set hostname, timezone, and hwclock.
echo "falinesti" > /etc/hostname
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
echo localtime > /etc/hwclock
xorg and Xfce4
apk add xserver-xorg xfce4
Reboot the machine.
Post-reboot
Log in as jas. Run startxfce4. Connect to the internet via NetworkManager.
Ensure wireplumber and pipewire-pulse are enabled.
dinitctl enable wireplumber
dinitctl start wireplumber
dinitctl enable pipewire-pulse
dinitctl start pipewire-pulse
Install CPU microcode.
doas apk add ucode-intel
doas update-initramfs -c -k all
Install other packages
doas apk add chrony
doas dinitctl enable chrony
doas apk add
Carpal tunnel syndrome
I'm just playing with some ideas here regarding a carpal tunnel syndrome-friendly way to do everyday computing.
Given the limits that nature places on the number of possible ways of manipulating machines, at the current time it seems voice dictation is the only feasible alternative to typing and pointing and clicking. Is it possible to do what I usually do at my computer using 100% voice dictation?
I wouldn't use it for gaming, of course, but for things like web browsing, coding, writing/typing, and system administration tasks. I would need software, preferably FOSS, that responds to voice commands.
Web browsing
Voice commands for web browsing would have to include something like the following:
- "Scroll N pixels down the page"
- "Refresh the page"
- "Go to tab 6"
- "Download the file at link 8"
- "Go to www.duckduckgo.com"
- "Open up the Bitwarden menu"
- "Enter writing mode and compose a new Mastodon post"
- "Enter writing mode and compose a reply to Mastodon timeline item 23"
- "Play the video on Mastodon timeline item 28"
- "Go to bookmark 16"
- "Copy the URL to the system clipboard"
So there would have to be a way to enumerate web page and browser elements. This enumeration concept would also apply to many other apps.
Coding and command line usage
Voice commands that are mapped to:
- shell commands and aliases
- code snippets
- "Create a Go function named helloWorld"
- "helloWorld takes a string parameter named foo"
- Okay, I've realized coding is probably not feasible using 100% voice dictation.
Debian
Setup unattended-upgrades
Edit /etc/apt/apt.conf.d/50unattended-upgrades. Uncomment the following lines.
Unattended-Upgrade::Origins-Pattern {
// Codename based matching:
// This will follow the migration of a release through different
// archives (e.g. from testing to stable and later oldstable).
// Software will be the latest available for the named release,
// but the Debian release itself will not be automatically upgraded.
"origin=Debian,codename=${distro_codename}-updates";
"origin=Debian,codename=${distro_codename}-proposed-updates";
"origin=Debian,codename=${distro_codename},label=Debian";
"origin=Debian,codename=${distro_codename},label=Debian-Security";
"origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
// Archive or Suite based matching:
// Note that this will silently match a different release after
// migration to the specified archive (e.g. testing becomes the
// new stable).
// "o=Debian,a=stable";
// "o=Debian,a=stable-updates";
// "o=Debian,a=proposed-updates";
"o=Debian Backports,a=${distro_codename}-backports,l=Debian Backports";
};
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Issue the command below to enable automatic updates:
sudo dpkg-reconfigure --priority=low unattended-upgrades
/etc/apt/apt.conf.d/20auto-upgrades should contain the following:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
Enable the systemd service:
sudo systemctl enable --now unattended-upgrades.service
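To sanity-check which origins the configuration actually matches, unattended-upgrades can be run in dry-run mode:
sudo unattended-upgrade --dry-run --debug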
DietPi
systemd-logind
Install libpam-systemd:
sudo apt install -y libpam-systemd
Unmask and enable systemd-logind:
sudo systemctl unmask systemd-logind
sudo systemctl enable systemd-logind
sudo systemctl reboot
DRM
Extract PDF or EPUB from ACSM file
Install libgourou in nix-shell.
nix-shell -p libgourou
Register the device with Adobe username and password.
adept_activate -u user -p password
Download the ACSM file. Make sure the ACSM file is in the current working directory.
acsmdownloader -f Dragon_Age_The_Missing_1.acsm
The downloaded file requires a password to open. Remove the DRM from the files.
find . -type f -name "Dragon_Age_The_Missing*.pdf" -exec adept_remove {} \;
Fedora Atomic
Access USB serial device in container
Create a udev rule on the host for all USB serial devices. Set OWNER to your user (UID 1000).
cat << EOF | sudo tee /etc/udev/rules.d/50-usb-serial.rules
SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", OWNER="jas"
EOF
Reload udev.
sudo udevadm control --reload-rules
sudo udevadm trigger
The serial device should now be owned by your user.
ls -l /dev/ttyUSB0
crw-rw----. 1 jas dialout 188, 0 Mar 15 11:09 /dev/ttyUSB0
You can now run minicom inside the toolbox container.
distrobox enter default
minicom -D /dev/ttyUSB0
Firewalld
Allow connections only from tailnet
Create a new zone for the tailscale0 interface.
sudo firewall-cmd --permanent --new-zone=tailnet
sudo firewall-cmd --permanent --zone=tailnet --add-interface=tailscale0
sudo firewall-cmd --reload
Add services and ports to the tailnet zone.
sudo firewall-cmd --permanent --zone=tailnet --add-service={http,https,ssh}
sudo firewall-cmd --permanent --zone=tailnet --add-port=9100/tcp
sudo firewall-cmd --reload
Ensure the public zone does not have any interfaces or sources.
sudo firewall-cmd --permanent --zone=public --remove-interface=eth0
sudo firewall-cmd --reload
The firewall should now only allow traffic coming from the tailnet interface, tailscale0.
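To double-check, inspect both zones; only tailnet should carry the interface, services, and ports:
sudo firewall-cmd --info-zone=tailnet
sudo firewall-cmd --info-zone=public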
FreeBSD
USB 3.1 Type-C to RJ45 Gigabit Ethernet adapter
The Amazon Basics Aluminum USB 3.1 Type-C to RJ45 Gigabit Ethernet Adapter works well with FreeBSD 14.1-RELEASE. It uses the AX88179 chipset from ASIX Electronics Corp.
Install the ports tree
Source: Chapter 4. Installing Applications: Packages and Ports | FreeBSD Documentation Portal
Ensure the FreeBSD source code is checked out
sudo git clone -o freebsd -b releng/14.1 https://git.FreeBSD.org/src.git /usr/src
Check out the ports tree
sudo git clone --depth 1 https://git.FreeBSD.org/ports.git -b 2024Q3 /usr/ports
To switch to a different quarterly branch:
sudo git -C /usr/ports switch 2024Q4
drm-61-kmod
Install from the ports tree.
cd /usr/ports/graphics/drm-61-kmod
sudo make install clean
Alternatively, for Alderlake GPUs:
sudo pkg install drm-kmod
Edit /etc/rc.conf:
kld_list="i915kms"
Add user to the video group:
sudo pw groupmod video -m jas
Mount filesystems in single-user mode
When booted into single-user mode, run the following:
fsck
mount -u /
mount -a -t zfs
zfs mount -a
You should now be able to edit files, add/remove packages, etc.
Mount encrypted zroot in LiveCD
Boot into the LiveCD environment.
mkdir /tmp/mnt
geli attach /dev/nda0p4
zpool import -f -R /tmp/mnt zroot
zfs mount zroot/ROOT/default
The root directory of the zroot, zroot/ROOT/default, is labeled to not be automounted when imported, hence the need for the last command.
GitLab
Setup GitLab runner with Podman
- Install GitLab Runner.
- Create a new runner from the GitLab UI.
- Use the authentication token from the GitLab UI to register a new runner on the machine hosting the runner. Select the Docker executor.
sudo systemctl enable --now gitlab-runner.service
sudo gitlab-runner register --url https://git.hyperreal.coffee --token <TOKEN>
- Add the following lines to /etc/gitlab-runner/config.toml for Podman:
[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1"]
[runners.docker]
host = "unix://run/podman/podman.sock"
tls_verify = false
image = "git.hyperreal.coffee:5050/fedora-atomic/containers/fedora:latest"
privileged = true
volumes = ["/build-repo", "/cache", "/source-repo"]
- Restart the gitlab-runner:
sudo gitlab-runner restart
We should now be ready to use the Podman runner.
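One assumption baked into config.toml above is that the root Podman API socket is actually listening. On a systemd host that usually means enabling podman.socket:
sudo systemctl enable --now podman.socket
# The socket path referenced in config.toml should now exist
ls -l /run/podman/podman.sock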
Grafana
Install and deploy the Grafana server
On Fedora/RHEL systems:
sudo dnf install -y grafana grafana-selinux chkconfig
On Debian systems:
sudo apt-get install -y apt-transport-https software-properties-common
sudo wget -q -O /usr/share/keyrings/grafana.key https://apt.grafana.com/gpg.key
echo "deb [signed-by=/usr/share/keyrings/grafana.key] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
sudo apt update
sudo apt install -y grafana
Reload the systemd daemon, then start and enable grafana-server.service:
sudo systemctl daemon-reload
sudo systemctl enable --now grafana-server.service
sudo systemctl status grafana-server.service
Configure Grafana SELinux policy
This is not necessary on AlmaLinux 9, Rocky Linux 9, RHEL 9.
For some reason the grafana-selinux package does not provide what Grafana needs to cooperate with SELinux. It's therefore necessary to use a third-party repository at https://github.com/georou/grafana-selinux to compile and install a proper SELinux policy module for Grafana.
# Clone the repo
git clone https://github.com/georou/grafana-selinux.git
cd grafana-selinux
# Copy relevant .if interface file to /usr/share/selinux/devel/include to expose them when building and for future modules.
# May need to use full path for grafana.if if not working.
install -Dp -m 0664 -o root -g root grafana.if /usr/share/selinux/devel/include/myapplications/grafana.if
# Compile and install the selinux module.
sudo dnf install -y selinux-policy-devel setools-console policycoreutils-devel
sudo make -f /usr/share/selinux/devel/Makefile grafana.pp
sudo semodule -i grafana.pp
# Add grafana ports
semanage port -a -t grafana_port_t -p tcp 3000
# Restore all the correct context labels
restorecon -RvF /usr/sbin/grafana-* \
/etc/grafana \
/var/log/grafana \
/var/lib/grafana \
/usr/share/grafana/bin
# Start grafana
systemctl start grafana-server.service
# Ensure it's working in the proper confinement
ps -eZ | grep grafana
Login to the Grafana panel.
- username: admin
- password: password (change this after)
Add Prometheus data source
- Bar menu
- Data sources
- Add new data source
- Choose Prometheus data source
- Name: Prometheus
- URL: http://localhost:9090
- Save & test
Ensure the data source is working before continuing.
If you're running Grafana on an SELinux host, set an SELinux boolean to allow Grafana to access the Prometheus port:
sudo setsebool -P grafana_can_tcp_connect_prometheus_port=1
Add Loki data source
Since Loki is running on hyperreal.coffee:3100, the firewall's internal zone on that host needs to allow connections to port 3100 from my IP address.
sudo firewall-cmd --zone=internal --permanent --add-port=3100/tcp
sudo firewall-cmd --reload
In the Grafana panel:
- Bar menu
- Data sources
- Add new data source
- Choose Loki data source
- Name: Loki
- URL: http://hyperreal.coffee:3100
- Save & test
Ensure the data source is working before continuing.
Add Node Exporter dashboard
- Visit the Grafana Dashboard Library.
- Search for "Node Exporter Full".
- Copy the ID for Node Exporter Full.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the Node Exporter Full ID into the field, and press the Load button.
Add Caddy dashboard
- Visit Caddy Monitoring on the Grafana Dashboard Library.
- Copy the ID to clipboard.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the Caddy Monitoring ID into the field, and press the Load button.
Add qBittorrent dashboard
- Visit qBittorrent Dashboard on Grafana Dashboard Library.
- Copy the ID to clipboard.
- Go to the Grafana panel bar menu.
- Dashboards
- New > Import
- Paste the qBittorrent Dashboard ID into the field, and press the Load button.
Use HTTPS with Tailscale
sudo tailscale cert HOSTNAME.TAILNET.ts.net
sudo mkdir /etc/tailscale-ssl-certs
sudo mv *.key /etc/tailscale-ssl-certs/
sudo mv *.crt /etc/tailscale-ssl-certs/
sudo cp -v /etc/tailscale-ssl-certs/*.key /etc/grafana/grafana.key
sudo cp -v /etc/tailscale-ssl-certs/*.crt /etc/grafana/grafana.crt
sudo chown root:grafana /etc/grafana/grafana.key
sudo chown root:grafana /etc/grafana/grafana.crt
sudo chmod 644 /etc/grafana/grafana.key
sudo chmod 644 /etc/grafana/grafana.crt
Edit /etc/grafana/grafana.ini:
[server]
protocol = https
http_addr =
http_port = 3000
domain = HOSTNAME.TAILNET.ts.net
enforce_domain = false
root_url = https://HOSTNAME.TAILNET.ts.net:3000
cert_file = /etc/grafana/grafana.crt
cert_key = /etc/grafana/grafana.key
Hugo
Org Mode to Hugo
Text formatting
Org Mode | Result |
---|---|
`*Bold text*` | Bold text |
`/Italic text/` | Italic text |
`_Underline_` | Underline text |
`=Verbatim=` | Verbatim text |
`+Strike-through+` | Strike-through text |
Adding images
=#+ATTR_HTML:= :width 100% :height 100% :class border-2 :alt Description :title Image title
=[[./path/to/image.jpg]]=
Adding metadata
=#+TITLE:= Your title
=#+DATE:= 2024-10-22
=#+TAGS[]:= hugo org-mode writing
=#+DRAFT:= false
=#+AUTHOR:= hyperreal
=#+SLUG:= your-title
=#+DESCRIPTION:= Description
=#+CATEGORIES:= blogging
=#+IMAGES[]:= /images/image.jpg
=#+WEIGHT:= 10
=#+LASTMOD:= 2024-10-23
=#+KEYWORDS[]:= hugo org-mode tutorial
=#+LAYOUT:= post
=#+SERIES:= Techne
=#+SUMMARY:= Summary
=#+TYPE:= Tutorial
=* Main content=
Note: tags must not contain spaces. Use underscores or en-dashes.
Internet Archive
Install Python command line client
pipx install internetarchive
Use Python client to download torrent files from given collection
Ensure "Automatically add torrents from" > Monitored Folder is set to /mnt/torrent_files
and the Override save path is Default save path.
Get itemlist from collection
ia search --itemlist "collection:bbsmagazine" | tee bbsmagazine.txt
Download torrent files from each item using parallel
cat bbsmagazine.txt | parallel 'ia download --format "Archive BitTorrent" --destdir=/mnt/torrent_files {}'
Move .torrent files from their directories to /mnt/torrent_files
find /mnt/torrent_files -type f -name "*.torrent" -exec mv {} /mnt/torrent_files \;
Note: .torrent files will be removed from /mnt/torrent_files by qBittorrent once they are added to the instance.
Remove empty directories
find /mnt/torrent_files -maxdepth 1 -mindepth 1 -type d -delete
Linux Kernel
Disable core dumps in Linux
limits.conf and sysctl
Edit /etc/security/limits.conf and append the following lines:
* hard core 0
* soft core 0
Edit /etc/sysctl.d/9999-disable-core-dump.conf:
fs.suid_dumpable=0
kernel.core_pattern=|/bin/false
sudo sysctl -p /etc/sysctl.d/9999-disable-core-dump.conf
/bin/false exits with a failure status code. The default value for kernel.core_pattern is core on a Debian server and |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h on a Fedora desktop. These commands are executed upon crashes; in the case of /bin/false, nothing happens, and core dumps are disabled.
fs.suid_dumpable=0 means any process that has changed privilege levels or is execute-only will not be dumped. Other values include 1 (debug mode: all processes dump core when possible, the current user owns the core dump, and no security is applied) and 2 (suidsafe mode: any program that would normally not be dumped is dumped regardless, but only if kernel.core_pattern in sysctl is set to a valid program).
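To verify the settings took effect (after logging in again so limits.conf applies):
# Core file size limit; should print 0
ulimit -c
# Should show fs.suid_dumpable = 0 and the |/bin/false pattern
sysctl fs.suid_dumpable kernel.core_pattern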
systemd
sudo mkdir /etc/systemd/coredump.conf.d/
sudo nvim /etc/systemd/coredump.conf.d/custom.conf
[Coredump]
Storage=none
ProcessSizeMax=0
Storage=none and ProcessSizeMax=0 disable all coredump handling except for a log entry under systemd.
sudo systemctl daemon-reload
Edit /etc/systemd/system.conf. Make sure DefaultLimitCORE is commented out.
#DefaultLimitCORE=infinity
sudo systemctl daemon-reexec
Lemmy
Configure SPF and DKIM for SMTP postfix-relay
Source: https://github.com/wader/postfix-relay#spf
- Add remote forwarding for rsyslog.
- Make the DKIM keys persist indefinitely in a volume at ./volumes/postfix-dkim:/etc/opendkim/keys. ./volumes is relative to the parent directory of the docker-compose.yml file for the Lemmy instance, e.g. /docker/lemmy/volumes.
Edit docker-compose.yml:
postfix:
image: mwader/postfix-relay
environment:
- POSTFIX_myhostname=lemmy.hyperreal.coffee
- OPENDKIM_DOMAINS=lemmy.hyperreal.coffee
- RSYSLOG_TO_FILE=yes
- RSYSLOG_TIMESTAMP=yes
- RSYSLOG_REMOTE_HOST=<ip addr of remote logging server>
- RSYSLOG_REMOTE_PORT=514
- RSYSLOG_REMOTE_TEMPLATE=RSYSLOG_ForwardFormat
volumes:
- ./volumes/postfix-dkim:/etc/opendkim/keys
- ./volumes/logs:/var/log
restart: "always"
logging: *default-logging
docker-compose up -d
On domain registrar, add the following TXT records:
Type | Name | Content |
---|---|---|
TXT | lemmy | "v=spf1 a mx ip4:<ip addr of server> -all" |
TXT | mail._domainkey.lemmy | "v=DKIM1; h=sha256; k=rsa; p=<pubkey>" |
The content of mail._domainkey.lemmy is obtained from the log output of the wader/postfix-relay Docker container.
docker logs lemmy-postfix-1
To test this, allow a few hours for the DNS changes to propagate, then log out of the Lemmy instance and send a password reset request. If the reset confirmation email doesn't go to the spam folder, it works. The email service provider will be able to determine the email is from an authentic email address.
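The records themselves can be checked from any machine once DNS has propagated:
dig +short TXT lemmy.hyperreal.coffee
dig +short TXT mail._domainkey.lemmy.hyperreal.coffee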
Resources
Loki
Rsyslog forwarding to Promtail and Loki
Running Loki and Promtail on the same host as Prometheus makes managing the firewall and network routes easier.
This is roughly what our network looks like:
Main Monitoring Node
- Runs Prometheus, Promtail, Loki, and rsyslog.
- Traffic must be allowed through the firewall on TCP port 514. If using Tailscale, ensure the ACLs are set up correctly.
- It has an rsyslog ruleset that catches all forwarded logs through TCP port 514 and relays them to Promtail on TCP port 1514.
- Promtail pushes the logs it receives via TCP port 1514 to Loki, which listens on TCP port 3100.
Regular Node 1
- It has an rsyslog ruleset that forwards logs to the Main Monitoring Node on TCP port 514.
- Is allowed to access TCP port 514 on the Main Monitoring Node.
Regular Node 2
- It has an rsyslog ruleset that forwards logs to the Main Monitoring Node on TCP port 514.
- Is allowed to access TCP port 514 on the Main Monitoring Node.
Install Rsyslog, Promtail, and Loki on the Main Monitoring Node
# Debian-based hosts
sudo apt install -y promtail loki rsyslog
# Fedora-based hosts
sudo dnf install -y promtail loki rsyslog
Edit /etc/promtail/config.yml.
server:
http_listen_port: 9081
grpc_listen_port: 0
positions:
filename: /var/tmp/promtail-syslog-positions.yml
clients:
- url: http://localhost:3100/loki/api/v1/push
scrape_configs:
- job_name: syslog
syslog:
listen_address: 0.0.0.0:1514
labels:
job: syslog
relabel_configs:
- source_labels: [__syslog_message_hostname]
target_label: hostname
- source_labels: [__syslog_message_severity]
target_label: level
- source_labels: [__syslog_message_app_name]
target_label: application
- source_labels: [__syslog_message_facility]
target_label: facility
- source_labels: [__syslog_connection_hostname]
target_label: connection_hostname
Edit /etc/loki/config.yml.
auth_enabled: false
server:
http_listen_port: 3100
grpc_listen_port: 9096
common:
instance_addr: 127.0.0.1
path_prefix: /tmp/loki
storage:
filesystem:
chunks_directory: /tmp/loki/chunks
rules_directory: /tmp/loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
query_range:
results_cache:
cache:
embedded_cache:
enabled: true
max_size_mb: 100
schema_config:
configs:
- from: 2020-10-24
store: tsdb
object_store: filesystem
schema: v13
index:
prefix: index_
period: 24h
ruler:
alertmanager_url: http://localhost:9093
Edit /etc/rsyslog.d/00-promtail-relay.conf.
# https://www.rsyslog.com/doc/v8-stable/concepts/multi_ruleset.html#split-local-and-remote-logging
ruleset(name="remote"){
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html
# https://grafana.com/docs/loki/latest/clients/promtail/scraping/#rsyslog-output-configuration
action(type="omfwd" Target="localhost" Port="1514" Protocol="tcp" Template="RSYSLOG_SyslogProtocol23Format" TCP_Framing="octet-counted")
}
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/imudp.html
module(load="imudp")
input(type="imudp" port="514" ruleset="remote")
# https://www.rsyslog.com/doc/v8-stable/configuration/modules/imtcp.html
module(load="imtcp")
input(type="imtcp" port="514" ruleset="remote")
Ensure the firewall allows TCP traffic to port 514.
sudo firewall-cmd --permanent --zone=tailnet --add-port=514/tcp
sudo firewall-cmd --reload
Restart and/or enable the services.
sudo systemctl enable --now promtail.service
sudo systemctl enable --now loki.service
sudo systemctl enable --now rsyslog.service
Install and configure Rsyslog on Regular Node 1 and Regular Node 2
# Debian
sudo apt install -y rsyslog
# Fedora
sudo dnf install -y rsyslog
Enable and start the rsyslog service.
sudo systemctl enable --now rsyslog
Edit /etc/rsyslog.conf.
###############
#### RULES ####
###############
# Forward to Main Monitoring Node
*.* action(type="omfwd" target="<IP addr of Main Monitoring Node>" port="514" protocol="tcp"
action.resumeRetryCount="100"
queue.type="linkedList" queue.size="10000")
Restart the rsyslog service.
sudo systemctl restart rsyslog.service
In the Grafana UI, you should now be able to add Loki as a data source. Then go to Home > Explore > loki and start querying logs from Regular Node 1 and Regular Node 2.
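A quick end-to-end test is to emit a tagged message on one of the regular nodes and then search for it in Loki:
# On Regular Node 1: goes to the local rsyslog, which forwards it to the Main Monitoring Node
logger -t loki-test "hello from $(hostname)"
# In Grafana > Explore > loki, query: {job="syslog"} |= "loki-test"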
LVM2
Add disk to LVM volume
Create a new physical volume on the new disk:
sudo pvcreate /dev/vdb
sudo lvmdiskscan -l
Add the newly created physical volume (/dev/vdb) to the existing volume group:
sudo vgextend almalinux /dev/vdb
Extend /dev/almalinux/root to use the remaining free space (in my case, for a total of 1000GB):
sudo lvm lvextend -l +100%FREE /dev/almalinux/root
Grow the filesystem of the root volume:
# ext4
sudo resize2fs -p /dev/mapper/almalinux-root
# xfs
sudo xfs_growfs /
Mastodon
Full-text search with elasticsearch
Install ElasticSearch
sudo apt install -y openjdk-17-jre-headless
wget -O /usr/share/keyrings/elasticsearch.asc https://artifacts.elastic.co/GPG-KEY-elasticsearch
echo "deb [signed-by=/usr/share/keyrings/elasticsearch.asc] https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update
sudo apt install -y elasticsearch
Edit /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: true
discovery.type: single-node
Create passwords for built-in users
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch
In a separate shell:
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Copy the generated password for the elastic user.
Create custom role for Mastodon to connect
As the mastodon user on the host:
curl -X POST -u elastic:admin_password "localhost:9200/_security/role/mastodon_full_access?pretty" -H 'Content-Type: application/json' -d'
{
"cluster": ["monitor"],
"indices": [{
"names": ["*"],
"privileges": ["read", "monitor", "write", "manage"]
}]
}
'
Create a user for Mastodon and assign it the custom role
curl -X POST -u elastic:admin_password "localhost:9200/_security/user/mastodon?pretty" -H 'Content-Type: application/json' -d'
{
"password": "l0ng-r4nd0m-p@ssw0rd",
"roles": ["mastodon_full_access"]
}
'
Edit .env.production
ES_ENABLED=true
ES_HOST=localhost
ES_PORT=9200
ES_PRESET=single_node_cluster
ES_USER=mastodon
ES_PASS=l0ng-r4nd0m-p@ssw0rd
Populate the indices
systemctl restart mastodon-sidekiq
systemctl reload mastodon-web
su - mastodon
cd live
RAILS_ENV=production bin/tootctl search deploy
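Before and after the reindex, the custom role can be sanity-checked with a direct query (the monitor privilege granted above allows reading cluster health):
curl -u mastodon:l0ng-r4nd0m-p@ssw0rd "localhost:9200/_cluster/health?pretty"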
S3-compatible object storage with Minio
- Install MinIO
- Set the region for this instance to homelab
- Create 'mastodata' bucket
- Setup Tailscale
Minio API endpoint: tailnet_ip_addr:9000
Caddy reverse proxy config
Ensure DNS resolves for assets.hyperreal.coffee
assets.hyperreal.coffee {
rewrite * /mastodata{path}
reverse_proxy http://<tailnet_ip_addr>:9000 {
header_up Host {upstream_hostport}
}
}
fedi.hyperreal.coffee {
@local {
file
not path /
}
@local_media {
path_regexp /system/(.*)
}
redir @local_media https://assets.hyperreal.coffee/{http.regexp.1} permanent
...remainder of config
}
Set custom policy on mastodata bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mastodata/*"
}
]
}
Create mastodon-readwrite policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::mastodata/*"
}
]
}
Setup .env.production
S3_ENABLED=true
S3_BUCKET=mastodata
AWS_ACCESS_KEY=<access key>
AWS_SECRET_ACCESS_KEY=<secret access key>
S3_REGION=homelab
S3_PROTOCOL=http
S3_ENDPOINT=http://<tailnet_ip_addr>:9000
S3_FORCE_SINGLE_REQUEST=true
S3_ALIAS_HOST=assets.hyperreal.coffee
Restart Caddy and Mastodon services
sudo systemctl restart caddy.service mastodon-web.service mastodon-streaming.service mastodon-sidekiq.service
Prometheus metrics with statsd_exporter
On the host running Mastodon, download the latest statsd_exporter binary from the releases page.
tar xzvf statsd_exporter*.tar.gz
cd statsd_exporter*/
sudo cp -v statsd_exporter /usr/local/bin/
Install the statsd mapping file from IPng Networks:
curl -OL https://ipng.ch/assets/mastodon/statsd-mapping.yaml
sudo cp -v statsd-mapping.yaml /etc/prometheus/
Create /etc/default/statsd_exporter.
ARGS="--statsd.mapping-config=/etc/prometheus/statsd-mapping.yaml"
Create /etc/systemd/system/statsd_exporter.service.
[Unit]
Description=Statsd exporter
After=network.target
[Service]
Restart=always
User=prometheus
EnvironmentFile=/etc/default/statsd_exporter
ExecStart=/usr/local/bin/statsd_exporter $ARGS
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=20s
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Ensure port 9102 is open in Firewalld's internal zone.
sudo firewall-cmd --permanent --zone=internal --add-port=9102/tcp
sudo firewall-cmd --reload
Edit /home/mastodon/live/.env.production.
STATSD_ADDR=localhost:9125
Start and restart the daemons.
sudo systemctl daemon-reload
sudo systemctl start statsd_exporter.service
sudo systemctl restart mastodon-sidekiq.service mastodon-streaming.service mastodon-web.service
If using Tailscale, ensure the host running Prometheus can access port 9102 on the host running Mastodon.
On the host running Prometheus, add the statsd config.
- job_name: "stats_exporter"
static_configs:
- targets: ["hyperreal:9102"]
Restart Prometheus.
sudo systemctl restart prometheus.service
To import the Grafana dashboard, use ID 17492.
Source: How to set up monitoring for your Mastodon instance with Prometheus and Grafana
MinIO
Bucket replication to remote MinIO instance
Use mcli to create aliases for the local and remote instances.
mcli alias set nas-local http://localhost:9000 username password
mcli alias set nas-remote http://ip.addr:9000 username password
Add configuration rule on source bucket for nas-local to nas-remote to replicate all operations in an active-active replication setup.
mcli replicate add nas-local/sourcebucket --remote-bucket nas-remote/targetbucket --priority 1
Show replication status.
mcli replicate status nas-local/sourcebucket
Networking
Disable IPv6 on Debian
Edit /etc/sysctl.conf.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Apply the changes.
sudo sysctl -p
Disable IPv6 on Fedora
sudo grubby --args=ipv6.disable=1 --update-kernel=ALL
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Rename network interface when using systemd-networkd
Create a udev rule at /etc/udev/rules.d/70-my-net-names.rules:
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="your-mac-address", NAME="wlan0"
Using 70-my-net-names.rules as the filename ensures the rule is ordered before /usr/lib/udev/rules.d/80-net-setup-link.rules.
Connecting to WiFi network using systemd-networkd and wpa_supplicant
Create a file at /etc/wpa_supplicant/wpa_supplicant-wlan0.conf. Use wpa_passphrase to hash the passphrase.
wpa_passphrase your-ssid your-ssid-passphrase | sudo tee -a /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
Edit /etc/wpa_supplicant/wpa_supplicant-wlan0.conf:
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
update_config=1
network={
ssid="your-ssid"
psk="your-hashed-ssid-passphrase"
key_mgmt=WPA-PSK
proto=WPA2
scan_ssid=1
}
Create a file at /etc/systemd/network/25-wlan.network:
[Match]
Name=wlan0
[Network]
DHCP=ipv4
Enable and start the network services:
sudo systemctl enable --now wpa_supplicant@wlan0.service
sudo systemctl restart systemd-networkd.service
sudo systemctl restart wpa_supplicant@wlan0.service
Check the interface status:
ip a
Use tailnet DNS and prevent DNS leaks
After the above WiFi interface is setup, disable IPv6 as per the above sections, and enable the Tailscale service.
sudo systemctl enable --now tailscaled.service
sudo tailscale up
Edit /etc/systemd/network/25-wlan.network again, and add the following contents:
[Match]
Name=wlan0
[Network]
DHCP=ipv4
DNS=100.100.100.100
DNSSEC=allow-downgrade
[DHCPv4]
UseDNS=no
This will tell the wlan0 interface to use Tailscale's MagicDNS, along with DNSSEC if it is available, and not to get the nameservers from the DHCPv4 connection.
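To verify the interface state and confirm no DHCP resolvers leaked in (assuming systemd-resolved is handling DNS on this host):
networkctl status wlan0
resolvectl status wlan0
# Should resolve via 100.100.100.100
resolvectl query hostname.tailnet.ts.net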
Nextcloud
Migrating
Backup: Run these commands on old server machine
Assumes Nextcloud instance is installed on DietPi
sudo systemctl stop nginx.service
Put Nextcloud into maintenance mode:
cd /var/www/nextcloud
sudo -u www-data php occ maintenance:mode --on
Backup the directories:
DATE=$(date '+%Y%m%d')
sudo rsync -aAX /etc/nginx /home/dietpi/nginx-backup_$DATE
sudo rsync -aAX /var/www/nextcloud /home/dietpi/nextcloud-dir-backup_$DATE
sudo rsync -aAX /mnt/dietpi_userdata/nextcloud_data /home/dietpi/nextcloud-data-backup_$DATE
Dump the MariaDB database:
sudo mysqldump --single-transaction --default-character-set=utf8mb4 -h localhost -u <username> -p <password> nextcloud > /home/dietpi/nextcloud-db-backup_$DATE.sql
Rsync the files over to the new server machine:
sudo rsync -aAX \
/home/dietpi/nginx-backup_$DATE \
/home/dietpi/nextcloud-dir-backup_$DATE \
/home/dietpi/nextcloud-data-backup_$DATE \
/home/dietpi/nextcloud-db-backup_$DATE.sql \
dietpi@<new server ip>:/home/dietpi
Restore: Run these commands on new server machine
Assuming the web server is stopped.
Move the nextcloud-dir and nextcloud-data directories to their correct locations. First ensure the default directories are removed.
sudo rm -rf /etc/nginx
sudo rm -rf /var/www/nextcloud
sudo rm -rf /mnt/dietpi_userdata/nextcloud_data
sudo mv nginx-backup_$DATE /etc/nginx
sudo mv nextcloud-dir-backup_$DATE /var/www/nextcloud
sudo mv nextcloud-data-backup_$DATE /mnt/dietpi_userdata/nextcloud_data
sudo chown -R dietpi:dietpi /mnt/dietpi_userdata/nextcloud_data
sudo chown -R root:root /etc/nginx
Create the nextcloud database in MariaDB:
sudo mysql -h localhost -u root -p <password> -e "DROP DATABASE nextcloud"
sudo mysql -h localhost -u root -p <password> -e "CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci"
sudo mysql -h localhost -u root -p <password> nextcloud < /home/dietpi/nextcloud-db-backup_$DATE.sql
Take Nextcloud out of maintenance mode:
You may have to change the 'oc_admin' database user password for occ commands to work.
sudo -u www-data php occ maintenance:mode --off
Restart the services:
sudo systemctl restart nginx mariadb redis-server php8.2-fpm
NFS
Setup NFS server on Debian
sudo apt install -y nfs-kernel-server nfs-common
Configure NFSv4 in /etc/default/nfs-common:
NEED_STATD="no"
NEED_IDMAPD="yes"
Configure NFSv4 in /etc/default/nfs-kernel-server. Disable NFSv2 and NFSv3.
RPCNFSDOPTS="-N 2 -N 3"
RPCMOUNTDOPTS="--manage-gids -N 2 -N 3"
sudo systemctl restart nfs-server
Configure FirewallD:
sudo firewall-cmd --zone=public --permanent --add-service=nfs
sudo firewall-cmd --reload
Setup pseudo filesystem and exports:
sudo mkdir /shared
sudo chown -R nobody:nogroup /shared
Add the exported directory to /etc/exports:
/shared <ip address of client>(rw,no_root_squash,no_subtree_check,crossmnt,fsid=0)
Create the NFS table:
sudo exportfs -a
Setup NFS client on Debian
sudo apt install -y nfs-common
Create shared directory:
sudo mkdir -p /mnt/shared
Mount NFS exports:
sudo mount.nfs4 <ip address of server>:/ /mnt/shared
Note that <server ip>:/ is relative to the exported directory. So /mnt/shared on the client is /shared on the server. If you try to mount with mount -t nfs <server ip>:/shared /mnt/shared, you will get a "no such file or directory" error.
/etc/fstab entry:
<ip address of server>:/ /mnt/shared nfs4 soft,intr,rsize=8192,wsize=8192
sudo systemctl daemon-reload
sudo mount -av
Setup NFS server on FreeBSD
Edit /etc/rc.conf.
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 4"
rpcbind_enable="YES"
mountd_flags="-r"
mountd_enable="YES"
Edit /etc/exports.
/data1 -alldirs -mapall=user1 host1 host2 host3
/data2 -alldirs -maproot=user2 host2
Start the services.
sudo service rpcbind start
sudo service nfsd start
sudo service mountd start
After making changes to the exports file, you need to restart NFS for the changes to take effect:
kill -HUP `cat /var/run/mountd.pid`
Setup NFS client on FreeBSD
Edit /etc/rc.conf.
nfs_client_enable="YES"
nfs_client_flags="-n 4"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
Mount NFS share on client with systemd
Create a file at /etc/systemd/system/mnt-backup.mount.
[Unit]
Description=borgbackup NFS share from FreeBSD
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target
[Mount]
What=10.0.0.119:/coffeeNAS/borgbackup/repositories
Where=/mnt/backup
Type=nfs
Options=defaults,vers=3
[Install]
WantedBy=multi-user.target
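The unit name has to match the Where= path (mnt-backup.mount for /mnt/backup). Reload systemd and enable the mount:
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-backup.mount
systemctl status mnt-backup.mount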
Packet Tracer
Fix GUI issues with KDE Plasma dark theme
mkdir ~/.config-pt
cd ~/.config
cp -rf dconf gtk-3.0 gtk-4.0 xsettingsd ~/.config-pt
- Right-click on Menu button.
- Click Edit Applications.
- Select Packet Tracer.
- Add XDG_CONFIG_HOME=/home/jas/.config-pt to Environment variables.
- Save.
Source. Thanks, u/AtomHeartSon!
Parallel
Pulling files from remote server with rsync
To transfer just the files:
ssh user@remote -- find /path/to/parent/directory -type f | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
To transfer the entire directory:
echo "/path/to/parent/directory" | parallel -v -j16 rsync -Havessh -aAXP user@remote:{} /local/path
Pushing files to remote server with rsync
To transfer just the files:
find /path/to/local/directory -type f | parallel -v -j16 -X rsync -aAXP /path/to/local/directory/{} user@remote:/path/to/dest/dir
Running the same command on multiple remote hosts
parallel --tag --nonall -S remote0,remote1,remote2 uptime
Pixelfed
Install Pixelfed on Debian (Bookworm)
Prerequisites
Install dependencies.
apt install -y php-bcmath php-curl exif php-gd php8.2-common php-intl php-json php-mbstring libcurl4-openssl-dev php-redis php-tokenizer php-xml php-zip php-pgsql php-fpm composer
Set the following upload limits for PHP processes.
post_max_size = 2G
file_uploads = On
upload_max_filesize = 2G
max_file_uploads = 20
max_execution_time = 1000
Create the PostgreSQL database:
sudo -u postgres psql
CREATE USER pixelfed CREATEDB;
CREATE DATABASE pixelfed;
GRANT ALL PRIVILEGES ON DATABASE pixelfed TO pixelfed;
\q
Create dedicated pixelfed user.
useradd -rU -s /bin/bash pixelfed
Configure PHP-FPM pool and socket.
cd /etc/php/8.2/fpm/pool.d/
cp www.conf pixelfed.conf
Edit /etc/php/8.2/fpm/pool.d/pixelfed.conf.
; use the username of the app-user as the pool name, e.g. pixelfed
[pixelfed]
user = pixelfed
group = pixelfed
; to use a tcp socket, e.g. if running php-fpm on a different machine than your app:
; (note that the port 9001 is used, since php-fpm defaults to running on port 9000;)
; (however, the port can be whatever you want)
; listen = 127.0.0.1:9001;
; but it's better to use a socket if you're running locally on the same machine:
listen = /run/php-fpm/pixelfed.sock
listen.owner = caddy
listen.group = caddy
listen.mode = 0660
; [...]
Installation
Setup Pixelfed files
Download the source from GitHub.
cd /usr/share/caddy
git clone -b dev https://github.com/pixelfed/pixelfed.git pixelfed
Set correct permissions.
cd pixelfed
chown -R pixelfed:pixelfed .
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
Become the pixelfed user.
su - pixelfed
Initialize PHP dependencies.
composer update
composer install --no-ansi --no-interaction --optimize-autoloader
Configure environment variables
cp .env.example .env
Edit /usr/share/caddy/pixelfed/.env.
APP_NAME="hyperreal's Pixelfed"
APP_DEBUG="false"
APP_URL="https://pixelfed.hyperreal.coffee"
APP_DOMAIN="pixelfed.hyperreal.coffee"
ADMIN_DOMAIN="pixelfed.hyperreal.coffee"
SESSION_DOMAIN="pixelfed.hyperreal.coffee"
DB_CONNECTION=pgsql
DB_HOST=localhost
DB_PORT=5432
DB_DATABASE=pixelfed
DB_USERNAME=pixelfed
DB_PASSWORD=<password>
REDIS_HOST=localhost
REDIS_PORT=6379
MAIL_FROM_ADDRESS=onboarding@resend.dev
MAIL_FROM_NAME=Pixelfed
MAIL_ENCRYPTION=tls
MAIL_DRIVER=smtp
MAIL_HOST=smtp.resend.com
MAIL_PORT=465
MAIL_USERNAME=resend
MAIL_PASSWORD=<resend API key>
ACTIVITY_PUB=true
AP_REMOTE_FOLLOW=true
Setting up services
These commands should only be run one time.
php artisan key:generate
Link the storage/ directory to the application.
php artisan storage:link
Run database migrations.
php artisan migrate --force
If the above command fails due to insufficient privileges, the pixelfed database user needs permission to create tables in the public schema. When we created the database, we ran GRANT ALL PRIVILEGES ON DATABASE pixelfed TO pixelfed; in the psql shell. This granted the pixelfed database user privileges on the database itself, not on the objects within it. To fix this, the pixelfed database user needs to own the database and everything within it, so go back to the psql shell and run ALTER DATABASE pixelfed OWNER TO pixelfed;
To enable ActivityPub federation:
php artisan instance:actor
To have routes cached, run the following commands now, and whenever the source code changes or if you change routes.
php artisan route:cache
php artisan view:cache
Run this command whenever you change the .env file for the changes to take effect.
php artisan config:cache
Use Laravel Horizon for job queueing.
php artisan horizon:install
php artisan horizon:publish
Create a systemd service unit for Pixelfed task queueing.
[Unit]
Description=Pixelfed task queueing via Laravel Horizon
After=network.target
Requires=postgresql
Requires=php8.2-fpm
Requires=redis-server
Requires=caddy
[Service]
Type=simple
ExecStart=/usr/bin/php /usr/share/caddy/pixelfed/artisan horizon
User=pixelfed
Restart=on-failure
[Install]
WantedBy=multi-user.target
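Assuming the unit above is saved as /etc/systemd/system/pixelfed.service (the filename is my choice, not something Pixelfed mandates), enable it:
sudo systemctl daemon-reload
sudo systemctl enable --now pixelfed.service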
Use Cron to schedule periodic tasks. As the pixelfed user, run crontab -e.
* * * * * /usr/bin/php /usr/share/caddy/pixelfed/artisan schedule:run >> /dev/null 2>&1
Create a Caddyfile that translates HTTP web requests to PHP workers.
pixelfed.hyperreal.coffee {
root * /usr/share/caddy/pixelfed/public
header {
X-Frame-Options "SAMEORIGIN"
X-XSS-Protection "1; mode=block"
X-Content-Type-Options "nosniff"
}
php_fastcgi unix//run/php-fpm/pixelfed.sock
file_server
}
Updating Pixelfed
sudo su - pixelfed
cd /usr/share/caddy/pixelfed
git reset --hard
git pull origin dev
composer install
php artisan config:cache
php artisan route:cache
php artisan migrate --force
PostgreSQL
Change password for user
sudo -u user_name psql db_name
ALTER USER user_name WITH PASSWORD 'new_password';
Update password auth method to SCRAM
Edit /etc/postgresql/16/main/postgresql.conf:
password_encryption = scram-sha-256
Restart postgresql.service:
sudo systemctl restart postgresql.service
At this point, any services using the old MD5 auth method will fail to connect to their PostgreSQL databases.
Update the settings in /etc/postgresql/16/main/pg_hba.conf:
TYPE DATABASE USER ADDRESS METHOD
local all mastodon scram-sha-256
local all synapse_user scram-sha-256
Enter a psql shell and determine who needs to upgrade their auth method:
SELECT rolname, rolpassword ~ '^SCRAM-SHA-256\$' AS has_upgraded FROM pg_authid WHERE rolcanlogin;
\password username
Restart postgresql.service and all services using a PostgreSQL database:
sudo systemctl restart postgresql.service
sudo systemctl restart mastodon-web.service mastodon-sidekiq.service mastodon-streaming.service
sudo systemctl restart matrix-synapse.service
Prometheus
Download and install
Go to https://prometheus.io/download/ and download the latest version.
export PROM_VER="2.54.0"
wget "https://github.com/prometheus/prometheus/releases/download/v${PROM_VER}/prometheus-${PROM_VER}.linux-amd64.tar.gz"
Verify the checksum is correct.
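The release page publishes a sha256sums.txt alongside the tarballs; assuming it's downloaded into the same directory, the check looks like this:
wget "https://github.com/prometheus/prometheus/releases/download/v${PROM_VER}/sha256sums.txt"
sha256sum -c sha256sums.txt --ignore-missing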
Unpack the tarball:
tar xvfz prometheus-*.tar.gz
rm prometheus-*.tar.gz
Create two directories for Prometheus to use: /etc/prometheus for configuration files and /var/lib/prometheus for application data.
sudo mkdir /etc/prometheus /var/lib/prometheus
Move the prometheus and promtool binaries to /usr/local/bin:
cd prometheus-*
sudo mv prometheus promtool /usr/local/bin
Move the configuration file to the configuration directory:
sudo mv prometheus.yml /etc/prometheus/prometheus.yml
Move the remaining files to their appropriate directories:
sudo mv consoles/ console_libraries/ /etc/prometheus/
Verify that Prometheus is installed:
prometheus --version
Configure prometheus.service
Create a prometheus user and assign ownership to directories:
sudo useradd -rs /bin/false prometheus
sudo chown -R prometheus: /etc/prometheus /var/lib/prometheus
Save the following contents to a file at /etc/systemd/system/prometheus.service:
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle \
--log.level=info
[Install]
WantedBy=multi-user.target
Reload the system daemons:
sudo systemctl daemon-reload
Start and enable prometheus.service:
sudo systemctl enable --now prometheus.service
For systems running SELinux, the following policy settings must be applied.
module prometheus 1.0;
require {
type init_t;
type websm_port_t;
type user_home_t;
type unreserved_port_t;
type hplip_port_t;
class file { execute execute_no_trans map open read };
class tcp_socket name_connect;
}
#============= init_t ==============
allow init_t hplip_port_t:tcp_socket name_connect;
allow init_t unreserved_port_t:tcp_socket name_connect;
allow init_t user_home_t:file { execute execute_no_trans map open read };
allow init_t websm_port_t:tcp_socket name_connect;
Now compile and import the module:
sudo checkmodule -M -m -o prometheus.mod prometheus.te
sudo semodule_package -o prometheus.pp -m prometheus.mod
sudo semodule -i prometheus.pp
Restart prometheus.service. If it does not start, ensure all SELinux policies have been applied.
sudo grep "prometheus" /var/log/audit/audit.log | sudo audit2allow -M prometheus
sudo semodule -i prometheus.pp
Restart prometheus.service again.
The Prometheus web interface and dashboard should now be browsable at http://localhost:9090
Install and configure Node Exporter on each client using Ansible
Install the prometheus.prometheus role from Ansible Galaxy.
ansible-galaxy collection install prometheus.prometheus
Ensure you have an inventory file with clients to setup Prometheus on.
---
prometheus-clients:
hosts:
host0:
ansible_user: user0
ansible_host: host0 ip address or hostname
ansible_python_interpreter: /usr/bin/python3
host1:
...
host2:
...
Create prometheus-setup.yml.
---
- hosts: prometheus-clients
tasks:
- name: Import the node_exporter role
import_role:
name: prometheus.prometheus.node_exporter
The default values for the node_exporter role variables should be fine.
Run ansible-playbook.
ansible-playbook -i inventory.yml prometheus-setup.yml
Node Exporter should now be installed, started, and enabled on each host in the prometheus-clients group of the inventory.
To confirm that statistics are being collected on each host, navigate to http://host_url:9100. A page entitled Node Exporter should be displayed containing a link for Metrics. Click the link and confirm that statistics are being collected.
Note that each node_exporter host must be accessible through the firewall on port 9100. Firewalld can be configured for the internal zone on each host.
sudo firewall-cmd --zone=internal --permanent --add-source=<my_ip_addr>
sudo firewall-cmd --zone=internal --permanent --add-port=9100/tcp
Note: I have to configure the internal zone on Firewalld to allow traffic from my IP address on ports HTTP, HTTPS, SSH, and 1965 in order to access, for example, my web services on the node_exporter host.
Install Node Exporter on FreeBSD
As of FreeBSD 14.1-RELEASE, the version of Node Exporter available, v1.6.1, is outdated. To install the latest version, ensure the ports tree is checked out before running the commands below.
sudo cp -v /usr/ports/sysutils/node_exporter/files/node_exporter.in /usr/local/etc/rc.d/node_exporter
sudo chmod +x /usr/local/etc/rc.d/node_exporter
sudo chown root:wheel /usr/local/etc/rc.d/node_exporter
sudo pkg install gmake go
Download the latest release's source code from https://github.com/prometheus/node_exporter. Unpack the tarball.
tar xvf v1.8.2.tar.gz
cd node_exporter-1.8.2
gmake build
sudo mv node_exporter /usr/local/bin/
sudo chown root:wheel /usr/local/bin/node_exporter
sudo sysrc node_exporter_enable="YES"
sudo service node_exporter start
Configure Prometheus to monitor the client nodes
Edit /etc/prometheus/prometheus.yml. My Prometheus configuration looks like this:
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: "prometheus"
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ["localhost:9090"]
- job_name: "remote_collector"
scrape_interval: 10s
static_configs:
- targets: ["hyperreal.coffee:9100", "box.moonshadow.dev:9100", "10.0.0.26:9100", "bttracker.nirn.quest:9100"]
The remote_collector job scrapes metrics from each of the hosts running node_exporter. Ensure that port 9100 is open in the firewall, and if it is a public-facing node, ensure that port 9100 can only be accessed from my IP address.
Configure Prometheus to monitor qBittorrent client nodes
For each qBittorrent instance you want to monitor, setup a Docker or Podman container with https://github.com/caseyscarborough/qbittorrent-exporter. The containers will run on the machine running Prometheus so they are accessible at localhost. Let's say I have three qBittorrent instances I want to monitor.
podman run \
--name=qbittorrent-exporter-0 \
-e QBITTORRENT_USERNAME=username0 \
-e QBITTORRENT_PASSWORD=password0 \
-e QBITTORRENT_BASE_URL=http://localhost:8080 \
-p 17871:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
podman run \
--name=qbittorrent-exporter-1 \
-e QBITTORRENT_USERNAME=username1 \
-e QBITTORRENT_PASSWORD=password1 \
-e QBITTORRENT_BASE_URL=https://qbittorrent1.tld \
-p 17872:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
podman run \
--name=qbittorrent-exporter-2 \
-e QBITTORRENT_USERNAME=username2 \
-e QBITTORRENT_PASSWORD=password2 \
-e QBITTORRENT_BASE_URL=https://qbittorrent2.tld \
-p 17873:17871 \
--restart=always \
caseyscarborough/qbittorrent-exporter:latest
Using systemd quadlets
[Unit]
Description=qbittorrent-exporter
After=network-online.target
[Container]
Image=docker.io/caseyscarborough/qbittorrent-exporter:latest
ContainerName=qbittorrent-exporter
Environment=QBITTORRENT_USERNAME=username
Environment=QBITTORRENT_PASSWORD=password
Environment=QBITTORRENT_BASE_URL=http://localhost:8080
PublishPort=17871:17871
[Install]
WantedBy=multi-user.target default.target
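Quadlet files aren't started with podman run; they're placed in a systemd generator directory and turned into a service on daemon-reload. Assuming the unit above is saved as qbittorrent-exporter.container:
# Rootful quadlets live in /etc/containers/systemd/ (rootless: ~/.config/containers/systemd/)
sudo install -m 644 qbittorrent-exporter.container /etc/containers/systemd/
sudo systemctl daemon-reload
sudo systemctl start qbittorrent-exporter.service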
Now add this to the scrape_configs section of /etc/prometheus/prometheus.yml to configure Prometheus to scrape these metrics.
- job_name: "qbittorrent"
static_configs:
- targets: ["localhost:17871", "localhost:17872", "localhost:17873"]
Monitor Caddy with Prometheus and Loki
Caddy: metrics activation
Add the metrics global option and ensure the admin endpoint is enabled.
{
admin 0.0.0.0:2019
servers {
metrics
}
}
Restart Caddy:
sudo systemctl restart caddy
sudo systemctl status caddy
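Metrics should now be exposed on the admin endpoint; a quick check:
curl -s http://localhost:2019/metrics | head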
Caddy: logs activation
I have my Caddy configuration modularized, with /etc/caddy/Caddyfile being the central file. It looks something like this:
{
admin 0.0.0.0:2019
servers {
metrics
}
}
## hyperreal.coffee
import /etc/caddy/anonoverflow.caddy
import /etc/caddy/breezewiki.caddy
import /etc/caddy/cdn.caddy
...
Each file that is imported is a virtual host that has its own separate configuration and corresponds to a subdomain of hyperreal.coffee. I have logging disabled on most of them except the ones for which troubleshooting with logs would be convenient, such as the one for my Mastodon instance. For /etc/caddy/fedi.caddy, I've added these lines to enable logging:
fedi.hyperreal.coffee {
log {
output file /var/log/caddy/fedi.log {
roll_size 100MiB
roll_keep 5
roll_keep_for 100d
}
format json
level INFO
}
}
Restart caddy.
sudo systemctl restart caddy
sudo systemctl status caddy
Ensure port 2019 can only be accessed by my IP address, using Firewalld's internal zone:
sudo firewall-cmd --zone=internal --permanent --add-port=2019/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --info-zone=internal
Add the Caddy configuration to the scrape_configs section of /etc/prometheus/prometheus.yml:
- job_name: "caddy"
static_configs:
- targets: ["hyperreal.coffee:2019"]
Restart Prometheus on the monitor host:
sudo systemctl restart prometheus.service
Loki and Promtail setup
On the node running Caddy, install the loki and promtail packages:
sudo apt install -y loki promtail
Edit the Promtail configuration file at /etc/promtail/config.yml:
- job_name: caddy
static_configs:
- targets:
- localhost
labels:
job: caddy
__path__: /var/log/caddy/*.log
agent: caddy-promtail
pipeline_stages:
- json:
expressions:
duration: duration
status: status
- labels:
duration:
status:
The entire Promtail configuration should look like this:
# This minimal config scrape only single log file.
# Primarily used in rpm/deb packaging where promtail service can be started during system init process.
# And too much scraping during init process can overload the complete system.
# https://github.com/grafana/loki/issues/11398
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://localhost:3100/loki/api/v1/push
scrape_configs:
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: varlogs
#NOTE: Need to be modified to scrape any additional logs of the system.
__path__: /var/log/messages
- job_name: caddy
static_configs:
- targets:
- localhost
labels:
job: caddy
__path__: /var/log/caddy/*log
agent: caddy-promtail
pipeline_stages:
- json:
expressions:
duration: duration
status: status
- labels:
duration:
status:
Restart Promtail and Loki services:
sudo systemctl restart promtail
sudo systemctl restart loki
To ensure that the promtail user has permissions to read caddy logs:
sudo usermod -aG caddy promtail
sudo chmod g+r /var/log/caddy/*.log
The Prometheus dashboard should now show the Caddy target with a state of "UP".
Monitor TOR node
Edit /etc/tor/torrc to add the Metrics info. x.x.x.x is the IP address where Prometheus is running.
## Prometheus exporter
MetricsPort 0.0.0.0:9035 prometheus
MetricsPortPolicy accept x.x.x.x
Configure FirewallD to allow inbound traffic to port 9035 on the internal zone. Ensure the internal zone's source is the IP address of the server where Prometheus is running. Ensure port 443 is accessible from the Internet on FirewallD's public zone.
sudo firewall-cmd --zone=internal --permanent --add-source=x.x.x.x
sudo firewall-cmd --zone=internal --permanent --add-port=9035/tcp
sudo firewall-cmd --zone=public --permanent --add-service=https
sudo firewall-cmd --reload
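From the Prometheus host (the only address the MetricsPortPolicy accepts), the exporter can be checked directly:
curl -s http://y.y.y.y:9035/metrics | head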
Edit /etc/prometheus/prometheus.yml to add the TOR config. y.y.y.y is the IP address where TOR is running.
scrape_configs:
- job_name: "tor-relay"
static_configs:
- targets: ["y.y.y.y:9035"]
Restart Prometheus.
sudo systemctl restart prometheus.service
Go to Grafana and import tor_stats.json as a new dashboard, using the Prometheus datasource.
Monitor Synapse homeserver
On the server running Synapse, edit /etc/matrix-synapse/homeserver.yaml to enable metrics.
enable_metrics: true
Add a new listener to /etc/matrix-synapse/homeserver.yaml for Prometheus metrics.
listeners:
- port: 9400
type: metrics
bind_addresses: ['0.0.0.0']
On the server running Prometheus, add a target for Synapse.
- job_name: "synapse"
scrape_interval: 1m
metrics_path: "/_synapse/metrics"
static_configs:
- targets: ["hyperreal:9400"]
Also add the Synapse recording rules.
rule_files:
- /etc/prometheus/synapse-v2.rules
On the server running Prometheus, download the Synapse recording rules.
sudo wget https://files.hyperreal.coffee/prometheus/synapse-v2.rules -O /etc/prometheus/synapse-v2.rules
Restart Prometheus.
Use synapse.json for Grafana dashboard.
Monitor Elasticsearch
On the host running Elasticsearch, download the latest binary from the GitHub releases.
tar xvf elasticsearch_exporter*.tar.gz
cd elasticsearch_exporter*/
sudo cp -v elasticsearch_exporter /usr/local/bin/
Create /etc/systemd/system/elasticsearch_exporter.service.
[Unit]
Description=elasticsearch exporter
After=network.target
[Service]
Restart=always
User=prometheus
ExecStart=/usr/local/bin/elasticsearch_exporter --es.uri=http://localhost:9200
ExecReload=/bin/kill -HUP $MAINPID
TimeoutStopSec=20s
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
Reload the daemons and enable/start elasticsearch_exporter.
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch_exporter.service
Ensure port 9114 is allowed in Firewalld's internal zone.
sudo firewall-cmd --permanent --zone=internal --add-port=9114/tcp
sudo firewall-cmd --reload
If using Tailscale, ensure the host running Prometheus can access port 9114 on the host running Elasticsearch.
On the host running Prometheus, download the elasticsearch.rules.
wget https://raw.githubusercontent.com/prometheus-community/elasticsearch_exporter/refs/heads/master/examples/prometheus/elasticsearch.rules.yml
sudo mv elasticsearch.rules.yml /etc/prometheus/
Edit /etc/prometheus/prometheus.yml to add the elasticsearch_exporter config.
rule_files:
- "/etc/prometheus/elasticsearch.rules.yml"
...
...
- job_name: "elasticsearch_exporter"
static_configs:
- targets: ["hyperreal:9114"]
Restart Prometheus.
sudo systemctl restart prometheus.service
For a Grafana dashboard, copy the contents of the file located here: https://files.hyperreal.coffee/grafana/elasticsearch.json.
Use HTTPS with Tailscale
If this step has been done already, skip it.
sudo tailscale cert HOSTNAME.TAILNET.ts.net
sudo mkdir /etc/tailscale-ssl-certs
sudo mv HOSTNAME.TAILNET.ts.net.crt HOSTNAME.TAILNET.ts.net.key /etc/tailscale-ssl-certs/
sudo chown -R root:root /etc/tailscale-ssl-certs
Ensure the prometheus.service systemd unit file contains the --web.config.file flag.
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries \
--web.listen-address=0.0.0.0:9090 \
--web.enable-lifecycle \
--web.config.file /etc/prometheus/web.yml \
--log.level=info
[Install]
WantedBy=multi-user.target
Create the file /etc/prometheus/web.yml.
tls_server_config:
cert_file: /etc/prometheus/prometheus.crt
key_file: /etc/prometheus/prometheus.key
Copy the cert and key to /etc/prometheus.
sudo cp -v /etc/tailscale-ssl-certs/HOSTNAME.TAILNET.ts.net.crt /etc/prometheus/prometheus.crt
sudo cp -v /etc/tailscale-ssl-certs/HOSTNAME.TAILNET.ts.net.key /etc/prometheus/prometheus.key
Ensure the permissions are correct on the web config, cert, and key.
sudo chown prometheus:prometheus /etc/prometheus/web.yml
sudo chown prometheus:prometheus /etc/prometheus/prometheus.crt
sudo chown prometheus:prometheus /etc/prometheus/prometheus.key
sudo chmod 644 /etc/prometheus/prometheus.crt
sudo chmod 644 /etc/prometheus/prometheus.key
Reload the daemons and restart Prometheus.
sudo systemctl daemon-reload
sudo systemctl restart prometheus.service
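To confirm Prometheus is now serving TLS, hit its health endpoint over HTTPS using the tailnet hostname the certificate was issued for (assuming MagicDNS resolves the name):
curl https://HOSTNAME.TAILNET.ts.net:9090/-/healthy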
QCOW2
Mount qcow2 image
Enable NBD on the host:
sudo modprobe nbd max_part=8
Connect qcow2 image as a network block device:
sudo qemu-nbd --connect=/dev/nbd0 /path/to/image.qcow2
Find the VM's partitions:
sudo fdisk /dev/nbd0 -l
Mount the partition from the VM:
sudo mount /dev/nbd0p3 /mnt/point
To unmount:
sudo umount /mnt/point
sudo qemu-nbd --disconnect /dev/nbd0
sudo rmmod nbd
Resize qcow2 image
Install guestfs-tools (required for virt-resize command):
sudo dnf install -y guestfs-tools
sudo apt install -y guestfs-tools libguestfs-tools
To resize qcow2 images, you'll have to create a new qcow2 image with the size you want, then use virt-resize to expand the old image into the new one.
You'll need to know the root partition within the old qcow2 image.
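One way to find the root partition is virt-filesystems from guestfs-tools (olddisk.qcow2 is a placeholder matching the commands below):
virt-filesystems --long -h --all -a olddisk.qcow2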
Create a new qcow2 image with the size you want:
qemu-img create -f qcow2 -o preallocation=metadata newdisk.qcow2 100G
Now resize the old one to the new one:
virt-resize --expand /dev/vda3 olddisk.qcow2 newdisk.qcow2
Once you boot into the new qcow2 image, you'll probably have to adjust the size of the logical volume if it has LVM:
sudo lvresize -l +100%FREE /dev/mapper/sysvg-root
Then resize the XFS root partition within the logical volume:
sudo xfs_growfs /dev/mapper/sysvg-root
QEMU
Take snapshot of VM
sudo virsh domblklist vm1
Target Source
-----------------------------------------------
vda /var/lib/libvirt/images/vm1.img
sudo virsh snapshot-create-as \
--domain vm1 \
--name guest-state1 \
--diskspec vda,file=/var/lib/libvirt/images/overlay1.qcow2 \
--disk-only \
--atomic \
--quiesce
Ensure qemu-guest-agent is installed inside the VM. Otherwise, omit the --quiesce flag, but when you restore the VM it will be as if the system had crashed. That's not a big deal, since the VM's OS should flush required data and maintain the consistency of its filesystems.
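If the agent isn't installed yet, inside the guest (package and service names assumed for Fedora/Debian-family guests):
sudo dnf install -y qemu-guest-agent      # Fedora/RHEL guests
sudo apt install -y qemu-guest-agent      # Debian/Ubuntu guests
sudo systemctl enable --now qemu-guest-agent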
While the snapshot overlay receives new guest writes, copy the now-quiescent base image, then commit the overlay back into it:
sudo rsync -avhW --progress /var/lib/libvirt/images/vm1.img /var/lib/libvirt/images/vm1-copy.img
sudo virsh blockcommit vm1 vda --active --verbose --pivot
Full disk backup of VM
Start the guest VM:
sudo virsh start vm1
Enumerate the disk(s) in use:
sudo virsh domblklist vm1
Target Source
-------------------------------------------------
vda /var/lib/libvirt/images/vm1.qcow2
Begin the backup:
sudo virsh backup-begin vm1
Backup started
Check the job status. "None" means the job has likely completed.
sudo virsh domjobinfo vm1
Job type: None
Check the completed job status:
sudo virsh domjobinfo vm1 --completed
Job type: Completed
Operation: Backup
Time elapsed: 182 ms
File processed: 39.250 MiB
File remaining: 0.000 B
File total: 39.250 MiB
Now we see the copy of the backup:
sudo ls -lash /var/lib/libvirt/images/vm1.qcow2*
15M -rw-r--r--. 1 qemu qemu 15M May 10 12:22 vm1.qcow2
21M -rw-------. 1 root root 21M May 10 12:23 vm1.qcow2.1620642185
RAID
Mount RAID1 mirror
/dev/sda1
/dev/sdb1
Assemble the RAID array:
sudo mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1
Mount the RAID device:
sudo mount /dev/md0 /mnt
Configure msmtp for mdmonitor.service (Ubuntu 24.04)
sudo apt install msmtp msmtp-mta
Edit /etc/msmtprc.
# Resend account
account resend
host smtp.resend.com
from admin@hyperreal.coffee
port 2587
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
auth on
user resend
password APIKEY GO HERE
syslog LOG_MAIL
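To confirm the account works before wiring it into mdadm, send a quick test message through msmtp (recipient taken from the MAILADDR below):
printf "From: admin@hyperreal.coffee\nTo: hyperreal@moonshadow.dev\nSubject: msmtp test\n\nTest message from msmtp.\n" | msmtp -a resend hyperreal@moonshadow.dev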
Edit /etc/mdadm.conf.
MAILADDR hyperreal@moonshadow.dev
MAILFROM admin@hyperreal.coffee
PROGRAM msmtp
ARRAY ...
ARRAY ...
Rename sendmail and symlink msmtp to sendmail.
sudo mv /usr/sbin/sendmail /usr/sbin/sendmail.bak
sudo ln -s /usr/bin/msmtp /usr/sbin/sendmail
Send a test email.
sudo mdadm --monitor --scan --test --oneshot
Restart mdmonitor.service.
sudo systemctl restart mdmonitor.service
RetroPie
Bluetooth: protocol not available
sudo apt install pulseaudio-module-bluetooth
Add the following to the [Service] section of /lib/systemd/system/bthelper@.service:
ExecStartPre=/bin/sleep 4
sudo systemctl start sys-subsystem-bluetooth-devices-hci0.device
sudo hciconfig hci0 down
sudo killall pulseaudio
systemctl --user enable --now pulseaudio.service
sudo systemctl restart bluetooth.service
Resident Evil HD
Installation
- Download Resident Evil Classic Triple Pack PC from archive.org. This contains the Sourcenext versions of all three games.
- Install all three games using their installers.
- Download the following files:
- Biohazard PC CD-ROM Mediakite patch version 1.01
- Resident Evil Classic REbirth
- Resident Evil 2 Classic REbirth
- Resident Evil 3 Classic REbirth
- Biohazard Mediakite
- Resident Evil HD mod by TeamX
- Resident Evil 2 HD mod by TeamX
- Resident Evil 3 HD mod by TeamX
- Resident Evil Seamless HD Project v1.1
- Resident Evil 2 Seamless HD Project v2.0
- Resident Evil 3: Nemesis Seamless HD Project v2.0
- Open the Biohazard Mediakite disc image with 7zip and drag the JPN folder from the disc into C:\Program Files (x86)\Games Retro\Resident Evil Classic
Resident Evil Director's Cut
Extract the following files to %ProgramFiles(x86)%\Games Retro\Resident Evil Classic:
- Biohazard.exe from Mediakite v1.01
- ddraw.dll from Resident Evil Classic REbirth
- All from Resident Evil HD mod by TeamX
- All from Resident Evil Seamless HD Project v1.1
Resident Evil 2
Extract the following files to %ProgramFiles(x86)%\Games Retro\BIOHAZARD 2 PC:
- ddraw.dll from Resident Evil 2 Classic REbirth
- All from Resident Evil 2 HD mod by TeamX
- All from Resident Evil 2 Seamless HD Project v2.0
Resident Evil 3: Nemesis
Extract the following files to %ProgramFiles(x86)%\Games Retro\BIOHAZARD 3 PC:
- ddraw.dll from Resident Evil 3 Classic REbirth
- All from Resident Evil 3 HD mod by TeamX
- All from Resident Evil 3: Nemesis Seamless HD Project v2.0
Testing
Test each game by launching them with the following config changes:
- Resolution 1280x960
- RGB88 colors
- Disable texture filtering
RSS
Source: Simple RSS, Atom and JSON feed for your blog
A reference for those of us goblins who like to write out our RSS and Atom XML files by hand. ;)
RSS
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Example website title</title>
<link>https://example.com</link>
<description>Example website description.</description>
<atom:link href="https://example.com/rss.xml" rel="self" type="application/rss+xml" />
<item>
<title>Post one</title>
<link>https://example.com/posts-one</link>
<description>Post one content.</description>
<guid isPermaLink="true">https://example.com/posts-one</guid>
<pubDate>Mon, 22 May 2023 13:00:00 -0600</pubDate>
</item>
<item>
<title>Post two</title>
<link>https://example.com/posts-two</link>
<description>Post two content.</description>
<guid isPermaLink="true">https://example.com/posts-two</guid>
<pubDate>Mon, 15 May 2023 13:00:00 -0600</pubDate>
</item>
</channel>
</rss>
Atom
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<id>http://example.com/</id>
<title>Example website title</title>
<updated>2023-05-22T13:00:00.000Z</updated>
<author>
<name>John Doe</name>
</author>
<link href="https://example.com/atom.xml" rel="self" type="application/rss+xml" />
<subtitle>Example website description.</subtitle>
<entry>
<id>https://example.com/posts-one</id>
<title>Post one</title>
<link href="https://example.com/posts-one"/>
<updated>2023-05-22T13:00:00.000Z</updated>
<summary type="html">https://example.com/posts-one</summary>
<content type="html">Post one content.</content>
</entry>
<entry>
<id>https://example.com/posts-two</id>
<title>Post two</title>
<link href="https://example.com/posts-two"/>
<updated>2023-05-15T13:00:00.000Z</updated>
<summary type="html">https://example.com/posts-two</summary>
<content type="html">Post two content.</content>
</entry>
</feed>
JSON
{
"version": "https://jsonfeed.org/version/1.1",
"title": "Example website title",
"home_page_url": "https://example.com",
"feed_url": "https://example.com/feed.json",
"description": "Example website description.",
"items": [
{
"id": "https://example.com/posts-one",
"url": "https://example.com/posts-one",
"title": "Post one content.",
"content_text": "Post one content.",
"date_published": "2023-05-22T13:00:00.000Z"
},
{
"id": "https://example.com/posts-two",
"url": "https://example.com/posts-two",
"title": "Post two content.",
"content_text": "Post two content.",
"date_published": "2023-05-15T13:00:00.000Z"
}
]
}
Resources
- The RSS 2.0 Specification
- The Atom Syndication Format Specification
- The JSON Feed Version 1.1 Specification
- RSS and Atom Feed validator
- JSON Feed validator
Systemd
Install systemd-boot on Debian
sudo mkdir /boot/efi/loader
printf "default systemd\ntimeout 5\neditor 1\n" | sudo tee /boot/efi/loader/loader.conf
sudo mkdir -p /boot/efi/loader/entries
sudo apt install -y systemd-boot
sudo bootctl install --path=/boot/efi
Check efibootmgr:
sudo efibootmgr
Output:
BootOrder: 0000,0001
Boot0000* Linux Boot Manager
Mount NFS share
Create a unit file at /etc/systemd/system/mnt-backup.mount. The name of the unit file must match the Where directive. Ex. Where=/mnt/backup --> mnt-backup.mount.
[Unit]
Description=borgbackup NFS share from TrueNAS (10.0.0.81)
DefaultDependencies=no
Conflicts=umount.target
After=network-online.target remote-fs.target
Before=umount.target
[Mount]
What=10.0.0.81:/mnt/coffeeNAS/backup
Where=/mnt/backup
Type=nfs
Options=defaults
[Install]
WantedBy=multi-user.target
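Then reload systemd and enable the mount unit:
sudo systemctl daemon-reload
sudo systemctl enable --now mnt-backup.mount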
Torrenting
Setup a FreeBSD thick VNET jail for torrenting Anna's Archive
Setup the VNET bridge
Create the bridge.
ifconfig bridge create
Attach the bridge to the main network interface, igc0 in this case. For some reason, the resulting bridge device is named igb0bridge, rather than bridge0.
ifconfig igb0bridge addm igc0
To make this persist across reboots, add the following to /etc/rc.conf.
defaultrouter="10.0.0.1"
cloned_interfaces="igb0bridge"
ifconfig_igb0bridge="inet 10.0.0.8/24 addm igc0 up"
Create the classic (thick) jail
Create the ZFS dataset for the jails. We'll use basejail as a template for subsequent jails.
zfs create -o mountpoint=/jails naspool/jails
zfs create naspool/jails/basejail
Use the bsdinstall utility to bootstrap the base system to the basejail.
export DISTRIBUTIONS="base.txz"
export BSDINSTALL_DISTSITE=https://download.freebsd.org/ftp/releases/amd64/14.2-RELEASE/
bsdinstall jail /jails/basejail
Run freebsd-update to update the base jail.
freebsd-update -b /jails/basejail fetch install
freebsd-update -b /jails/basejail IDS
We now snapshot the basejail and create a clone of this snapshot for the aa-torrenting jail that we will use for Anna's Archive.
zfs snapshot naspool/jails/basejail@`freebsd-version`
zfs clone naspool/jails/basejail@`freebsd-version` naspool/jails/aa-torrenting
We now use the following configuration for /etc/jail.conf.
aa-torrenting {
exec.consolelog = "/var/log/jail_console_${name}.log";
allow.raw_sockets;
exec.clean;
mount.devfs;
devfs_ruleset = 11;
path = "/jails/${name}";
host.hostname = "${name}";
vnet;
vnet.interface = "${epair}b";
$id = "127";
$ip = "10.0.0.${id}/24";
$gateway = "10.0.0.1";
$bridge = "igb0bridge";
$epair = "epair${id}";
exec.prestart = "/sbin/ifconfig ${epair} create up";
exec.prestart += "/sbin/ifconfig ${epair}a up descr jail:${name}";
exec.prestart += "/sbin/ifconfig ${bridge} addm ${epair}a up";
exec.start += "/sbin/ifconfig ${epair}b ${ip} up";
exec.start += "/sbin/route add default ${gateway}";
exec.start += "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.poststop = "/sbin/ifconfig ${bridge} deletem ${epair}a";
exec.poststop += "/sbin/ifconfig ${epair}a destroy";
}
Now we create the devfs ruleset to enable access to devices under /dev inside the jail. Add the following to /etc/devfs.rules.
[devfsrules_jail_vnet=11]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add include $devfsrules_jail
add path 'tun*' unhide
add path 'bpf*' unhide
Enable the jail utility in /etc/rc.conf.
sysrc jail_enable="YES"
sysrc jail_parallel_start="YES"
Start the jail service for aa-torrenting.
service jail start aa-torrenting
Setting up Wireguard inside the jail
Since we have the /dev/tun* devfs rule, we now need to install Wireguard inside the jail.
jexec -u root aa-torrenting
pkg install wireguard-tools wireguard-go
Download a Wireguard configuration for ProtonVPN, and save it to /usr/local/etc/wireguard/wg0.conf.
Enable Wireguard to run when the jail boots up.
sysrc wireguard_enable="YES"
sysrc wireguard_interfaces="wg0"
Start the Wireguard daemon and make sure you are connected to it properly.
service wireguard start
curl ipinfo.io
The curl command should display the IP address of the Wireguard server defined in /usr/local/etc/wireguard/wg0.conf.
Setting up qBittorrent inside the jail
Install the qbittorrent-nox package.
pkg install -y qbittorrent-nox
Before running the daemon from /usr/local/etc/rc.d/qbittorrent, run qbittorrent-nox once from the shell so we can see the default password generated for the web UI. For some reason it is not shown in any logs, and the qbittorrent-nox manpage wrongly claims the password is "adminadmin"; experience shows otherwise.
pkg install -y sudo
sudo -u qbittorrent qbittorrent-nox --profile=/var/db/qbittorrent/conf --save-path=/var/db/qbittorrent/Downloads --confirm-legal-notice
Copy the password displayed after running the command. Log in to the qBittorrent web UI at http://10.0.0.127:8080 with login admin and the password you copied. In the web UI, open the options menu and go to the Web UI tab. Change the login password to your own. Save the options to close the menu.
Now press CTRL-c to stop the qbittorrent-nox process. Make the following changes to the aa-torrenting jail's /etc/rc.conf.
sysrc qbittorrent_enable="YES"
sysrc qbittorrent_flags="--confirm-legal-notice"
Enable the qBittorrent daemon.
service qbittorrent start
Go back to the web UI at http://10.0.0.127:8080. Go to the options menu and over to the Advanced tab, which is the very last tab. Change the network interface to wg0.
Finding the forwarded port that the ProtonVPN server is using
Install the libnatpmp package.
pkg install libnatpmp
Make sure that port forwarding is allowed on the server you're connected to, which it should be if you enabled it while creating the Wireguard configuration on the ProtonVPN website. Run the natpmpc command against the ProtonVPN Wireguard gateway.
natpmpc -g 10.2.0.1
If the output looks like the following, you're good.
initnatpmp() returned 0 (SUCCESS)
using gateway : 10.2.0.1
sendpublicaddressrequest returned 2 (SUCCESS)
readnatpmpresponseorretry returned 0 (OK)
Public IP address : 62.112.9.165
epoch = 58081
closenatpmp() returned 0 (SUCCESS)
Now create the UDP and TCP port mappings, then loop natpmpc so that it doesn't expire.
while true ; do date ; natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo -e "ERROR with natpmpc command \a" ; break ; } ; sleep 45 ; done
The port allocated for this server is shown on the line that says "Mapped public port XXXXX protocol UDP to local port 0 liftime 60". Port forwarding is now activated. Copy this port number and, in the qBittorrent web UI options menu, go to the Connections tab and enter it into the "Port used for incoming connections" box. Make sure to uncheck the "Use UPnP / NAT-PMP port forwarding from my router" box.
The loop needs to keep running: the mapping only lives for 60 seconds, so if the loop terminates you'll need to re-run it whenever you start a new port-forwarding session.
TODO: Create an RC script for this that can be enabled with sysrc and sends output to /var/log/natpmpc-port-forwarding.log.
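A rough sketch of what that rc script could look like, using daemon(8) to supervise the loop and write its output to the log. Paths, names, and rcorder ordering are assumptions, not a tested implementation; it also assumes the while-loop above has been saved as an executable /usr/local/bin/natpmpc-loop.sh.
#!/bin/sh
# /usr/local/etc/rc.d/natpmpc_forward
# PROVIDE: natpmpc_forward
# REQUIRE: NETWORKING wireguard
# KEYWORD: shutdown

. /etc/rc.subr

name="natpmpc_forward"
rcvar="natpmpc_forward_enable"
pidfile="/var/run/${name}.pid"

# daemon(8): -r restarts the loop if it exits, -o captures its output
command="/usr/sbin/daemon"
command_args="-r -P ${pidfile} -o /var/log/natpmpc-port-forwarding.log /usr/local/bin/natpmpc-loop.sh"

load_rc_config $name
: ${natpmpc_forward_enable:="NO"}

run_rc_command "$1"
Make the script executable, then enable and start it inside the jail:
chmod +x /usr/local/etc/rc.d/natpmpc_forward
sysrc natpmpc_forward_enable="YES"
service natpmpc_forward start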
Void Linux
Install on encrypted Btrfs
Source: Void Linux Installation Guide
First, update xbps.
xbps-install -Syu xbps
Partition disk
Install gptfdisk.
xbps-install -Sy gptfdisk
Run gdisk.
gdisk /dev/nvme1n1
Create the following partitions:
Partition Type | Size
---|---
EFI | +600M
boot | +900M
root | Remaining space
Create the filesystems.
mkfs.vfat -nBOOT -F32 /dev/nvme1n1p1
mkfs.ext4 -L grub /dev/nvme1n1p2
cryptsetup luksFormat --type=luks --key-size=512 /dev/nvme1n1p3
cryptsetup open /dev/nvme1n1p3 cryptroot
mkfs.btrfs -L void /dev/mapper/cryptroot
Mount partitions and create Btrfs subvolumes.
mount -o defaults,compress=zstd:1 /dev/mapper/cryptroot /mnt
btrfs subvolume create /mnt/root
btrfs subvolume create /mnt/home
umount /mnt
mount -o defaults,compress=zstd:1,subvol=root /dev/mapper/cryptroot /mnt
mkdir /mnt/home
mount -o defaults,compress=zstd:1,subvol=home /dev/mapper/cryptroot /mnt/home
Create Btrfs subvolumes for parts of the filesystem to exclude from snapshots. Nested subvolumes are not included in snapshots.
mkdir -p /mnt/var/cache
btrfs subvolume create /mnt/var/cache/xbps
btrfs subvolume create /mnt/var/tmp
btrfs subvolume create /mnt/srv
btrfs subvolume create /mnt/var/swap
Mount EFI and boot partitions.
mkdir /mnt/efi
mount -o rw,noatime /dev/nvme1n1p1 /mnt/efi
mkdir /mnt/boot
mount -o rw,noatime /dev/nvme1n1p2 /mnt/boot
Base system installation
If using x86_64:
REPO=https://mirrors.hyperreal.coffee/voidlinux/current
ARCH=x86_64
If using musl:
REPO=https://mirrors.hyperreal.coffee/voidlinux/current/musl
ARCH=x86_64-musl
Install the base system.
XBPS_ARCH=$ARCH xbps-install -S -R "$REPO" -r /mnt base-system base-devel btrfs-progs cryptsetup vim sudo dosfstools mtools void-repo-nonfree
chroot
Mount the pseudo filesystems for the chroot.
for dir in dev proc sys run; do mount --rbind /$dir /mnt/$dir; mount --make-rslave /mnt/$dir; done
Copy DNS configuration.
cp -v /etc/resolv.conf /mnt/etc/
Chroot.
PS1='(chroot) # ' chroot /mnt/ /bin/bash
Set hostname.
echo "hostname" > /etc/hostname
Set timezone.
ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime
Synchronize the hardware clock.
hwclock --systohc
If using glibc, uncomment en_US.UTF-8 from /etc/default/libc-locales. Then run:
xbps-reconfigure -f glibc-locales
Set root password.
passwd root
Configure /etc/fstab.
UEFI_UUID=$(blkid -s UUID -o value /dev/nvme1n1p1)
GRUB_UUID=$(blkid -s UUID -o value /dev/nvme1n1p2)
ROOT_UUID=$(blkid -s UUID -o value /dev/mapper/cryptroot)
cat << EOF > /etc/fstab
UUID=$ROOT_UUID / btrfs defaults,compress=zstd:1,subvol=root 0 1
UUID=$UEFI_UUID /efi vfat defaults,noatime 0 2
UUID=$GRUB_UUID /boot ext4 defaults,noatime 0 2
UUID=$ROOT_UUID /home btrfs defaults,compress=zstd:1,subvol=home 0 2
tmpfs /tmp tmpfs defaults,nosuid,nodev 0 0
EOF
Setup Dracut. A "hostonly" install means that Dracut will generate a lean initramfs containing only what this machine needs to boot.
echo "hostonly=yes" >> /etc/dracut.conf
If you have an Intel CPU:
xbps-install -Syu intel-ucode
Install GRUB.
xbps-install -Syu grub-x86_64-efi os-prober
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id="Void Linux"
If you are dual-booting with another OS:
echo "GRUB_DISABLE_OS_PROBER=0" >> /etc/default/grub
Setup encrypted swapfile.
truncate -s 0 /var/swap/swapfile
chattr +C /var/swap/swapfile
chmod 600 /var/swap/swapfile
dd if=/dev/zero of=/var/swap/swapfile bs=1G count=16 status=progress
mkswap /var/swap/swapfile
swapon /var/swap/swapfile
RESUME_OFFSET=$(btrfs inspect-internal map-swapfile -r /var/swap/swapfile)
cat << EOF >> /etc/default/grub
GRUB_CMDLINE_LINUX="resume=UUID=$ROOT_UUID resume_offset=$RESUME_OFFSET"
EOF
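The swapfile also needs an fstab entry so it is activated at boot (path matches the file created above):
echo "/var/swap/swapfile none swap defaults 0 0" >> /etc/fstab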
Regenerate configurations.
xbps-reconfigure -fa
Install Xorg and Xfce.
xbps-install -Syu xorg xfce4
If you have a recent Nvidia GPU:
xbps-install -Syu nvidia
Add user.
useradd -c "Jeffrey Serio" -m -s /usr/bin/zsh -U jas
passwd jas
echo "jas ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers.d/jas
Enable system services.
for svc in "NetworkManager" "crond" "dbus" "lightdm" "ntpd" "snapperd" "sshd"; do
ln -sf /etc/sv/$svc /var/service;
done
Disable bitmap fonts.
ln -sf /usr/share/fontconfig/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d/
xbps-reconfigure -f fontconfig
Setup package repository.
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current" | tee /etc/xbps.d/00-repository-main.conf
# For musl
echo "repository=https://mirrors.hyperreal.coffee/voidlinux/current/musl" | tee /etc/xbps.d/00-repository-main.conf
Setup Pipewire for audio.
mkdir -p /etc/pipewire/pipewire.conf.d
ln -sf /usr/share/examples/wireplumber/10-wireplumber.conf /etc/pipewire/pipewire.conf.d/
ln -sf /usr/share/applications/pipewire.desktop /etc/xdg/autostart/
Generate configurations.
xbps-reconfigure -fa
Exit chroot, unmount disks, and reboot.
exit
umount -lR /mnt
reboot
Windows
Repair boot files
- Download Windows 11 ISO from Microsoft and write to USB.
- Boot into Windows setup utility.
- Select Repair computer -> Troubleshoot -> Advanced -> Cmd prompt
This procedure assumes the following:
- main disk is disk 0
- EFI partition is part 1
- Windows OS drive letter is c:
The following commands will format the old EFI partition, mount it to s:, and copy the boot files to it:
diskpart
> list disk
> sel disk 0
> list part
> sel part 1
> format fs=fat32 quick label=System
> list vol
> exit
mountvol S: /S
bcdboot c:\windows /s s: /f UEFI /v
exit
ZFS
Difference between scrub and resilver
Lifted from Haravikk on ServerFault.
The main scrubbing and resilvering processes in ZFS are essentially identical – in both cases records are being read and verified, and if necessary written out to any disk(s) with invalid (or missing) data.
Since ZFS is aware of which records a disk should have, it won't bother trying to read records that shouldn't exist. This means that during resilvering, new disks will see little or no read activity as there's nothing to read (or at least ZFS doesn't believe there is).
This also means that if a disk becomes unavailable and then available again, ZFS will resilver only the new records created since the disk went unavailable. Resilvering happens automatically in this way, whereas scrubs typically have to be initiated (either manually, or via a scheduled command).
There is also a special "sequential resilver" option for mirrored vdevs that can be triggered using zpool attach -s or zpool replace -s – this performs a faster copy of all data without any checking, and initiates a deferred scrub to verify integrity later. This is good for quickly restoring redundancy, but should only be used if you're confident that the existing data is correct (you run regular scrubs, or scrubbed before adding/replacing).
Finally there are some small differences in settings for scrub and resilver - in general a resilver is given a higher priority than a scrub since it's more urgent (restoring/increasing redundancy), though due to various factors this may not mean a resilver is faster than a scrub depending upon write speed, number of record copies available etc.
For example, when dealing with a mirror a resilver can be faster since it doesn't need to read from all disks, but only if the new disk is fast enough (can be written to at least as quickly as the other disk(s) are read from). A scrub meanwhile always reads from all disks, so for a mirror vdev it can be more intensive. For a raidz1 both processes will read from all (existing) disks, so the resilver will be slower as it also requires writing to one, a raidz2 doesn't need to read all disks so might gain a little speed and so-on.
Basically there's no concrete answer to cover every setup. 😉
Specifically with regards to the original question:
If you know a disk has failed and want to replace it, and are using a mirrored vdev, then a sequential resilver + scrub (zpool replace -s) will be faster in terms of restoring redundancy and performance, but it'll take longer overall before you know for sure that the data was fully restored without any errors since you need to wait for the deferred scrub. A regular resilver will take longer to finish copying the data, but is verified the moment it finishes.
However, if you're talking about repairing data on a disk you still believe to be okay then a scrub is the fastest option, as it will only copy data which fails verification, otherwise the process is entirely reading and checking so it's almost always going to be faster.
In theory a resilver can be just as fast as a scrub, or even faster (since it's higher priority), assuming you are copying onto a suitably fast new drive that's optimised for continuous writing. In practice though that's usually not going to be the case.
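For reference, a sequential replace on a mirror looks roughly like this (pool and device names are placeholders); the deferred scrub shows up later in zpool status:
zpool replace -s tank /dev/sda /dev/sdb   # fast, unverified copy onto the new disk
zpool status tank                         # watch the resilver, then the deferred scrub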