As a responsible data steward, I was looking for a way to send my TrueNAS snapshots somewhere far away as an offsite backup. I previously found a blog post by Alan Norbauer outlining how to run ZeroTier on TrueNAS, but I'm a big fan of the Nebula project and wanted to use it instead (although I did use ZeroTier for a while, following the instructions in the blog post, and it worked well).
In this post, I've outlined the setup I've been using for the last couple years on a mix of TrueNAS Core (yes, still sticking with the BSD-based version as long as I can) and TrueNAS Scale boxes. If you've been searching for a way to connect remote TrueNAS instances for backup, this might work for you, too.
Introduction
This is a simple service file for running Nebula as a service/daemon on TrueNAS Core and/or Scale. Because Nebula is a single binary, we can avoid installing any packages or making permanent modifications to the TrueNAS system, which should keep the "appliance" happy.
Because TrueNAS Core is based on FreeBSD and Scale is a Linux system, there will be some minor differences in how they're set up; these are outlined below.
When we're finished, you should have a directory containing the following:
.
└── /mnt/fast/persistent/nebula/
    ├── nebula
    ├── nebula-cert
    ├── config.yml
    ├── ca.crt
    ├── host.crt
    ├── host.key
    ├── nebula-setup.sh
    ├── nebula.rc (FreeBSD/Core)
    ├── nebula.service (Linux/Scale)
    └── nebula-start.sh (Linux/Scale)
Install Nebula
We're going to "install" Nebula in a persistent storage volume to avoid losing our configuration, etc. when TrueNAS is rebooted.
Create a storage volume
I chose to use my fast storage pool, an SSD mirror, to host the persistent dataset, and this is reflected in the commands below; you can change things as needed for your setup. Relative to the examples below, replace fast with the name of your storage pool.
First, create a new dataset via the TrueNAS web UI. I named mine persistent. As with the pool, if you choose a different name for your dataset, replace persistent in the examples below with the name you used. I enabled encryption on this dataset because it stores the keys that grant access to my Nebula network.
Once the dataset is created, we can jump to the shell (either within the web UI or via SSH) and make a directory for Nebula:
mkdir /mnt/fast/persistent/nebula
and tighten the permissions to limit who can access the files:
chmod -R 750 /mnt/fast/persistent/nebula
Note that your shell session may not be running as root by default, so you'll either need to use sudo with these commands or switch to the root user with sudo su -.
Download Nebula
With the storage set up, we can now get the Nebula binary. Find the latest release on the GitHub repo: https://github.com/slackhq/nebula/releases. As of this writing, the latest version is 1.9.6, and all my TrueNAS machines are x86-based, so the download links are:
- for Scale: https://github.com/slackhq/nebula/releases/download/v1.9.6/nebula-linux-amd64.tar.gz
- for Core: https://github.com/slackhq/nebula/releases/download/v1.9.6/nebula-freebsd-amd64.tar.gz
Download the binary and make it executable (the Core/FreeBSD tarball is shown below; on Scale, substitute the Linux link from above):
# Change to our persistent directory
cd /mnt/fast/persistent/nebula
# Download the latest version of nebula
wget https://github.com/slackhq/nebula/releases/download/v1.9.6/nebula-freebsd-amd64.tar.gz
# Extract nebula
tar -xvf nebula-freebsd-amd64.tar.gz
# Remove the tar file
rm nebula-freebsd-amd64.tar.gz
# Make nebula executable
chmod +x nebula
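It's worth a quick sanity check that the binary actually runs on your platform before going further:
# Confirm the binary runs and prints the expected version (1.9.6 here)
./nebula -version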
Configure Nebula
I won't go into detail on generating your host certificate and keys here, but suffice to say, you will need them. Specifically, you'll need ca.crt, host.crt, host.key, and config.yml to run Nebula. The quickstart in the Nebula docs goes over the process of generating the necessary certificates, and Ars Technica has a good, albeit old, step-by-step on setting up a Nebula network.
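For reference, a minimal sketch of that process, run on whichever machine holds your CA (the node name, IP, and groups here are just examples chosen to match the config below):
# One time: create the certificate authority (keep ca.key off the NAS)
./nebula-cert ca -name "My Nebula CA"
# Sign a cert for this TrueNAS box with its Nebula IP and firewall groups
./nebula-cert sign -name "truenas1" -ip "100.33.33.10/24" -groups "infra-storage,infra-backup"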
I've included an example config.yml, below. There are a couple of assumptions made:
- The subnet for the Nebula network is 100.33.33.0/24
- There is a lighthouse available at lighthouse1.nebula.my.tld
I've also included two groups that exist in my network: infra-storage and infra-backup; these groups indicate that a given node is a storage server and a backup server, respectively. A node must belong to both of these groups for the permissions to apply. You don't need to use these specific groups, but you'll likely want to limit access either through groups or to specific hosts.
Speaking of permissions, two inbound rules are necessary: allow access on ports 22 and 80. Port 22 is required for SSH, over which the snapshots are replicated. Port 80 is, I think, necessary when first creating the SSH credentials.
pki:
  ca: /mnt/fast/persistent/nebula/ca.crt
  cert: /mnt/fast/persistent/nebula/host.crt
  key: /mnt/fast/persistent/nebula/host.key

static_host_map:
  # List Lighthouse nodes here.
  # format "Nebula IP": ["public IP or hostname:port"]
  "100.33.33.1": ["lighthouse1.nebula.my.tld:4242"]

lighthouse:
  interval: 60
  am_lighthouse: false
  hosts:
    - "100.33.33.1"

listen:
  host: 0.0.0.0
  port: 4242

punchy: true
punch_back: true

# EXPERIMENTAL: relay support for networks that can't establish direct connections
relay:
  am_relay: false
  use_relays: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  mtu: 1300

# Nebula security group configuration
firewall:
  outbound_action: drop
  inbound_action: drop
  conntrack:
    tcp_timeout: 60m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 10000

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow ssh on 22 from storage backup endpoints (e.g. for zfs replication)
    - port: 22
      proto: tcp
      groups:
        - infra-storage
        - infra-backup

    # Allow access to port 80 from other tnas boxes (used to setup credentials(?))
    - port: 80
      proto: tcp
      groups:
        - infra-storage
        - infra-backup
/mnt/fast/persistent/nebula/config.yml
Once you have your configuration set, you can verify that Nebula will run with:
cd /mnt/fast/persistent/nebula
./nebula -config /mnt/fast/persistent/nebula/config.yml
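Nebula can also validate the configuration without bringing up the interface, which is handy when editing config.yml later:
# Parse and test the config; a non-zero exit indicates a faulty config
./nebula -test -config /mnt/fast/persistent/nebula/config.yml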
Run Nebula as a service
This is where Core and Scale diverge, simply because they're different operating systems. Each manages services differently, so we have different scripts to copy the necessary files.
When I first set this up, I copied all the files from the persistent/nebula directory into the OS directory structure. While this worked, my scripts are simple and don't check for changed files, so an update to any of the configuration files or to the binary would not consistently be propagated. I could (and probably still should) update the scripts to check modification dates and copy changed files, but that's not the current state of the project.
My current approach is to simply run nebula from within the persistent/nebula directory and reference all the configuration files there, too. If any of the configuration files or the nebula binary are updated, those changes will be reflected on the next restart of the service.
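In practice, that means picking up a new binary or config is just a restart of the service defined below:
# TrueNAS Core (FreeBSD)
service nebula restart
# TrueNAS Scale (Linux)
systemctl restart nebula.service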
I've separated the configuration files into FreeBSD/TrueNAS Core and Linux/TrueNAS Scale, below; use the one appropriate for your situation.
FreeBSD/Core service
This was my first time creating a FreeBSD service file; they're not that complicated, but they are different if you're used to Linux/systemd. I've linked to a few references at the end of this post, if you want more info.
Create /mnt/fast/persistent/nebula/nebula.rc and add the following content:
#!/bin/sh

# PROVIDE: nebula
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name=nebula
rcvar=nebula_enable

load_rc_config $name

# Defaults; override via /etc/rc.conf.d/nebula
: ${nebula_enable:="NO"}
: ${nebula_dir:="/mnt/fast/persistent/nebula"}
: ${nebula_config:="/mnt/fast/persistent/nebula/config.yml"}

pidfile="/var/run/${name}.pid"
procname="${nebula_dir}/${name}"

# Run nebula via daemon(8), which detaches it and records a pidfile
command=/usr/sbin/daemon
command_args="-f -p ${pidfile} ${procname} -config ${nebula_config}"

run_rc_command "$1"
/mnt/fast/persistent/nebula/nebula.rc
The service file needs to be copied to /usr/local/etc/rc.d/ and made executable. You can do this manually:
cp /mnt/fast/persistent/nebula/nebula.rc /usr/local/etc/rc.d/nebula
chmod +x /usr/local/etc/rc.d/nebula
# Start the service (onestart runs it even before nebula_enable is set)
service nebula onestart
Or, preferably, we automate this with a script.
Create /mnt/fast/persistent/nebula/nebula-setup.sh and add the following content:
#!/bin/sh
# Resolve the directory this script lives in (the persistent nebula directory)
SCRIPT_DIR=$( cd -- "$( dirname -- "$0" )" > /dev/null 2>&1 && pwd )

# Copy the nebula service file into place if it isn't there already
if [ ! -f /usr/local/etc/rc.d/nebula ]
then
    cp ${SCRIPT_DIR}/nebula.rc /usr/local/etc/rc.d/nebula
    chmod +x /usr/local/etc/rc.d/nebula
fi

# Enable the nebula service
if [ ! -f /etc/rc.conf.d/nebula ]
then
    touch /etc/rc.conf.d/nebula
    sysrc -f /etc/rc.conf.d/nebula nebula_enable=YES
fi

# (Re)start nebula with the current binary and config
service nebula restart
/mnt/fast/persistent/nebula/nebula-setup.sh
This script copies the service file we created above into /usr/local/etc/rc.d/ and creates an entry for the service in /etc/rc.conf.d/.
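After running it once, you can confirm everything landed where it should:
# The rc script should exist, the service should be enabled, and nebula running
ls -l /usr/local/etc/rc.d/nebula
cat /etc/rc.conf.d/nebula
service nebula status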
Linux/Scale service
As with the FreeBSD version, we need a service file (nebula.service) and a script to set up the service (nebula-setup.sh). I've found that I also need a third file, nebula-start.sh, to ensure that nebula.service is started at boot.
Create each of the files and populate them with the content below.
# Within /mnt/fast/persistent/nebula
touch nebula.service
touch nebula-setup.sh
touch nebula-start.sh
The systemd unit file (nebula.service):
[Unit]
Description=Nebula overlay networking tool
Wants=basic.target network-online.target nss-lookup.target time-sync.target
After=basic.target network.target network-online.target
Before=sshd.service

[Service]
SyslogIdentifier=nebula
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/mnt/fast/persistent/nebula/nebula -config /mnt/fast/persistent/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
/mnt/fast/persistent/nebula/nebula.service
The script to set up the service (nebula-setup.sh):
#!/bin/bash
# Resolve the directory this script lives in (the persistent nebula directory)
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

# Install, enable, and start the unit on first run
if [ ! -f /etc/systemd/system/nebula.service ]
then
    cp ${SCRIPT_DIR}/nebula.service /etc/systemd/system/nebula.service
    systemctl daemon-reload
    systemctl enable nebula.service
    systemctl restart nebula.service
    systemctl restart sshd.service
fi
/mnt/fast/persistent/nebula/nebula-setup.sh
And the script to start nebula.service (nebula-start.sh):
#!/bin/bash
systemctl restart nebula.service
systemctl restart sshd.service
/mnt/fast/persistent/nebula/nebula-start.sh
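Once the service is up, the usual systemd tools will confirm it's healthy and show its logs:
# Check the service state and recent log output
systemctl status nebula.service
journalctl -u nebula.service -n 50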
Running Nebula
For both versions of TrueNAS, the nebula-setup.sh script needs to be run at startup to ensure the nebula service is available and started. It needs to run as a pre-init script on both systems; these are set up within the web UI.
For Core, on the navigation menu on the left, choose Tasks > Init/Shutdown Scripts.
For Scale, on the left navigation menu choose System Settings > Advanced then Init/Shutdown Scripts in the main window.
Click Add to create a new entry, then:
- Enter your preferred Description
- Select Script as the Type
- Enter /mnt/fast/persistent/nebula/nebula-setup.sh as the Command
- Choose Pre Init for When
- Make sure it is Enabled
- Leave the Timeout at the default of 10 (I haven't had any issues with the default)
With Scale you need to add the second script (nebula-start.sh) by clicking Add again and following the steps above, but this time:
- Enter /mnt/fast/persistent/nebula/nebula-start.sh as the Command
- Choose Post Init for When
You should now be able to reboot your server and have the Nebula network connect automatically, allowing you to manage the server remotely or replicate snapshots to an offsite backup.
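A quick way to confirm the mesh is up after a reboot (100.33.33.1 is the lighthouse from the example config; the peer address and user below are placeholders for one of your own hosts):
# The lighthouse should answer over the nebula1 interface
ping -c 3 100.33.33.1
# SSH over the mesh is what ZFS replication rides on; try a peer, e.g.:
ssh root@100.33.33.20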
References
FreeBSD service files
- https://jacquesheunis.com/post/freebsd-service-config
- https://joekuan.wordpress.com/2010/05/09/quick-tutorial-on-how-to-create-a-freebsd-system-startup-script/
- https://docs.freebsd.org/en/articles/rc-scripting/
- https://github.com/freebsd/freebsd-ports/blob/main/security/tinc/files/tincd.in
Nebula mesh network
- https://github.com/slackhq/nebula
- https://arstechnica.com/gadgets/2019/12/how-to-set-up-your-own-nebula-mesh-vpn-step-by-step/