A simpler way to self-host on Linux, by running user services in tmux windows, secured by bubblewrap sandboxes, and started up on boot by cron. For the motivation and rationale behind this repo, please see the blog post Simpler self-hosting with tmux and bubblewrap.
Installation
- Install a fresh Linux OS, e.g. Ubuntu Server, or Raspberry Pi OS (on a Raspberry Pi)
- Install `tmux` and `bubblewrap` if they aren't already installed.
- Log in as the user created during install.
- Download this repo to the user's home directory: download zip. You can alternatively use `fossil clone https://cloud.tobykurien.com/cgi-bin/repo/bws` to clone this repo, and keep it up to date via `fossil update`. See below for notes on Fossil.
- Add the line from the crontab file to the user's crontab: `crontab ./crontab`
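
The crontab entry is what brings your services up at boot, presumably by running `~/services/start` (the file the Usage section below adds services to). As a rough, assumed illustration (the real line ships in the repo's crontab file and may differ), it could look something like:

    # assumed example; the actual entry is in the repo's crontab file
    @reboot $HOME/services/start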
Usage
- To add a service, download the binaries into a subfolder of `~/services`, e.g. `~/services/caddy/`
- Make a script file called `start.sh` to start it, e.g. `~/services/caddy/start.sh` (see the example sketch after this list). Don't forget to `chmod +x start.sh` to make it executable.
- Start the service: `~/services/bws start caddy`
- The logs are written to `/tmp/log/caddy.log`, so you can `tail -f /tmp/log/caddy.log`, or run `~/services/bws attach caddy` to attach to the tmux window. Use "Ctrl+b" followed by "d" to detach from the window and go back to the command line.
- To auto-start the service on boot, add a line to `~/services/start`, e.g. `echo "~/services/bws start caddy" >> ~/services/start`
- A good way to explore the sandbox of a running service is to try running the ttyd service and poke around. You can also run `~/services/bws shell caddy` to open a shell into the sandbox. This mode makes the service folder writable, which is useful if you need to install or configure the service.
- If you need to install some dependencies, e.g. Python modules, you can open a service sandbox shell and install them inside the sandbox, e.g.:

      $ ~/services/bws shell myservice
      Starting a shell inside sandbox of service myservice
      Enabling write on app directory.
      $ pip3 install --user -r requirements.txt
- If you run your service on multiple platforms, you can manage the architecture-specific binaries by putting them into `bin/linux-{platform}/` (e.g. `~/services/gemini/bin/linux-x86_64/molly-brown`), and the `start.sh` script can then execute the correct binary for the platform using `uname`, like so:

      BIN="linux-`uname -m`"
      bin/$BIN/molly-brown -c ./molly.conf
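
To tie the Usage steps together, here is a minimal sketch of what the caddy example's `start.sh` might look like. It assumes the caddy binary and a `Caddyfile` were downloaded into `~/services/caddy/` next to the script, and that the script runs with the service folder as its working directory (as the relative paths in the molly-brown example suggest); adapt it to your service's actual layout.

    #!/bin/sh
    # Hypothetical start.sh for ~/services/caddy/: assumes the caddy binary
    # and a Caddyfile sit in this folder, and that the script is executed
    # with the service folder as the working directory.
    exec ./caddy run --config ./Caddyfile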
Configuring the sandbox: sandbox.args
You can configure the sandbox for each service by adding a text file called `sandbox.args` containing additional parameters to pass to the bubblewrap sandbox, e.g.

    --unshare-net
    --bind-try /media/usb $HOME/usb
    --ro-bind /etc/pki /etc/pki

The above will remove network access from the sandbox for the service, try to mount the `/media/usb` folder into the sandbox at `~/usb` (writable), and read-only mount `/etc/pki` into the sandbox so that certain applications can verify SSL certificates. See the bubblewrap man page for more details.
Using Nix package manager: packages.nix
Once you have installed the Nix package manager, you can configure the packages to be installed into the sandbox using a `packages.nix` file, which should contain the names of packages available in the nixpkgs channel. For example, to install nixpkgs.haproxy and nixpkgs.thttpd, the file would contain:

    haproxy
    thttpd

You can find the package names by searching on https://search.nixos.org/packages.
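
To sketch how this fits with `start.sh` (assuming the nix-installed binaries end up on the sandbox's PATH, and using a hypothetical `public/` folder as the web root), a service whose `packages.nix` lists `thttpd` could then start it with something as simple as:

    #!/bin/sh
    # Hypothetical start.sh: assumes packages.nix lists thttpd and that the
    # nix-installed binaries are available on PATH inside the sandbox.
    # -D keeps thttpd in the foreground so tmux/bws can manage it.
    exec thttpd -D -p 8080 -d ./public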
Notes about Fossil
Fossil is a lot like git, so it's familiar to use if you know git - see docs here. Why not just use git? I switched from using GitHub for my projects to self-hosting my repos after GitHub was bought by Microsoft. Fossil is much easier to self-host: it's a single binary file, it stores its repository in a single sqlite database, and it provides similar functionality to the GitHub service (issue tracking, a wiki, and additionally a forum, etc.). All the data (including issues, etc.) is stored within the repo's sqlite database file, making backups really simple. Type `fossil ui` in the checked-out repo and you will get the full web interface, all served from one database file by a single binary executable! See Fossil vs Git for more.
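
If you haven't used Fossil before, a typical workflow using the standard Fossil commands (nothing specific to this repo) looks roughly like this:

    # clone the repository into a single database file
    fossil clone https://cloud.tobykurien.com/cgi-bin/repo/bws bws.fossil
    # check out a working copy in a new directory
    mkdir bws && cd bws && fossil open ../bws.fossil
    # later, pull and apply upstream changes
    fossil update
    # browse the full web interface locally
    fossil ui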