# Accessing internal services over HTTPS via SOCKS5 Proxy

A few months ago, if you had asked me about my homelab setup, I would have told you all of the awesome things about Tailscale and why everyone should use it to connect hosts across different wifi networks. However, when I bought `williamsfam.us.com`, I realized that I could now self-host Tailscale using Headscale. While I believe to this day that Headscale is a great product, it was just too "magic" for my taste. I wanted to be closer to the protocols. So I ditched Headscale in favor of raw WireGuard.

I was able to successfully set up a point-to-point network with WireGuard, but it felt very contrived: I didn't like having to generate new WireGuard keys for every machine, especially since every machine already had SSH keys. _Why can't I just use SSH to authenticate WireGuard?_

So I went for a while without any inter-host networking. I configured SSH to connect safely over the open internet, and, while reading the OpenSSH docs, discovered `LocalForward` and `DynamicForward`. The more interesting of the two, `DynamicForward`, starts a SOCKS5 proxy on the client that exposes all ports running on the server machine. I realized that this could fill my WireGuard-shaped hole.

As a test, I set up a simple fileserver running on `0.0.0.0:8080`. By running ssh with the flag `-D 9090` from a different computer and connecting to the proxy using FoxyProxy, I could successfully access the fileserver. With this simple test accomplished, the world was mine to command!

The next step was to get pretty URLs for the services. `http://ganymede:8080` is a pretty ugly URL, and I also wanted my connections to the services to be TLS encrypted. First, I learned that you can create custom URLs that will only resolve on your machine using a line in `/etc/hosts` that looks like `127.0.0.1 files.ganymede`. Next, I configured my Caddy server to serve on these URLs.
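Put together, the client-side proxy and the server-side hostname resolution can be sketched like this. The `Host` alias, user, and port are taken from the `ssh -D 9090 collin@ganymede` command used later in this post; treat the rest as a reasonable guess, not my exact config:

```
# ~/.ssh/config on the client — equivalent to running `ssh -D 9090 collin@ganymede`
Host ganymede
    User collin
    DynamicForward 9090

# /etc/hosts on the server — makes the custom name resolve locally
127.0.0.1 files.ganymede
```

With the `DynamicForward` line in place, a plain `ssh ganymede` brings up the SOCKS5 proxy as a side effect of the connection.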
So, for the file server, I was able to make it accessible on the server at `https://files.ganymede` using this Caddy configuration:

```
files.ganymede {
    tls internal
    reverse_proxy 127.0.0.1:8080
}
```

The `tls internal` directive tells Caddy not to attempt to fetch public TLS certificates from Let's Encrypt. This is important because `files.ganymede` is not a valid public domain name, and Let's Encrypt would fail to issue the certificate. As a side effect, using internal TLS means that all client machines need to trust Caddy's root CA certificate, but this was easy enough to set up using `security.pki.certificates` on NixOS.

The only step left was to tell the proxy to resolve DNS on the server instead of on the client. Luckily, FoxyProxy has an option called "Proxy DNS" that does exactly this.

At this point, I could access my file server at `https://files.ganymede` from my client machine, as long as I was connected through FoxyProxy. Yay!

After this, it was just a matter of setting up lots of internal services! As of now, I have the following services configured:

- `stats.ganymede`: a GoAccess server for viewing statistics about my Caddy logs
- `music.ganymede`: a Polaris music server for listening to music
- `btop.ganymede`: an instance of ttyd that hosts btop over the internet (if anyone can find me a system monitoring tool that's as good as btop and natively runs on the web, please let me know)
- `git.ganymede`: a cgit server for my personal programming projects
- `files.ganymede`: a Copyparty fileserver
- `bittorrent.ganymede`: the qBittorrent WebUI

The last piece of friction was that I had to type `ssh -D 9090 collin@ganymede` in the terminal every time I wanted to access my services! I fixed this with AutoSSH, which runs as a systemd service and automatically maintains the proxy whenever the connection can be made.

## Limitations

This setup is clearly not as featured as a full VPN like WireGuard, and definitely not as featured as Tailscale or Headscale.
However, it provides the features that I need, and that's good enough for me.

One limitation that might bother people is that this setup can only expose services from a single device. Hosting services on a second device would require double-proxying: from that device to my server, then from the server to my end device. This would be slow and inefficient.
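For what it's worth, that double-proxy chain maps onto OpenSSH's `ProxyJump` feature. A sketch, assuming a hypothetical second device named `callisto` that is reachable from `ganymede` (neither the name nor the device is part of my actual setup):

```
# Hop through ganymede, then open the SOCKS5 proxy against callisto,
# exposing callisto's local ports to the client.
ssh -J collin@ganymede -D 9090 collin@callisto
```

Every connection would still traverse `ganymede` on its way to the second device, which is exactly the inefficiency described above.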