tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet, but I’m not sure what the best/safest way to do it is. Asking my partner to use Tailscale or WireGuard is asking too much, unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via Tailscale. Some of these services would benefit from being accessible on the internet (e.g. Immich sharing via a link, switching from Plex to Jellyfin without requiring my family to learn how to use a VPN, Home Assistant voice stuff, etc.), but I’m not sure what the best approach is. Hosting services on the internet carries risk, and I’d like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains, but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share CPU resources with other users and get a dedicated box instead?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

  • Fedegenerate@lemmynsfw.com · 11 hours ago (edited)

On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz that’s cheap.
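    (For reference, a minimal way to stand Nginx Proxy Manager up in Docker; jc21/nginx-proxy-manager is the usual image, and the volume paths here are just placeholders:)

    # 80/443 carry the proxied traffic, 81 is the admin UI
    docker run -d \
      --name nginx-proxy-manager \
      -p 80:80 -p 443:443 -p 81:81 \
      -v ./data:/data \
      -v ./letsencrypt:/etc/letsencrypt \
      --restart unless-stopped \
      jc21/nginx-proxy-manager:latest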

For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx that proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.

For bonus points I run Pi-hole to pretty up the domain names to service.swirl, and I run a Homarr instance so no one needs to remember anything except home.swirl, though if they do remember immich.swirl that works too.
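    (The pretty names are just local DNS records in Pi-hole; a sketch of adding them from the shell, assuming Pi-hole v5’s custom.list and a made-up LAN IP:)

    # point the friendly names at the box running the proxy (IP is an example)
    echo "192.168.1.10 home.swirl"   | sudo tee -a /etc/pihole/custom.list
    echo "192.168.1.10 immich.swirl" | sudo tee -a /etc/pihole/custom.list
    # reload Pi-hole's DNS so the new names resolve
    pihole restartdns reload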

If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But in aggregate… I have checklists. One day I’ll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
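    (For what it’s worth, the script being described is roughly this; the hostnames and compose path are made up, and it assumes key-based SSH plus passwordless sudo on each relay:)

    #!/bin/bash
    # for each relay: update the OS, refresh the containers, clean up
    for host in relay1 relay2 relay3; do   # hypothetical hostnames
        ssh "$host" '
            sudo apt update && sudo apt upgrade -y
            cd ~/stacks || exit 1            # wherever docker-compose.yml lives
            docker compose pull && docker compose up -d
            docker image prune -f
        '
    done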

    • a_fancy_kiwi@lemmy.world (OP) · 3 hours ago (edited)

      > That’ll be my impetus to learn how to write a script.

      This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you’re running everything on a Linux server, I feel like a bash script run via a cronjob would be your best bet: no need to SSH into the server, just let it do it on its own. I haven’t tested any of this, but I do have scripts I wrote that do automatic ZFS backups and scrubs. The order should go something like:

      open the terminal on the server and type

      mkdir scripts

      cd scripts

      nano docker-updates.sh

      type something along the lines of this (I’m still learning Docker, so adjust the commands to your needs)

      #!/bin/bash

      # change into the directory that holds your docker-compose.yml
      cd /path/to/docker-compose-directory || exit 1
      docker compose pull && docker compose up -d
      docker image prune -f
      

      save the file and then type sudo chmod +x ./docker-updates.sh to make it executable

      and finally set up a cronjob to run the script at specific intervals. type

      crontab -e

      or

      sudo crontab -e (this is if you want to run the script as root, but ideally you just add your user to the docker group with sudo usermod -aG docker $USER and log back in, so this shouldn’t be needed)

      and at the bottom of the file type this and save, that’s it:

      # runs script at 1am on the first of every month
      0 1 1 * * /path/to/scripts/docker-updates.sh
      

      an online crontab calculator will help you choose a different interval
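      A few other schedules, just to show the field order (minute, hour, day-of-month, month, day-of-week):

      30 2 * * *   /path/to/scripts/docker-updates.sh   # 2:30am every day
      0 3 * * 0    /path/to/scripts/docker-updates.sh   # 3am every Sunday
      0 4 1 */3 *  /path/to/scripts/docker-updates.sh   # 4am on the 1st of every third month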

      For OS updates you basically do the same thing, except the script would look something like this. (It runs from root’s crontab, so you shouldn’t need “sudo” in front of the two “apt”s; if it’s not working, try adding it. Also, use whatever package manager you have if you aren’t using apt.)

      while in the scripts folder you created earlier

      nano os-updates.sh

      #!/bin/bash

      # refresh package lists, then install upgrades non-interactively
      apt update && apt upgrade -y
      # reboot so kernel and library updates take effect
      reboot
      

      save and don’t forget to make it executable

      then use

      sudo crontab -e (because you’ll need root privileges to update; this will run the script as root without requiring you to input your password)

      # runs script at 12am on the first of every month
      0 0 1 * * /path/to/scripts/os-updates.sh
      
      • Fedegenerate@lemmynsfw.com · 39 minutes ago (edited)

        I did think about cron but, long ago, I heard it wasn’t best practice to update through cron: without logs, it’s difficult to see where things went wrong when they do.
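        (If cron does end up being the route, that logging gap is easy to close by making the script log itself; a minimal sketch of the update script from above, assuming it runs from root’s crontab so it can write to /var/log:)

        #!/bin/bash
        # append everything the script prints to a log so failed runs can be inspected
        exec >> /var/log/docker-updates.log 2>&1
        echo "=== run started $(date) ==="

        cd /path/to/docker-compose-directory || exit 1
        docker compose pull && docker compose up -d
        docker image prune -f

        echo "=== run finished $(date) ==="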

        I’ve got automatic-upgrades running on stuff, so it’s mostly fine. Dockge is running purely to give me a way to upgrade Docker images without having to SSH. It’s just the monthly routine of “apt update && apt upgrade -y” ×5 that sucks.

        Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” Homarr page as a budget Uptime Kuma, so I can at least quickly look there to make sure everything is pinging. I made the page so I can quickly get to everyone’s Dockge, Pi-hole and nginx, but the pings were a happy accident.