Hi all, sorry if this has been asked/discussed before (I couldn’t find any directly overlapping posts):

I have been running the Nextcloud snap for quite some time now, and although things have run quite smoothly, I have never managed to set up proper backups.

I make weekly backups of the database, config and data, but it’s very hard and time-consuming to glue these elements back together. And as they say: if you can’t check whether a backup works, it’s not really a backup.

I have been experimenting with KVM/QEMU lately and it looks very promising. The idea of simply backing up the entire OS that runs Nextcloud (a backup I could easily deploy and run somewhere else to test that it works) sounds very attractive.

Reading around, however, I see that some of you recommend running Nextcloud in Docker instead of in a VM.

My questions:

  1. What would be the advantage of running Nextcloud in a Docker container instead of in a VM?
  2. What would be a sensible way to make incremental/differential backups of the VM or container? (A rough sketch of the direction I have in mind follows this list.)
  3. The storage usage of my Nextcloud instance exceeds 1 TB. If I run it within a VM, I would have to attach a 2 TB SSD to it. Does it make sense to add that external storage to the VM, and how does that affect how easy it is to back up the full VM? Or (as I have read here and there) should I simply put the entire VM on the external SSD?
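
To make question 2 concrete, the direction I have in mind is something like the rough, untested sketch below: snapshot whatever a full export produces (for the snap that would be the output of `nextcloud.export`; all paths are placeholders) with `rsync --link-dest`, so files that haven’t changed are hard-linked against the previous snapshot and only new data takes up space.

```python
# Rough sketch only: snapshot-style incremental backups with rsync --link-dest.
# EXPORT_DIR is assumed to already contain a consistent dump (database, config,
# data), e.g. from the snap's `nextcloud.export`; both paths are placeholders.
import subprocess
from datetime import datetime
from pathlib import Path

EXPORT_DIR = Path("/var/snap/nextcloud/common/backups")  # assumption
BACKUP_ROOT = Path("/mnt/backup/nextcloud")              # placeholder target


def snapshot() -> Path:
    """Copy EXPORT_DIR into a dated snapshot, hard-linking files that are
    unchanged since the previous snapshot so only new data uses space."""
    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    previous = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
    dest = BACKUP_ROOT / datetime.now().strftime("%Y-%m-%d_%H%M%S")

    cmd = ["rsync", "-a", "--delete"]
    if previous:  # reuse unchanged files from the most recent snapshot
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [f"{EXPORT_DIR}/", str(dest)]

    subprocess.run(cmd, check=True)
    return dest


if __name__ == "__main__":
    print(f"Snapshot written to {snapshot()}")
```

Restore-testing would then just mean pointing `nextcloud.import` (or a scratch VM/container) at one of the dated directories, but I have no idea whether this is the sensible way to do it, hence the question.
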
  • PriorProject@lemmy.world · 1 year ago

    Docker is a powerful tool to increase confidence in your backups.

    • In a VM, the way you figure out which files to back up is to read the docs. If they’re wrong or you misread them, the only way you’ll find out is by doing a full restore test… which is often painful and complex in home setups.
    • In Docker, any filesystem state outside your declared volumes is thrown away every time the container is recreated. If your volume setup is insufficient, you’ll repeatedly lose state during your initial installation process as containers get recreated, and you’ll keep testing that state management throughout the lifetime of the service with every recreation. This leaves a much smaller window for backup mistakes (see the sketch just below this list).
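
    To make the volume point concrete, here is a minimal sketch using the Docker SDK for Python (most people would use the official `nextcloud` image with compose instead; the image tag, host paths and port below are illustrative assumptions). The point is only that everything worth keeping is declared as an explicit mount, and anything else is disposable:

```python
# Minimal sketch (Docker SDK for Python). Image tag, paths and port are placeholders.
import docker

client = docker.from_env()

# Anything written outside these mounts lands in the container's writable
# layer and disappears the next time the container is removed and recreated.
client.containers.run(
    "nextcloud:stable",                # assumption: official image
    name="nextcloud",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    volumes={
        # user files on the big external SSD
        "/mnt/ssd/nextcloud-data": {"bind": "/var/www/html/data", "mode": "rw"},
        # config.php, kept on the host so the backup job can grab it
        "/srv/nextcloud/config": {"bind": "/var/www/html/config", "mode": "rw"},
        # named volume for the rest of the app directory
        "nextcloud_html": {"bind": "/var/www/html", "mode": "rw"},
    },
    ports={"80/tcp": 8080},            # reachable on the host at :8080
)
```

    Your backup job then only needs to cover the mounted host paths/volumes plus a database dump; nothing that lives only inside the container matters.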

    The tradeoff with Docker is that the networking is complex (well, everything is complex… but the networking is where it often hurts). But if you’re able to deal with that one-time pain, it’s superior almost all the time for home setups. I think the only things I run outside Docker are SSH and Netdata: SSH because it’s stateless and works perfectly out of the box, and Netdata because it wants permissions to everything… and it’s functionally stateless for me because I don’t care if I drop my observability data.
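
    As a sketch of what that one-time networking pain looks like (same Python Docker SDK; names, tags and credentials are placeholders, and pairing Nextcloud with MariaDB is just one common choice): put the app and its database on a user-defined network so they can reach each other by container name, and only publish the ports you actually want exposed on the host.

```python
# Minimal networking sketch (Docker SDK for Python). Names and credentials are
# placeholders; pairing Nextcloud with MariaDB is just one common choice.
import docker

client = docker.from_env()

# On a user-defined bridge network, containers resolve each other by name,
# so the Nextcloud container can simply use "db" as its database host.
client.networks.create("nextcloud_net", driver="bridge")

client.containers.run(
    "mariadb:10.11",
    name="db",
    detach=True,
    network="nextcloud_net",
    environment={
        "MARIADB_DATABASE": "nextcloud",
        "MARIADB_USER": "nextcloud",
        "MARIADB_PASSWORD": "change-me",           # placeholder
        "MARIADB_ROOT_PASSWORD": "change-me-too",  # placeholder
    },
    volumes={"nextcloud_db": {"bind": "/var/lib/mysql", "mode": "rw"}},
)

# The Nextcloud container from the previous sketch joins the same network
# with network="nextcloud_net" and points at MYSQL_HOST="db"; the database
# is not reachable from outside the host unless you publish a port for it.
```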