TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad practice?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but has only a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course running off HDDs is slower than off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does, too?

  • Molecular0079@lemmy.world · 1 year ago

    I wouldn’t recommend running container volumes over network shares, mainly because network instability between the NAS and the server can cause some really weird issues. Imagine an application having its files ripped out from under it while it’s running.

    I would suggest keeping containers + volumes together on the server, and putting stuff that’s just pure data on the NAS. For example, if you were to run a Jellyfin media server, the Docker container and its volumes would live on the server, but the video and audio files would be stored on the NAS and accessed via a network share mount.
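
    To make the split concrete, a minimal docker-compose sketch of that layout might look like the following. The paths (`/opt/jellyfin/...` on the server’s SSD, `/mnt/nas/media` as the host’s NFS mount point) are placeholders I’m assuming, not anything from your setup:

            services:
                jellyfin:
                    image: jellyfin/jellyfin
                    volumes:
                        # config/cache stay on the server's local SSD
                        - /opt/jellyfin/config:/config
                        - /opt/jellyfin/cache:/cache
                        # media comes from an NFS share mounted on the host
                        # (e.g. via /etc/fstab), read-only inside the container
                        - /mnt/nas/media:/media:ro

    That way a network blip only makes media briefly unavailable; the application’s own database and state never sit on the share.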

    • PlutoniumAcid@lemmy.world (OP) · 1 year ago

      I think you are saying the same thing I meant, but my post was not clear on this:

      The container files live on the server, and I use the volumes section in my docker-compose.yml files to map the data to the NFS share:

              volumes:
                  - '/mnt/nasvolume/docker/picoshare/data:/data'
      
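      An alternative I have seen (just a sketch, not my actual setup — the NAS address 192.168.1.10 and the export path /volume1/docker/picoshare/data are placeholders) is to declare the NFS export as a named volume in docker-compose.yml, so Docker performs the mount itself instead of relying on a host-level mount under /mnt:

              services:
                  picoshare:
                      # ...
                      volumes:
                          - picoshare_data:/data

              volumes:
                  picoshare_data:
                      driver: local
                      driver_opts:
                          # NFS options passed to the local volume driver;
                          # addr and the export path are assumed placeholders
                          type: nfs
                          o: addr=192.168.1.10,rw,nfsvers=4
                          device: ":/volume1/docker/picoshare/data"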

      Would you say this is an okay approach?