TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course running off HDDs is slower than running off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does, too?

  • PlutoniumAcid@lemmy.world (OP) · 1 year ago
    I think we are saying the same thing, but my post was not clear on this:

    The container files live on the server, and I use the volumes section in my docker-compose.yml files to map data to the NFS share:

            volumes:
                - '/mnt/nasvolume/docker/picoshare/data:/data'
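
    For context: that bind path only works because the host already mounts the NAS export at /mnt/nasvolume. It looks roughly like this in /etc/fstab (the NAS address and export path below are placeholders, not my real ones):

        # /etc/fstab on the Docker host (placeholder NAS IP and export path)
        192.168.1.50:/volume1/docker  /mnt/nasvolume  nfs  rw,hard,nfsvers=4.1  0  0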
    

    Would you say this is an okay approach?
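
    For comparison, a variant I have seen but am not using: skip the host-side mount and let Docker’s local volume driver do the NFS mount itself, declared as a named volume in docker-compose.yml. A minimal sketch, with a placeholder NAS address and export path (and an assumed image name):

        services:
          picoshare:
            image: mtlynch/picoshare        # assumed image name, for illustration
            volumes:
              - picoshare_data:/data

        volumes:
          picoshare_data:
            driver: local
            driver_opts:
              type: nfs
              o: "addr=192.168.1.50,rw,nfsvers=4.1"       # placeholder NAS IP
              device: ":/volume1/docker/picoshare/data"   # placeholder export path

    With this, Docker performs the NFS mount when the container starts, so the share does not have to be mounted under /mnt on the host first.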