TLDR: I am running some Docker containers on a homelab server, and the containers’ volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but only has a 100 GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it’s slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be “bad practice” to separate compute and storage this way? Isn’t that pretty much what a data center does too?
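
For context, here is a minimal sketch of the kind of compose file I mean, with a named volume backed by an NFS export; the service, NAS hostname, and export path are just placeholders, not my actual setup:

```yaml
# docker-compose.yml -- sketch of a container whose data volume lives on
# an NFS share exported by the NAS. Hostname and paths are placeholders.
services:
  app:
    image: nextcloud:latest          # any container that writes to a volume
    volumes:
      - appdata:/var/www/html        # mounts the NFS-backed volume into the container

volumes:
  appdata:
    driver: local                    # the built-in local driver can mount NFS
    driver_opts:
      type: nfs
      o: "addr=nas.local,nfsvers=4,rw"        # NAS address and mount options (example)
      device: ":/volume1/docker/appdata"      # export path on the Synology (example)
```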

  • billwashere@lemmy.world · 1 year ago
    Well, it’s not “bad practice” per se, but it ultimately depends on what you are trying to accomplish and what the underlying architecture is capable of supporting.

    Not Docker-specific, but at work we run enterprise-level applications on VMware ESXi hosts whose VMs live on iSCSI shared storage, and we plan on trying NFS datastores over 100GbE with storage-class memory and E1.L SSDs. So the general approach works fine, but that is super fast, super expensive hardware and a long way from homelab-type architecture. I have done similar things at home with two ESXi hosts using a Drobo for NFS datastores so I could vMotion. Was it high performance? No. Did it work? Yes.

    For a small container-based environment I’d keep the containers and their storage local, ideally on a fast SSD or NVMe drive; container state is usually small. Any large media I’d keep on the NAS and NFS-mount into the containers (see the sketch below). You can get a 1 TB NVMe drive and a PCIe adapter for less than $100 on Amazon, which would be more than fast enough for anything in a home lab.
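
    Something like this, just to make the split concrete; the image, local path, and NAS address are only examples:

    ```yaml
    # Sketch of the split described above: container state stays on the local
    # SSD/NVMe, only the bulky media comes from the NAS over NFS.
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        volumes:
          - /srv/docker/jellyfin/config:/config   # small, latency-sensitive: local disk
          - media:/media:ro                        # large, mostly sequential reads: fine over NFS

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=nas.local,nfsvers=4,ro"   # NAS address and mount options (example)
          device: ":/volume1/media"          # Synology export path (example)
    ```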

    My 2¢… 😀