NFS at 40 – Remembering the Sun Microsystems Network File System

Continued Use and Strengths of NFS

  • Still widely used in production (datacenters, hedge funds, large media origins, HPC clusters) and at home (NAS, backups, media, dev directories, even emulator save-games).
  • Praised for simplicity, performance on fast LANs, POSIX semantics, and easy client support on Unix-like systems.
  • Common patterns: NFS-root diskless workstations, a centralized /usr/local, large shared datasets, Kubernetes persistent volumes, and AWS EFS.
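
As a concrete illustration of the Kubernetes pattern, the sketch below registers an NFS-backed PersistentVolume with the kubernetes Python client. The server address, export path, volume name, and size are placeholders, not values from the discussion.

    from kubernetes import client, config

    def create_nfs_pv():
        """Register an NFS-backed PersistentVolume (illustrative values only)."""
        config.load_kube_config()  # use config.load_incluster_config() when running in a pod

        pv = client.V1PersistentVolume(
            api_version="v1",
            kind="PersistentVolume",
            metadata=client.V1ObjectMeta(name="shared-datasets"),  # hypothetical name
            spec=client.V1PersistentVolumeSpec(
                capacity={"storage": "500Gi"},                     # placeholder size
                access_modes=["ReadWriteMany"],                    # NFS allows many concurrent writers
                persistent_volume_reclaim_policy="Retain",
                nfs=client.V1NFSVolumeSource(
                    server="nfs.example.internal",                 # hypothetical NFS server
                    path="/export/datasets",                       # hypothetical export path
                    read_only=False,
                ),
            ),
        )
        client.CoreV1Api().create_persistent_volume(body=pv)

    if __name__ == "__main__":
        create_nfs_pv()

A PersistentVolumeClaim requesting ReadWriteMany access can then bind to this volume and be mounted by many pods at once, which is the property NFS provides that most block-storage drivers do not.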

Alternatives and Comparisons

  • SMB/Samba:
    • Works well for many, especially with Windows clients and large shared volumes.
    • Others find Samba configuration painful and fragile compared to NFS, especially when Active Directory (AD) integration is involved.
    • macOS SMB client performance is widely criticized; NFS often performs better there.
  • sshfs:
    • Extremely easy to deploy (anything reachable over SSH works), with authentication and encryption for free; fine for ad‑hoc or low‑demand use, but slower and quirky with many small files.
  • WebDAV, SFTP, 9P:
    • Used for niche cases (read‑only shares, firewall‑friendly access, VM filesystem sharing).
  • Object storage (S3 and compatibles):
    • Attractive for robustness and for avoiding “hung filesystem” semantics, but not a real filesystem; FUSE/S3 mounts have cost and consistency pitfalls of their own (see the sketch after this list).
  • Other distributed filesystems:
    • AFS/DFS are remembered for strong security and a global namespace, but also for poor performance and a heavy administrative burden.
    • Lustre, BeeGFS, Isilon, NetApp, and others are used in HPC and enterprise settings for scalable, parallel I/O.
    • Some newer projects expose local virtual filesystems over NFS or 9P rather than FUSE.
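
To make the “not a real filesystem” point concrete, here is a minimal sketch of the object-store semantics that FUSE/S3 adapters have to paper over: whole-object writes, copy-plus-delete “renames”, and no file descriptors. Bucket and key names are hypothetical; the calls are standard boto3.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-media-origin"  # hypothetical bucket

    # Writes are whole-object PUTs: there is no seek()+write() or append on S3.
    s3.put_object(
        Bucket=BUCKET,
        Key="logs/2024-01-01.txt",
        Body=b"the entire object, uploaded in one operation\n",
    )

    # A "rename" is not a cheap metadata operation as on NFS/POSIX: it is a
    # server-side copy followed by a delete, and the pair is not atomic.
    s3.copy_object(
        Bucket=BUCKET,
        Key="archive/2024-01-01.txt",
        CopySource={"Bucket": BUCKET, "Key": "logs/2024-01-01.txt"},
    )
    s3.delete_object(Bucket=BUCKET, Key="logs/2024-01-01.txt")

    # Reads return whole objects (or explicit byte ranges), never a file handle.
    body = s3.get_object(Bucket=BUCKET, Key="archive/2024-01-01.txt")["Body"].read()

The upside, as commenters note, is that a failed request raises an error immediately instead of leaving a process stuck waiting on a dead mount.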

Operational Pitfalls and Limitations

  • Biggest complaint: when the NFS server or network misbehaves, clients can hang hard, sometimes freezing desktops or requiring careful reboot sequencing (a defensive probe is sketched after this list).
  • “Hard” vs. “soft” mounts and options like intr mitigate the hangs but introduce their own failure modes (soft mounts, for example, can surface I/O errors or silent data loss on timeouts); behavior differs by OS and is often under-tested.
  • Latency over the network is far higher than to local SSDs; many modern applications assume low-latency storage and can perform poorly on NFS.
  • Scaling and cross-mount complexity can create “everything is stuck” scenarios in large webs of interdependent NFS mounts.
  • The security model is seen as dated: the choice is host/UID-based trust (AUTH_SYS) or full Kerberos, with no middle ground; the flat UID/GID namespace is a long-known issue.
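
Because a single stat() on a dead hard mount can block the calling process indefinitely, health checks and monitoring scripts often probe the mount from a disposable child process instead of the main one. A minimal sketch, assuming a hypothetical mount point at /mnt/nfs/projects:

    import multiprocessing
    import os

    def _stat_path(path):
        # On a hung hard mount this call may block for an unbounded time.
        os.stat(path)

    def nfs_mount_alive(path, timeout=5.0):
        """Return True if stat() on `path` completes within `timeout` seconds."""
        proc = multiprocessing.Process(target=_stat_path, args=(path,))
        proc.start()
        proc.join(timeout)
        if proc.is_alive():
            # The probe is stuck; give up on it rather than wait with it.
            # Note: a child blocked in uninterruptible I/O sleep ("D" state)
            # cannot be reaped even by SIGKILL until the server comes back.
            proc.kill()
            proc.join(1.0)
            return False
        return proc.exitcode == 0

    if __name__ == "__main__":
        print(nfs_mount_alive("/mnt/nfs/projects"))  # hypothetical mount point

This sidesteps the hang for the monitoring process only; any application that already has the mount in its working set can still freeze, which is the core complaint above.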

Shifts in Usage Patterns

  • Many everyday use cases have moved to cloud sync/storage (Google Drive, Dropbox, etc.) and to Git/HTTP-based workflows, reducing reliance on shared network filesystems.
  • Nonetheless, several commenters argue that NFS remains the sanest, most lightweight option for self-hosted storage (TrueNAS, homelabs, small clusters) and that “if it works for you, you’re not doing it wrong.”