Tech Desk — As applications go distributed, teams go hybrid, and the business expects high availability, local disks stop being enough. That’s where networked storage comes in, turning capacity into a centralized, shared resource that multiple machines can access over the network. In that space, two battle-tested technologies still do most of the heavy lifting: NFS (file-level sharing, the NAS model) and iSCSI (block-level access, the SAN model).

This report explains, in plain terms, what problem each solves, when to use which, the security and performance implications, and what to avoid so your storage doesn’t become an operational trap. It also adds a practical view of how Stackscale implements managed network storage (NFS/iSCSI) and synchronous storage (NetApp-based, RTO=0 / RPO=0) with performance tiers (Flash Premium, Flash Plus, Flash Standard) and Archive for long-term copies.


Why networked storage changes the game

Centralizing data over the network brings immediate benefits:

  • Simpler operations. Instead of watching free space, backups, and alerts on a dozen machines, you run one repository. Capacity upgrades and backup scheduling become more predictable.
  • High availability. If an app server fails, another can take over reading from the same dataset; with local disks, that handoff isn’t possible.
  • Scale without downtime. The storage system can be expanded while in service, exposing new capacity to all clients without stopping servers.
  • Real collaboration. Multiple nodes concurrently access shared data (web farms, clusters, HPC).

The real decision isn’t “network storage yes/no,” but at what level you access data: files (NAS) or blocks (SAN). That choice shapes performance, complexity, and how apps will behave.


NAS vs SAN: the distinction that decides the design

NAS (file-level) — NFS. The storage server maintains a filesystem (ext4, XFS…) and exports directories over the network. Clients see folders and request “open/read/write this file.”
Analogy: you ask the librarian for a specific book; you don’t rearrange the shelves.

SAN (block-level) — iSCSI. The storage server exposes a raw volume; the client sees /dev/sdX, partitions and formats it (ext4, XFS, NTFS, VMFS). The server doesn’t know about files—only blocks.
Analogy: you get an empty aisle to organize as you wish; the librarian moves boxes (blocks) on request.

Rule of thumb:

  • Choose NFS for multi-client file sharing (collaboration, web farms, home directories).
  • Choose iSCSI for dedicated block devices with low latency and high IOPS (VM datastores, databases, clusters that require a shared block device with a cluster filesystem).

NFS: native file sharing for Linux/UNIX

What it is & how it works
An NFS server “exports” paths to authorized clients. When the client mounts an export, the directory behaves as if it were local. Administration lives in /etc/exports (what to share, with whom, and with what permissions).
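
As a minimal sketch of that workflow (the server name, client subnet, and paths are placeholders, not taken from any real setup), an export and its client-side mount could look like this:

    # /etc/exports on the NFS server: share /srv/data with one subnet,
    # read-write, mapping remote root to an unprivileged user (root_squash)
    /srv/data  192.168.10.0/24(rw,sync,root_squash,no_subtree_check)

    # Apply and verify the export table
    exportfs -ra
    exportfs -v

    # On the client: mount the export over NFSv4 and check it
    mount -t nfs4 nfs-server.example.com:/srv/data /mnt/data
    df -h /mnt/data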

Typical use cases

  • Centralized /home: users see the same environment on any workstation.
  • Web farm: all servers share /srv/www/uploads for uploads and static assets.
  • Shared repositories: datasets, common libraries, build artifacts.

Security considerations

  • NFSv3 trusts IPs and numeric UID/GID. That can suffice in tightly controlled networks but is weak against spoofing and identity mismatches.
  • NFSv4 + Kerberos adds strong authentication, integrity, and optional encryption in transit; it also maps identities by name, not by numbers.
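
As an illustrative sketch (realm, hostnames, and paths are invented, and it assumes the Kerberos principals and keytabs for server and client already exist), enforcing Kerberos looks roughly like this:

    # /etc/exports on the server: require a Kerberos security flavor;
    # krb5 = authentication, krb5i = + integrity, krb5p = + encryption in transit
    /srv/secure  *.example.com(rw,sync,root_squash,sec=krb5p)

    # On the client: request the same flavor when mounting
    mount -t nfs4 -o sec=krb5p nfs-server.example.com:/srv/secure /mnt/secure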

Performance tips

  • Tune client block sizes (rsize/wsize) for large files.
  • Choose sync (safer, slightly slower) or async (faster, data-loss risk on server crash) with intent.
  • Use 10/25/40 GbE, and enable Jumbo Frames (MTU 9,000) only if every hop supports them.
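
Putting those tips together, a hedged example of how they surface on a Linux client and in the server’s exports (the values are illustrative starting points, not universal recommendations):

    # Client /etc/fstab entry: explicit rsize/wsize (in bytes) and a hard mount,
    # so applications block instead of getting I/O errors during short outages
    nfs-server.example.com:/srv/data  /mnt/data  nfs4  rw,hard,noatime,rsize=1048576,wsize=1048576  0  0

    # Server side: the sync/async trade-off is an export option in /etc/exports
    /srv/data  192.168.10.0/24(rw,sync,no_subtree_check)    # safer
    #/srv/data 192.168.10.0/24(rw,async,no_subtree_check)   # faster, risk on server crash

    # On the client, verify the options actually negotiated
    nfsstat -m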

iSCSI: a “local disk” that travels over TCP/IP

Key concepts

  • Target (server), Initiator (client), LUN (logical unit), and IQN (iSCSI Qualified Name, the globally unique identifier of each endpoint).
  • The client discovers the Target, logs in (ideally with CHAP or mutual CHAP), and sees the LUN as a /dev/sdX device to partition and format.
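
As a sketch of that flow with Linux open-iscsi (the portal address, IQNs, and CHAP credentials are placeholders):

    # /etc/iscsi/initiatorname.iscsi: the client identity checked by the target's ACL
    InitiatorName=iqn.2024-01.com.example:client01

    # Discover the targets offered by the portal
    iscsiadm -m discovery -t sendtargets -p 192.168.20.10:3260

    # Enable CHAP on the discovered node, then log in
    iscsiadm -m node -T iqn.2024-01.com.example:storage.target01 \
             -o update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2024-01.com.example:storage.target01 \
             -o update -n node.session.auth.username -v initiator_user
    iscsiadm -m node -T iqn.2024-01.com.example:storage.target01 \
             -o update -n node.session.auth.password -v initiator_secret
    iscsiadm -m node -T iqn.2024-01.com.example:storage.target01 \
             -p 192.168.20.10:3260 --login

    # The LUN now shows up as a local block device (e.g. /dev/sdb)
    lsblk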

Typical use cases

  • Virtualization (VMware, Proxmox/KVM, Hyper-V): block-level datastores (VMFS, shared LVM).
  • Databases: dedicated LUN, filesystem and tuning to taste; often lower latency than NFS for random I/O.
  • HA clusters that require a shared disk (Windows Failover, Veritas, VMFS).

Critical warning
Mounting the same LUN on multiple clients with ext4/XFS/NTFS will corrupt data. For true block-level multi-access, you need a cluster filesystem (GFS2, OCFS2, VMFS)—or redesign the use case with NFS.

Good practices

  • Activate CHAP; enforce ACLs by IQN; segment with VLANs; isolate the storage plane.
  • Monitor queues and real I/O; care about the backing store (RAID/LVM/SSD/NVMe).
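
On a Linux target (LIO), those practices translate into something like the following targetcli sketch (the backing device, IQNs, and credentials are invented for illustration):

    # Create a block backstore and an iSCSI target
    targetcli /backstores/block create name=lun0 dev=/dev/vg_storage/lv_lun0
    targetcli /iscsi create iqn.2024-01.com.example:storage.target01

    # Map the LUN and allow only one specific initiator IQN (ACL)
    targetcli /iscsi/iqn.2024-01.com.example:storage.target01/tpg1/luns create /backstores/block/lun0
    targetcli /iscsi/iqn.2024-01.com.example:storage.target01/tpg1/acls create iqn.2024-01.com.example:client01

    # Require CHAP credentials from that initiator
    targetcli /iscsi/iqn.2024-01.com.example:storage.target01/tpg1/acls/iqn.2024-01.com.example:client01 set auth userid=initiator_user password=initiator_secret

    # Persist the configuration
    targetcli saveconfig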

NFS vs iSCSI: a comparison that helps (without dogma)

Characteristic   | NFS (files)                                       | iSCSI (blocks)
Abstraction      | High: server manages FS; client sees folders      | Low: client manages FS; client sees disks
Complexity       | Low                                               | High (Targets, LUNs, IQN, ACLs)
Performance      | Very good for files; metadata overhead            | Excellent for random I/O; low latency
Concurrency      | Excellent (file locking)                          | Dangerous without cluster FS
Typical cases    | Home dirs, web farms, shared repos                | VM datastores, DBs, HA clusters

Three quick decisions

  • 10 web servers with uploads → NFS.
  • 3 hypervisors with live migration → iSCSI (or well-provisioned NFS; iSCSI typically wins on latency).
  • One database server needs +10 TB → iSCSI with a dedicated LUN.

Security & performance: beyond “it works”

  • NFS: prefer NFSv4 + Kerberos; if not feasible, restrict by subnet, enforce root_squash, and harden with firewall.
  • iSCSI: CHAP (mutual if possible) + IQN ACL; segment the storage network; log and monitor.
  • Dedicated storage network and Jumbo Frames (MTU 9,000) only when every hop supports them.
  • Observability: track p95/p99 latencies and real I/O; define playbooks and test failovers.
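
A few hedged starting points for that telemetry on Linux (tool availability and exact output vary by distribution):

    # Per-device latency, queue depth, and utilization, refreshed every 5 seconds
    iostat -x 5

    # Per-mount NFS statistics (round-trip times, retransmissions) and client counters
    nfsiostat 5
    nfsstat -c

    # Active iSCSI sessions, negotiated parameters, and attached devices
    iscsiadm -m session -P 3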

How NFS and iSCSI map to Stackscale: network storage and synchronous storage

Beyond the raw protocol, platform design determines resilience and operator experience. At Stackscale, networked storage is delivered in two complementary ways:

1) Managed network storage (NFS / iSCSI)

  • NAS (NFS) for file sharing among multiple nodes (homes, uploads, shared repos) and SAN (iSCSI) for dedicated blocks (VM datastores, DBs).
  • Hypervisor-agnostic network with real VLANs: the same segment can span VMware clusters, Proxmox clusters, and bare-metal servers, simplifying hybrid designs.
  • Fits virtualization stacks; can be combined with vSAN (VMware) or Ceph (Proxmox/KVM) when the architecture calls for SDS.

2) Synchronous storage (NetApp) with RTO=0 and RPO=0

When the business cannot tolerate downtime nor data loss, synchronous storage on NetApp arrays is the answer:

  • RTO=0 — continuity with no noticeable outage.
  • RPO=0 — zero data loss: writes are acknowledged simultaneously across redundant arrays.
  • Performance tiers to balance cost/IOPS/latency:
    • Flash Premium — ultra-low latency, very high IOPS (critical DBs, real-time analytics).
    • Flash Plus — balanced latency/throughput (apps, middleware, mixed workloads).
    • Flash Standard — steady performance at optimized cost (general VMs).
    • Archive — capacity tier for backups and retention.
  • Frequent snapshots and multi-DC replication included, with failover/failback paths defined.
  • Natural fit with NFS or iSCSI: export shares or LUNs from the array depending on the use case.

When to go synchronous?

  • Payment front ends, emergency systems, financial core, critical VDI, or any workload where “minutes” are already too much.
  • Projects with strict auditability and SLA-backed continuity: synchronous mode brings certainty (not just redundancy).

Design examples (with Stackscale in the background)

  • Web farm + CDN: NFS on NetApp (Flash Standard/Plus) for /uploads and artifacts, with a real VLAN spanning bare-metal and hypervisors.
  • VMware cluster with live migration: iSCSI + VMFS datastore (Flash Plus/Premium), multi-DC replication, and tested failover playbooks.
  • OLTP database: dedicated iSCSI LUN (Flash Premium), tuned XFS, RPO=0 if the process accepts synchronous mode (see the sketch after this list).
  • Backups & retention: Archive tier with snapshots + immutable copies and verified restores.

Tough questions (and straight answers)

NFS or iSCSI for a VMware/Proxmox datastore?
Both are supported. iSCSI + VMFS typically offers lower latency and higher IOPS for random I/O (critical for VMs). NFS simplifies operations and scales fast when the backend is solid. At Stackscale, iSCSI is a common choice for datastores and NFS for shared content.

What does synchronous storage (RTO=0, RPO=0) add over “classic” replication?
Asynchronous replication reduces RPO—but not to zero; you can still lose seconds/minutes. Synchronous confirms writes across redundant arrays at the same time: no data loss, no interruption. It requires careful network and design (latency, quorum, failover).

Can I mix NFS/iSCSI with Ceph or vSAN?
Yes. Many designs use Ceph/vSAN as local SDS and NFS/iSCSI from the array for persistence and multi-DC replication. Stackscale’s network with real VLANs makes hybrid paths straightforward, without relying on the hypervisor’s SDN.

How do I “secure” NFS/iSCSI in production?

  • NFSv4 + Kerberos for authentication/integrity (and encryption if needed); firewalling, root_squash, and subnet controls.
  • iSCSI with CHAP (mutual ideally), IQN ACLs, and segmentation.
  • Dedicated storage network, MTU 9,000 if every hop supports it, p95/p99 latency metrics, and failover drills.

Bottom line: choose the right tool for the job

  • Pick NFS when you need collaboration and simplicity: multiple servers reading/writing the same files (homes, uploads, shared repos).
  • Pick iSCSI when you need performance and control at the block level: VM datastores, databases, clusters that require an exclusive “disk.”
  • Harden security: NFSv4 + Kerberos where possible; CHAP (mutual ideally) for iSCSI; dedicated storage network.
  • Measure and tune: realistic benchmarks, real I/O telemetry, p95/p99 latencies, and contingency playbooks.

It’s not about whether NFS is “better” than iSCSI, but whether it fits your workload, team, and availability/performance goals.

With Stackscale’s managed network storage (NFS/iSCSI) and synchronous storage (NetApp, RTO=0 / RPO=0), that design shows up in practice: hypervisor-agnostic VLANs, Flash tiers (Premium/Plus/Standard) and Archive, snapshots and multi-DC replication included, and a platform approach that cuts operational friction without sacrificing security and scale. The difference between an IT team that fights fires and one that anticipates growth starts here: the right tool for the right job—and an architecture that won’t box you in tomorrow.


Frequently Asked Questions (FAQ)

What’s the difference between Stackscale’s network storage and synchronous storage?
Network storage exposes NFS/iSCSI resources from the array for NAS/SAN use. Synchronous storage replicates in real time across NetApp arrays, guaranteeing RTO=0 and RPO=0. They can be combined based on criticality and cost.

When should I choose Flash Premium, Flash Plus, or Flash Standard?

  • Premium: minimal latency and very high IOPS (critical DBs, real-time analytics).
  • Plus: balance between latency and throughput (mixed apps, demanding datastores).
  • Standard: steady performance at optimized cost (general VMs).
  • Archive targets backups and long-term retention.

What advantage do Stackscale’s real VLANs bring?
They allow the same network segment to span VMware, Proxmox, and bare-metal—without relying on the hypervisor’s SDN. That simplifies migrations, hybrids, and coexistence with low latency and high performance.

Can I safely run a virtualization cluster on NFS?
Yes. Many hypervisors support NFS datastores with good performance if the network and array are properly sized (10/25 GbE, appropriate cache). For intense random I/O, iSCSI + VMFS often provides lower latency and higher IOPS. The choice depends on your workload profile, SLA, and cost.
