File system accessible via NFS protocol.

There are many ways to access a distributed file system. Our favorite way is through native clients – it does not matter whether the client runs Linux, Mac or Windows.
But what if you cannot install third-party software on your client?
Or what if you need storage for systems that have no native client at all?
NFS might be the answer for you. The simplest way to use it would be to set up a single server/gateway, but that solution has obvious drawbacks: no HA, limited performance and poor scalability.
We knew we could do better.
So we did.

How does it work?

Let’s start with NFS 3.x

On each chunkserver, there is an NFS server which enables clients to connect to the LizardFS cluster and read/write files via the NFS protocol. Now you can use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running: as long as it supports NFS, you can mount up to 1 exabyte of storage from a LizardFS cluster on your machine.
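As a minimal, illustrative sketch (the server name, export path and mount point below are assumptions, not LizardFS defaults), mounting such an export from a Linux client could look like this:

    # mount the NFS 3 export published by one of the chunkservers
    mount -t nfs -o vers=3 chunkserver1.example.com:/lizardfs /mnt/lizardfs

Since every chunkserver runs an NFS server, any of them can serve as the mount target.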
Some demanding users might immediately ask: OK, but what about chunkserver failure?
Well, if you are that demanding, you will not mind discussing a support contract with us – not only for peace of mind but also for a truly highly available solution.

What about NFS 4.1 and pNFS?

The story just gets more interesting here. LizardFS now not only supports NFS 4.0 but also provides parallel reads and writes through the parallel Network File System (pNFS), and you get High Availability as a bonus. The extras do not end there: thanks to NFS 4.x support you can use Kerberos authentication for the clients.
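As a hedged sketch (again, the server name, export and mount point are assumptions), an NFS 4.1 mount with Kerberos authentication from a Linux client could look like this:

    # vers=4.1 lets a pNFS-capable client negotiate parallel layouts,
    # sec=krb5 requests Kerberos authentication (krb5i/krb5p add integrity/privacy)
    mount -t nfs -o vers=4.1,sec=krb5 nfs.example.com:/lizardfs /mnt/lizardfs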

Obvious use cases of NFS support

With pNFS:

– Red Hat Enterprise Linux > 6.4
– SuSE Linux Enterprise Server > 11 SP3

Virtualisation

We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be visible in the solutions that are already capable of using pNFS, such as:

– oVirt
– Proxmox
– Redhat Virtualization
– KVM on modern Linux systems
– XEN on modern Linux Systems

We are also interested in seeing the results of tests with others:

– VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS!)
– Citrix XenServer (no pNFS)
– Hyper-V (no pNFS)

UNIX Hosts

– AIX
– HP/UX
– Solaris

Windows

– Windows Server (no pNFS)
– Windows (no pNFS)

Challenges

Although NFS is one of the most popular protocols, different solutions support different versions of it, and that has consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance. So while having a unified environment in terms of communication protocols sounds really good, you first need to analyse which operating systems run in your infrastructure before making the final decision to go that way. Fortunately, most of the time there is the option of using other protocols such as SMB or, when it is acceptable to install additional software on a client machine, going with native clients.
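A practical first step in that analysis is simply checking what each client actually negotiates. On Linux, for example, nfsstat lists the active NFS mounts together with their options (output details vary between distributions):

    # show mounted NFS file systems and their negotiated options
    nfsstat -m

The vers= field in the output tells you whether a given mount ended up on NFS 3, 4.0 or 4.1.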

Key differentiators between NFS versions

NFS3 – stateless protocol, supports only UNIX semantics, weak security, identification via UID/GID, no delegations.
NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.
pNFS – all the advantages of NFS4 plus parallelised access to resources.
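As a rough, assumption-level check (counter names can vary with the kernel version), a Linux client that is really using pNFS on a 4.1 mount will show non-zero layout operations in its per-mount statistics:

    # non-zero LAYOUTGET counters indicate the client is requesting pNFS layouts
    grep LAYOUTGET /proc/self/mountstats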