LizardFS at HPA Tech Retreat

Come visit us at our stand and see how #LizardFS can help you!

Find us in the Innovation Zone at the Hollywood Professional Association (HPA) Tech Retreat.

High Availability released to the open source community

As promised for some time now, we have finally released our High Availability mechanism to the open source community. Enjoy and let us know what you think!

A distributed, parallel, scale-out file system accessible via the NFS protocol.

Lizard server

There are many ways to access a distributed file system. Our favorite is through native clients, regardless of whether the client runs Linux, Mac or Windows.

But what if you cannot install third-party software on your client?

Or you need storage for systems for which no client exists?

NFS might be the answer for you. The simplest way to use it would be to set up a single server/gateway. That solution has obvious drawbacks: no high availability, limited performance and poor scalability.

We knew that we can do better.

So we did.

How does it work?

Let’s start with NFS 3.x

On each chunk server, there is an NFS server which enables clients to connect to the LizardFS cluster and read/write files via the NFS protocol. Now you can use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running, as long as it supports NFS you can mount up to 1 Exabyte of storage from a LizardFS cluster to your machine.
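As a minimal sketch, a client mounts the cluster like any other NFSv3 export; the hostname and mount point below are hypothetical, and the actual mount needs root and a reachable chunkserver, so the command is shown rather than run:

```shell
# Hypothetical chunkserver address and mount point; any chunkserver
# running the NFS server can serve the mount.
SERVER=chunkserver1.example.com
MOUNTPOINT=/mnt/lizardfs
OPTS="nfsvers=3,proto=tcp"

# Build the mount command and display it instead of executing it.
CMD="mount -t nfs -o $OPTS $SERVER:/ $MOUNTPOINT"
echo "$CMD"
```

From the client's point of view this is just a standard NFS mount; no LizardFS-specific software is involved.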

Some demanding users might immediately ask: OK, but what about chunkserver failure?

Well, if you are that demanding, you will not mind discussing a support contract with us to get not only peace of mind but also a truly highly available solution.

What about NFS 4.1 and pNFS?

The story only gets more interesting here. LizardFS now supports not only NFSv4 but also parallel reads and writes through the parallel Network File System (pNFS), with High Availability as a bonus. The extras do not end there: thanks to NFSv4.x support, you can use Kerberos authentication for the clients.
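The mount sketched below (server name and mount point are hypothetical) picks up both extras: nfsvers=4.1 enables pNFS where the server supports it, and sec=krb5 turns on Kerberos authentication. A real mount needs root, a reachable server and a configured Kerberos realm, so the command is shown rather than executed:

```shell
# Hypothetical server and mount point.
SERVER=chunkserver1.example.com
MOUNTPOINT=/mnt/lizardfs
# nfsvers=4.1 requests NFSv4.1 (and pNFS where available);
# sec=krb5 requests Kerberos authentication.
OPTS="nfsvers=4.1,sec=krb5"

CMD="mount -t nfs -o $OPTS $SERVER:/ $MOUNTPOINT"
echo "$CMD"
```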

Obvious use cases of NFS support

With pNFS:

  • Red Hat Enterprise Linux > 6.4
  • SUSE Linux Enterprise Server > 11 SP3

System communication


We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be visible in solutions that are already capable of using pNFS, such as:

  • oVirt
  • Proxmox
  • Redhat Virtualization
  • KVM on modern Linux systems
  • XEN on modern Linux Systems

We are also interested in seeing the results of tests with others:

  • VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS)
  • Citrix XenServer (no pNFS)
  • HyperV (no pNFS)

UNIX Hosts

  • AIX
  • HP/UX
  • Solaris


Windows Hosts

  • Windows Server (no pNFS)
  • Windows (no pNFS)


Although NFS is one of the most popular storage protocols, different solutions support different versions of it, and that has consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance.

So while having a unified environment with regard to communication protocols sounds really good, you first need to analyze which operating systems run in your infrastructure before making the final decision to go that way.

Fortunately, most of the time we have the option of using other protocols such as SMB, or, once it is acceptable to install additional software on a client machine, going with native clients.

Key differentiators between NFS versions

NFS3 – stateless protocol, UNIX semantics only, weak security, identification via UID/GID, no delegations.

NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.

pNFS – all the advantages of NFS4 plus parallelised access to resources.

Which platforms support what versions and features of NFS

| Platform | Version | NFS version | pNFS support | NFS broken | Comments |
|---|---|---|---|---|---|
| RedHat | 6.3 | 4.1 | native | up to 6.5 | problems with NFS in general on RH (Read more) |
| SuSE SLES | 11 SP3 | 4.1 | native | | |
| Linux Kernel | 2.6.39 | 4.1 | native | | requires the proper version of nfs-utils to work |
| Ubuntu | 14.04 | 4.1 | native | some broken support from 12.04 | |
| VMware | 6.5 | 4.1 | none | | pNFS not implemented; VMware seems to have had problems implementing proper NFS support for ages now |
| Citrix XenServer | 7 | 4.1 | none | | pNFS not implemented (see bug) |
| oVirt | | | native | | Read more |
| Proxmox | 4 | 4.1 | native | | based on Debian 9, so full support for pNFS |
| RedHat Virtualization Server | | | native | | native pNFS support if based on RHEL > 6.5 |
| XEN | | | | | depends on the OS: works on RHEL/derivatives > 6.4, SLES > 11 SP3, Debian > 8 and Ubuntu >= 14.04; not sure about others |
| Oracle VM | 3.4 | | native | | if running on RHEL/Oracle Linux > 6.4 |
| Windows Server | 2016 | 3 | none | | Windows only supports NFS v3 |
| Solaris | 11 | 4 | none | | a pNFS prototype was available a few years ago when OpenSolaris was still alive; as of today, Solaris has no support for pNFS |
| AIX | 6 | 4 | none | | pNFS not implemented |
| Amazon EFS | | 4.1 | none | | pNFS not implemented |
| Oracle dNFS | 12cR2 | 4.1 | native | some minor problems that limit full performance, but still faster than NFSv3 | Oracle has an NFS implementation inside its RDBMS; it supports pNFS from 12cR2 (2017) and still has some little quirks (Read more) |
| OpenStack | Icehouse | 4.1 | native | | as of the Icehouse release, the NFS driver (and other drivers based on it) will attempt to mount shares using version 4.1 of the NFS protocol, including pNFS |

LizardFS at FOSDEM 2018

Even stronger presence this year.

You can meet us:

  • at our booth: building K, level 1, group C
  • at our Lightning Talk, featuring the grand premiere of the OpenNebula connector
  • at the “Year in development” talk in the Software Defined Storage room, covering what was going on at LizardFS in 2017

Stay tuned – we will reveal a great surprise in just 4 days – the countdown starts now!



LizardFS 3.12 official release.


It is official now. After a few weeks of tests with positive feedback from the community and our beta testers, LizardFS 3.12 is out.

With promising initial feedback like: “it’s a huge step forward” or “with RichACLs there is no doubt it is a great enterprise product” we cannot wait to hear the outcomes after your upgrades.

Install it, run it and let us know.

We will be updating you with more details soon – so stay tuned.

Things that we improved:

Documentation – corrected install guide and a few other details.

Things added:

A completely new Windows client with rich ACL support, improved performance (200%) and better stability (eliminating problems that appeared after one of the OS updates).

The rest looks just as great as before:


– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes


LizardFS 3.12.0 RC is out!

Release Candidate of 3.12.0

Please test it and let us know about any issues.

Description below.




– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes

Detailed info:

* C API *

LizardFS 3.12 comes with the liblizardfs-client library and a C language API header. It is now possible to build programs/plugins with direct support for LizardFS operations, with no FUSE needed. For reference, see:




For those building LizardFS from source, pass the -DENABLE_CLIENT_LIB=YES flag to cmake in order to make sure you’re building the client library as well.
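As a build sketch, with an out-of-source build directory (the directory layout is an assumption; only the cmake flag comes from the release notes), the invocation requires the LizardFS sources and cmake, so it is shown rather than executed:

```shell
# Illustrative out-of-source build with the client library enabled.
SRC_DIR=lizardfs
BUILD_DIR="$SRC_DIR/build"
CMAKE_FLAGS="-DENABLE_CLIENT_LIB=YES"

# The full sequence that would be run from a shell:
CMD="cmake $CMAKE_FLAGS .."
echo "mkdir -p $BUILD_DIR && cd $BUILD_DIR && $CMD && make"
```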

* nfs-ganesha plugin *


Our official plugin for the Ganesha NFS server is included as well. This plugin provides a LizardFS FSAL (File System Abstraction Layer) to Ganesha, which is then used to access LizardFS clusters directly. Our new plugin is pNFS and NFSv4.1 friendly.


For those building LizardFS from source, pass the -DENABLE_NFS_GANESHA=YES flag to cmake in order to make sure you’re building the plugin as well.
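A hedged sketch of what a Ganesha export using the LizardFS FSAL might look like; the EXPORT block follows standard ganesha.conf conventions, but the FSAL parameter names (hostname, port) and values here are assumptions, so check the sample config shipped with the plugin before using them:

```
# Hypothetical ganesha.conf fragment for a LizardFS export.
EXPORT
{
    Export_Id = 1;
    Path = /;
    Pseudo = /lizardfs;
    Access_Type = RW;
    Protocols = 4;

    FSAL
    {
        # Name selects the LizardFS FSAL provided by the plugin;
        # hostname/port would point at the LizardFS master (assumed names).
        Name = LizardFS;
        hostname = "mfsmaster";
        port = "9421";
    }
}
```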

* RichACL *

In order to extend the POSIX access control list implementation, we introduced RichACL support.

Backward compatibility with POSIX ACLs is guaranteed. Additionally, it’s possible to use NFSv4-style ACL tools (nfs4_getfacl/nfs4_setfacl) and RichACL tools (getrichacl/setrichacl) to manage more complicated access control rules.
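For illustration, both tool families can grant the same kind of access. The file path and user names below are hypothetical, and the commands need a mounted volume with RichACL support, so they are shown rather than run:

```shell
# Hypothetical file on a mounted LizardFS volume.
FILE=/mnt/lizardfs/report.txt

# RichACL tools: grant read access to user alice.
RICH_CMD="setrichacl --set 'user:alice:r::allow' $FILE"

# NFSv4-style tools: add an allow ACE with read access for alice.
NFS4_CMD="nfs4_setfacl -a 'A::alice@example.com:R' $FILE"

echo "$RICH_CMD"
echo "$NFS4_CMD"
# Inspect the result with: getrichacl "$FILE"
```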


Setting and getting ACLs is also possible on OS X, both via the command-line chmod/ls -e interface and from the desktop.


* File lock fixes *

The global file locking mechanism is now fully fixed and passes all NFS lock tests from the Connectathon suite.


* AVX2 *

Erasure code goal computing routines now take full advantage of AVX2 processor extensions.


* MinGW *

LizardFS now cross-compiles cleanly with MinGW again.


* Chunkserver options *

Replication limits are now fully configurable in the chunkserver config.

Also, the chunk test (a.k.a. scrubbing) now has 1-millisecond precision instead of the previous 1 second, which allows users to turn on more aggressive scrubbing with a simple chunkserver reload.
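As a sketch of what that tuning might look like in the chunkserver config (the option name follows the MooseFS-heritage config style and is an assumption here; verify it against the sample mfschunkserver.cfg shipped with your version):

```
# Hypothetical mfschunkserver.cfg fragment.
# Scrubbing ("chunk test") interval; with millisecond precision,
# a fractional value allows more aggressive scrubbing, applied
# after a chunkserver reload.
HDD_TEST_FREQ = 0.5
```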


LizardFS on OpenNebula conference

Michal Bielicki showing how to connect a 1.9 PB cluster to NodeWeaver.

Follow the link to see the full video.




New LizardFS Brochure

LizardFS in a nutshell.

Check out our new brochure, highlighting features, benefits and possible use cases!

LizardFS Brochure



LizardFS Expands Global Presence in United States with Enterprise Support

We are pleased to welcome Michael Kozlowski to the team; he will be responsible for scaling the sales team and growing markets across the United States.

With his extensive experience we are sure that he will make a huge impact.

We are excited to work with you, Mike!

Read more:


Come meet us at the Wolves Summit

Time flies… We took part in the 5th edition, and the 6th edition of the Wolves Summit is almost here: it will take place on 10-11 October 2017 in Warsaw.

LizardFS will be there sharing our experiences with the startup community and SMBs. Looking forward to meeting potential clients, partners and visionaries! Hope this one will be just as inspiring as the previous one.

Let us know if you will be attending too, always happy to have a chat.