LizardFS@MWC18

Let us know if you will be at #MWC18.

We would love to meet up and discuss storage.

LizardFS at HPA Tech Retreat

Come visit us at our stand and see how #LizardFS can help you!

Find us in the Innovation Zone at the Hollywood Professional Association (HPA) Tech Retreat.

High Availability released to the open source community

As promised for some time now, we have finally released our High Availability mechanism to our open source community. Enjoy, and let us know what you think!

A distributed, parallel, scale-out file system accessible via the NFS protocol.


There are many ways to access a distributed file system. Our favourite is through the native clients, whether on Linux, Mac or Windows.

But what if you cannot install third-party software on your client?

Or what if you need storage for systems that have no native client?

NFS might be the answer. The simplest way to use it would be to create a single NFS server/gateway in front of the cluster, but that solution has obvious drawbacks: no high availability, limited performance and poor scalability.

We knew we could do better.

So we did.

How does it work?

Let’s start with NFS 3.x.

On each chunkserver there is an NFS server which lets clients connect to the LizardFS cluster and read/write files via the NFS protocol. You can now use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running: as long as it supports NFS, you can mount up to 1 exabyte of storage from a LizardFS cluster on your machine.
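On a Linux client, mounting could look like the following (a sketch: the chunkserver hostname and the export path are assumptions, adjust them to your own setup):

    # mount the LizardFS cluster over NFS 3 from one of the chunkservers
    mount -t nfs -o vers=3 chunkserver1:/ /mnt/lizardfs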

Some demanding users might immediately ask: OK, but what about chunkserver failure?

Well, if your requirements are that strict, you will not mind discussing a support contract with us to get not only peace of mind but also a truly highly available solution.

What about NFS 4.1 and pNFS?

The story only gets more interesting here. LizardFS now not only supports NFS 4.1 but also provides parallel reads and writes through the parallel Network File System (pNFS), and you get High Availability as a bonus. The extras do not end there: thanks to NFSv4.x support, you can use Kerberos authentication for the clients.
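On a Linux client with NFS 4.1 support, a mount could look roughly like this (again a sketch: hostname, export path and the Kerberos setup are assumptions; pNFS is negotiated automatically when both sides support it):

    # mount over NFS 4.1 (pNFS-capable) using Kerberos authentication
    mount -t nfs -o vers=4.1,sec=krb5 chunkserver1:/ /mnt/lizardfs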

Obvious use cases of NFS support.

With pNFS:

  • Red Hat Enterprise Linux > 6.4
  • SUSE Linux Enterprise Server > 11 SP3

System communication

Virtualisation.

We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be observed in the solutions that are already capable of using pNFS, such as:

  • oVirt
  • Proxmox
  • Red Hat Virtualization
  • KVM on modern Linux systems
  • Xen on modern Linux systems

We are also interested in seeing the results of tests with others:

  • VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS!)
  • Citrix XenServer (no pNFS)
  • Hyper-V (no pNFS)

UNIX Hosts

  • AIX
  • HP-UX
  • Solaris

Windows

  • Windows Server (no pNFS)
  • Windows (no pNFS)

Challenges.

Although NFS is one of the most popular protocols, different solutions support different versions of it, and that has consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance.

So while a unified environment in terms of communication protocols sounds really good, you need to analyse which operating systems run in your infrastructure before making the final decision to go that way.

Fortunately, most of the time there is the option of using other protocols such as SMB or, when it is acceptable to install additional software on a client machine, of going with the native clients.

Key differentiators between NFS versions.

NFS3 – stateless protocol, UNIX semantics only, weak security, identification via UID/GID, no delegations.

NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.

pNFS – all the advantages of NFS4 plus parallelised access to resources.

Which platforms support what versions and features of NFS

Platform | Version | NFS version | pNFS support | Known NFS issues | Comments
RedHat | 6.3 | 4.1 | native | problems with NFS in general up to 6.5 |
SuSE SLES | 11 SP3 | 4.1 | native | |
Linux kernel | 2.6.39 | 4.1 | native | | requires a matching version of nfs-utils to work
Debian | 8 | 4.1 | native | |
Ubuntu | 14.04 | 4.1 | native | some broken support from 12.04 |
VMware | 6.5 | 4.1 | none | | pNFS not implemented; VMware seems to have had problems implementing proper NFS support for ages
Citrix XenServer | 7 | 4.1 | none | | pNFS not implemented
oVirt | | | native | |
Proxmox | 4 | 4.1 | native | | based on Debian 9, so full pNFS support
Red Hat Virtualization Server | | | native | | pNFS support if based on RHEL > 6.5
Xen | | | depends on OS | | works on RHEL/derivatives > 6.4, SLES > 11 SP3, Debian > 8 and Ubuntu >= 14.04; not sure about others
Oracle VM | 3.4 | | native | | if running on RHEL/Oracle Linux > 6.4
Windows Server | 2016 | 3 | none | | Windows only supports NFS v3
Windows | 10 | 3 | none | |
Solaris | 11 | 4 | none | | a pNFS prototype existed while OpenSolaris was still alive; as of today Solaris has no pNFS support
AIX | 6 | 4 | none | | pNFS not implemented
HP-UX | | 3 | none | |
Amazon EFS | | 4.1 | none | | pNFS not implemented
Oracle dNFS | 12cR2 | 4.1 | native | some minor problems that limit full performance, but still faster than NFSv3 | Oracle has an NFS implementation inside its RDBMS; it supports pNFS from 12cR2 (added in 2017, still some small quirks)
OpenStack | Icehouse | 4.1 | native | | as of the Icehouse release, the NFS driver (and other drivers based on it) attempts to mount shares using NFS 4.1, including pNFS

LizardFS at FOSDEM 2018

Even stronger presence this year.

You can meet us:

  • at our booth: building K, level 1, group C; also
  • join us for a Lightning Talk to see the grand premiere of our OpenNebula connector:

https://fosdem.org/2018/schedule/event/lizardfs_opennebula/

and

  • find out what was going on at LizardFS in 2017 in our “Year in development” talk in the Software Defined Storage room:

https://fosdem.org/2018/schedule/event/lizardfs/

Stay tuned – we will reveal a great surprise in just 4 days – the countdown starts now!

LizardFS 3.12 official release.

It is official now: after a few weeks of testing, with positive feedback from the community and our beta testers, LizardFS 3.12 is out.

With promising initial feedback such as “it’s a huge step forward” and “with RichACLs there is no doubt it is a great enterprise product”, we cannot wait to hear the outcomes after your upgrades.

Install it, run it and let us know.

We will be updating you with more details soon – so stay tuned.

Things that we improved:

Documentation – corrected the install guide and a few other details.

Things added:

A completely new Windows client with RichACL support, improved performance (200%) and improved stability (problems that appeared after one of the OS updates have been eliminated).

The rest looks just as great as before:

– C API

– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes

LizardFS 3.12.0 RC is out!

Release Candidate of 3.12.0

Please test it and let us know about any issues.

Description below.

Featuring:

– C API

– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes

Detailed info:

* C API *

LizardFS 3.12 comes with the liblizardfs-client library and a C language API header.

It is now possible to build programs and plugins with direct support for LizardFS operations – no FUSE needed. For reference, see:

src/mount/client/lizardfs_c_api.h

src/data/liblizardfs-client-example.c

For those building LizardFS from source, pass the -DENABLE_CLIENT_LIB=YES flag to cmake to make sure you are building the client library as well.
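As an illustration, a from-source build enabling the client library could look like this (a sketch: the -llizardfs-client link flag is an assumption derived from the library name above, and include/library paths may differ on your system):

    # configure and build with the client library enabled
    mkdir build && cd build
    cmake -DENABLE_CLIENT_LIB=YES ..
    make
    # compile the bundled example against the client library
    cc ../src/data/liblizardfs-client-example.c -llizardfs-client -o lizardfs-client-example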

* nfs-ganesha plugin *

Our official plugin for the Ganesha NFS server is included as well. The plugin provides a LizardFS FSAL (File System Abstraction Layer) for Ganesha, which is then used to access LizardFS clusters directly. Our new plugin is pNFS and NFSv4.1 friendly.

For those building LizardFS from source, pass the -DENABLE_NFS_GANESHA=YES flag to cmake to make sure you are building the Ganesha plugin as well.
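To give an idea of how the FSAL is wired in, an export block in ganesha.conf could look roughly like this (a sketch: the FSAL name and its parameters are assumptions, see the plugin documentation for the exact ones):

    EXPORT {
        Export_Id = 1;           # unique id of this export
        Path = /;                # path inside the LizardFS namespace
        Pseudo = /lizardfs;      # where clients see it in the NFSv4 pseudo-filesystem
        Access_Type = RW;
        FSAL {
            Name = LizardFS;     # assumed FSAL name provided by the plugin
        }
    }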

* RichACL *

In order to extend the POSIX access control list implementation, we introduced RichACL support.

Backward compatibility with POSIX ACLs is guaranteed. Additionally, it’s possible to use NFSv4-style ACL tools (nfs4_getfacl/nfs4_setfacl) and RichACL tools (getrichacl/setrichacl) to manage more complicated access control rules.
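For example (the file and the principal are hypothetical, and exact syntax may vary between tool versions):

    # allow user alice to read and write the file via an NFSv4-style ACE
    nfs4_setfacl -a "A::alice@example.com:RW" /mnt/lizardfs/report.txt
    # inspect the resulting ACL with either tool family
    nfs4_getfacl /mnt/lizardfs/report.txt
    getrichacl /mnt/lizardfs/report.txt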

* OSX ACL *

Setting and getting ACLs is also possible on OSX, both via the command-line chmod / ls -e interface and from the desktop.
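For example (the user and the file are hypothetical):

    # grant read access to user alice through the OSX ACL interface
    chmod +a "alice allow read" report.txt
    # list the file together with its ACL entries
    ls -le report.txt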

* File lock fixes *

The global file locking mechanism is now fully fixed and passes all NFS lock tests from the Connectathon suite.

* AVX2 *

Erasure code goal computing routines now take full advantage of AVX2 processor extensions.

* MinGW *

LizardFS now builds cleanly under MinGW cross-compilation again.

* Chunkserver options *

Replication limits are now fully configurable in the chunkserver config.

Also, the chunk test (a.k.a. scrubbing) period now has 1 millisecond precision instead of the previous 1 second, which lets users turn on more aggressive scrubbing with a simple chunkserver reload.
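A fragment of the chunkserver configuration could then look like this (a sketch: HDD_TEST_FREQ is the long-standing chunk test frequency option, and fractional values rely on the new 1 ms precision; check the configuration file shipped with your version):

    # run the chunk test (scrubbing) loop every 0.1 s instead of a whole second
    HDD_TEST_FREQ = 0.1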

https://github.com/lizardfs/lizardfs/releases/tag/v3.12.0-rc1

LizardFS on OpenNebula conference

Michal Bielicki showing how to connect a 1.9 PB cluster to NodeWeaver.

Follow the link to see the full video.

New LizardFS Brochure

LizardFS in a nutshell.

Check out our new brochure, highlighting features, benefits and possible use cases!

LizardFS Brochure

LizardFS Expands Global Presence in United States with Enterprise Support

We are pleased to welcome Michael Kozlowski to the team. He will be responsible for scaling the sales team and growing markets across the United States.

With his extensive experience, we are sure he will make a huge impact.

We are excited to work with you, Mike!

Read more:

http://www.pr.com/press-release/732363