Update on LizardFS project

As most of you will have noticed, the LizardFS project has been lacking commitment to the community, and to development, for the past two years or so. It’s time for a change, and I think you will agree. Very recently the project and the company got new owners, new management and new developers. I hope the community still has a glimmer of faith in LizardFS. We have a lot of plans and ideas for getting LizardFS back to its former glory and way beyond that.

First, I hope you will understand the mess we have taken over, including a large financial burden that we need to work our way out of. With a new team, we will need a little patience from the community while they get up to speed. 3.13.0-rc1 was a total disaster, but it does have lots of good stuff in it, along with basically 4 major bugs plus lots of minor ones. So we plan to continue forward from there (remember, 3.12 is rock solid for any production cluster in the meantime), and we will make 3.13 the way it should have been in the first place.

What I propose is that we are totally transparent with our roadmap and flexible enough to listen to the community’s needs. Below is our plan until the end of the year and for the first quarter of 2020.

End of October: release of 3.13.0-rc2 with 2 major bug fixes (issues #780 and #662), plus lots of minor bug fixes and several enhancements. End of December: release of the full, bulletproof 3.13 with everything that was promised before, but working this time.

In the second quarter we will deliver the first installment of the agama project, a new mount that will give you at least 3 times better performance than 3.12 ever produced.

Going further at the moment would be dreaming and just blah blah blah. I prefer to keep it realistic, follow through with what has been promised, and let the community get behind the project again; then we can plan the rest of 2020 together.

I hope this shows a commitment from us to the project.

If any of you in the community would like to share knowledge and experience with our devs, it can only help their learning process and also help build a stronger community, so we have opened a Gitter room for direct communication with the team. As always, any help with bug fixing, enhancements, functionality or testing that the community would like to take part in will be graciously accepted.

Let’s make LizardFS great again!

LizardFS is now available on openSUSE as well.

LizardFS for openSUSE

LizardFS is in the official openSUSE repositories, in the filesystems subproject!

Download, install and enjoy!

https://build.opensuse.org/package/show/filesystems/lizardfs
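If you would like to try it straight away, installation could look roughly like the sketch below. The repository URL follows the standard OBS pattern and the package name lizardfs-client is assumed from upstream packaging, so adjust both for your openSUSE release:

    # add the filesystems repository (URL pattern assumed; pick your release)
    zypper addrepo https://download.opensuse.org/repositories/filesystems/openSUSE_Leap_15.1/filesystems.repo
    zypper refresh
    # package name assumed from upstream packaging (lizardfs-client, lizardfs-master, ...)
    zypper install lizardfs-client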

LizardFS in the QNAP Club

Have you ever run out of storage on your QNAP box?

If you would like to connect a few of them together and create a scale-out storage cluster, it has never been easier.

LizardFS can now be downloaded directly from the QNAP Club.

Enjoy!

https://qnapclub.eu/en/qpkg/602

LizardFS & Proxmox

Frankfurt here we come!

In just 9 days, at the Proxtalks conference, you will get a chance to learn how to use LizardFS as a shared directory for Proxmox VE.

Join us at The Squaire, right at the airport, to learn about distributed, parallel, scale-out storage for virtualisation and containers.

Bring your laptops – at the end we will use them to build a LizardFS cluster together.

Hope to see you there.


LizardFS & NFON

LizardFS is becoming a standard for distributed data storage in the telecom industry.

Overview

Headquartered in Munich, NFON AG is the only pan-European cloud PBX provider – counting more than 15,000 companies across 13 European countries as customers. The cloud telephone system offers over 150 functions as well as a seamless integration of premium solutions which results in high-quality communication. NFON is the new freedom of business communication.

Rather than being stored on-premises at its customers’ offices or data centers, the NFON cloud PBX operates in the cloud, in several external high-performance computing data centers that are reached via a fully encrypted internet connection. NFON promises a 24/7 service with enterprise-grade availability so that these businesses can be consistently available for their customers and business partners.

“We were impressed by the stability and fault tolerance LizardFS provided in our tests. The cluster setup was quite easily integrated in our configuration management, too. Another cluster, for a different purpose, was set up within minutes.”

Markus Stumpf
Vice President Operations and Infrastructure
NFON

Read the full case study:

https://lizardfs.com/case-studies/nfon/

Release Candidate of LizardFS 3.13 with built-in High Availability finally out!

Dear Users,

3.13.0-rc1 (release candidate) is officially out!

Featuring:

  • uRaft High Availability
  • fixes to EC handling
  • nfs-ganesha plugin changed to use only C code
  • reduced number of secondary group retrievals
  • new fuse3 client
  • many fixes

Detailed info:

  • uRaft HA

uRaft is an HA solution designed for use with LizardFS. It allows seamless switching of the master server in case of hardware failure. More information about uRaft is available in the LizardFS Handbook.
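For a flavour of the setup, a minimal uraft configuration for a three-node cluster might look like the sketch below. The option names follow the LizardFS Handbook, while the hostnames, addresses and interface are placeholders; treat the Handbook as the authoritative reference:

    # lizardfs-uraft.cfg (sketch) - same file on every node, URAFT_ID differs
    URAFT_NODE_ADDRESS = node-a.example.com
    URAFT_NODE_ADDRESS = node-b.example.com
    URAFT_NODE_ADDRESS = node-c.example.com
    URAFT_ID = 0                           # 0, 1 or 2, depending on the node
    URAFT_FLOATING_IP = 192.168.10.100     # clients always connect to this IP
    URAFT_FLOATING_NETMASK = 255.255.255.0
    URAFT_FLOATING_IFACE = eth0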

  • fixes to EC handling

After extensive tests, we decided to improve the mechanism for calculating parities greater than 4, e.g. EC(6,5). After this upgrade, the system will show chunks with such parities as endangered until it automatically recalculates them.
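For context, a goal like the EC(6,5) mentioned above is defined in mfsgoals.cfg on the master and assigned with the lizardfs client tool. A quick sketch, where the goal id, name and directory are arbitrary examples:

    # mfsgoals.cfg (fragment): 6 data parts + 5 parity parts
    11 ec65 : $ec(6,5)

    # assign the goal recursively to a directory on a mounted LizardFS
    lizardfs setgoal -r ec65 /mnt/lizardfs/archive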

  • nfs-ganesha plugin changed to use only C code

In preparation for moving the LizardFS nfs-ganesha plugin into the official nfs-ganesha repository, we had to remove all occurrences of C++ code and replace them with plain C.

  • reduced number of secondary group retrievals

In LizardFS we introduced handling of secondary groups. Unfortunately, the function used to retrieve secondary groups in the FUSE library consumed a lot of CPU resources. By removing unnecessary calls to this function, mount performance increased significantly.

  • added fuse3 client

LizardFS now includes a mount3 client which uses the FUSE3 library. Thanks to new features in FUSE3, the mount now performs much better in many scenarios. Here are the most important changes visible to LizardFS users, with a sample invocation after the list:

  • the big_writes option is now enabled by default (and is no longer recognized as a parameter)
  • added a writeback_cache option, which improves performance significantly on kernel 3.14 and later
  • increased read/write performance (especially for small operations)
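A minimal sketch of mounting with the FUSE3 client, assuming the binary ships as mfsmount3 (as in the 3.13 packages) and using a placeholder host and mount point; the option spellings are taken from the notes above:

    # mount a LizardFS filesystem with the FUSE3 client
    # big_writes is always on; writeback_cache needs kernel 3.14 or later
    mfsmount3 -o mfsmaster=mfsmaster.example.com -o writeback_cache /mnt/lizardfs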

Because most Linux distributions don’t include the FUSE3 library, we have built FUSE3 packages and made them available on the LizardFS website.


Looking for Partners

Hostersi implemented LizardFS as storage to simplify administration and increase the performance of the Webankieta.pl platform

Webankieta is a system for creating questionnaires. It serves such prestigious clients as ING Bank Slaski, Deutsche Bank, BZWBK, Itaka, Medicover, PKP, PZU, Danone, Jysk and Polska Press Grupa.

Implementation

Hostersi first configured the new infrastructure and migrated the systems to a new data center in order to eliminate some of the bottlenecks and challenges the platform was experiencing.

The changes implemented by Hostersi resulted in full redundancy and High Availability. That means that even if a substantial part of the customer’s infrastructure should fail, users would not even notice it.

In order to increase the performance of the applications, Hostersi implemented the HTTP/2 protocol. Another part of the project was to provide fast access to the platform from anywhere by implementing a CDN (Content Delivery Network). The level of security was increased by building in DDoS prevention mechanisms. Upgrading the overall infrastructure also eliminated major security gaps related to Meltdown.

The last (but not least) part of the project was the implementation of the ELK Stack log management tool set, which consists of Elasticsearch (text search engine), Logstash (log aggregation) and Kibana (visualization).

The parallel, distributed, geo-redundant file system LizardFS was used for storing and aggregating data from many applications before the data gets to ELK.

Thanks to that configuration (Elasticsearch plus LizardFS), Hostersi managed to build storage with search capabilities that is compliant with the GDPR.

For security reasons, data aggregation is done from flat files, which are later processed by parsers in Logstash. The whole process is done this way so that the data can be returned to the canonical state of a log.
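As a hypothetical illustration of that pipeline, a Logstash configuration reading flat files from a LizardFS mount could look like the sketch below; the paths, grok pattern and Elasticsearch host are invented for the example:

    # read the flat log files aggregated on the LizardFS mount
    input {
      file { path => "/mnt/lizardfs/logs/*.log" }
    }
    # parse each line back into structured fields
    filter {
      grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
    }
    # index the reconstructed events into Elasticsearch
    output {
      elasticsearch { hosts => ["localhost:9200"] }
    }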

Thanks to LizardFS, Hostersi can scale both up and down simply by adding a drive or a node – the system will automatically rebalance itself.
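To illustrate how little is involved: on a chunkserver the storage directories are listed in mfshdd.cfg, so growing a node is roughly the sketch below (the paths are examples, and the config file location can vary by distribution):

    # mfshdd.cfg: one storage directory per line
    /mnt/chunks1
    /mnt/chunks2    # the newly added drive

    # tell the chunkserver to pick up the new directory
    mfschunkserver reload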

LizardFS being totally hardware agnostic enables Hostersi to use their existing infrastructure, and it gives them the possibility to escape vendor lock-in when buying new commodity components.

LizardFS and Elasticsearch are open source products; you can install them on existing infrastructure. They enable you to create a platform for content management and storage in a cost-effective way. High availability and extreme scalability are just a few of the added-value features provided by LizardFS.