LizardFS is now available on openSUSE as well.

LizardFS for openSUSE

LizardFS is now in the official openSUSE repositories, in the filesystems subproject!

Download, install and enjoy!

LizardFS in QNAP club.

Have you ever run out of storage on your QNAP box?

If you would like to connect a few of them together and create a scale-out storage cluster, it has never been easier.

LizardFS can now be directly downloaded from QNAP club.


Running multiple chunkservers on the same machine.

Chunkserver as a service


  1. Prepare another mfschunkserver.cfg file for the new chunkserver in a different path, let’s call it /etc/chunkserver2.cfg
  2. Prepare another mfshdd.cfg file for the new chunkserver in a different path, let’s call it /etc/hdd2.cfg
  3. Set HDD_CONF_FILENAME in the chunkserver2.cfg file to the path of the newly prepared mfshdd.cfg file, in this example /etc/hdd2.cfg
  4. Set CSSERV_LISTEN_PORT to non-default, unused port (like 9522) in /etc/chunkserver2.cfg
  5. Run the second chunkserver with mfschunkserver -c /etc/chunkserver2.cfg
  6. Repeat if you need even more chunkservers on the same machine (not very recommended though)
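The steps above can be sketched as a short shell session. To keep the sketch self-contained, a scratch directory stands in for the /etc paths from the text, empty files stand in for the default config files, and /mnt/disk2 is an illustrative chunk directory:

```shell
# Sketch of steps 1-5 above, using a scratch directory instead of /etc
dir=$(mktemp -d)

# 1-2. Start from (stand-ins for) the default mfschunkserver.cfg / mfshdd.cfg
touch "$dir/chunkserver2.cfg" "$dir/hdd2.cfg"

# 3. Point the second chunkserver at its own disk list
echo "HDD_CONF_FILENAME = $dir/hdd2.cfg" >> "$dir/chunkserver2.cfg"

# 4. Use a non-default, unused port
echo "CSSERV_LISTEN_PORT = 9522" >> "$dir/chunkserver2.cfg"

# List the directories the second chunkserver should use for chunk storage
echo "/mnt/disk2" > "$dir/hdd2.cfg"

# 5. Start the second chunkserver (only if the binary is installed)
command -v mfschunkserver >/dev/null && mfschunkserver -c "$dir/chunkserver2.cfg" start || true
```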

P.S. If you need to run the second chunkserver as a service rather than just a daemonized process, you will have to prepare a simple systemd/init.d script yourself for now; there is no out-of-the-box solution yet. If you do prepare something like this, feel free to contribute it via a pull request.
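For reference, a minimal systemd unit for the second chunkserver might look like the sketch below. This is an illustrative starting point, not an official file: the unit name, binary path and Type=forking are assumptions to adjust to your installation. On a real system the file would go to /etc/systemd/system/lizardfs-chunkserver2.service, followed by `systemctl daemon-reload` and `systemctl enable --now lizardfs-chunkserver2.service`; the sketch writes it to a temp file so it can be inspected anywhere:

```shell
# Illustrative unit file for a second chunkserver instance; adjust the
# binary path and config path to your installation before deploying.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=LizardFS chunkserver (second instance)
After=network.target

[Service]
# mfschunkserver daemonizes on "start", hence Type=forking (assumption)
Type=forking
ExecStart=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg start
ExecStop=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
```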




You can set up multiple chunkservers on the same machine very easily with Docker.

docker run -d --restart always --net=host -e MASTER_HOST=localhost \
    --name=chunk_HD01 \
    -e MFS_LABEL=HD01 \
    -v /mnt/HD01/:/mnt/HD01:rw \
    -e ACTION=chunk hradec/docker_lizardfs_git

docker run -d --restart always --net=host -e MASTER_HOST=localhost \
    --name=chunk_HD02 \
    -e MFS_LABEL=HD02 \
    -v /mnt/HD02/:/mnt/HD02:rw \
    -e ACTION=chunk hradec/docker_lizardfs_git
where your first hard drive is mounted at /mnt/HD01, the second at /mnt/HD02, and so on…

This Docker image lets you set ANY of the mfschunkserver.cfg options by passing -e MFS_<option name> on the command line (for example, the chunkserver port number: -e MFS_CSSERV_LISTEN_PORT=9461).

Mounting the local /mnt/HD01 path at the container path /mnt/HD01 (-v /mnt/HD01/:/mnt/HD01:rw) triggers the image to auto-generate the mfshdd.cfg file from whatever folders show up at /mnt/*.

You can mount just one path for each chunkserver (as demonstrated above), or as many paths as you want per chunkserver, covering both situations: one chunkserver for multiple disks (the LizardFS default setup) or multiple chunkservers for multiple disks!

The --net=host option forces the container to use the machine's physical NICs, so there is no overhead from a virtual LAN layer.

The image can be used to start any of the servers, including metalogger, shadow, master and cgi. Just select the server with -e ACTION=<server name>.

Running the image like this:
docker run -ti --rm hradec/docker_lizardfs_git

will display a quick help on how to use it!


*edited comment from:

LizardFS & Proxmox

Frankfurt here we come!

In just 9 days at the Proxtalks conference you will get a chance to learn how to use LizardFS as a shared directory for Proxmox VE.

Join us at The Squaire – right at the airport to learn about distributed, parallel, scale-out storage for virtualisation and containers.

Bring your laptops – at the end we will use them to build a LizardFS cluster together.

Hope to see you there.


LizardFS & NFON

LizardFS is becoming a standard for Distributed Data Storage in the Telecom industry.



Headquartered in Munich, NFON AG is the only pan-European cloud PBX provider – counting more than 15,000 companies across 13 European countries as customers. The cloud telephone system offers over 150 functions as well as a seamless integration of premium solutions which results in high-quality communication. NFON is the new freedom of business communication.

Rather than being stored on-premises at its customers’ offices or data centers, the NFON cloud PBX operates in the cloud, in several external high-performance computing data centers that are reached via a fully encrypted internet connection. NFON promises a 24/7 service with enterprise-grade availability so that these businesses can be consistently available for their customers and business partners.

“We were impressed by the stability and fault tolerance LizardFS provided in our tests. The cluster setup was quite easily integrated in our configuration management, too. Another cluster, for a different purpose, was set up within minutes.”

Markus Stumpf
Vice President Operations and Infrastructure

Read the full case study:

Release Candidate of LizardFS 3.13 with built-in High Availability finally out!

Dear Users,

3.13.0-rc1 (release candidate) is officially out!


  • uRaft High Availability
  • fixes to EC handling
  • nfs-ganesha plugin changed to use only C code
  • reduced number of secondary group retrievals
  • new fuse3 client
  • many fixes

Detailed info:

  • uRaft HA *

uRaft is an HA solution designed for use with LizardFS. It allows seamless switching of the master server in case of hardware failure. More information about uRaft is available in the LizardFS Handbook.
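As a rough illustration of how such a setup is described, each HA node runs a master plus the uraft daemon, and the cluster shares a floating IP that clients always connect to. The configuration keys below are assumptions based on the Handbook, so treat this as a sketch rather than a verified file and consult the Handbook for the authoritative syntax:

```shell
# Illustrative sketch of a 3-node lizardfs-uraft.cfg; all key names are
# assumptions -- the LizardFS Handbook is the authoritative reference.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
URAFT_NODE_ADDRESS = node1.example.com
URAFT_NODE_ADDRESS = node2.example.com
URAFT_NODE_ADDRESS = node3.example.com
URAFT_ID = 0                       # unique on each node
URAFT_FLOATING_IP = 192.168.0.100  # address clients always connect to
URAFT_FLOATING_NETMASK = 255.255.255.0
URAFT_FLOATING_IFACE = eth0
EOF
```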

  • fixes to EC handling *

After extensive tests, we decided to improve the mechanism for calculating parities greater than 4, e.g. EC(6,5). After this upgrade, the system will show chunks with such parities as endangered until it automatically recalculates them.

  • nfs-ganesha plugin changed to use only C code *

In preparation for moving the LizardFS nfs-ganesha plugin to the official nfs-ganesha repository, we had to remove all occurrences of C++ code and replace them with plain C.

  • reduced number of secondary group retrievals *

In LizardFS we introduced the handling of secondary groups. Unfortunately, the function that retrieves secondary groups in the FUSE library used a lot of CPU resources. By removing unnecessary calls to this function, mount performance increased significantly.

  • added fuse3 client *

LizardFS now includes a mount3 client which uses the FUSE3 library. Thanks to new features in FUSE3, the mount now performs much better in many scenarios. Here are the most important changes visible to LizardFS users:

  • big_writes option is now enabled by default (and is no longer recognized as a parameter)
  • added the writeback_cache option, which significantly improves performance with kernel 3.14 and later
  • increased read/write performance (especially for small operations)

Because most Linux distributions don’t include the FUSE3 library yet, we have built FUSE3 packages and made them available on the LizardFS website.
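Assuming the FUSE3 client installs as mfsmount3 (the binary name and option spelling are assumptions here; check your packaging), a mount using the new writeback cache might look like this:

```shell
# Mount a LizardFS filesystem with the FUSE3 client; mfsmount3 and the
# master address are assumptions, writeback_cache is the option
# described above. The command only runs if the binary is installed.
mnt=$(mktemp -d)
command -v mfsmount3 >/dev/null && \
  mfsmount3 "$mnt" -o mfsmaster=master.example.com -o writeback_cache || true
```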


Looking for Partners

Storage for OpenNebula Cloud

You can now use LizardFS as a Cloud Storage in your OpenNebula deployments.

Scaling it to petabytes and beyond has never been easier: just add a drive or a node and the system will balance itself automatically.


LizardFS TM drivers for OpenNebula

Based on the original OpenNebula code, changes by Carlo Daffara (NodeWeaver Srl) 2017-2018

For information and requests:

To contribute bug patches or new features, you can use the GitHub Pull Request model.

Code and documentation are under the Apache License 2.0 like OpenNebula.

This is the same set of drivers used within our NodeWeaver platform, compatible with LizardFS 3.x and OpenNebula 5.4.x.


  • the lizardfs command must be executable by the user that launches the OpenNebula probes (usually oneadmin) and available in the default path
  • the LizardFS datastore must be mounted and reachable, with the same path, from all the nodes where the TM drivers are in use
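Both requirements can be checked quickly on each node. The datastore path below is an illustrative assumption and depends on how your OpenNebula deployment is laid out:

```shell
# Per-node sanity check for the two requirements above
ok=1
# 1. lizardfs CLI must be on the probe user's PATH
command -v lizardfs >/dev/null || { echo "lizardfs CLI not found in PATH"; ok=0; }
# 2. the datastore must be mounted at the same path everywhere;
#    /var/lib/one/datastores/lizardfs is an illustrative path
ds=/var/lib/one/datastores/lizardfs
[ -d "$ds" ] || { echo "datastore path $ds is not mounted on this node"; ok=0; }
[ "$ok" = 1 ] && echo "node looks ready for the LizardFS TM drivers" || true
```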

The TM drivers are derived from the latest 5.4.3 “shared” TM drivers, with all the copies and other operations modified to use the live snapshot feature of LizardFS.

To install in OpenNebula

Copy the drivers into /var/lib/one/remotes/tm/lizardfs

# fix ownership

# if you have changed the default user and group of OpenNebula, substitute oneadmin.oneadmin with <installationuser>.<installationgroup>

chown -R oneadmin.oneadmin /var/lib/one/remotes/tm/lizardfs
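After the files are in place, the driver typically also has to be registered in /etc/one/oned.conf by adding lizardfs to the TM_MAD driver list, followed by a restart of oned. The snippet below demonstrates the edit on a sample copy (the sample ARGUMENTS content is illustrative; the real line varies between OpenNebula versions, so inspect your file before editing):

```shell
# Demonstrate registering the lizardfs TM driver; on a real node edit
# /etc/one/oned.conf (after backing it up) and restart oned afterwards.
conf=$(mktemp)
cat > "$conf" <<'EOF'
TM_MAD = [
    EXECUTABLE = "one_tm",
    ARGUMENTS = "-t 15 -d dummy,shared,ssh,ceph,dev"
]
EOF
# Append "lizardfs" to the transfer-manager driver list
sed -i 's/\(ARGUMENTS = "-t 15 -d [^"]*\)"/\1,lizardfs"/' "$conf"
grep ARGUMENTS "$conf"
```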

Visit GitHub

Hostersi implemented LizardFS as storage to simplify administration and increase the performance of the platform

Webankieta is a system for creating questionnaires. It serves such prestigious clients as ING Bank Slaski, Deutsche Bank, BZWBK, Itaka, Medicover, PKP, PZU, Danone, Jysk, Polska Press Grupa.


Hostersi first configured the new infrastructure and migrated the systems to a new data center in order to eliminate some of the bottlenecks and challenges the customer was experiencing.

The changes implemented by Hostersi resulted in full redundancy and High Availability. That means that even if a substantial part of the customer’s infrastructure should fail, users would not even notice it.

In order to increase the performance of the applications, Hostersi implemented the HTTP/2 protocol. Another part of the project was to provide fast access to the platform from anywhere by implementing a CDN (Content Delivery Network). The level of security was increased by building in DDoS prevention mechanisms, and upgrading the overall infrastructure eliminated major security gaps related to Meltdown.

The last (but not least) part of the project was the implementation of the ELK Stack log management tool set, which consists of Elasticsearch (text search engine), Logstash (log aggregation) and Kibana (visualization).

The parallel, distributed, geo-redundant file system LizardFS was used for storing and aggregating data from many applications before it gets to the ELK Stack.

Thanks to that combination (Elasticsearch and LizardFS), Hostersi managed to build storage with search capabilities that are compliant with GDPR.

For security reasons, data aggregation is done from flat files, which are later processed by parsers in Logstash. The whole process is done this way so that data can be returned to the canonical state of a log.

Thanks to LizardFS, Hostersi can scale both up and down simply by adding a drive or a node; the system will automatically balance itself.

LizardFS being totally hardware agnostic enables Hostersi to use their existing infrastructure and gives them the possibility to avoid vendor lock-in when buying new commodity components.

LizardFS and Elasticsearch are open source products that you can install on existing infrastructure. They enable you to create a platform for content management and storage in a cost-effective way. High Availability and extreme scalability are just a few of the added-value features provided by LizardFS.