Running multiple chunkservers on the same machine.

Chunkserver as a service


  1. Prepare another mfschunkserver.cfg file for the new chunkserver in a different path, let’s call it /etc/chunkserver2.cfg
  2. Prepare another mfshdd.cfg file for the new chunkserver in a different path, let’s call it /etc/hdd2.cfg
  3. Set HDD_CONF_FILENAME in the chunkserver2.cfg file to the path of the newly prepared mfshdd.cfg file, in this example /etc/hdd2.cfg
  4. Set CSSERV_LISTEN_PORT to a non-default, unused port (like 9522) in /etc/chunkserver2.cfg
  5. Run the second chunkserver with mfschunkserver -c /etc/chunkserver2.cfg
  6. Repeat if you need even more chunkservers on the same machine (not very recommended though)
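The steps above can be sketched as a shell session. This is only an illustration: the stub config contents are placeholders, and a scratch directory stands in for /etc so the sketch runs unprivileged; on a real system use the /etc paths from the list.

```shell
# Scratch directory standing in for /etc; use the real /etc paths in production.
dir=$(mktemp -d)

# Steps 1-2: start from copies of the stock config files (stub contents here).
printf 'CSSERV_LISTEN_PORT = 9422\n' > "$dir/chunkserver2.cfg"
printf '/mnt/disk2\n' > "$dir/hdd2.cfg"

# Step 3: point the second chunkserver at its own mfshdd.cfg.
printf 'HDD_CONF_FILENAME = %s/hdd2.cfg\n' "$dir" >> "$dir/chunkserver2.cfg"

# Step 4: switch to a non-default, unused port.
sed -i 's/^CSSERV_LISTEN_PORT.*/CSSERV_LISTEN_PORT = 9522/' "$dir/chunkserver2.cfg"

cat "$dir/chunkserver2.cfg"

# Step 5 (requires mfschunkserver to be installed):
# mfschunkserver -c "$dir/chunkserver2.cfg"
```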

P.S. If you need to run the second chunkserver as a service, not just as a daemonized process, you will have to prepare a simple systemd/init.d script yourself for now; there is no out-of-the-box solution yet. If you do prepare something like this, feel free to contribute it via a pull request.
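If you do go the systemd route, a minimal unit for the second chunkserver might look like the sketch below. The file name, binary path, and config path are all assumptions; adjust them to match your installation.

```ini
# Hypothetical /etc/systemd/system/mfschunkserver2.service
[Unit]
Description=LizardFS chunkserver #2
After=network.target

[Service]
Type=forking
# Paths are assumptions; adjust to where mfschunkserver and the config live.
ExecStart=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg start
ExecStop=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it with `systemctl enable --now mfschunkserver2`.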




You can set up multiple chunkservers on the same machine very easily with Docker.

docker run -d --restart always --net=host -e MASTER_HOST=localhost \
    --name=chunk_HD01 \
    -e MFS_LABEL=HD01 \
    -v /mnt/HD01/:/mnt/HD01:rw \
    -e ACTION=chunk hradec/docker_lizardfs_git

docker run -d --restart always --net=host -e MASTER_HOST=localhost \
    --name=chunk_HD02 \
    -e MFS_LABEL=HD02 \
    -v /mnt/HD02/:/mnt/HD02:rw \
    -e ACTION=chunk hradec/docker_lizardfs_git
where your first hard drive is mounted at /mnt/HD01, the second at /mnt/HD02, and so on.

This Docker image lets you set any of the mfschunkserver.cfg options by passing -e MFS_<option name> on the command line (for example, the chunkserver port number: -e MFS_CSSERV_LISTEN_PORT=9461).
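For illustration, here is a guess at how such an env-to-config mapping could work: every MFS_<OPTION> variable becomes an "<OPTION> = <value>" line in the chunkserver config. This is not the image's actual script, just a sketch of the idea.

```shell
# Sketch only: emulate mapping MFS_* environment variables to config lines.
export MFS_CSSERV_LISTEN_PORT=9461 MFS_LABEL=HD02

cfg=$(mktemp)
# Turn every MFS_FOO=bar in the environment into "FOO = bar".
env | sed -n 's/^MFS_\([A-Z_0-9]*\)=\(.*\)/\1 = \2/p' > "$cfg"

cat "$cfg"
```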

Mounting a local path at the same path inside the container (-v /mnt/HD01/:/mnt/HD01:rw) triggers the image to auto-generate the mfshdd.cfg file from whatever folders show up under /mnt/*.
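Presumably the generation step amounts to something like the following. This is a sketch, not the image's actual code, and a scratch directory stands in for /mnt.

```shell
# Sketch: build an mfshdd-style file from whatever directories exist under a root.
root=$(mktemp -d)            # stands in for /mnt inside the container
mkdir "$root/HD01" "$root/HD02"

# One line per mounted folder, which is the format mfshdd.cfg expects.
ls -d "$root"/*/ > "$root/mfshdd.cfg"
cat "$root/mfshdd.cfg"
```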

You can mount just one path per chunkserver (as demonstrated above), or as many paths as you want for a single chunkserver, covering both situations: one chunkserver for multiple disks (the LizardFS default setup), or multiple chunkservers for multiple disks!

The --net=host option forces the container to use the machine's physical NICs, so there is no overhead from a virtual LAN layer.

The image can be used to start any of the servers, including metalogger, shadow, master and cgi. Just set the server name in -e ACTION=<server name>.

Running the image like this:
docker run -ti --rm hradec/docker_lizardfs_git

will display a quick help on how to use it!



Storage for OpenNebula Cloud

You can now use LizardFS as a Cloud Storage in your OpenNebula deployments.

Scaling it to petabytes and beyond has never been easier: just add a drive or a node and the system will autobalance itself.


LizardFS TM drivers for OpenNebula

Based on the original OpenNebula code, changes by Carlo Daffara (NodeWeaver Srl) 2017-2018

For information and requests:

To contribute bug patches or new features, you can use the GitHub Pull Request model.

Code and documentation are under the Apache License 2.0 like OpenNebula.

This is the same set of drivers used within our NodeWeaver platform, compatible with Lizardfs 3.x and OpenNebula 5.4.x

Requirements:


  • the lizardfs command must be executable by the user that launches the OpenNebula probes (usually oneadmin) and available in the default path
  • the lizardfs datastore must be mounted and reachable from all the nodes where the TM drivers are in use, using the same path on each node

The TM drivers are derived from the latest 5.4.3 “shared” TM drivers, with all the copies and other operations modified to use the live snapshot feature of LizardFS.

To install in OpenNebula

Copy the drivers into /var/lib/one/remotes/tm/lizardfs

# fix ownership

# if you have changed the default user and group of OpenNebula, substitute oneadmin.oneadmin with <installationuser>.<installationgroup>

chown -R oneadmin.oneadmin /var/lib/one/remotes/tm/lizardfs
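After copying the drivers, OpenNebula still needs to be told about them. A hypothetical datastore template using the drivers could look like this; all names and values are examples only, and per the standard OpenNebula procedure for custom TM drivers you also have to add lizardfs to the TM_MAD arguments in oned.conf.

```
# lizardfs.ds - hypothetical datastore template; names are examples
NAME    = lizardfs_images
DS_MAD  = fs
TM_MAD  = lizardfs

# register it (run as oneadmin):
#   onedatastore create lizardfs.ds
```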


Distributed, parallel, scale-out file system accessible via NFS protocol.

Lizard server

There are many ways to access a distributed file system. For us, the favorite is through native clients; it does not matter whether the client runs Linux, Mac or Windows.

But what if you cannot install third-party software on your client?

Or if you need storage for systems for which there is no client?

NFS might be the answer for you. The simplest way to use it would be to create a server/gateway. That solution has obvious drawbacks (lack of HA, limited performance, poor scalability).

We knew that we can do better.

So we did.

How does it work?

Let’s start with NFS 3.x

On each chunk server, there is an NFS server which enables clients to connect to the LizardFS cluster and read/write files via the NFS protocol. Now you can use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running, as long as it supports NFS you can mount up to 1 Exabyte of storage from a LizardFS cluster to your machine.
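For example, any NFS-capable client could then mount the cluster with an ordinary fstab entry like the one below; the hostname and mount point are placeholders.

```
# /etc/fstab - "chunkserver1" is a placeholder for one of your chunkservers
chunkserver1:/   /mnt/lizardfs   nfs   vers=3,nofail   0 0
```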

Some demanding users might immediately ask questions like: ok but what about chunkserver failure?

Well, if you are that demanding, you will not mind discussing a support contract with us to get not only peace of mind but also a truly highly available solution.

What about NFS 4.1 and pNFS?

The story is just getting more and more interesting here. LizardFS not only supports NFS 4.0 but also provides parallel reads and writes through the parallel Network File System (pNFS), plus you get High Availability as a bonus. The extras do not end there: thanks to NFS 4.x support, you can use Kerberos authentication for the clients.
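On the client side, a pNFS mount is just an NFS 4.1 mount. A hypothetical fstab entry, with the optional Kerberos flavour shown as sec=krb5, might look like:

```
# /etc/fstab - placeholders throughout; drop sec=krb5 if you are not using Kerberos
chunkserver1:/   /mnt/lizardfs   nfs4   minorversion=1,sec=krb5,nofail   0 0
```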

Obvious use cases of NFS support

With pNFS

RedHat Enterprise Linux > 6.4

SuSE Linux Enterprise Server > 11 sp. 3

System communication


We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be observed in the solutions that are already capable of using pNFS, like:

  • oVirt
  • Proxmox
  • Redhat Virtualization
  • KVM on modern Linux systems
  • XEN on modern Linux Systems

We are also interested in seeing the results of tests with others:

  • VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS!)
  • Citrix XenServer (no pNFS)
  • HyperV (no pNFS)

UNIX Hosts

  • AIX
  • HP/UX
  • Solaris


Windows Hosts

  • Windows Server (no pNFS)
  • Windows (no pNFS)


Although NFS seems to be one of the most popular protocols, different solutions support different versions of it, and that has consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance.

So while having a unified environment in terms of communication protocols sounds really good, you need to first analyze which OSes are running in your infrastructure before making the final decision to go that way.

Fortunately, most of the time we have the option of using other protocols such as SMB or, when it is acceptable to install additional software on a client machine, of going with native clients.

Key differentiators between NFS versions

NFS3 – stateless protocol, supports only UNIX semantics, weak security, identification via UID/GID, no delegations.

NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.

pNFS – all the advantages of NFS4 plus parallelised access to resources.

Which platforms support which versions and features of NFS

  • RedHat 6.3 – NFS 4.1, native pNFS support; up to 6.5 there were problems with NFS in general on RH
  • SuSE SLES 11 sp. 3 – NFS 4.1, native pNFS support
  • Linux Kernel 2.6.39 – NFS 4.1, native pNFS support; requires the proper version of nfs-utils to work
  • Ubuntu 14.04 – NFS 4.1, native pNFS support; some broken support from 12.04
  • VMWare 6.5 – NFS 4.1, no pNFS (not implemented); VMWare seems to have had problems implementing proper NFS support for ages now
  • Citrix XenServer 7 – NFS 4.1, no pNFS (not implemented)
  • oVirt – native pNFS support
  • Proxmox 4 – NFS 4.1, native pNFS support; based on Debian 9, so full support for pNFS
  • Redhat Virtualization Server – native pNFS support if based on RHEL > 6.5
  • XEN – depends on the OS; works on RHEL/derivatives > 6.4, SLES > 11.3, Debian > 8 and Ubuntu >= 14.04. Not sure which others.
  • Oracle VM 3.4 – native pNFS support if running on RHEL/Oracle Linux > 6.4
  • Windows Server 2016 – NFS 3, no pNFS; Windows only supports NFS v3
  • Solaris 11 – NFS 4, no pNFS; a prototype was made available a few years ago when OpenSolaris was still alive, but as of today Solaris has no support for pNFS
  • AIX 6 – NFS 4, no pNFS (not implemented)
  • Amazon EFS – NFS 4.1, no pNFS (not implemented)
  • Oracle dNFS 12CR2 – NFS 4.1, native pNFS support; some minor problems limit full performance, but it is still faster than NFSv3. Oracle has an NFS implementation inside its RDBMS; it supports pNFS from 12cR2. The support is from 2017 and still has some little quirks.
  • OpenStack Icehouse – NFS 4.1; as of the Icehouse release, the NFS driver (and other drivers based off it) will attempt to mount shares using version 4.1 of the NFS protocol (including pNFS)