LizardFS Software Defined Storage is a distributed, scalable, fault-tolerant, and highly available file system. It allows users to combine disk space located on several servers into a single namespace which is visible on Unix-like and Windows systems in the same way as other file systems.
LizardFS keeps files secure by storing all data in multiple replicas spread over the available servers. It can also be used to build affordable storage, because it runs without any problems on commodity hardware.
In LizardFS we understand that you can achieve the best performance of your cluster only if you know everything about it. That is why we want to teach you everything you should know about LizardFS. The architecture and configuration of our distributed file system will no longer be a mystery for you. Do not forget that after our training you can react faster and better to anything that may happen to your cluster.
The configuration is one of the most crucial aspects of using LizardFS. Software that is not configured properly does not perform at its best. Do not risk a failure of your data storage system caused by an elementary mistake made during configuration. We have configured LizardFS hundreds of times and we want to do the job for you. With us, you have a 100% guarantee that your software-defined storage solution is configured in the best possible way.
You don’t need a Master’s degree in Data Storage to use LizardFS – our technical team will take care of the cluster deployment and configuration. We will also be happy to teach you how it should be done and show you all the good practices.
LizardFS gives you access to your data not only from Unix-like systems but also from a Windows PC or laptop. That may be crucial if you are running a big company with loads of data and all your employees work on Windows machines. They don’t have to get a degree in Linux – our Windows Client will make everything work. It makes your computer see the LizardFS cluster as an external drive, which means you will see your cluster in the file manager the same way you see your USB stick. Could it be any more comfortable? If you are interested in a Windows Client license, reach out to us using our contact form to get a free trial and pricing.
In LizardFS we have one goal: offer the best possible service. We operate globally providing 24/7 support from the best developers who created the code of LizardFS. We can help you improve performance, help with configuration, train you and solve almost every problem with the code, servers or nodes. Have peace of mind with our helpful, passionate support team on standby.
Once the storage capacity of a Disk Array was full, the only way to continue to store more data was to buy another costly “shelf” from the same producer.
When the possibility of adding additional elements was fully exhausted, one had to migrate all the data to a larger, usually much more expensive Disk Array. Thus, one was left with an old and now redundant Disk Array.
This technology turned out to be expensive, generating large costs for both ongoing maintenance and scaling.
LizardFS makes it possible to build storage on multiple servers, regardless of their manufacturer.
Increasing storage capacity is accomplished by adding additional server(s) with hard drives to the cluster.
This type of solution ensures readiness for exponential growth of storage whilst using servers from your preferred vendor.
This significantly decreases overall data storage costs and minimizes the need for maintenance.
Your chunkservers are pretty simple to set up. Usually, if your /etc/hosts file is set up correctly with the address of the master server and you do not require labeling, the mfschunkserver.cfg file can stay as it is.
↳ Check out step-by-step instructions in our Documentation.
Once this is set up, your chunkserver is ready and actively taking part in your LizardFS cluster.
To stop LizardFS from using a directory, just add a * to the beginning of the corresponding line in mfshdd.cfg. LizardFS will replicate all the data from it elsewhere. Once the web interface shows that all data has been safely copied away, you can remove the line from the file and then remove the associated device from your chunkserver.
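For illustration, an mfshdd.cfg in this state might look like the following sketch (the mount points are example assumptions, not a required layout):

```
# mfshdd.cfg – directories used for chunk storage (example paths)
/mnt/hd01
/mnt/hd02
# prefixing a line with * marks the directory for removal;
# LizardFS replicates its data elsewhere before you delete the line
*/mnt/hd03
```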
There are many ways to access a distributed file system. Our favourite is through native clients – it does not matter whether it is Linux, Mac, or Windows.
But what if you cannot install third-party software on your client?
Or if you need storage for systems for which there is no client?
NFS might be the answer for you. The simplest way to use it would be to create a server/gateway. That solution has obvious drawbacks (lack of HA, poor performance, poor scalability).
We knew that we can do better.
So we did.
How does it work?
Let’s start with NFS 3.x.
On each chunk server, there is an NFS server which enables clients to connect to the LizardFS cluster and read/write files via the NFS protocol. Now you can use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running, as long as it supports NFS you can mount up to 1 Exabyte of storage from a LizardFS cluster to your machine.
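As an illustration, a client’s /etc/fstab entry for an NFSv3 mount of the cluster might look like this (the server name and mount point are placeholders, not documented values):

```
# /etc/fstab – NFSv3 mount of a LizardFS export (illustrative values)
chunkserver1:/   /mnt/lizardfs   nfs   vers=3,nofail   0   0
```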
Some demanding users might immediately ask: OK, but what about chunkserver failure?
Well, if you are that demanding, you will not mind discussing a support contract with us to get not only peace of mind but also a truly highly available solution.
What about NFS 4.1 and pNFS?
The story just gets more and more interesting here. LizardFS now not only supports NFS 4.0 but also provides parallel reads and writes through the parallel Network File System (pNFS), plus you get High Availability as a bonus. The extras do not end there: thanks to NFS 4.x support, you can use Kerberos authentication for clients.
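For comparison with the NFSv3 case, a hedged sketch of an NFSv4.1/pNFS mount entry (server name and paths are illustrative assumptions):

```
# /etc/fstab – NFSv4.1/pNFS mount of a LizardFS cluster (illustrative values);
# appending sec=krb5 to the options would enable Kerberos authentication
lizardfs-cluster:/   /mnt/lizardfs   nfs   vers=4.1,nofail   0   0
```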
Obvious use cases of NFS support
RedHat Enterprise Linux > 6.4
SuSE Linux Enterprise Server > 11 SP3
We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be observed in solutions that are already capable of using pNFS, like:
– Redhat Virtualization
– KVM on modern Linux systems
– XEN on modern Linux Systems
We are also interested in seeing the results of tests with others:
– VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS!)
– Citrix XenServer (no pNFS)
– HyperV (no pNFS)
– Windows Server (no pNFS)
– Windows (no pNFS)
Although NFS seems to be one of the most popular protocols, different solutions support different versions of it, and this has consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance. So while having a unified environment with regard to communication protocols sounds really good, you need to first analyze which operating systems are running on your infrastructure before making the final decision to go that way. Fortunately, most of the time there is the option of using other protocols like SMB, or, when it is acceptable to install additional software on a client machine, going with native clients.
Key differentiators between NFS versions
NFS3 – stateless protocol, supports only UNIX semantics, weak security, identification via UID/GID, no delegations.
NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.
pNFS – all the advantages of NFS4 plus parallelised access to resources.
1. Prepare another mfschunkserver.cfg file for the new chunkserver in a different path, let’s call it /etc/chunkserver2.cfg
2. Prepare another mfshdd.cfg file for the new chunkserver in a different path, let’s call it /etc/hdd2.cfg
3. Set HDD_CONF_FILENAME in chunkserver2.cfg file to the path of newly prepared mfshdd.cfg file, in this example /etc/hdd2.cfg
4. Set CSSERV_LISTEN_PORT to a non-default, unused port (like 9522) in /etc/chunkserver2.cfg
5. Run the second chunkserver with mfschunkserver -c /etc/chunkserver2.cfg
6. Repeat if you need even more chunkservers on the same machine (not recommended, though)
P.S. If you need to run the second chunkserver as a service rather than just a daemonized process, you will have to prepare a simple systemd/init.d script yourself for now; there are no out-of-the-box solutions yet. If you do prepare something like this, feel free to contribute via a pull request.
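Following the P.S. above, a minimal systemd unit for the second chunkserver might look like this. It is an untested sketch, not an official unit file; the binary path and config path are assumptions that mirror the example configuration:

```
[Unit]
Description=LizardFS chunkserver (second instance)
After=network.target

[Service]
Type=forking
# -c points at the alternate configuration prepared in step 1
ExecStart=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg start
ExecStop=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg stop
Restart=on-failure

[Install]
WantedBy=multi-user.target
```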
You can set up multiple chunkservers on the same machine very easily with Docker.
docker run -d --restart always --net=host -e MASTER_HOST=localhost \
-e MFS_CSSERV_LISTEN_PORT=9460 \
-e MFS_LABEL=HD01 \
-v /mnt/HD01/:/mnt/HD01:rw \
-e ACTION=chunk hradec/docker_lizardfs_git
docker run -d --restart always --net=host -e MASTER_HOST=localhost \
-e MFS_CSSERV_LISTEN_PORT=9461 \
-e MFS_LABEL=HD02 \
-v /mnt/HD02/:/mnt/HD02:rw \
-e ACTION=chunk hradec/docker_lizardfs_git
where your first hard drive is mounted at /mnt/HD01, the second at /mnt/HD02, and so on…
This Docker image lets you set any of the mfschunkserver.cfg options by passing an -e MFS_<OPTION> environment variable, as shown above with MFS_CSSERV_LISTEN_PORT.
Mounting the local /mnt/HD01 path at the container path /mnt/HD01 (-v /mnt/HD01/:/mnt/HD01:rw) triggers the image to auto-generate the mfshdd.cfg file from whatever folders show up at /mnt/*
You can mount just one path for each chunkserver (as demonstrated), or you can mount as many paths as you want for a chunkserver, covering both situations – one chunkserver for multiple disks (the LizardFS default setup), or multiple chunkservers for multiple disks!
The --net=host option forces the container to use the physical NICs in your machine, so there is no overhead from a virtual LAN layer.
The image can be used to start any of the servers, including metalogger, shadow, master and cgi. Just set the server type in -e ACTION=.
Running the image like this:
docker run -ti --rm hradec/docker_lizardfs_git
will display a quick help on how to use it!
The simplest way is to create a new metadata file. Go to your metadata directory on your current master server (look at the DATA_PATH in the mfsmaster.cfg file), then stop the master and create a new empty metadata file by executing:
echo -n "MFSM NEW" > metadata.mfs
Start the master and your cluster will be empty; all remaining chunks will be deleted.
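The reset procedure described above can be sketched as follows, assuming the default DATA_PATH of /var/lib/mfs (adjust to the value in your mfsmaster.cfg):

```shell
# Stop the master before touching metadata
mfsmaster stop
cd /var/lib/mfs
# Create a fresh, empty metadata file; "MFSM NEW" is the empty-metadata marker
echo -n "MFSM NEW" > metadata.mfs
# Start the master again; the cluster will come up empty
mfsmaster start
```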
Copy your data directory somewhere safe (default path: /var/lib/mfs).
Files you should be interested in keeping are primarily:
• metadata.mfs – your metadata set in binary form. This file is updated hourly and on master server shutdown. You can also trigger a metadata dump with lizardfs-admin save-metadata HOST PORT, but an admin password needs to be set in mfsmaster.cfg first.
• sessions.mfs – additional information on user sessions.
• changelog*.mfs – changes to metadata that weren’t dumped to metadata.mfs yet.
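As a sketch, the files listed above can be archived with standard tools; the destination path below is illustrative, and /var/lib/mfs is the default DATA_PATH:

```shell
# Archive the whole metadata directory to a dated tarball;
# /backup is an illustrative destination, adjust as needed.
tar -czf "/backup/lizardfs-meta-$(date +%F).tar.gz" -C /var/lib/mfs .
```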
This depends largely on your policies. Since LizardFS does round-robin between chunkservers when using goals, the rr bonding mode would probably give the best results. If you use erasure coding, advanced balancing with LACP would probably be the optimal way to do it.
With the help of multiple chunkservers and good goals, files can be stored multiple times. Therefore, a certain level of high availability on a file-level can be achieved easily.
In addition, it is important to know that, by default, the master service can only be active in the master role on one node at a time. If this node fails, e.g. because of broken hardware or an out-of-memory situation, the current master has to be demoted (if still possible) and an existing shadow has to be promoted manually.
If the failover happens automatically, a good state of high availability is achieved on the service level. Thus the term “High Availability” here refers to keeping the master role alive even when nodes go down.
There are multiple ways of keeping the master highly available.
One would be to demote and promote manually if you need to. The better way would be to delegate that task to a mechanism that knows the current state of all (possible) master nodes and can perform the failover procedure automatically.
Known methods, using only open-source software, are building Pacemaker/Corosync clusters with self-written OCF agents. Another way could be using keepalived.
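As an illustrative sketch of the keepalived approach (the interface, router ID, and address are placeholder assumptions), a VRRP configuration for a floating master IP might look like:

```
# keepalived.conf fragment (illustrative values throughout)
vrrp_instance lizardfs_master {
    state BACKUP           # all nodes start as BACKUP; VRRP elects the holder
    interface eth0         # NIC that carries the floating IP
    virtual_router_id 51
    priority 100           # the highest priority wins the election
    virtual_ipaddress {
        192.168.10.100/24  # clients point their mounts at this address
    }
}
```

Note that moving the IP is only half the job: a notify script would still have to promote the shadow to master, which is why delegating failover to a mechanism that understands the master state is preferable.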
An officially supported way to achieve High Availability of the master is to obtain the uRaft component from Skytechnology Sp. z o.o., the company behind LizardFS. Based on the Raft algorithm, the uRaft service makes sure that all master nodes talk to each other and exchange information regarding their health states.
In order to ensure that a master exists, the nodes participate in votes. If the current master fails, uRaft moves a floating IP from the formerly active node to the new designated master. All uRaft nodes have to be part of one network and must be able to talk to each other.
uRaft will be available as part of the open-source version of LizardFS 3.13 after the stable version is released.
You can now use LizardFS as a Cloud Storage in your OpenNebula deployments.
Scaling it to petabytes and beyond has never been easier – just add a drive or a node and the system will rebalance itself automatically.
LizardFS TM drivers for OpenNebula
Based on the original OpenNebula code, changes by Carlo Daffara (NodeWeaver Srl) 2017-2018
For information and requests: firstname.lastname@example.org
To contribute bug patches or new features, you can use the GitHub Pull Request model.
Code and documentation are under the Apache License 2.0 like OpenNebula.
This is the same set of drivers used within our NodeWeaver platform, compatible with LizardFS 3.x and OpenNebula 5.4.x.
The lizardfs command must be executable by the user that launches the OpenNebula probes (usually oneadmin) and be available in the default path.
The LizardFS datastore must be mounted and reachable, at the same path, from all the nodes where the TM drivers are in use.
The TM drivers are derived from the latest 5.4.3 “shared” TM drivers, with all the copies and other operations modified to use the live snapshot feature of LizardFS.
To install in OpenNebula
Copy the drivers to /var/lib/one/remotes/tm/lizardfs
# fix ownership
# if you have changed the default user and group of OpenNebula, substitute oneadmin.oneadmin with your own values
chown -R oneadmin.oneadmin /var/lib/one/remotes/tm/lizardfs