At LizardFS we have one goal: to offer the best possible service quality. We operate globally, providing 24/7 technical support from the engineers who created LizardFS. We can help you improve performance, configure and set up the system, train your team and, most importantly, resolve critical and non-critical issues in your LizardFS data storage.
Have peace of mind with our helpful, passionate support team on standby.
We offer 3 types of LizardFS Technical Support:
What the maintenance packages include:
➔ Configuration & Deployment – if you haven’t used LFS before.
➔ Technical Training – up to 5 hours for Standard Package and up to 10 hours for Premium Package.
➔ Tuning sessions – once per year for Standard Package and once per quarter for Premium Package.
➔ Troubleshooting sessions – once per year for Standard Package and once per quarter for Premium Package.
➔ Infrastructure Audits – once per quarter for Standard Package and once per month for Premium Package.
➔ Windows/macOS Clients – unlimited number of licenses.
➔ High Availability – license for High Availability configuration.
➔ Software Update – as soon as the next stable release arrives (LizardFS version 3.13).
Configuration is one of the most crucial aspects of using LizardFS. Software that is not configured properly will not perform at its best.
Do not risk a failure of your data storage system caused by an elementary mistake made during the configuration process. We have configured LizardFS hundreds of times and we can do the job for you.
With us, you have a 100% guarantee that your software-defined storage solution is configured in the best possible way. You don’t need a Master’s degree in Data Storage to use LizardFS – our technical team will take care of the cluster deployment and configuration.
Our Technical Training is a workshop that covers the principles of properly using, configuring, and maintaining the LizardFS infrastructure.
At LizardFS we understand that you can achieve the best performance from your cluster only if you know everything about it. That is why we want to teach you everything you should know about LizardFS. The architecture and configuration of our distributed file system will no longer be a mystery to you. Do not forget that after our training you will be able to react faster and better to anything that may happen to your cluster.
A tuning session is a set of activities that optimize the software performance depending on your requirements.
Ever wondered whether you can squeeze, let’s say, 400% more performance from your LizardFS cluster? Well… we can do that. The thing is: every use case is different and uses different LizardFS functionalities. By adjusting them just the way you need, we can make sure LizardFS is doing its best for you.
Troubleshooting sessions – a set of activities that remove existing non-critical issues.
Unlike most of our competitors, we don’t offer just an insurance policy, and we won’t simply wait for a disaster to happen. We will regularly work with you on your cluster, making sure that there is not a single little issue that could potentially cause a huge disaster.
Infrastructure Audits – a review of the current performance and parameters of your infrastructure that helps improve it and avoid potential downtime or errors.
As simple as it could be: you run our script, which gathers information about the parameters of your cluster, and we let you know whether everything is okay. If there is something to worry about, it will be fixed at the next Troubleshooting Session, and if the issue might be dangerous, it will be fixed immediately.
Windows/macOS Client – during the term of the support contract LFS will grant an unlimited number of licenses for Windows and/or macOS Client.
LizardFS gives you access to your data not only from Unix-like systems but also from a Windows PC or laptop or a Mac. That may be crucial if you are running a big company with loads of data and all your employees work on Windows or Mac machines.
With LizardFS they don’t have to get a degree in Linux – our Windows & macOS Clients will make everything work. They make your computer see the LizardFS cluster as an external drive. That means you will see your cluster in the file manager the same way you see a USB stick. Could it be any more convenient?
P.S. Windows & macOS clients are also available without Technical Support. If you are interested, reach out to us using our contact form in order to get a free trial and pricing.
High Availability – during the term of the support contract LFS will grant the license for High Availability configuration.
High Availability makes sure you won’t lose your cluster even if a Master server crashes. The HA mechanism will be part of the next stable release, LizardFS 3.13. With the current version (3.12) and older, we will make sure you have this feature by giving you a license for the HA configuration.
Software Update – if approved by you, we will update the infrastructure as soon as the next stable release arrives (the next stable release is version 3.13).
Being up to date is crucial in the data storage world. If you are using an older LizardFS version, we will make sure the update to the current one is smooth and painless.
Detailed documentation for LizardFS is available: see our Technical Documentation and the LizardFS White Paper.
Your chunkservers are pretty simple to set up. Usually, if your /etc/hosts file is set up correctly with the address of the master server and you do not require labeling, the mfschunkserver.cfg file can stay as it is.
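For reference, a matching /etc/hosts entry could look like the line below (the IP address is just a placeholder for your own master server):
# /etc/hosts – let the chunkserver resolve the master server by name
192.168.1.10    mfsmaster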
↳ Check out step-by-step instructions in our Documentation.
Once this is set up, your chunkserver is ready and actively taking part in your LizardFS cluster.
To stop LizardFS from using a directory, just add a * to the beginning of its line in mfshdd.cfg. LizardFS will replicate all the data from it somewhere else. Once you see in the web interface that all data has been safely copied away, you can remove the line from the file and then remove the associated device from your chunkserver.
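As an illustration, a hypothetical mfshdd.cfg could look like this while one directory is being drained (the paths are placeholders):
# mfshdd.cfg – one storage directory per line
/mnt/chunks1
# the leading * marks /mnt/chunks2 for removal; LizardFS will replicate its data elsewhere
*/mnt/chunks2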
There are many ways to access a distributed file system. Our favorite way is through native clients – it does not matter whether it is Linux, macOS or Windows.
But what if you cannot install third-party software on your client?
Or you need storage for systems that have no native client?
NFS might be the answer for you. The simplest way to use it would be to create a server/gateway. That solution has obvious drawbacks (lack of HA, limited performance, poor scalability).
We knew that we can do better.
So we did.
How does it work?
Let’s start with NFS 3.x.
On each chunk server, there is an NFS server which enables clients to connect to the LizardFS cluster and read/write files via the NFS protocol. Now you can use LizardFS to create a Network Attached Storage solution nearly out of the box. It doesn’t matter what system you are running, as long as it supports NFS you can mount up to 1 Exabyte of storage from a LizardFS cluster to your machine.
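For example, a Linux client could mount the cluster over NFS 3 roughly like this (the hostname and the exported path are placeholders and depend on how the NFS server on your chunkservers is configured):
mount -t nfs -o vers=3 chunkserver1.example.com:/ /mnt/lizardfs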
Some demanding users might immediately ask: okay, but what about chunkserver failure?
Well, if you are that demanding, you will not mind discussing a support contract with us to get not only peace of mind but also a truly highly available solution.
What about NFS 4.1 and pNFS?
The story just gets more and more interesting here. Now LizardFS not only supports NFS 4.0 but also provides parallel reads and writes through the parallel Network File System (pNFS), plus you get High Availability as a bonus. The extra add-ons do not end there: thanks to NFS 4.x support, you can use Kerberos authentication for clients.
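On a client that supports NFS 4.1, the mount could be sketched like this (again, the hostname and export path are placeholders); with a 4.1 mount the client can use pNFS layouts automatically when the server offers them:
mount -t nfs -o vers=4.1 chunkserver1.example.com:/ /mnt/lizardfs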
Obvious use cases of NFS support
With pNFS
Red Hat Enterprise Linux > 6.4
SUSE Linux Enterprise Server > 11 SP3
Virtualisation
We are going to test various virtualisation solutions and see how LizardFS performs with them. Obvious differences should be observed in the solutions that are already capable of using pNFS, like:
– oVirt
– Proxmox
– Redhat Virtualization
– KVM on modern Linux systems
– XEN on modern Linux Systems
We are also interested in seeing the results of tests with others:
– VMware (although vSphere 6 includes NFSv4.1 support, it does not include pNFS!)
– Citrix XenServer (no pNFS)
– HyperV (no pNFS)
UNIX Hosts
– AIX
– HP/UX
– Solaris
Windows
– Windows Server (no pNFS)
– Windows (no pNFS)
Challenges
Although NFS seems to be one of the most popular protocols, different solutions support different versions of it, and that has certain consequences. For instance, NFS 3 has no direct support for ACLs, and the lack of parallelism in that version also has a substantial impact on performance. So while having a unified environment in terms of communication protocols sounds really good, you need to analyze which operating systems are running in your infrastructure before making the final decision to go that way. Fortunately, most of the time you have the option of using other protocols like SMB, or, when it is acceptable to install additional software on a client machine, going with native clients.
Key differentiators between NFS versions
NFS3 – stateless protocol, supports only UNIX semantics, weak security, identification via UID/GID, no delegations.
NFS4 – stateful protocol, UNIX and Windows semantics, strong authentication via Kerberos, string-based identification (user@host…), delegations possible.
pNFS – all the advantages of NFS4 plus parallelised access to resources.
1. Prepare another mfschunkserver.cfg file for the new chunkserver in a different path, let’s call it /etc/chunkserver2.cfg
2. Prepare another mfshdd.cfg file for the new chunkserver in a different path, let’s call it /etc/hdd2.cfg
3. Set HDD_CONF_FILENAME in chunkserver2.cfg file to the path of newly prepared mfshdd.cfg file, in this example /etc/hdd2.cfg
4. Set CSSERV_LISTEN_PORT to a non-default, unused port (like 9522) in /etc/chunkserver2.cfg – see the example sketch after this list
5. Run the second chunkserver with mfschunkserver -c /etc/chunkserver2.cfg
6. Repeat if you need even more chunkservers on the same machine (though this is not particularly recommended)
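For illustration, the relevant lines of the hypothetical /etc/chunkserver2.cfg from the steps above would look like this:
# /etc/chunkserver2.cfg – second chunkserver instance on the same machine
HDD_CONF_FILENAME = /etc/hdd2.cfg
CSSERV_LISTEN_PORT = 9522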
P.S. If you need to run the second chunkserver as a service, not just a daemonized process, then you’d have to prepare a simple systemd/init.d script yourself for now; there are no out-of-the-box solutions yet. If you do prepare something like this, feel free to contribute via a pull request.
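A minimal systemd unit for such a second chunkserver could look roughly like the untested sketch below (the binary path, config path and service user are assumptions – adjust them to your installation):
# /etc/systemd/system/lizardfs-chunkserver2.service – untested sketch
[Unit]
Description=Second LizardFS chunkserver instance
After=network.target

[Service]
Type=forking
# adjust the user/group to match your installation
User=mfs
Group=mfs
ExecStart=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg start
ExecStop=/usr/sbin/mfschunkserver -c /etc/chunkserver2.cfg stop

[Install]
WantedBy=multi-user.target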
You can set up multiple chunkservers on the same machine very easily with Docker.
docker run -d --restart always --net=host -e MASTER_HOST=localhost \
--name=chunk_HD01 \
-e MFS_CSSERV_LISTEN_PORT=9460 \
-e MFS_LABEL=HD01 \
-v /mnt/HD01/:/mnt/HD01:rw \
-e ACTION=chunk hradec/docker_lizardfs_git
docker run -d --restart always --net=host -e MASTER_HOST=localhost \
--name=chunk_HD02 \
-e MFS_CSSERV_LISTEN_PORT=9461 \
-e MFS_LABEL=HD02 \
-v /mnt/HD02/:/mnt/HD02:rw \
-e ACTION=chunk hradec/docker_lizardfs_git
where your first hard drive is mounted at /mnt/HD01, the second at /mnt/HD02, and so on…
This Docker image lets you set ANY of the mfschunkserver.cfg options by passing -e MFS_<OPTION_NAME>=<value> (for example, -e MFS_CSSERV_LISTEN_PORT=9460).
Mounting the local /mnt/HD01 path at the same path inside the container (-v /mnt/HD01/:/mnt/HD01:rw) triggers the image to auto-generate the mfshdds.conf file from whatever folders show up under /mnt/*.
You can mount just one path for each chunkserver (as demonstrated above), or you can mount as many paths as you want for a chunkserver, covering both situations – one chunkserver for multiple disks (the default LizardFS setup), or multiple chunkservers for multiple disks!
The --net=host option forces the container to use the main hardware NICs of your machine, so there is no overhead from a virtual LAN layer.
The image can be used to start any of the servers, including metalogger,
shadow, master and cgi. Just set the server type in the -e ACTION= variable.
Running the image like this:
docker run -ti --rm hradec/docker_lizardfs_git
will display a quick help on how to use it!
The simplest way is to create a new metadata file. Go to your metadata directory on your current master server (look at the DATA_PATH in the mfsmaster.cfg file), then stop the master and create a new empty metadata file by executing:
echo -n "MFSM NEW" > metadata.mfs
Start the master and your cluster will be empty; all remaining chunks will be deleted.
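Putting it together, the whole procedure could look roughly like this (assuming the default data path and that the mfsmaster binary is in your PATH – adjust to your setup):
mfsmaster stop                      # stop the master
cd /var/lib/mfs                     # default DATA_PATH – check mfsmaster.cfg
cp metadata.mfs metadata.mfs.bak    # optional: keep a copy of the old metadata
echo -n "MFSM NEW" > metadata.mfs   # create a new, empty metadata file
mfsmaster start                     # start the master with an empty cluster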
Copy your data directory somewhere safe (default path: /var/lib/mfs).
Files you should be interested in keeping are primarily:
• metadata.mfs – your metadata set in binary form. This file is updated hourly and on master server shutdown. You can also trigger a metadata dump with lizardfs-admin save-metadata HOST PORT, but an admin password needs to be set in mfsmaster.cfg first.
• sessions.mfs – additional information on user sessions.
• changelog*.mfs – changes to metadata that weren’t dumped to metadata.mfs yet.
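If you want to script this, a simple backup could look roughly like the sketch below (it assumes the default /var/lib/mfs data path and a /backup target directory, both of which you may need to adjust):
# optionally trigger a fresh metadata dump first (needs an admin password in mfsmaster.cfg)
lizardfs-admin save-metadata HOST PORT
# archive the important metadata files
cd /var/lib/mfs
tar czf /backup/lizardfs-metadata-$(date +%F).tar.gz metadata.mfs sessions.mfs changelog*.mfs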
This depends largely on your policies. Since LizardFS does round robin between chunkservers when using goals, rr (round-robin) bonding would probably give the best results. If you use erasure coding, advanced balancing with LACP would probably be the most effective option.
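As an illustration only, on a host managed by NetworkManager a balance-rr bond could be sketched roughly like this (interface and connection names are placeholders; for the LACP approach you would use mode=802.3ad together with matching switch configuration):
# create a bond in balance-rr mode and enslave two NICs (names are placeholders)
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-rr,miimon=100"
nmcli connection add type bond-slave ifname eth0 master bond0
nmcli connection add type bond-slave ifname eth1 master bond0
nmcli connection up bond0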
With the help of multiple chunkservers and good goals, files can be stored multiple times. Therefore, a certain level of high availability on a file-level can be achieved easily.
In addition, it is important to know that, by default, the master service can be active in the master role on only one node at a time. If this node fails, e.g. because of broken hardware or an out-of-memory situation, the current master has to be demoted (if still possible) and an existing shadow has to be promoted manually.
If the failover happens automatically, a good state of high availability is achieved on the service level. Thus the term “High Availability” refers to keeping the master role alive even when nodes go down.
There are multiple ways of keeping the master highly available.
One way would be to demote and promote manually whenever needed. A better way is to delegate that task to a mechanism that knows the current state of all (possible) master nodes and can perform the failover procedure automatically.
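The manual approach boils down to switching the node’s personality; a rough sketch, assuming the default /etc/mfs config location, could look like this:
# on the shadow node that should take over the master role
sed -i 's/^PERSONALITY = shadow/PERSONALITY = master/' /etc/mfs/mfsmaster.cfg
mfsmaster reload
# remember to also move the service IP (or update DNS) so clients and chunkservers find the new master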
Known methods, when using only open-source software, are building Pacemaker/Corosync clusters with self-written OCF agents. Another way could be using keepalived.
An officially supported way to achieve High Availability of the master is to obtain the uRaft component. Based on the raft algorithm, the uRaft service makes sure that all master nodes talk to each other and exchange information regarding their health states.
In order to ensure that a master exists, the nodes participate in votes. If the current master fails, uRaft moves a floating IP from the formerly active node to the new designated master. All uRaft nodes have to be part of one network and must be able to talk to each other.
uRaft will be available as part of the open-source version of LizardFS 3.13 after the stable version is released.
You can now use LizardFS as a Cloud Storage in your OpenNebula deployments.
Scaling it to Petabytes and beyond was never easier – just add a drive or a node and the system will autobalance itself.
LizardFS TM drivers for OpenNebula
Based on the original OpenNebula code, changes by Carlo Daffara (NodeWeaver Srl) 2017-2018
For information and requests: info@nodeweaver.eu
To contribute bug patches or new features, you can use the GitHub Pull Request model.
Code and documentation are under the Apache License 2.0 like OpenNebula.
This is the same set of drivers used within our NodeWeaver platform, compatible with LizardFS 3.x and OpenNebula 5.4.x.
Prerequisites:
the lizardfs command must be executable by the user that launches the OpenNebula probes (usually oneadmin) and must be in the default path
the LizardFS datastore must be mounted and reachable from all the nodes where the TM drivers are in use, under the same path
The TM drivers are derived from the latest 5.4.3 “shared” TM drivers, with all the copies and other operations modified to use the live snapshot feature of LizardFS.
To install in OpenNebula
Copy the drivers to /var/lib/one/remotes/tm/lizardfs
# fix ownership
# if you have changed the default user and group of OpenNebula, substitute oneadmin.oneadmin accordingly
chown -R oneadmin.oneadmin /var/lib/one/remotes/tm/lizardfs
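After copying the driver files, the driver still has to be registered on the OpenNebula side. The exact steps are not part of this README, but a hedged sketch could look like this (the datastore name and template are assumptions):
# 1. add "lizardfs" to the list of TM drivers (TM_MAD ARGUMENTS "-d ...") in /etc/one/oned.conf and restart oned
# 2. create a datastore that uses the driver, for example:
cat > lizardfs-system-ds.tmpl <<'EOF'
NAME   = lizardfs_system
TYPE   = SYSTEM_DS
TM_MAD = lizardfs
EOF
onedatastore create lizardfs-system-ds.tmpl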