LizardFS developer profile


Hi, my name is Przemysław (pronounced pshemyswav), Przemek for short. I come from Poland.

 

I was born in Lublin. I could give you details, like it being the ninth-largest city in Poland and so on, but it’s just another ordinary Polish city. Nothing particularly interesting about it, to be honest.

 

After high school in Lublin I moved to Warsaw (the capital of Poland) to study IT at Warsaw University of Technology, where I’m doing my bachelor’s. During my studies I worked as an intern at Samsung, where I was part of a team developing and maintaining a C/C++ library that lets IoT devices communicate with cloud services. There I also created a simple but versatile cloud-service-mocking framework that helps in testing this library.

 

I speak English and Polish. I know C, C++, Java, Python and Kotlin (Android). I have the most experience with C++ and I use it on a daily basis – especially nowadays, as I work as part of the LizardFS team.

 

LizardFS is open-source, interesting, intimidating at first, but very exciting to work on. It makes use of low-level APIs, clever optimizations, exotic high-performance data structures and various open-source third-party libraries. There is always something new to discover in its source code. On the other hand, that is also why it is so difficult and time-consuming to become familiar with. But it surely is a rewarding process; there are many things I have learned from its code already.

 

Right now we focus mainly on fixing bugs, most of which have been detected by the project’s community (thanks for that). While doing so, we try to put some effort into improving the code quality of the affected parts where necessary. After that, one idea I have is to parallelize the master server implementation, which is currently the main bottleneck of LizardFS, to speed up filesystem operations performed on Lizard.

 

I envisage LizardFS in the not-too-distant future as being a well-known solution that people trust, are satisfied with and willingly use in their environments. I would love it to become one of the top players in the field of storage solutions, one that is identified with high availability and being open-source.

 

More about me:

GitHub

LinkedIn

We have had some interest from the community in taking part in this series of articles; those articles will follow shortly.

Anybody else who would like to take part, drop me an email:

mark.mulrainey@lizardfs.com

Introduction to the community

My name is Patryk. I was born and raised in Warsaw, Poland, not Indiana.

Warsaw is a fantastic city to live in and has a large number of very talented programmers.

I speak Polish and English fluently, and I dabbled in German, Spanish and Italian for a few years; now I could probably manage ordering in a restaurant (maybe). These days I concentrate more on other kinds of languages: C++, C, Python, Perl, Bash and a few others.

I did my Master of Science in engineering and computer science at Warsaw University of Technology, where I had also done my Bachelor’s degree in the same field. Most of the creators of LizardFS come from there.

During my studies I gained experience working for two Polish companies, Comarch and Gemius, mainly doing C++ work. After my studies I moved to Intel as a Linux kernel developer working on the Intel Puma project (Google it if you like, it’s quite interesting stuff). Since March this year I have been working on the LizardFS project.

LizardFS is by far the most interesting and exciting project I have been involved with. Really! It’s a great Polish project (yes, I am patriotic), started in Poland by talented MIM UW postgraduates. Unfortunately, all of those brilliant minds have since left the project to pursue new challenges. Now our new team and I are trying our best to learn the code base, fix the bugs left in 3.13-rc1 and develop new features. As most of you understand, learning someone else’s code is never an easy task, but we are coping. We also have the benefit of some sessions with the original developers to help us get acquainted with the code faster and push the project back to its former glory.

Currently, as you will have seen from the last press release, we are working on 3.13-rc2. We succeeded in getting the first parts of that done on schedule, and now we are pushing to finish it and have a bulletproof new 3.13 version by the end of the year. LizardFS was always known for its stability, and we want that back. After that we will work on improving performance by adding the Agama project mount, as well as Kubernetes support (which seems to be the most frequently requested direction for us to take). Besides that, I would like to add a security layer to the LizardFS protocol so that it can be used even in untrusted subnets.

In the future, I would like LizardFS to be an open-source standard for storing files on corporate servers and small businesses, but also on nerds’ homemade server racks. 😉

If you’re interested, you can see more of me below:

StackOverflow

GitHub

LinkedIn

We have already had several members of the community step forward and show interest in taking part in these articles; it would be great to have many more.

Remember, the idea is to try and pull the community together and make the project stronger, hopefully growing along the way.

Drop me an email if you would like to take part: mark.mulrainey@lizardfs.com

Version 3.13.0-rc2


A Promise is a Promise!

 

The first two major bugs from 3.13.0-rc1 have been fixed!

https://lizardfs.com/download/

 

Next we move on to finishing 3.13 by the end of the year. I hope this starts to restore some of the community’s faith in the project. We need you to make it work!

 

We have an idea to try and involve the community more, to make you more integrated with the whole project, hopefully gaining more and more members along the way.

 

First step: we would like to introduce you to our team. We will publish a series of articles about our developers: where they came from, what they expect to achieve and so on.

It would be really good if we could do the same with the community: some simple questions to answer that will tell a bit about you, your experience and your love/hate relationship with LizardFS.

 

How does that sound?

 

Drop me an email if you are interested and I will send you the 10 questions we use to create the article. You will have total control over what is published!

mark.mulrainey@lizardfs.com

 

Update on LizardFS project

As most of you will have noticed, the LizardFS project has been lacking commitment to the community, and to development, for the past two years or so. It’s time for a change, and I think you will agree. Very recently the project and the company got new owners, new management and new developers. I hope the community still has a glimmer of faith in LizardFS. We have a lot of plans and ideas for getting LizardFS back to its former glory and well beyond that.

First, I hope you will understand the mess we have taken over, including a large financial burden that we need to work our way out of. With a new team, we will need a little patience from the community while they get up to speed. 3.13.0-rc1 was a total disaster, but it does have lots of good stuff in it, along with basically four major bugs plus lots of minor ones, so we plan to continue forward from there (in the meantime, remember that 3.12 is rock solid for any production cluster), and we will make 3.13 the way it should have been in the first place.

What I propose is that we are totally transparent with our roadmap and flexible enough to listen to the community’s needs. Below is our plan till the end of the year and the first quarter of 2020.

  • End of October: release of 3.13.0-rc2, with fixes for two major bugs (issues #780 and #662), plus lots of minor bug fixes and several enhancements.
  • End of December: release of the full, bulletproof 3.13, with everything that was promised before, but working this time.

In the second quarter we will have the first installment of the Agama project: a new mount that will give you at least three times the performance that 3.12 ever produced.

Planning any further at the moment would be dreaming and just blah blah blah. I prefer to keep it realistic, follow through on what is promised and let the community get behind the project again; then we can plan the rest of 2020 together.

I hope this shows a commitment from us to the project.

If anyone in the community would like to share knowledge and experience with our devs, it can only help their learning process and build a stronger community; we have opened a Gitter room for direct communication with the team to support this. As always, any help with bug fixing, enhancements, new functionality or testing that the community would like to take part in will be graciously accepted.

Let’s make LizardFS great again!

LizardFS is now available on openSUSE as well.

LizardFS for openSUSE

LizardFS is now in the official openSUSE repositories, in the filesystems subproject!

Download, install and enjoy!

https://build.opensuse.org/package/show/filesystems/lizardfs

LizardFS in QNAP club.

Have you ever run out of storage on your QNAP box?

If you would like to connect a few of them together and create a scale-out storage cluster, it has never been easier.

LizardFS can now be directly downloaded from QNAP club.

Enjoy!

https://qnapclub.eu/en/qpkg/602

LizardFS & Proxmox

Frankfurt here we come!

In just 9 days at the Proxtalks conference you will get a chance to learn how to use LizardFS as a shared directory for Proxmox VE.

Join us at The Squaire – right at the airport to learn about distributed, parallel, scale-out storage for virtualisation and containers.

Bring your laptops – at the end we will use them to build a LizardFS cluster together.

Hope to see you there.

 

LizardFS & NFON

LizardFS is becoming a standard for Distributed Data Storage in the Telecom industry.

                                          

Overview

Headquartered in Munich, NFON AG is the only pan-European cloud PBX provider – counting more than 15,000 companies across 13 European countries as customers. The cloud telephone system offers over 150 functions as well as a seamless integration of premium solutions which results in high-quality communication. NFON is the new freedom of business communication.

Rather than being stored on-premises at its customers’ offices or data centers, the NFON cloud PBX operates in the cloud, in several external high-performance computing data centers that are reached via a fully encrypted internet connection. NFON promises a 24/7 service with enterprise-grade availability so that these businesses can be consistently available for their customers and business partners.

“We were impressed by the stability and fault tolerance LizardFS provided in our tests. The cluster setup was quite easily integrated in our configuration management, too. Another cluster, for a different purpose, was set up within minutes.”

Markus Stumpf
Vice President Operations and Infrastructure
NFON

Read the full case study:

https://lizardfs.com/case-studies/nfon/

Release Candidate of LizardFS 3.13 with built-in High Availability finally out!

Dear Users,

3.13.0-rc1 (release candidate) is officially out!

Featuring:

  • uRaft High Availability
  • fixes to EC handling
  • nfs-ganesha plugin changed to use only C code
  • reduced number of secondary group retrievals
  • new fuse3 client
  • many fixes

Detailed info:

  • uRaft HA

uRaft is an HA solution designed for use with LizardFS. It allows for seamless switching of the master server in case of hardware failure. More information about uRaft is available in the LizardFS Handbook.
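For readers new to Raft, the core failover idea can be sketched briefly: shadow masters expect regular heartbeats from the acting master, and a node that misses them for longer than an election timeout starts an election, with the candidate that gathers a majority of votes taking over as master. The C++ sketch below only illustrates that trigger logic; the node names, timeouts and structure are assumptions, not the actual uRaft code.

// Illustrative sketch of Raft-style failover triggering; node names,
// timeouts and structure are assumptions, not the real uRaft implementation.
#include <chrono>
#include <iostream>
#include <string>
#include <vector>

using Clock = std::chrono::steady_clock;
using namespace std::chrono_literals;

struct Node {
    std::string name;
    Clock::time_point lastHeartbeat;  // last heartbeat received from the master
};

// A shadow master starts an election when it has not heard from the acting
// master within the election timeout.
bool shouldStartElection(const Node& node, Clock::time_point now,
                         std::chrono::milliseconds electionTimeout) {
    return now - node.lastHeartbeat > electionTimeout;
}

int main() {
    const auto electionTimeout = 500ms;  // assumed value
    const auto now = Clock::now();

    // Two shadow masters: one recently heard from the master, one did not.
    std::vector<Node> shadows = {
        {"shadow-1", now - 100ms},  // healthy: heartbeat 100 ms ago
        {"shadow-2", now - 2s},     // stale: master presumed dead
    };

    for (const auto& node : shadows) {
        std::cout << node.name
                  << (shouldStartElection(node, now, electionTimeout)
                          ? " starts an election\n"
                          : " keeps following the master\n");
    }
    // In Raft, the candidate that then collects a majority of votes for the
    // new term becomes leader, so clients switch over without manual action.
}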

  • fixes to EC handling

After extensive tests, we decided to improve the mechanism for calculating parities greater than 4, e.g. EC(6,5). After this upgrade, the system will show parity chunks as endangered until it automatically recalculates them.
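For anyone unfamiliar with erasure coding, here is a rough illustration of why parity chunks matter: in an EC(k, m) goal, m parity chunks are derived from k data chunks so that the loss of up to m chunks can be rebuilt from the survivors. The C++ sketch below shows only the simplest possible case, a single XOR parity (conceptually m = 1); the values are made up and this is not LizardFS’s actual parity code, which handles multiple parities.

// Illustrative sketch only: single XOR parity (the m = 1 case of EC(k, m)).
// This is not LizardFS's actual erasure-coding implementation.
#include <cstdint>
#include <iostream>
#include <vector>

// Compute one parity block as the byte-wise XOR of all data blocks.
std::vector<uint8_t> xorParity(const std::vector<std::vector<uint8_t>>& data) {
    std::vector<uint8_t> parity(data.front().size(), 0);
    for (const auto& block : data)
        for (std::size_t i = 0; i < block.size(); ++i)
            parity[i] ^= block[i];
    return parity;
}

int main() {
    // Three equally sized data blocks (k = 3) with made-up contents.
    std::vector<std::vector<uint8_t>> data = {
        {1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}};
    std::vector<uint8_t> parity = xorParity(data);

    // Simulate losing data block 1: XOR the surviving data blocks with the
    // parity block to rebuild it.
    std::vector<uint8_t> rebuilt = parity;
    for (std::size_t b = 0; b < data.size(); ++b)
        if (b != 1)
            for (std::size_t i = 0; i < rebuilt.size(); ++i)
                rebuilt[i] ^= data[b][i];

    for (uint8_t v : rebuilt)
        std::cout << int(v) << ' ';  // prints: 5 6 7 8
    std::cout << '\n';
}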

  • nfs-ganesha plugin changed to use only C code

In preparation for moving the LizardFS nfs-ganesha plugin to the official nfs-ganesha repository, we had to remove all occurrences of C++ code and replace them with plain C.

  • reduced number of secondary group retrievals

In LizardFS we introduced handling of secondary groups. Unfortunately, the function used to retrieve secondary groups in the FUSE library consumed a lot of CPU. By removing unnecessary calls to this function, we increased mount performance significantly.

  • added fuse3 client

LizardFS now includes a mount3 client which uses the FUSE3 library. Thanks to new features in FUSE3, the mount now performs much better in many scenarios. Here are the most important changes visible to LizardFS users:

  • the big_writes option is now enabled by default (and is no longer recognized as a parameter)
  • added the writeback_cache option, which improves performance significantly with kernel 3.14 and later
  • increased read/write performance (especially for small operations)

Because most Linux distributions don’t include the FUSE3 library, we have built FUSE3 packages and made them available on the LizardFS website.

 

Looking for Partners