Hi everyone, my name is Wessam Aly. I’m from Egypt. I speak English, French, and Arabic. I’m also pretty fluent in C, and I do some coding in Python, Perl, and Go as well.
I have personally implemented and tested quite a few distributed file systems already, namely Ceph, OpenIO SDS, MinIO, GlusterFS, and obviously LizardFS.
Professionally, I have used, implemented and tested Cohesity, Rubrik, Hedvig and Veritas CFS.
I also have experience with other storage systems:
- IBM (Flash system, Elastic Storage System, Elastic Cloud, SAN Volume Controller, etc.),
- EMC (Isilon, DataDomain, Unity, Clariion, DMX, etc.),
- HPE (3PAR),
- Hitachi (XP).
So, the most important part: why did I choose LizardFS?
I have heard good things about its speed and reliability at scale (all out of the box, with no special tuning).
I have only done a bit of testing so far, but I have found LizardFS to be one of the easiest and most straightforward SDS solutions to deploy among all the products I have experience with.
There is a lot I really like about LizardFS: the low system requirements and straightforward architecture are a huge plus, as is the availability of a Windows client. Geo-replication is really useful too.
There are some downsides to the project too, unfortunately. I dislike the dedicated master and metadata servers; I would prefer the metadata to be distributed among the chunk servers the same way the data is. The release cycle could also be shorter: a clear changelog combined with a near-fixed release schedule would mean the world to me.