Open source is changing the media sector.

It is a general trend that keeps getting stronger: moving away from closed-source, proprietary solutions towards COTS, open infrastructure, and solutions that let people use commodity hardware.

You could see it in a couple of talks at #NABshow: people are not only working on such solutions, some are already using them.

Embrace the shift from #fiberchannel to #ethernet. Find out how you can benefit from an #opensource, hardware-agnostic, #scaleout, high-performance #distributed #filesystem. Visit the #lizardfs booth at #NAB2018, N2936SUL-B (North Hall).

Hadoop plugin for LizardFS is here!

Like everything we do, the Hadoop plugin for LizardFS is as simple as we could make it.

 

It is a Java-based solution that allows Hadoop to use LizardFS storage by implementing an HDFS-compatible interface on top of LizardFS.

It functions as a kind of file system abstraction layer.

It enables Hadoop jobs to access data on a LizardFS cluster directly.

The plugin translates the LizardFS protocol and makes the metadata readable for YARN and MapReduce.
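
To give a feel for what this means in practice, here is a minimal sketch of reading a file stored on LizardFS through the standard Hadoop FileSystem API. The lizardfs:// URI scheme, host name and port used below are illustrative assumptions, not the plugin's documented values; use whatever scheme and address your installation registers.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LizardFsReadExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical URI scheme and master address -- check your plugin configuration.
        Path file = new Path("lizardfs://master:9421/datasets/input.txt");
        Configuration conf = new Configuration();

        // The plugin supplies the FileSystem implementation behind this call,
        // so job code looks exactly like it would with HDFS.
        try (FileSystem fs = file.getFileSystem(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}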

 

For best performance, Hadoop nodes should run on the same machines as LizardFS chunkservers.

 

The LizardFS mount gives direct access to stored files at the OS level. This allows you to use the same cluster as shared storage in your company and as computation storage for Hadoop at the same time.

Unlike with HDFS, you are not required to use Hadoop tools to put/get files from your storage.

 

You can also take advantage of erasure coding and save a lot of disk space (HDFS recommends storing 3 copies).
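
As a rough illustration of the savings: with an erasure-coded goal such as ec(4,2), every 4 data chunks are protected by 2 parity chunks, so a file occupies about 1.5 times its raw size and still survives the loss of any two parts, while 3-copy replication needs 3 times the raw size for comparable protection.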

 

The function:

public BlockLocation[] getFileBlockLocations(FileStatus file, long start, long len)

returns information about where data blocks are held in your LizardFS installation. If Hadoop runs on the same machines, it can take advantage of data locality.
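
As a sketch of how that locality information can be inspected, the helper below lists the hosts that hold each block of a file; the class and method names are ours, purely for illustration.

import java.io.IOException;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
    // Illustrative helper: print the hosts holding each block of a file.
    static void printBlockLocations(FileSystem fs, Path file) throws IOException {
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + ", length " + block.getLength()
                    + " -> " + String.join(", ", block.getHosts()));
        }
    }
}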

 

To install Hadoop with LizardFS:

1) Install and set up a LizardFS cluster.

2) Install Hadoop, but do not start it yet.

3) Install the LizardFS Hadoop plugin on all Hadoop nodes.

4) Configure the LizardFS plugin in Hadoop, either alongside HDFS or as a replacement for it (a configuration sketch follows below).

5) Start Hadoop.
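
Step 4 essentially tells Hadoop which FileSystem implementation should answer for the LizardFS URI scheme, and optionally makes it the default file system. Below is a minimal programmatic sketch; in a real deployment the same keys would normally go into core-site.xml. The property name, class name and URI used here are assumptions, so substitute the values documented with the plugin.

import org.apache.hadoop.conf.Configuration;

public class LizardFsConfigSketch {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        // Hypothetical property and class names -- use the ones shipped with the plugin.
        conf.set("fs.lizardfs.impl", "com.example.hadoop.lizardfs.LizardFileSystem");
        // Optional: make LizardFS the default file system. Omit this line to keep HDFS
        // as the default and address LizardFS with explicit lizardfs:// URIs instead.
        conf.set("fs.defaultFS", "lizardfs://master:9421/");
        return conf;
    }
}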

 

Let us know what you think of it.

Enjoy!

LizardFS @ NABSHOW 2018

 

Come visit our stand at NABSHOW in Las Vegas.

North Hall Central Lobby in the Startup Loft, Booth number: N2936SUL-B

LizardFS@NABSHOW (how to find us)

LizardFS enters the Big Data world with the release of the LizardFS plugin for Hadoop.

 

 

After many tests we have decided to release a pre-alpha, cutting-edge Hadoop connector for LizardFS.

You can download it from:

http://cr.skytechnology.pl:8081/#/c/3249/3

We are waiting for your feedback.

At the moment you will be required to build the binaries yourself.

Please bear in mind that we are not Hadoop experts, so we might have missed some test scenarios.

We really need help from the community side on this one; it is greatly appreciated and needed.

LizardFS@MWC18

Let us know if you will be at #MWC18

Would love to meet up and discuss storage

LizardFS at HPA Tech Retreat

Come visit us at our stand and see how #LizardFS can help you!

Find us in the Innovation Zone at the Hollywood Professional Association (HPA) Tech Retreat.

High Availability released to the open source community

As promised for some time now, we finally released our High Availability mechanism to our open source community. Enjoy and let us know what you think!

LizardFS at FOSDEM 2018

We have an even stronger presence this year.

You can meet us:

  • at our booth: building K, level 1, group C, and also
  • at our lightning talk, where you can see the grand premiere of the OpenNebula connector:

https://fosdem.org/2018/schedule/event/lizardfs_opennebula/

and

  • in the Software Defined Storage room, where our “Year in development” talk covers what was going on at LizardFS in 2017:

https://fosdem.org/2018/schedule/event/lizardfs/

Stay tuned: we will reveal a great surprise in just 4 days. The countdown starts now!

 

 

LizardFS 3.12 official release.

 

It is official now: after a few weeks of testing, with positive feedback from the community and our beta testers, LizardFS 3.12 is out.

With promising initial feedback such as “it’s a huge step forward” and “with RichACLs there is no doubt it is a great enterprise product”, we cannot wait to hear how your upgrades go.

Install it, run it and let us know.

We will be updating you with more details soon, so stay tuned.

Things that we improved:

Documentation: corrected the install guide and a few other details.

Things added:

A completely new Windows client with RichACL support, improved performance (200%) and better stability (we eliminated problems that appeared after one of the OS updates).

The rest looks just as great as before:

– C API

– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes

 

LizardFS 3.12.0 RC is out!

Release Candidate of 3.12.0

Please test it and let us know about any issues.

Description below.

Featuring:

 

– C API

– nfs-ganesha plugin

– RichACL – a new POSIX + NFSv4 compatible ACL standard

– OSX ACL support through osxfuse

– ACL in-memory deduplication

– file lock fixes

– AVX2 support for erasure code goals

– MinGW compilation fixes

– more flexible chunkserver options

– many fixes

Detailed info:

* C API *

LizardFS 3.12 comes with the liblizardfs-client library and a C-language API header.

It’s now possible to build programs/plugins with direct support for LizardFS operations, no FUSE needed. For reference, see:

src/mount/client/lizardfs_c_api.h

src/data/liblizardfs-client-example.c

 

For those building LizardFS from source, pass the -DENABLE_CLIENT_LIB=YES flag to cmake to make sure you’re building the client library as well.

* nfs-ganesha plugin *

 

Our official plugin for the Ganesha NFS server is included as well. It provides a LizardFS FSAL (File System Abstraction Layer) to Ganesha, which is then used to access LizardFS clusters directly. Our new plugin is pNFS and NFSv4.1 friendly.

 

For those building LizardFS from source, pass the -DENABLE_NFS_GANESHA=YES flag to cmake to make sure you’re building the plugin as well.

* RichACL *

In order to extend the POSIX access control list implementation, we introduced RichACL support.

Backward compatibility with POSIX ACLs is guaranteed. Additionally, it’s possible to use NFSv4-style ACL tools (nfs4_getfacl/nfs4_setfacl) and RichACL tools (getrichacl/setrichacl) to manage more complicated access control rules.

* OSX ACL *

Setting and getting ACLs is also possible on OSX, both via the command-line chmod/ls -e interface and from the desktop.

 

* File lock fixes *

The global file locking mechanism is now fully fixed and passes all NFS lock tests from the Connectathon suite.

 

* AVX2 *

Erasure code goal computing routines now take full advantage of AVX2 processor extensions.

 

* MinGW *

MinGW cross-compilation of LizardFS works cleanly again.

 

* Chunkserver options *

Replication limits are now fully configurable in the chunkserver config.

Also, the chunk test (a.k.a. scrubbing) now runs with 1 millisecond precision instead of the previous 1 second, which lets users turn on more aggressive scrubbing with a simple chunkserver reload.

 

https://github.com/lizardfs/lizardfs/releases/tag/v3.12.0-rc1