Tools in the big data ecosystem. Summarized from Quora

    • MapReduce is the paradigm that started it all, introduced by the Google paper (Map Reduce Paper). It is a model for writing distributed code inspired by elements of functional programming. Google's internal implementation is called MapReduce, and Hadoop is its open-source implementation (see the word-count sketch after this list).
    • HDFS (the Hadoop Distributed File System) is an implementation inspired by the Google File System (GFS): it stores files across a bunch of machines when they are too big for any single one. Hadoop consumes data stored in HDFS.
    • Apache Spark is an emerging platform that offers more flexibility than MapReduce but more structure than a bare message-passing interface. It is built around distributed data structures (which it calls RDDs) and operators over them (see the Spark word-count sketch after this list).
    • Because Spark itself is a relatively low-level engine, it has higher-level libraries to make it more accessible to data scientists: the machine learning library built on top of it is called MLlib, and there is a distributed graph library called GraphX.
    • Zookeeper is a coordination and synchronization service that helps a distributed set of computers make decisions by consensus, handle failures, and so on (see the distributed-lock sketch after this list).
    • Flume and Scribe are logging services: Flume is an Apache project and Scribe is an open-source Facebook project. Both aim to make it easy to collect huge amounts of logged data, analyze it, tail it, move it around, and store it in a distributed store.
    • Google BigTable and its open-source twin HBase were meant to be read-write distributed databases, originally built for the Google crawler, that sit on top of GFS/HDFS and MapReduce/Hadoop (see the Google Research publication on BigTable, and the HBase read/write sketch after this list).
    • Hive and Pig are abstractions on top of Hadoop designed to help with analysis of tabular data stored in a distributed file system (think of Excel sheets too big to store on one machine). They operate on top of a data warehouse, so the high-level idea is to dump data once and analyze it by reading and processing it, rather than by updating individual cells, rows, and columns.
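
To make the map/reduce paradigm concrete, here is a minimal single-process sketch in Python of the classic word-count example. This only illustrates the programming model; a real Hadoop job would express the same two functions through Hadoop's APIs and run them across many machines.

    from collections import defaultdict

    def map_phase(document):
        # The mapper emits a (word, 1) pair for every word it sees.
        for word in document.split():
            yield (word, 1)

    def reduce_phase(pairs):
        # The reducer receives all pairs, grouped by key (here with a
        # dict), and sums the counts for each word.
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    docs = ["the quick brown fox", "the lazy dog"]
    pairs = [p for doc in docs for p in map_phase(doc)]
    print(reduce_phase(pairs))  # {'the': 2, 'quick': 1, ...}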
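
The same word count written against Spark's RDD operators, as a sketch using the PySpark API (it assumes a local Spark installation, and input.txt is a hypothetical input file):

    from pyspark import SparkContext

    sc = SparkContext("local", "wordcount")

    # textFile returns an RDD; flatMap, map and reduceByKey are RDD operators.
    counts = (sc.textFile("input.txt")  # hypothetical input path
                .flatMap(lambda line: line.split())
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))

    print(counts.collect())
    sc.stop()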
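
As an illustration of the kind of coordination Zookeeper provides, a minimal sketch of a distributed lock using the third-party kazoo Python client (the ensemble address, lock path, and worker name are assumptions):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")  # assumed ZooKeeper address
    zk.start()

    # Only one client across the whole cluster holds this lock at a time.
    lock = zk.Lock("/app/some-lock", "worker-1")  # hypothetical path and name
    with lock:
        pass  # do work here that must not run concurrently

    zk.stop()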
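
And to show the row-keyed, read-write model HBase exposes, a minimal sketch with the third-party happybase client (it assumes a running HBase Thrift gateway; the table and column names are hypothetical):

    import happybase

    connection = happybase.Connection("localhost")  # assumed Thrift gateway
    table = connection.table("webpages")            # hypothetical table

    # Write one cell, then read the whole row back by its key.
    table.put(b"com.example/index", {b"content:html": b"<html>...</html>"})
    row = table.row(b"com.example/index")
    print(row[b"content:html"])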

    • Mahout (scalable machine learning and data mining) is a collection of machine learning libraries written in the MapReduce paradigm, specifically for Hadoop. Google has its own internal version, but they have not published a paper on it as far as I know.
    • Oozie is a workflow scheduler. The oversimplified description would be that it is the thing that puts together a pipeline of the tools described above. For example, you can write an Oozie script that scrapes your production HBase data into a Hive warehouse nightly, then has a Mahout script train on that data. At the same time, you might use Pig to pull the test set into another file, and when Mahout is done building a model, pass the testing data through it and get results. You specify the dependency graph of these tasks through Oozie (I may be messing up terminology, since I have never used Oozie but have used the Facebook equivalent).
    • Lucene is a bunch of search-related and NLP tools, but its core feature is a search index and retrieval system. It takes data from a store like HBase and indexes it for fast retrieval from a search query. Solr uses Lucene under the hood to provide a convenient REST API for indexing and searching data; ElasticSearch is similar to Solr (see the Solr sketch after this list).
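
To make the index-then-query flow concrete, a sketch that talks to Solr's REST API with Python's requests library (the core name, document id, and field names are hypothetical):

    import requests

    solr = "http://localhost:8983/solr/mycore"  # hypothetical core URL

    # Index one document, committing immediately so it becomes searchable.
    requests.post(solr + "/update?commit=true",
                  json=[{"id": "1", "title": "big data tools"}])

    # Query the index back over the same REST API.
    hits = requests.get(solr + "/select",
                        params={"q": "title:data", "wt": "json"}).json()
    print(hits["response"]["docs"])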

Reference:  https://www.quora.com/How-do-I-learn-big-data-technologies-1/answer/Abhinav-Sharma?srid=hbNG