Hadoop Tutorial provides an introduction to working with big data in Hadoop via the Hortonworks Sandbox, HCatalog, Pig, and Hive. Learn how to handle Big Data.

How does the master-slave architecture work in Hadoop?

The MapReduce framework consists of a single master JobTracker and multiple slaves; each cluster node runs one TaskTracker.
The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them, and re-executing failed tasks. The slaves execute the tasks as directed by the master.
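The scheduling and re-execution behavior described above can be sketched in a few lines of plain Python. This is not the Hadoop API; `job_tracker`, `flaky_tracker`, and `good_tracker` are hypothetical names used only to illustrate how a master retries a failed task on another slave:

```python
def job_tracker(tasks, task_trackers, max_attempts=3):
    """Schedule each task on a tracker, re-executing failed tasks (master role)."""
    results = {}
    for task_id, task in tasks.items():
        for attempt in range(max_attempts):
            tracker = task_trackers[attempt % len(task_trackers)]
            try:
                results[task_id] = tracker(task)  # slave executes the task
                break                             # success: stop retrying
            except Exception:
                continue                          # failure: master reschedules elsewhere
    return results

# Two hypothetical slaves: one that always fails, one that works.
def flaky_tracker(task):
    raise RuntimeError("node failure")

def good_tracker(task):
    return task * 2

tasks = {"t1": 10, "t2": 20}
print(job_tracker(tasks, [flaky_tracker, good_tracker]))
# {'t1': 20, 't2': 40}
```

Each task first lands on the flaky tracker, fails, and is transparently re-executed on the good one, which is exactly the fault-tolerance contract the master provides.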

What is MapReduce ?

MapReduce is a programming model for processing huge amounts of data in parallel. As the name suggests, the work is divided into a Map phase and a Reduce phase.
· A MapReduce job usually splits the input data-set into independent chunks (one big data set becomes many small data sets).
· Map task: processes these chunks in a completely parallel manner (one node can process one or more chunks).
· The framework sorts the outputs of the maps.
· Reduce task: takes the sorted map output as its input and produces the final result.

Your business logic is written in the Map task and the Reduce task.
Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
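The split/map/sort/reduce flow above can be sketched in plain Python (this is a minimal illustration of the concept, not the Hadoop API; the function names and the word-count logic are chosen for this example):

```python
from itertools import groupby

def map_task(chunk):
    # Emit (word, 1) pairs for one input chunk, like a WordCount mapper.
    return [(word, 1) for word in chunk.split()]

def reduce_task(key, values):
    # Sum the counts for one key, like a WordCount reducer.
    return (key, sum(values))

def run_job(chunks):
    # Map phase: each chunk is processed independently (parallel in Hadoop).
    mapped = [pair for chunk in chunks for pair in map_task(chunk)]
    # The framework sorts the map outputs and groups them by key.
    mapped.sort(key=lambda kv: kv[0])
    # Reduce phase: one reduce call per key produces the final result.
    return dict(reduce_task(k, [v for _, v in group])
                for k, group in groupby(mapped, key=lambda kv: kv[0]))

chunks = ["big data big", "data data"]   # the "independent chunks"
print(run_job(chunks))  # {'big': 2, 'data': 3}
```

The sort-and-group step in the middle is what Hadoop's shuffle phase does between the mappers and the reducers: it guarantees that all values for one key reach a single reduce call.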
