Simplified Data Processing for Large Clusters: A MapReduce and Hadoop Based Study

With the rapid development of computing technologies, data is growing at an ever-increasing rate. Data scientists are overwhelmed by this large and ever-growing volume of data, which demands more processing capacity. A central concern for large-scale data is supporting the decision-making process. In this study, the MapReduce programming model is applied, an associated implementation introduced by Google. The model expresses a computation as two functions, Map and Reduce; the MapReduce library automatically parallelizes the computation and handles complex tasks including data distribution, load balancing, and fault tolerance. Together with Hadoop, the open-source implementation of this model, the objective is to handle computation on large clusters of commodity machines. Our use of the MapReduce and Hadoop frameworks is aimed at processing terabytes and petabytes of storage across thousands of machines working in parallel at the same time. In this way, the processing and manipulation of big data are maintained with effective results. This study presents the basics of MapReduce programming and the open-source Hadoop framework. The Hadoop system can speed up the handling of big data and respond very quickly.


Introduction
With the introduction and advancement of computing technology, data is growing at an almost unimaginable rate. Data scientists and practitioners are overwhelmed by this large and ever-increasing amount of data, whose processing requirements grow more demanding every day. Such growth also brings problems of handling, processing, and management. Many fields face these problems when trying to make use of large-scale data, draw meaning from it, and apply it to decision making.
Data mining, data classification, handling, and processing are technologies that can open new ways of working with these large data sets. For many years, data mining techniques and their prerequisites have been studied in all applicable scenarios, driving the development of data mining methods and their practical application. Large-scale internet companies, including Google, Yahoo, Facebook, LinkedIn, and other major internet-solution providers, face many hurdles in processing huge volumes of data, not only within a minimal timeframe but also with cost-effective solutions.
Google developed MapReduce and the Google File System, which are studied and investigated in this research. Google has also built a database management system (DBMS) known as Bigtable. This system can search millions of pages and return results in milliseconds by employing algorithms that work through the MapReduce system and the Google File System [1].
In the recent past, MapReduce has established itself as a computing paradigm for the analysis of large amounts of data [2]. It gained fame when it became part of Google's database management system and the Google File System. MapReduce is a scalable, fault-tolerant data processing tool that can handle and process huge data sets while keeping the required number of computing nodes low [3].
Concerning how MapReduce works: a distributed file system (DFS) first partitions the data into multiple splits, and the data is then presented as key/value pairs. The MapReduce framework runs its functions on individual machines, where data may be preprocessed before the map function or postprocessed after the reduce function [4]. Hadoop, a popular open-source implementation of MapReduce for handling large datasets, employs a user-level filesystem to manage storage across the cluster [5]. This approach delivers reasonable speed while handling larger datasets across a large number of computing nodes, reducing application time by about 30% compared with ordinary data mining techniques [6].
The map function extracts a key from each record and emits the key together with the matching record as a pair; the reduce function can then, in the simplest case, emit all pairs unchanged.
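The canonical word-count computation illustrates the model. The following is a minimal sketch in plain Python, not the Hadoop API: function names are illustrative, and the distributed shuffle phase is simulated with an in-memory sort.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(record):
    # Map: emit an intermediate (word, 1) pair for every word in the record.
    for word in record.split():
        yield (word, 1)

def reduce_fn(key, values):
    # Reduce: sum all counts emitted for one key.
    return (key, sum(values))

def map_reduce(records):
    # Map phase: apply map_fn to every input record.
    pairs = [kv for rec in records for kv in map_fn(rec)]
    # Shuffle phase: group intermediate pairs by key (simulated by sorting).
    pairs.sort(key=itemgetter(0))
    # Reduce phase: one reduce call per distinct key.
    return dict(
        reduce_fn(key, (v for _, v in group))
        for key, group in groupby(pairs, key=itemgetter(0))
    )

counts = map_reduce(["the cat sat", "the cat ran"])
```

A real MapReduce library distributes the map and reduce calls across machines, but the data flow of the three phases is exactly the one shown here.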

Related Works
Seema Maitrey and her fellow researchers studied big data handling under the title "MapReduce: Simplified Data Analysis of Big Data".
Their study focuses on the MapReduce technique as applied in cloud-based technologies. A famous user of this technology is Google, which handles its data and processing in line with this model. They also discussed Hadoop, which is used by companies beyond Google, including Facebook and Yahoo. Their research verifies and assesses the analytical processing of data using Hadoop and MapReduce [7].
Another researcher, Jeffrey Dean, with his fellow researchers, studied the MapReduce framework, which has received a lot of attention for its application to big data. They describe it as a programming model and an associated implementation aimed at processing and handling large datasets, suitable for a wide variety of real-world tasks [8].

Richard M. Yoo and his colleagues studied scalable MapReduce on a large-scale shared-memory system, discussing how dynamic runtimes simplify parallel programming and automatically handle execution. They showed how a multi-layered approach, optimizing the algorithm, the implementation, and the OS interaction, delivers significant speedup improvements at 256 threads. They also identified the roadblocks that limit the scalability of such runtimes on shared-memory systems [9].
Kyong Lee and his colleagues discussed Google's MapReduce technique, which makes big data handling and processing simpler and smoother while minimizing cost. The main characteristic of the MapReduce model is its ability to process large data sets distributed across multiple nodes and channels [10].
B. Panda and his colleagues highlighted the MapReduce system and its big data applications at an international conference. They noted that MapReduce originated as a proprietary system of Google, and discussed how distributed computing is greatly simplified through the Map and Reduce functions, providing the basics and insights for achieving the desired performance [11].
Jeffrey Dean and his colleagues also discussed simplified data processing on large clusters with the MapReduce framework. They describe the supporting infrastructure of Google's MapReduce, which works with a distributed file system and enables the algorithms to locate data and make it available. In the opinion of programmers it is easy to use: more than ten thousand distinct MapReduce programs were implemented internally at Google over a four-year span [12].
Panda and his colleagues have also discussed massively parallel learning with the MapReduce framework. They highlighted combining the MapReduce programming technique with a distributed file system as a way to achieve distributed computing over thousands of computing nodes [11]. Jaliya Ekanayake and her colleagues discussed MapReduce for data-intensive scientific analyses. They examined the technique for its applicability to large parallel data analyses, presenting efficient parallel/concurrent algorithms that meet the scalability and performance requirements of scientific data processing [13].
Anam Alam and her colleagues discussed the Hadoop architecture and its issues at an international conference. Hadoop is categorized as a distributed framework used to handle large amounts of data and is typically used for data-intensive applications; with its extensive adoption, many social media sites make use of it [14].
R. Vijayakumari and her colleagues presented a comparative analysis of the Google File System and the Hadoop Distributed File System. They compared the two file systems in the context of distributed, parallel, and grid computing across parameters including design goals, processes, file management, scalability, protection, security, cache management, and replication [15].

Methodology
The methods used may be unfamiliar to a general audience. The first is MapReduce, which is oriented to programmers rather than business users. It has gained popularity due to its ease of application, efficiency, and ability to handle "Big Data" in a timely manner. The MapReduce framework, its application, and its programming model are discussed above, and an example of counting occurrences is employed with the MapReduce framework.

Hadoop
Another framework employed is Hadoop, which is implemented in Java.
It can also be used in two other ways: through the streaming API, which works with any executable over standard input and output, and through building Hadoop applications in C++. The Hadoop Distributed File System (HDFS) is the file system designed for use with MapReduce programs, and it is best suited to a small number of very large files. Through replication, HDFS keeps data available despite node failures.
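Under the streaming contract, the mapper and reducer each read lines from standard input and write tab-separated key/value lines to standard output, with the reducer receiving its input sorted by key. The sketch below is a hypothetical word-count pair in that style; file and argument names are illustrative.

```python
import sys

def mapper(lines):
    # Streaming mapper: emit "word<TAB>1" for each word in the input.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Streaming reducer: input arrives sorted by key, so counts for one
    # key are adjacent and can be summed in a single pass.
    current, total = None, 0
    for line in lines:
        key, value = line.rsplit("\t", 1)
        if key != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = key, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"

if __name__ == "__main__":
    # Pass "map" or "reduce" to select the stage when run under streaming.
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    stream = mapper if stage == "map" else reducer
    for out in stream(line.rstrip("\n") for line in sys.stdin):
        print(out)
```

The framework itself performs the sort between the two stages; here it would be simulated by sorting the mapper's output before feeding it to the reducer.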
To process the files created by the map phase, the reduce program accesses data across nodes. During execution, both the map and reduce programs write intermediate results to the local file system to avoid burdening HDFS. HDFS supports a multiple-readers, one-writer (MROW) approach. It has no indexing mechanism, so it is best suited to read-only applications that simply scan and read the contents of a file.

Hadoop Architecture
The Hadoop Distributed File System stores data on the cluster's computing nodes, providing high aggregate bandwidth across the entire cluster. An HDFS installation has a single name node, called the master node, and multiple data nodes, called slave nodes. The name node is responsible for managing the file system namespace and controls clients' access to files. The data nodes are distributed one per machine in the cluster, managing the storage attached to the machines on which they run. The name node executes operations on the file system namespace and maps data blocks to data nodes; the data nodes serve read and write requests from clients and perform block operations as instructed [16]. HDFS handles data in chunks and replicates these chunks across servers for performance, load balancing, and resiliency. An application can specify the number of replicas of a file when it is created, and this count can be changed at any time afterward. The name node makes all decisions concerning block replication.

Deploying Hadoop
Hadoop can be deployed in three different ways. The first is standalone mode, the default, in which Hadoop runs as a single Java process. The second is pseudo-distributed mode, in which Hadoop is configured to run on a single machine, with each Hadoop process running as a separate Java process. The third is fully distributed or cluster mode, in which one machine serves as the name node and another as the job tracker. A secondary name node may also be run, performing periodic handshakes with the name node for fault tolerance.

Replication Management
HDFS provides a reliable way to store huge data sets in a distributed environment as data blocks. The blocks are also replicated to provide fault tolerance. The default replication factor is 3, and it is configurable. As the figure below shows, each block is replicated three times and stored on different DataNodes (assuming the default replication factor):
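The placement idea can be sketched in a few lines. The function below is a simplified, hypothetical stand-in for HDFS's placement policy (which is in reality rack-aware): it assigns each block to three distinct data nodes in round-robin order. Block and node names are illustrative.

```python
import itertools

def place_replicas(blocks, datanodes, replication=3):
    # Assign each block to `replication` distinct data nodes, round-robin,
    # so that no node ever holds two copies of the same block.
    assert replication <= len(datanodes), "need at least as many nodes as replicas"
    ring = itertools.cycle(datanodes)
    return {block: [next(ring) for _ in range(replication)] for block in blocks}

plan = place_replicas(["blk_1", "blk_2"], ["dn1", "dn2", "dn3", "dn4"])
```

Because the replication factor never exceeds the node count, the round-robin walk guarantees the three replicas of each block land on three different nodes, so any single node failure leaves two copies available.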

Hadoop Based Oozie Structure and Implementation
Apache Oozie organizes and manages all of these tasks, so it can be regarded as a scheduler for Hadoop. It provides workflows of dependent jobs, expressed as Directed Acyclic Graphs (DAGs), which allow jobs or tasks to run in parallel and sequentially in Hadoop. An Oozie workflow contains both action nodes and control-flow nodes. An action node represents a workflow task, such as moving files into HDFS, running a MapReduce job, or running a shell script or a Java program. A control-flow node controls the execution path between action nodes, for example by forking and joining parallel branches.
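The dependency structure of such a workflow can be modeled directly as a DAG. The sketch below uses Python's standard-library topological sorter; the node names are hypothetical and this is a model of the scheduling idea, not Oozie's actual XML workflow definition.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical workflow: each node maps to the set of nodes it depends on.
workflow = {
    "stage_input":   set(),                            # action: copy files into HDFS
    "mapreduce_job": {"stage_input"},                  # action: run a MapReduce job
    "shell_script":  {"stage_input"},                  # action: may run in parallel
    "join":          {"mapreduce_job", "shell_script"},  # control-flow: wait for both
}

# A valid execution order: every node appears after all of its dependencies.
order = list(TopologicalSorter(workflow).static_order())
```

In a real run, `mapreduce_job` and `shell_script` could execute concurrently once `stage_input` finishes, which is exactly the parallelism the DAG encodes.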

Results and Discussions
Big data and its requisite technologies can bring about significant changes and benefits to a business. However, with the increased and widespread use of these technologies, it can become difficult for an organization to manage, control, and exploit a heterogeneous collection of data and obtain the desired outcomes.
To sustain the growth of individual companies, certain practices should be followed so that timely results can be obtained from Big Data, since effective use of Big Data drives modernization and efficiency across entire divisions and economies. Organizations therefore need to ensure the effective use, management, and re-use of data sources, including public data, to build applications. There is also a need to evaluate the best approach for filtering and analyzing the data. For optimized processing, Hadoop with MapReduce can be employed, as we have done in this paper with the basics of MapReduce programming and the open-source Hadoop framework. The Hadoop framework can speed up the processing of big data and respond very quickly. The extensibility and simplicity of these frameworks are the critical factors that make them compelling tools for big data handling, processing, and management.

Conclusion
In this study the MapReduce programming model was applied, an associated implementation introduced by Google. The model expresses a computation as two functions: Map and Reduce.
Hadoop comprises an ecosystem of tools and technologies that requires careful analysis and expertise to determine a suitable mapping of technologies and enable a smooth migration.
Hadoop is a highly scalable platform, largely because of its ability to store and distribute large data sets across many servers. The servers used are inexpensive and operate in parallel, and the processing power of the system can be improved simply by adding more servers.
The Hadoop MapReduce programming model offers the flexibility to process structured or unstructured data, allowing business organizations to operate on different types of data. They can thereby extract business value from data that is meaningful and beneficial for analysis.