If MapReduce jobs fail to run because YARN has too little RAM

If MapReduce jobs cannot be launched because YARN has too little memory allocated, update “yarn-site.xml” with the properties below and restart YARN.


<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20480</value>
</property>
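If the MapReduce container requests are larger than the maximum allocation above, jobs will still be refused, so it can also help to size them to fit. The snippet below is a minimal sketch for “mapred-site.xml”; the property names are standard Hadoop 2 settings, but the values are illustrative assumptions, not taken from this post:

<!-- container sizes must fall between the YARN minimum and maximum allocation above -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>
<!-- JVM heap kept at roughly 80% of each container size -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx410m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx820m</value>
</property>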


How to change the hostname in Ubuntu

  1. Open the “/etc/hostname” file in a text editor
  2. command: sudo gedit /etc/hostname
  3. Replace the existing hostname with the new one
  4. Open the “/etc/hosts” file in a text editor
  5. command: sudo gedit /etc/hosts
  6. Replace the existing hostname with the new one there as well
  7. Run the command below to refresh the hostname
  8. command: sudo service hostname restart
  9. Re-open the terminal to verify the change
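The same change can also be scripted; this is a minimal non-interactive sketch, assuming the old name is "oldhost" and the new name is "newhost" (both placeholders):

sudo sed -i 's/oldhost/newhost/g' /etc/hostname    # edit the hostname file in place
sudo sed -i 's/oldhost/newhost/g' /etc/hosts       # keep /etc/hosts consistent
sudo service hostname restart                      # apply the new name
hostname                                           # should now print newhost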

 

 


Hadoop Interview Questions & Answers in Week1 : Kalyan Hadoop Training in Hyderabad


Let's play with the Week 1 interview questions & answers in Hadoop. Kalyan Hadoop Training in Hyderabad.

12 KEY STEPS TO KEEP YOUR HADOOP CLUSTER RUNNING STRONG AND PERFORMING OPTIMALLY

1. Hadoop deployment on a 64-bit OS:

  • A 32-bit OS has a ~3GB limit on the Java heap size, so make sure the Hadoop NameNode/DataNode is running on a 64-bit OS.

 2. Mapper and Reducer count setup:

This is a cluster-specific value and reflects the total number of mappers and reducers per TaskTracker.
conf/mapred-site.xml: mapred.tasktracker.map.tasks.maximum = N (the maximum number of map task slots to run simultaneously)
conf/mapred-site.xml: mapred.tasktracker.reduce.tasks.maximum = N (the maximum number of reduce task slots to run simultaneously)

If no value is set, the default is 2; a value of -1 specifies that the number of map/reduce task slots is based on the total amount of memory reserved for MapReduce by the sysadmin.

To set this value, take the TaskTracker's CPU (with or without hyper-threading), disk, and memory into account, along with how CPU-intensive the job is, say on a scale of 1-10. For example, if the TaskTracker is a quad-core box with hyper-threading, there will be 4 physical and 4 virtual cores, 8 CPUs in total. For a highly CPU-intensive job we can certainly assign 4 mapper and 4 reducer tasks, whereas for a far less CPU-intensive job we can have up to 40 mappers and 40 reducers.

The mapper and reducer counts do not need to be the same; it all depends on how the jobs are created. We might have 6 mappers and 2 reducers, depending on how much work is done by each mapper and reducer, and to get this information we can look at the job-specific counters. The number of mappers and reducers per TaskTracker depends on the CPU utilization per task. You can also look at each reduce task's counters to see how long the CPU was utilized out of the total map/reduce task time. If there are long waits, you may need to reduce the count; if everything finishes very quickly, that is a hint that the mapper or reducer count per TaskTracker can be increased.

Users must understand that having a larger mapper count than physical CPU cores will result in CPU context switching, which may slow overall job completion, whereas a configuration balanced per CPU tends to complete jobs faster.
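For instance, the 4-mapper/4-reducer case above would look roughly like this in mapred-site.xml (a sketch only; the values follow the example, not a recommendation for every cluster):

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>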

3. Per Task JVM Memory Configuration:

This particular memory configuration should be set based on the total RAM in each TaskTracker.
conf/mapred-site.xml: mapred.child.java.opts = -Xmx{YOUR_VALUE}M (larger heap size for the child JVMs of maps/reduces)

The value of this parameter depends on the total number of mapper and reducer tasks per TaskTracker, so you must know those two numbers before setting it. Here are a few ways to calculate proper values:

  • Let's say there are 4 mappers and 4 reducers per TaskTracker, with 32GB total RAM in each machine (the resulting setting is sketched after this list)
    • In this scenario there will be 8 tasks in total running on any TaskTracker
    • Assume about 2-4GB RAM is required by the TaskTracker for other work, leaving roughly ~28GB RAM available for Hadoop tasks
    • Dividing 28/8 gives 3.5GB of RAM per task
    • The value in this case will be -Xmx3500M
  • Let's say there are 8 mappers and 4 reducers per TaskTracker, with 32GB total RAM
    • In this scenario there will be 12 tasks in total running on any TaskTracker
    • Assume about 2-4GB RAM is required by the TaskTracker for other work, leaving roughly ~28GB RAM available for Hadoop tasks
    • Dividing 28/12 gives about 2.3GB of RAM per task
    • The value in this case will be -Xmx2300M
  • Let's say there are 12 mappers and 8 reducers per TaskTracker, with 128GB total RAM, and one specific node also acting as Secondary NameNode
    • It is not recommended to co-locate the Secondary NameNode with a DataNode/TaskTracker, but we keep it here for the sake of the calculation.
    • In this scenario there will be 20 tasks in total running on any TaskTracker
    • Assume about 8GB RAM is required by the Secondary NameNode for its work and 4GB for other jobs, leaving roughly ~100GB RAM available for Hadoop tasks
    • Dividing 100/20 gives 5GB of RAM per task
    • The value in this case will be around -Xmx5000M
  • Note:
    • HDP 1.2 has some new JVM-specific configuration that can be used for much more granular memory settings.
    • If the Hadoop cluster does not have machines with identical memory (e.g., a mix of 32GB and 64GB machines), use the lower memory configuration as the baseline.
    • It is always best to leave ~20% of memory for other processes.
    • Do not overcommit the memory for the total tasks; it will surely cause JVM OOM errors.
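As a concrete illustration of the first calculation above (8 tasks, ~28GB usable), the resulting mapred-site.xml entry would look like this (a sketch; 3500M is the example value, not a universal recommendation):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx3500M</value>
</property>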

4. Setting mapper or reducer memory limit to unlimited:

Setting both mapred.job.{map|reduce}.memory.mb values to -1 (or to the maximum) lets MapReduce jobs use the maximum amount of memory available.
mapred.job.map.memory.mb = -1 (this property's value sets the virtual memory size of a single map task for the job)
mapred.job.reduce.memory.mb = -1 (this property's value sets the virtual memory size of a single reduce task for the job)
5. Setting no limit (or maximum) for the total number of tasks per job:
Setting this value to a fixed limit puts constraints on MapReduce job completion and performance. It is best to set it to -1 so jobs can use the maximum available.
mapred.jobtracker.maxtasks.per.job = -1 (set this property to any positive integer to cap the number of tasks for a single job; the default value of -1 indicates that there is no maximum)

 6. Memory configuration for sorting data within processes:

There are two values in this segment, io.sort.factor and io.sort.mb. Based on experience, io.sort.mb should be about 25-30% of the mapred.child.java.opts value.
conf/core-site.xml: io.sort.factor = 100 (more streams merged at once while sorting files)
conf/core-site.xml: io.sort.mb = NNN (higher memory limit while sorting data)
For example, if mapred.child.java.opts is 2GB, io.sort.mb can be 500MB; if mapred.child.java.opts is 3.5GB, io.sort.mb can be 768MB.
Also, after running a few MapReduce jobs, analyzing the log messages will help you determine a better setting for the io.sort.mb memory size. Be aware that a low io.sort.mb will make the sort phase take much longer, while too high a value may cause job failures.
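Following the 3.5GB example above, a core-site.xml sketch of the corresponding settings (the values are only the illustrative ones from the text):

<property>
  <name>io.sort.factor</name>
  <value>100</value>
</property>
<property>
  <name>io.sort.mb</name>
  <value>768</value> <!-- roughly 25-30% of a 3.5GB child JVM heap -->
</property>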

7. Reducer Parallel copies configuration:

A large number of parallel copies causes high memory utilization and can cause Java heap errors, whereas a small number causes slow job completion. Keeping this value at an optimum helps MapReduce jobs complete faster.
conf/mapred-site.xml: mapred.reduce.parallel.copies = 20 (the default number of parallel transfers run by reduce during the copy/shuffle phase; a higher number of parallel copies lets reducers fetch outputs from a very large number of maps)
This value is very network-specific. A larger value means more network activity between TaskTrackers: with more parallel reduce copies, reducers open many network connections, which can congest the network in a Hadoop cluster, while a lower number helps keep network connectivity stable. Users should choose this number depending on their network strength; I think the recommended value is between 12 and 18 on a gigabit network.

8. Setting Reducer Input limit to maximum:

Sometimes setting a low limit on the reducer input size can cause job failures. It is best to set the reducer input limit to the maximum.
conf/mapred-site.xml: mapreduce.reduce.input.limit = -1 (the limit on the input size of the reduce; if the estimated input size of the reduce is greater than this value, the job fails, and a value of -1 means that there is no limit set)

This value depends on disk size and available space in the TaskTracker, so in a cluster where the DataNodes vary in configured disk space, setting a specific value may cause job failures. Setting it to -1 lets reducers work based on the available space.

9. Setting Map input split size:

During MapReduce job execution, map tasks are created per input split. Setting the minimum split size to 0 lets the JobTracker decide the split size based on the data source.
mapred.min.split.size = 0 (the minimum size chunk that map input should be split into; file formats with minimum split sizes take priority over this setting)

 10. Setting HDFS block size:

  • I have seen various Hadoop clusters running well with a variety of HDFS block sizes.
  • A user can set dfs.block.size in hdfs-site.xml to anything between 64MB and 1GB or more (a minimal sketch follows).
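A minimal hdfs-site.xml sketch, assuming a 128MB block size (the value is just one illustration within the 64MB-1GB range mentioned above):

<property>
  <name>dfs.block.size</name>
  <value>134217728</value> <!-- 128MB expressed in bytes -->
</property>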

11. Setting user priority to “High” in the Hadoop cluster:

  • In Hadoop clusters, jobs are submitted based on user priority if a suitable job scheduler is configured
  • If a Hadoop user has lower priority, their mapper and reducer tasks will have to wait longer for task slots in the TaskTracker, which can ultimately make MapReduce jobs take longer.
    • In some cases a timeout could occur and the MapReduce job may fail
  • If a job scheduler is configured, submitting the job as a user with high scheduling priority will result in faster job completion in a Hadoop cluster.
 12. Secondary Namenode or Highly Available Namenode Configuration:
  • Having a Secondary NameNode or a highly available NameNode helps keep the Hadoop cluster always/highly available.
  • However, I have seen cases where the Secondary NameNode or HA NameNode runs on a DataNode, which can hurt cluster performance.
  • Keeping the Secondary NameNode or highly available NameNode separate from the DataNode/JobTracker keeps dedicated resources available for the tasks assigned to the TaskTracker.


Hadoop-2.6.0 Cluster Commissioning and Decommissioning Nodes

To add new nodes to the cluster:

1. Add the network addresses of the new nodes to the include file.

hdfs-site.xml
<property>
<name>dfs.hosts</name>
<value>/<hadoop-home>/conf/include</value>
</property>


yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>/<hadoop-home>/conf/include</value>
</property>


Datanodes that are permitted to connect to the NameNode are specified in a
file whose name is specified by the dfs.hosts property.


The include file resides on the NameNode's local filesystem, and it contains a line for each DataNode, specified by network address (as reported by the DataNode; you can see what this is by looking at the NameNode's web UI).
If you need to specify multiple network addresses for a DataNode, put them on one line, separated by whitespace.
eg :
slave01
slave02
slave03
…..

Similarly, NodeManagers that may connect to the ResourceManager are specified in a file whose name is specified by the yarn.resourcemanager.nodes.include-path property. 
In most cases, there is one shared file, referred to as the include file, that both dfs.hosts and  yarn.resourcemanager.nodes.include-path refer to, since nodes in the cluster run both DataNode and NodeManager daemons.
2. Update the NameNode with the new set of permitted DataNodes using this
command:
hdfs dfsadmin -refreshNodes

3. Update the ResourceManager with the new set of permitted NodeManagers using this command:
yarn rmadmin -refreshNodes

4. Update the slaves file with the new nodes, so that they are included in future
operations performed by the Hadoop control scripts.
5. Start the new DataNodes and NodeManagers.

6. Check that the new DataNodes and NodeManagers appear in the web UI.


To remove nodes from the cluster:

1. Add the network addresses of the nodes to be decommissioned to the exclude file. Do not update the include file at this point.
hdfs-site.xml
<property>
<name>dfs.hosts.exclude</name>
<value>/<hadoop-home>/conf/exclude</value>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>/<hadoop-home>/conf/exclude</value>
</property>
The decommissioning process is controlled by an exclude file, which for HDFS is set by the dfs.hosts.exclude property and for YARN by the yarn.resourcemanager.nodes.exclude-path property. It is often the case that these properties refer to the same file. The exclude file lists the nodes that are not permitted to connect to the cluster.
2. Update the NameNode with the new set of permitted DataNodes, using this
command:
hdfs dfsadmin -refreshNodes

3. Update the ResourceManager with the new set of permitted NodeManagers using this command:
yarn rmadmin -refreshNodes

4. Go to the web UI and check whether the admin state has changed to “Decommission In Progress” for the DataNodes being decommissioned. They will start copying their blocks to other DataNodes in the cluster.
5. When all the DataNodes report their state as “Decommissioned,” all the blocks have been replicated. Shut down the decommissioned nodes. (A command-line check is sketched after this procedure.)
6. Remove the nodes from the include file, and run:
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes
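As an alternative to the web UI in steps 4 and 5, the decommission state can also be read from the command line; a minimal sketch (the grep filter is only an illustration of what to look for in the report output):

hdfs dfsadmin -report | grep "Decommission Status"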

Note: Refer to Hadoop-1.2.1 Cluster Commissioning and Decommissioning Nodes at this link.




Introduction to Apache Spark with Examples and Use Cases

Today, Spark is being adopted by major players like Amazon, eBay, and Yahoo! Many organizations run Spark on clusters with thousands of nodes. According to the Spark FAQ, the largest known cluster has over 8000 nodes. Indeed, Spark is a technology well worth taking note of and learning about.
This article provides an introduction to Spark including use cases and examples. It contains information from the Apache Spark website as well as the book Learning Spark – Lightning-Fast Big Data Analysis.

What is Apache Spark? An Introduction

Spark is an Apache project advertised as “lightning fast cluster computing”. It has a thriving open-source community and is the most active Apache project at the moment.
Spark provides a faster and more general data processing platform. Spark lets you run programs up to 100x faster in memory, or 10x faster on disk, than Hadoop. Last year, Spark overtook Hadoop by completing the 100 TB Daytona GraySort contest 3x faster on one tenth the number of machines, and it also became the fastest open source engine for sorting a petabyte.
Spark also makes it possible to write code more quickly, as you have over 80 high-level operators at your disposal. To demonstrate this, let’s have a look at the “Hello World!” of Big Data: the Word Count example. Written in Java for MapReduce it has around 50 lines of code, whereas in Spark (and Scala) you can do it as simply as this:
sparkContext.textFile("hdfs://...")
            .flatMap(line => line.split(" "))
            .map(word => (word, 1)).reduceByKey(_ + _)
            .saveAsTextFile("hdfs://...")
Another important aspect when learning how to use Apache Spark is the interactive shell (REPL) which it provides out of the box. Using the REPL, one can test the outcome of each line of code without first needing to code and execute the entire job. The path to working code is thus much shorter, and ad-hoc data analysis is made possible.
Additional key features of Spark include:
  • Currently provides APIs in Scala, Java, and Python, with support for other languages (such as R) on the way
  • Integrates well with the Hadoop ecosystem and data sources (HDFS, Amazon S3, Hive, HBase, Cassandra, etc.)
  • Can run on clusters managed by Hadoop YARN or Apache Mesos, and can also run standalone
The Spark core is complemented by a set of powerful, higher-level libraries which can be seamlessly used in the same application. These libraries currently include SparkSQL, Spark Streaming, MLlib (for machine learning), and GraphX, each of which is further detailed in this article. Additional Spark libraries and extensions are currently under development as well.
[Image: Spark libraries and extensions]

Spark Core

Spark Core is the base engine for large-scale parallel and distributed data processing. It is responsible for:
  • memory management and fault recovery
  • scheduling, distributing and monitoring jobs on a cluster
  • interacting with storage systems
Spark introduces the concept of an RDD (Resilient Distributed Dataset), an immutable, fault-tolerant, distributed collection of objects that can be operated on in parallel. An RDD can contain any type of object and is created by loading an external dataset or distributing a collection from the driver program.
RDDs support two types of operations:
  • Transformations are operations (such as map, filter, join, union, and so on) that are performed on an RDD and which yield a new RDD containing the result.
  • Actions are operations (such as reduce, count, first, and so on) that return a value after running a computation on an RDD.
Transformations in Spark are “lazy”, meaning that they do not compute their results right away. Instead, they just “remember” the operation to be performed and the dataset (e.g., a file) on which it is to be performed. The transformations are only actually computed when an action is called, and the result is returned to the driver program. This design enables Spark to run more efficiently. For example, if a big file is transformed in various ways and passed to a first() action, Spark only processes and returns the result for the first line, rather than doing the work for the entire file.
By default, each transformed RDD may be recomputed each time you run an action on it. However, you may also persist an RDD in memory using the persist or cache method, in which case Spark will keep the elements around on the cluster for much faster access the next time you query it.
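A small Scala sketch of this behaviour (the HDFS path is a placeholder and sc is an existing SparkContext): nothing is read until the action on the last line runs, and cache() keeps the filtered RDD in memory for reuse.

val lines  = sc.textFile("hdfs://...")            // transformation: nothing is read yet
val errors = lines.filter(_.contains("ERROR"))    // another lazy transformation
errors.cache()                                    // ask Spark to keep this RDD in memory
val firstError = errors.first()                   // action: computation actually happens here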

SparkSQL

SparkSQL is a Spark component that supports querying data either via SQL or via the Hive Query Language. It originated as the Apache Hive port to run on top of Spark (in place of MapReduce) and is now integrated with the Spark stack. In addition to providing support for various data sources, it makes it possible to weave SQL queries with code transformations, which results in a very powerful tool. Below is an example of a Hive-compatible query:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)

Spark Streaming

Spark Streaming supports real-time processing of streaming data, such as production web server log files (e.g., via Apache Flume and HDFS/S3), social media like Twitter, and various messaging queues like Kafka. Under the hood, Spark Streaming receives the input data streams and divides the data into batches. Next, the batches are processed by the Spark engine to generate the final stream of results, in batches, as depicted below.
[Image: Spark Streaming]
The Spark Streaming API closely matches that of the Spark Core, making it easy for programmers to work in the worlds of both batch and streaming data.
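A minimal Scala word-count sketch over a socket stream; the application name, the localhost:9999 source, and the 10-second batch interval are all assumptions for illustration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("NetworkWordCount")
val ssc = new StreamingContext(conf, Seconds(10))     // 10-second batches
val lines = ssc.socketTextStream("localhost", 9999)   // one input DStream from a socket
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()                                        // print each batch's counts
ssc.start()
ssc.awaitTermination()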

MLlib

MLlib is a machine learning library that provides various algorithms designed to scale out on a cluster for classification, regression, clustering, collaborative filtering, and so on (check out Toptal’s article on machine learning for more information on that topic). Some of these algorithms also work with streaming data, such as linear regression using ordinary least squares or k-means clustering (and more are on the way). Apache Mahout (a machine learning library for Hadoop) has already turned away from MapReduce and joined forces with Spark MLlib.
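For example, a k-means sketch with MLlib; the input path, the choice of k = 2, and 20 iterations are assumptions for illustration:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// parse whitespace-separated numeric features into MLlib vectors
val data = sc.textFile("hdfs://...")
             .map(line => Vectors.dense(line.split(" ").map(_.toDouble)))
data.cache()
val model = KMeans.train(data, 2, 20)          // k = 2 clusters, 20 iterations
println(model.clusterCenters.mkString(", "))   // inspect the learned centers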
source: http://www.toptal.com/spark/introduction-to-apache-spark

Interview Questions & Answers on Apache Spark [Part 3]

1.What is Apache Spark?
Spark is a fast, easy-to-use and flexible data processing framework. It has an advanced execution engine supporting cyclic data  flow and in-memory computing. Spark can run on Hadoop, standalone or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, Cassandra and others.

2.Explain key features of Spark.

  • Allows Integration with Hadoop and files included in HDFS.
  • Spark has an interactive language shell as it has an independent Scala (the language in which Spark is written) interpreter
  • Spark consists of RDD’s (Resilient Distributed Datasets), which can be cached across computing nodes in a cluster.
  • Spark supports multiple analytic tools that are used for interactive query analysis, real-time analysis and graph processing

3.Define RDD.
RDD is the acronym for Resilient Distributed Datasets – a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDD:

  • Parallelized collections: existing RDDs running in parallel with one another
  • Hadoop datasets: perform a function on each file record in HDFS or another storage system

4.What does a Spark Engine do?
Spark Engine is responsible for scheduling, distributing and monitoring the data application across the cluster.

5.Define Partitions?
As the name suggests, a partition is a smaller, logical division of data, similar to a ‘split’ in MapReduce. Partitioning is the process of deriving logical units of data to speed up processing. Everything in Spark is a partitioned RDD.

6.What operations do RDDs support?

  • Transformations
  • Actions

7.What do you understand by Transformations in Spark?
Transformations are functions applied to an RDD, resulting in another RDD. They do not execute until an action occurs. map() and filter() are examples of transformations: the former applies the function passed to it to each element of the RDD and results in another RDD, while filter() creates a new RDD by selecting elements from the current RDD that pass the function argument.
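A short Scala illustration of both transformations (assuming sc is an existing SparkContext):

val nums    = sc.parallelize(Seq(1, 2, 3, 4, 5))
val doubled = nums.map(_ * 2)        // transformation: applies the function to every element
val evens   = doubled.filter(_ > 4)  // transformation: keeps only elements passing the predicate
// nothing has executed yet; both RDDs are just recorded lineage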

8. Define Actions.
An action helps bring the data back from an RDD to the local machine. An action's execution is the result of all previously created transformations. reduce() is an action that applies the function passed to it repeatedly until only one value is left. take() is an action that brings a number of elements from the RDD to the local node.
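A short Scala illustration (again assuming an existing SparkContext sc); the actions below actually trigger the computation:

val nums  = sc.parallelize(Seq(1, 2, 3, 4, 5))
val total = nums.reduce(_ + _)   // action: folds the elements into a single value (15 here)
val first = nums.take(2)         // action: returns the first two elements to the driver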

9.Define functions of SparkCore.
Serving as the base engine, SparkCore performs various important functions like memory management, monitoring jobs, fault-tolerance, job scheduling and interaction with storage systems.

10.What is RDD Lineage?
Spark does not replicate data in memory, so if any data is lost it is rebuilt using RDD lineage. RDD lineage is the process that reconstructs lost data partitions. The best part is that an RDD always remembers how to be rebuilt from other datasets.

11.What is Spark Driver?
The Spark Driver is the program that runs on the master node of the cluster and declares transformations and actions on data RDDs. In simple terms, the driver in Spark creates the SparkContext, connected to a given Spark Master.

The driver also delivers the RDD graphs to the Master, where the standalone cluster manager runs.

12.What is Hive on Spark?
Hive contains significant support for Apache Spark, wherein Hive execution can be configured to use Spark as its engine:
hive> set spark.home=/location/to/sparkHome;
hive> set hive.execution.engine=spark;

Hive on Spark supports Spark on yarn mode by default.

13.Name commonly-used Spark Ecosystems.

  • Spark SQL (Shark)- for developers
  • Spark Streaming for processing live data streams
  • GraphX for generating and computing graphs
  • MLlib (Machine Learning Algorithms)
  • SparkR to promote R Programming in Spark engine.

14.Define Spark Streaming.
Spark supports stream processing through Spark Streaming, an extension to the Spark API that allows processing of live data streams. Data from different sources like Flume and HDFS is streamed in and then processed out to file systems, live dashboards, and databases. It is similar to batch processing, as the input data is divided into batches.

15.What is GraphX?
Spark uses GraphX for graph processing to build and transform interactive graphs. The GraphX component enables programmers to reason about structured data at scale.

16.What does MLlib do?
MLlib is the scalable machine learning library provided by Spark. It aims to make machine learning easy and scalable with common learning algorithms and use cases like clustering, regression, filtering, dimensionality reduction, and the like.

17.What is Spark SQL?
Spark SQL, formerly known as Shark, is a module introduced in Spark to work with structured data and perform structured data processing. Through this module, Spark executes relational SQL queries on the data. The core of the component supports a different kind of RDD called a SchemaRDD, composed of row objects and schema objects defining the data type of each column in the row. It is similar to a table in a relational database.

18.What is a Parquet file?
Parquet is a columnar format supported by many other data processing systems. Spark SQL performs both read and write operations with Parquet files and considers it one of the best big data analytics formats so far.

19.What file systems does Spark support?
• Hadoop Distributed File System (HDFS)
• Local File system
• S3

20.What is Yarn?
Similar to Hadoop, YARN is one of the key features in Spark, providing a central resource management platform to deliver scalable operations across the cluster. Running Spark on YARN necessitates a binary distribution of Spark that is built with YARN support.

21.List the functions of Spark SQL.
Spark SQL is capable of:
• Loading data from a variety of structured sources
• Querying data using SQL statements, both inside a Spark program and from external tools that connect to Spark SQL through standard database connectors (JDBC/ODBC). For instance, using business intelligence tools like Tableau
• Providing rich integration between SQL and regular Python/Java/Scala code, including the ability to join RDDs and SQL tables, expose custom functions in SQL, and more

22.What are benefits of Spark over MapReduce?

  • Due to in-memory processing, Spark executes processing around 10-100x faster than Hadoop MapReduce, whereas MapReduce uses persistent storage for its data processing tasks.
  • Unlike Hadoop, Spark provides built-in libraries to perform multiple kinds of tasks from the same core, like batch processing, streaming, machine learning, and interactive SQL queries, whereas Hadoop only supports batch processing.
  • Hadoop is highly disk-dependent, whereas Spark promotes caching and in-memory data storage
  • Spark is capable of performing computations multiple times on the same dataset; this is called iterative computation, while there is no iterative computing implemented by Hadoop.

23.Is there any benefit of learning MapReduce, then?
Yes, MapReduce is a paradigm used by many big data tools including Spark as well. It is extremely relevant to use MapReduce when the data grows bigger and bigger. Most tools like Pig and Hive convert their queries into MapReduce phases to optimize them better.

24.What is Spark Executor?
When the SparkContext connects to a cluster manager, it acquires executors on nodes in the cluster. Executors are Spark processes that run computations and store the data on the worker nodes. The final tasks from the SparkContext are transferred to the executors for execution.

25.Name types of Cluster Managers in Spark.
The Spark framework supports three major types of Cluster Managers:

  • Standalone: a basic manager to set up a cluster
  • Apache Mesos: generalized/commonly-used cluster manager, also runs Hadoop MapReduce and other applications
  • Yarn: responsible for resource management in Hadoop

26.What do you understand by worker node?
Worker node refers to any node that can run the application code in a cluster.

27.What is PageRank?
A unique feature and algorithm in GraphX, PageRank measures the importance of each vertex in a graph. For instance, an edge from u to v represents an endorsement of v's importance by u. In simple terms, if a user on Instagram is followed massively, that user will rank highly on that platform.
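With GraphX this is a one-liner once a graph is loaded; a Scala sketch assuming an edge-list file (the path and the 0.0001 tolerance are illustrative):

import org.apache.spark.graphx.GraphLoader

val graph = GraphLoader.edgeListFile(sc, "hdfs://...")  // each line: sourceId destId
val ranks = graph.pageRank(0.0001).vertices             // run PageRank to the given tolerance
ranks.take(5).foreach(println)                          // (vertexId, rank) pairs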

28.Do you need to install Spark on all nodes of Yarn cluster while running Spark on Yarn?
No, because Spark runs on top of YARN.

29.Illustrate some demerits of using Spark.
Since Spark utilizes more storage space compared to Hadoop MapReduce, certain problems may arise. Developers need to be careful when running their applications on Spark. Instead of running everything on a single node, the work should be distributed over multiple clusters.

30.How to create RDD?
Spark provides two methods to create RDD:
• By parallelizing a collection in your Driver program. This makes use of SparkContext’s ‘parallelize’ method
val data = Array(2,4,6,8,10)
val distData = sc.parallelize(data)

• By loading an external dataset from external storage like HDFS, HBase, or a shared file system
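For the second method, a minimal Scala sketch (the HDFS path is a placeholder):

val distFile = sc.textFile("hdfs://...")   // each line of the file becomes one element of the RDD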
