This notebook was prepared by Donne Martin. Source and license info is on GitHub.
The dev-setup repo contains scripts to install Spark and to automate its integration with IPython Notebook through the pydata.sh script.
You can also follow the instructions provided here to configure IPython Notebook Support for PySpark with Python 2.
To run Python 3 with Spark 1.4+, check out the following posts on Stack Overflow or Reddit.
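One common approach (an assumption here, not taken from the linked posts) is to point the PySpark driver at IPython through environment variables before launching pyspark (Spark 1.4+):
!PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark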
Start the pyspark shell (REPL):
!pyspark
View the spark context, the main entry point to the Spark API:
sc
From the following reference:
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood.
Create a DataFrame from JSON files on S3:
users = context.load("s3n://path/to/users.json", "json")
Create a new DataFrame that contains “young users” only:
young = users.filter(users.age < 21)
Alternatively, using Pandas-like syntax:
young = users[users.age < 21]
Increment everybody’s age by 1:
young.select(young.name, young.age + 1)
Count the number of young users by gender:
young.groupBy("gender").count()
Join young users with another DataFrame called logs:
young.join(logs, logs.userId == users.userId, "left_outer")
Count the number of users in the young DataFrame:
young.registerTempTable("young")
context.sql("SELECT count(*) FROM young")
Convert Spark DataFrame to Pandas:
pandas_df = young.toPandas()
Create a Spark DataFrame from Pandas:
spark_df = context.createDataFrame(pandas_df)
Given the Spark Context, create a SQLContext:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
Create a DataFrame based on the content of a file:
df = sqlContext.jsonFile("file:/path/file.json")
Display the content of the DataFrame:
df.show()
Print the schema:
df.printSchema()
Select a column:
df.select("column_name")
Create a DataFrame with rows matching a given filter:
df.filter(df.column_name > 10)
Aggregate the results and count:
df.groupBy("column_name").count()
Convert an RDD to a DataFrame (by inferring the schema):
df = sqlContext.inferSchema(my_data)
Register the DataFrame as a table:
df.registerTempTable("dataframe_name")
Run a SQL Query on a DataFrame registered as a table:
rdd_from_df = sqlContext.sql("SELECT * FROM dataframe_name")
Note: RDDs are included for completeness. DataFrames, introduced in Spark 1.3, are recommended over RDDs. Check out the DataFrames announcement for more info.
Resilient Distributed Datasets (RDDs) are the fundamental unit of data in Spark. RDDs can be created from a file, from data in memory, or from another RDD. RDDs are immutable.
There are two types of RDD operations: transformations, which lazily define a new RDD from an existing one, and actions, which trigger computation and return a result.
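As a minimal sketch of the distinction (the file path and filter term are placeholders, not from the original notebook):
lines = sc.textFile("file:/path/file.txt")           # base RDD created from a file
errors = lines.filter(lambda line: "ERROR" in line)  # transformation: lazily defines a new RDD
errors.count()                                       # action: triggers computation and returns a value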
Create an RDD from the contents of a directory:
my_data = sc.textFile("file:/path/*")
Count the number of lines in the data:
my_data.count()
Return all the elements of the dataset as an array--this is usually more useful after a filter or other operation that returns a sufficiently small subset of the data:
my_data.collect()
Return the first 10 lines in the data:
my_data.take(10)
Create an RDD with lines matching the given filter:
my_data.filter(lambda line: ".txt" in line)
Chain a series of commands:
sc.textFile("file:/path/file.txt") \ .filter(lambdaline:".txt"inline) \ .count()
Create a new RDD mapping each line to an array of words, taking only the first word of each array:
first_words = my_data.map(lambda line: line.split()[0])
Output each word in first_words:
for word in first_words.take(10):
    print word
Save the first words to a text file:
first_words.saveAsTextFile("file:/path/file")
Pair RDDs contain elements that are key-value pairs. Keys and values can be any type.
Given a log file with the following space-delimited format: [date_time, user_id, ip_address, action], map each request to (user_id, 1):
DATE_TIME = 0
USER_ID = 1
IP_ADDRESS = 2
ACTION = 3

log_data = sc.textFile("file:/path/*")

user_actions = log_data \
    .map(lambda line: line.split()) \
    .map(lambda words: (words[USER_ID], 1)) \
    .reduceByKey(lambda count1, count2: count1 + count2)
Show the top 5 users by count, sorted in descending order:
user_actions.map(lambda pair: (pair[1], pair[0])).sortByKey(False).take(5)
Group IP addresses by user id:
user_ips = log_data \
    .map(lambda line: line.split()) \
    .map(lambda words: (words[USER_ID], words[IP_ADDRESS])) \
    .groupByKey()
Given a user table with the following csv format: [user_id, user_info0, user_info1, ...], map each line to (user_id, [user_info...]):
user_data = sc.textFile("file:/path/*")

user_profile = user_data \
    .map(lambda line: line.split(',')) \
    .map(lambda words: (words[0], words[1:]))
Inner join the user_actions and user_profile RDDs:
user_actions_with_profile = user_actions.join(user_profile)
Show the joined table:
for (user_id, (count, user_info)) in user_actions_with_profile.take(10):
    print user_id, count, user_info
Start the standalone cluster's Master and Worker daemons:
!sudo service spark-master start
!sudo service spark-worker start
Stop the standalone cluster's Master and Worker daemons:
!sudo service spark-master stop
!sudo service spark-worker stop
Restart the standalone cluster's Master and Worker daemons:
!sudo service spark-master restart
!sudo service spark-worker restart
View the Spark standalone cluster UI:
http://localhost:18080/
Start the Spark shell and connect to the cluster:
!MASTER=spark://localhost:7077 pyspark
Confirm you are connected to the correct master:
sc.master
From the following reference:
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application. This includes:
A list of scheduler stages and tasks, a summary of RDD sizes and memory usage, environmental information, and information about the running executors.
You can access this interface by simply opening http://<driver-node>:4040 in a web browser.
Note that this information is only available for the duration of the application by default. To view the web UI after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log Spark events that encode the information displayed in the UI to persisted storage.
http://localhost:4040/
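As a hedged sketch of the event-logging note above (the log directory is an assumption, not from the original), the relevant properties can be set when the SparkContext is created:
from pyspark import SparkConf, SparkContext

# Persist UI event data so the web UI can be reconstructed after the application ends.
conf = SparkConf() \
    .set("spark.eventLog.enabled", "true") \
    .set("spark.eventLog.dir", "file:/tmp/spark-events")  # assumed directory

sc = SparkContext(conf=conf)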
From the following reference:
The Spark map() and flatMap() methods only operate on one input at a time, and provide no means to execute code before or after transforming a batch of values. It looks possible to simply put the setup and cleanup code before and after a call to map() in Spark:
val dbConnection = ...
lines.map(... dbConnection.createStatement(...) ...)
dbConnection.close() // Wrong!
However, this fails for several reasons: the connection object is captured in the map function's closure and would have to be serializable, which a database connection generally is not; the connection is created on the driver while map() runs on remote executors, where it cannot be used; and map() is a lazily evaluated transformation, so dbConnection.close() would run before any mapping work happens.
In fact, neither map() nor flatMap() is the closest counterpart to a Mapper in Spark — it’s the important mapPartitions() method. This method does not map just one value to one other value, but rather maps an Iterator of values to an Iterator of other values. It’s like a “bulk map” method. This means that the mapPartitions() function can allocate resources locally at its start, and release them when done mapping many values.
def count_txt(partIter):
    txt_count = 0
    for line in partIter:
        if ".txt" in line:
            txt_count += 1
    yield txt_count

my_data = sc.textFile("file:/path/*") \
    .mapPartitions(count_txt)

my_data.collect()

# Show the partitioning
print "Data partitions: ", my_data.toDebugString()
Caching an RDD saves the data in memory. Caching is only a suggestion to Spark, since it depends on the memory available.
By default, every RDD operation executes the entire lineage. Caching can boost performance for datasets that are likely to be used by saving this expensive recomputation and is ideal for iterative algorithms or machine learning.
When persisting to disk, the data is stored on the node's local disk, not on HDFS.
Replication is possible by using MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. If a cached partition becomes unavailable, Spark recomputes the partition through the lineage.
Serialization is possible with MEMORY_ONLY_SER and MEMORY_AND_DISK_SER. This is more space efficient but less time efficient, as it uses Java serialization by default.
from pyspark import StorageLevel

# Cache RDD to memory
my_data.cache()

# Persist RDD to both memory and disk (if memory is not enough), with replication of 2
my_data.persist(StorageLevel.MEMORY_AND_DISK_2)

# Unpersist RDD, removing it from memory and disk
my_data.unpersist()

# Change the persistence level after unpersist
my_data.persist(StorageLevel.MEMORY_AND_DISK)
Caching maintains RDD lineage, providing resilience. If the lineage is very long, it is possible to get a stack overflow.
Checkpointing saves the data to HDFS, which provides fault-tolerant storage across nodes. HDFS is not as fast as local storage for reading or writing. Checkpointing is good for long lineages and for very large data sets that might not fit on local storage. Checkpointing removes lineage.
Create a checkpoint and perform an action by calling count() to materialize the checkpoint and save it to the checkpoint file:
# Enable checkpointing by setting the checkpoint directory,
# which will contain all checkpoints for the given data:
sc.setCheckpointDir("checkpoints")

my_data = sc.parallelize([1, 2, 3, 4, 5])

# Long loop that may cause a stack overflow without checkpointing
for i in range(1000):
    my_data = my_data.map(lambda myInt: myInt + 1)
    if i % 10 == 0:
        my_data.checkpoint()
        my_data.count()

my_data.collect()

# Display the lineage
for rddstring in my_data.toDebugString().split('\n'):
    print rddstring.strip()
Create a Spark application to count the number of text files:
import sys
from pyspark import SparkContext

def count_text_files():
    sc = SparkContext()
    logfile = sys.argv[1]
    text_files_count = sc.textFile(logfile).filter(lambda line: '.txt' in line)
    text_files_count.cache()
    print "Number of text files: ", text_files_count.count()

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print >> sys.stderr, "Usage: App Name <file>"
        exit(-1)
    count_text_files()
Submit the script to Spark for processing:
!spark-submit --properties-file dir/myspark.conf script.py data/*
Run a Spark app and set the configuration options in the command line:
!spark-submit --master spark://localhost:7077 --name 'App Name' script.py data/*
Configure spark.conf:
spark.app.name    App Name
spark.ui.port     4141
spark.master      spark://localhost:7077
Run a Spark app and set the configuration options through spark.conf:
!spark-submit --properties-file spark.conf script.py data/*
Set the config options programmatically:
from pyspark import SparkConf, SparkContext

sconf = SparkConf() \
    .setAppName("Word Count") \
    .set("spark.ui.port", "4141")

sc = SparkContext(conf=sconf)
Set logging levels in the following file, or place a copy of it in your working directory:
$SPARK_HOME/conf/log4j.properties.template
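For example (an illustrative copy, not the full template), the root logging level in the copied log4j.properties can be lowered to warnings only:
# log4j.properties (copied from the template; only the root level is changed)
log4j.rootCategory=WARN, console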
Start the Spark Shell locally with at least two threads (need a minimum of two threads for streaming, one for receiving, one for processing):
!spark-shell --master local[2]
Create a StreamingContext (similar to SparkContext in core Spark) with a batch duration of 1 second:
val ssc = new StreamingContext(new SparkConf(), Seconds(1))
val my_stream = ssc.socketTextStream(hostname, port)
Get a DStream from a streaming data source (text from a socket):
val logs = ssc.socketTextStream(hostname, port)
DStreams support regular transformations such as map, flatMap, and filter, and pair transformations such as reduceByKey, groupByKey, and join.
Apply a DStream operation to each batch of RDDs (count up requests by user id, reduce by key to get the count):
val requests = my_stream
  .map(line => (line.split(" ")(2), 1))
  .reduceByKey((x, y) => x + y)
The transform(function) method creates a new DStream by executing the input function on the RDDs.
val sorted_requests = requests
  .map(pair => pair.swap)
  .transform(rdd => rdd.sortByKey(false))
foreachRDD(function) performs a function on each RDD in the DStream (DStream transformations such as map are a shortcut that saves you from having to get at the underlying RDD before applying an operation):
sorted_requests.foreachRDD((rdd, time) => {
  println("Top users @ " + time)
  rdd.take(5).foreach(
    pair => printf("User: %s (%s)\n", pair._2, pair._1))
})
Save the DStream results as part files with the given folder prefix; the actual folders will be named /dir/requests-<timestamp>/:
requests.saveAsTextFiles("/dir/requests")
Start the execution of all DStreams:
ssc.start()
Wait for all background threads to complete before ending the main thread:
ssc.awaitTermination()
Enable checkpointing to prevent infinite lineages:
ssc.checkpoint("dir")
Compute a DStream based on the previous states plus the current state:
def updateCount = (newCounts: Seq[Int], state: Option[Int]) => {
  val newCount = newCounts.foldLeft(0)(_ + _)
  val previousCount = state.getOrElse(0)
  Some(newCount + previousCount)
}

val totalUserreqs = userreqs.updateStateByKey(updateCount)
Compute a DStream based on a sliding window: every 30 seconds, count requests by user over the last 5 minutes:
val reqcountsByWindow = logs
  .map(line => (line.split(' ')(2), 1))
  .reduceByKeyAndWindow((x: Int, y: Int) => x + y, Minutes(5), Seconds(30))
Collect statistics with the StreamingListener API:
// define listener
class MyListener extends StreamingListener {
  override def onReceiverStopped(...) {
    streamingContext.stop()
  }
}

// attach listener
streamingContext.addStreamingListener(new MyListener())
Read in list of items to broadcast from a local file:
broadcast_file = "broadcast.txt"
broadcast_list = list(map(lambda l: l.strip(), open(broadcast_file)))
Broadcast the target list to all workers:
broadcast_list_sc = sc.broadcast(broadcast_list)
Filter based on the broadcast list:
log_file = "hdfs://localhost/user/logs/*"

filtered_data = sc.textFile(log_file) \
    .filter(lambda line: any(item in line for item in broadcast_list_sc.value))

filtered_data.take(10)
Create an accumulator:
txt_count = sc.accumulator(0)
Count the number of txt files in the RDD:
my_data = sc.textFile(filePath)
my_data.foreach(lambda line: txt_count.add(1) if '.txt' in line else None)
Count the number of file types encountered:
jpg_count = sc.accumulator(0)
html_count = sc.accumulator(0)
css_count = sc.accumulator(0)

def countFileType(s):
    if '.jpg' in s:
        jpg_count.add(1)
    elif '.html' in s:
        html_count.add(1)
    elif '.css' in s:
        css_count.add(1)

filename = "hdfs://logs/*"
logs = sc.textFile(filename)

logs.foreach(lambda line: countFileType(line))

print 'File Type Totals:'
print '.css files: ', css_count.value
print '.html files: ', html_count.value
print '.jpg files: ', jpg_count.value