Python is the de facto language of Data Science & Engineering. (IMHO R is grand for statisticians, but Python is for the rest of us.)
As a prominent language in the field, it only makes sense that Apache Spark supports it with Python-specific APIs. Spark makes Python so easy to use that naive missteps can leave jobs running 30-40x slower than they should.
Remember: Spark is written in Scala primarily to leverage JVM performance characteristics, but it also has APIs in Python, Java, and R.
Even with the great (and rapidly growing!) support for Python on Spark (henceforth PySpark), there are some critical topics that need to be addressed. Do any of these packages look familiar, Python devs?
pandas (aka "pd")
numpy (aka "np")
scipy
scikit-learn
keras
tensorflow
sqlalchemy
Thought so. 🙂
Python is fantastic for Initial and Exploratory Data Analysis (IDA/EDA) plus training models and all sorts of other awesome stuff. However, it can’t scale to larger data sizes/parameters due to 2 primary issues:
1) it’s natively single-threaded* due to global interpreter lock (GIL) – out of scope for this post
2) It is not distributed** to multiple machines, so even if it were multi-threaded, it is still bound by one machine’s resources (CPU cores + memory + I/O + network)
* Yes, there are different implementations of Python (CPython, Jython, PyPy, etc.) that may or may not support multi-threading/concurrency. We’re focusing on CPython, the default and predominantly used implementation.
** It can be distributed with a hot [relatively] new package “dask” that came out in 2014. We’re sticking with PySpark for this post, but the more you know… AWS does offer a single node (x1e.32xlarge) with 128 vCPUs and ~3.9TB of RAM. If you have to ask how much it costs, you can’t afford it. 😉
No, don’t use RDDs; use [Spark] SQL!
When people think of Spark, one of the first things that comes to mind is the age-old RDD. It is, after all, the original data structure API that brought Spark to prominence. RDDs still underpin Spark; however, since version 2.0 the preferred data structure API for Python has been DataFrames (the counterpart in Scala/Java is the Dataset API).
DataFrames fall under the [massive] umbrella that is “Spark SQL.” DataFrames are preferred to RDDs for two optimization-related reasons: the Catalyst Optimizer and Tungsten. The short explanation is that Catalyst and Tungsten automatically handle optimizations that were too low-level to apply to RDDs. The higher-level Spark SQL abstractions give these APIs more introspection and greater execution flexibility, which increases DataFrame performance substantially over RDDs. For PySpark users, the difference is even greater than for Scala/Java users, because operations expressed directly through the PySpark SQL APIs (e.g. trimming a string, summing an aggregate, applying a regex) execute in the JVM rather than in a Python subprocess, as RDD operations require.
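If you want to see Catalyst at work for yourself, any DataFrame will print its query plans via explain(). A quick illustration on a toy DataFrame of my own (assuming a SparkSession named spark, as in the examples below):

df = spark.range(1000).filter("id % 2 = 0").select("id")
# True prints the parsed, analyzed, and optimized logical plans plus the physical plan
df.explain(True)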
When you’re using DataFrames, be sure to use the baked-in Spark SQL functions (also compatible with Hive functions), because they all run optimized in the JVM: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#module-pyspark.sql.functions
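For instance, rather than reaching for a Python lambda to clean up or derive a column, a built-in function keeps all of the work inside the JVM. A quick sketch, assuming a trips DataFrame read from the same NYC taxi CSV used below (column names taken from that dataset):

from pyspark.sql import functions as F

# both transformations execute entirely in the JVM via Catalyst/Tungsten
cleaned = (trips
    .withColumn("store_and_fwd_flag", F.trim("store_and_fwd_flag"))  # built-in string trim
    .withColumn("pickup_hour", F.hour("tpep_pickup_datetime")))      # built-in timestamp function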
Moral of the story: If you’re using RDDs (especially with PySpark!), evaluate and try to transition to DataFrames. If you’re not using RDDs yet, try DataFrames first.
Note:
- Dataset is the strongly typed Scala and Java API.
- DataFrame is literally an alias for Dataset[Row] in Scala.
- DataFrame does not exist in Java (except as Dataset[Row]).
- *DataFrame is the only Spark SQL data structure API for Python, because Python is dynamically typed.*
This section wouldn’t be complete without a silly [but truthful] benchmark:
%%time
# RDD
lines = sc.textFile("yellow_tripdata_2017-06.csv")
lineLengths = lines.map(lambda s: len(s))
totalLength = lineLengths.reduce(lambda a, b: a + b)
print(totalLength)
# => 832591506

CPU times: user 9.66 ms, sys: 2.99 ms, total: 12.6 ms
Wall time: 7.7 s

%%time
# DataFrame
linesDF = spark.read.text("yellow_tripdata_2017-06.csv")
print(
    linesDF.selectExpr("sum(length(value)) as slen")
        .collect()[0]['slen']
)
# => 832591506

CPU times: user 4.76 ms, sys: 1.74 ms, total: 6.5 ms
Wall time: 3.84 s
All hope is not lost on “strong typing”-like features for PySpark
Python being dynamically typed is a double-edged sword: because it is dynamically typed (and therefore limited to DataFrames rather than Datasets), PySpark also natively supports accessing columns by name directly. For example…
import pyspark.sql.functions

linesDF = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("yellow_tripdata_2017-06.csv"))

(linesDF
    .agg(pyspark.sql.functions.sum(
        linesDF.passenger_count  # <---- HERE
    ))
    .collect()[0]['sum(passenger_count)'])
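For completeness, the same column can also be referenced by name in a couple of other ways, which helps when a column name collides with a DataFrame attribute or is only known at runtime:

from pyspark.sql import functions as F

linesDF.agg(F.sum(linesDF["passenger_count"]))  # bracket-style access
linesDF.agg(F.sum(F.col("passenger_count")))    # col() lookup by name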
I want to use [insert awesome package here] with PySpark
You probably can, but just because you can doesn’t mean you should.
What I mean by that is when you start mixing native Python code (such as pandas) with Spark, two bad things [usually] happen. First, your performance nosedives, because, as I alluded to briefly above, any code that runs natively in Python (as opposed to using the Spark SQL APIs) requires [de]serialization to/from Python/JVM and execution in Python subprocesses. Second, memory usage balloons, as the pandas example at the end of this section shows. More info here; it’s a bit old by Spark standards, but any Spark-related wisdom from Holden Karau is great.
The Spark SQL DataFrame API only goes so far (it goes very far FWIW). So maybe you’re tempted to write a UDF (User Defined Function) to extend Spark’s functionality for your use case.
Before you write a UDF that uses Python-specific APIs (not from PySpark), have a look at this simple example and its implications.
Example
We want to convert timestamps in a column to seconds since epoch and count the distinct outputs (yes, this part is purely for benchmarking).
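For reference, the snippets below assume F is pyspark.sql.functions and df is the same taxi CSV loaded with an inferred schema, so tpep_pickup_datetime comes back as a real timestamp column. Roughly:

from pyspark.sql import functions as F

# assumed setup (not shown in the original benchmark)
df = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("yellow_tripdata_2017-06.csv"))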
Using UDFs with Python-specific APIs
timestamp_to_epoch = F.udf(lambda t: int(t.strftime("%s")))
%%time
df.select(timestamp_to_epoch(df.tpep_pickup_datetime)).distinct().count()
CPU times: user 44.8 ms, sys: 18.3 ms, total: 63.1 ms
Wall time: 10min 10s
Out[46]: 2340959
Using PySpark SQL APIs
%%time
df.select(F.unix_timestamp(df.tpep_pickup_datetime)).distinct().count()
CPU times: user 2.67 ms, sys: 1.15 ms, total: 3.82 ms
Wall time: 16.5 s
Out[47]: 2340959
Yes, you read that right. PySpark SQL APIs are 30-40 TIMES FASTER (!!!)
UDF with Python-Specific APIs: 610 seconds
PySpark SQL API: 17 seconds
Fortunately for you, Spark 2.3 partly solves the issue by introducing an impressive feature: vectorized Python UDFs, which you can read more about here. Vectorized UDFs can once again massively improve PySpark performance.
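To give a feel for the API, here is a minimal sketch (not part of the benchmark above) of how the same epoch conversion could be written as a vectorized UDF on Spark 2.3+. The function now receives a whole pandas Series per Arrow batch instead of one Python object per row:

from pyspark.sql.functions import pandas_udf, PandasUDFType

@pandas_udf("long", PandasUDFType.SCALAR)
def timestamp_to_epoch_vec(t):
    # t is a pandas Series of datetime64[ns]; convert nanoseconds -> seconds since epoch
    return t.astype("int64") // 10**9

df.select(timestamp_to_epoch_vec(df.tpep_pickup_datetime)).distinct().count()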
I’m a sucker for contrived benchmarks that make my case so clearly. 😀
Here’s a naïve way to build a Spark DataFrame by going through pandas first. Note that this not only takes over 4 minutes to run, it also uses ~3GB of memory (!!).
%%time
pdf = pd.read_table("yellow_tripdata_2017-06.csv")
spark.createDataFrame(pdf).count()

CPU times: user 4min 2s, sys: 3.33 s, total: 4min 6s
Wall time: 4min 10s
By contrast, the same count using the Spark SQL APIs directly takes about 5 seconds with negligible memory impact:
%%time
spark.read.csv("yellow_tripdata_2017-06.csv").count()

CPU times: user 1.63 ms, sys: 1.15 ms, total: 2.77 ms
Wall time: 4.57 s
If you’re going to use Python-native packages with Spark, be mindful of the [unintended] consequences, particularly as it relates to your jobs’ performance.
Special note about Backwards Compatibility
For minor version changes, such as 2.1 to 2.3, many APIs are backwards compatible. For major version changes, such as 1.6 to 2.0, you should definitely anticipate backwards-incompatible changes. Structured Streaming, for example, went through some fairly significant changes between 2.0.0 and 2.2.0 as it transitioned from experimental to production-ready.
Special Thanks
Thank you Wes Hoffman, Nikki Haas and Eric Lambert for your feedback and suggestions on relevant topics!
Further reading:
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
https://databricks.com/blog/2017/10/30/introducing-vectorized-udfs-for-pyspark.html