Big data [Spark] and its small files problem

Often we log data in JSON, CSV, or another text format to Amazon’s S3 as compressed files. This pattern is a) accessible and b) infinitely scalable by nature of being stored in S3 as common text files. However, there are some subtle but critical caveats to this pattern that can cause quite a bit…
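For context, a common mitigation for the small files problem is to periodically compact the many small objects into fewer, larger ones. Here is a minimal PySpark sketch of that idea; the bucket, paths, and partition count are made-up placeholders, not anything from the post itself:

```python
# Minimal sketch: compact many small compressed JSON files on S3 into a
# handful of larger files by repartitioning before the write.
# (Bucket names, paths, and the partition count are assumptions.)
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# Read the many small JSON files logged throughout the day...
logs = spark.read.json("s3a://my-bucket/logs/2017/07/29/")

# ...and rewrite them as a small, fixed number of larger files.
(logs.repartition(8)
     .write
     .mode("overwrite")
     .json("s3a://my-bucket/logs-compacted/2017/07/29/"))

spark.stop()
```

Fewer, larger files mean fewer S3 list/read requests and fewer input splits for Spark to schedule on the next read.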


Spark File Format Showdown – CSV vs JSON vs Parquet

Apache Spark supports many different data sources, such as the ubiquitous Comma-Separated Values (CSV) format and the web-API-friendly JavaScript Object Notation (JSON) format. A common format used primarily for big data analytics is Apache Parquet. Parquet is a fast columnar data format that you can read more about in two of my…
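To make the comparison concrete, here is a minimal PySpark sketch that writes the same DataFrame in all three formats so their on-disk size and scan behavior can be compared side by side; the input path and the column name are assumptions for illustration:

```python
# Minimal sketch: persist one DataFrame as CSV, JSON, and Parquet.
# (Paths and the "event_type" column are assumptions.)
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-showdown").getOrCreate()

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://my-bucket/input/events.csv"))

df.write.mode("overwrite").csv("s3a://my-bucket/out/csv/")
df.write.mode("overwrite").json("s3a://my-bucket/out/json/")
df.write.mode("overwrite").parquet("s3a://my-bucket/out/parquet/")

# Parquet is columnar, so a query that touches one column can skip the rest.
spark.read.parquet("s3a://my-bucket/out/parquet/").select("event_type").show()
```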


Using Spark Efficiently | Understanding Spark Event 7/29/17

This page is dedicated to resources related to the 7/29/17 Understanding Spark event presentation in Bellevue, WA. It includes the slides, plus great [FREE!] resources on all things Spark: https://jaceklaskowski.gitbooks.io/mastering-apache-spark/ and https://spark.apache.org/docs/latest/sql-programming-guide.html. Databricks was founded by the original creators of Spark and is currently the largest contributor to Apache Spark. As such, they are a phenomenal resource for information and…


Tips for switching between Scala and Python on Spark

Switching between Scala and Python on Spark is relatively straightforward, but a few differences can cause some minor frustration. Here are some of the little things I’ve run into and how to adjust for them. The PySpark shell does not support code completion (autocomplete) by default. Why? PySpark uses the basic Python interpreter…
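For reference, one commonly cited workaround is to wire up readline-based tab completion by hand inside the stock PySpark shell; the snippet below is a sketch of that idea, not necessarily the fix the post settles on:

```python
# Run inside the PySpark shell. Importing rlcompleter registers a completer
# with the readline module; the parse_and_bind call maps it to the Tab key.
import rlcompleter
import readline

readline.parse_and_bind("tab: complete")
```

Alternatively, launching the shell as `PYSPARK_DRIVER_PYTHON=ipython pyspark` runs the driver under IPython, which brings completion and history along with it.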