Description of Tutorial (full day)

Title :
Large Scale Distributed Data Science from scratch using Apache Spark 2.2

Organizers :

  • James Shanahan
  • Liang Dai

Abstract :

Apache Spark is an open-source cluster computing framework. It has emerged as the next-generation big data processing engine, overtaking Hadoop MapReduce, which helped ignite the big data revolution. Spark maintains MapReduce's linear scalability and fault tolerance but extends it in several important ways: it is much faster (up to 100 times faster for certain applications); it is much easier to program, thanks to rich APIs in Python, Java, Scala, SQL and R (where MapReduce exposes only two core operations, map and reduce); and it offers a richer core data abstraction, the distributed data frame. In addition, it goes far beyond batch applications to support a variety of workloads, including interactive queries, streaming, machine learning, and graph processing.
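To make the flavor of these APIs concrete, below is a minimal PySpark sketch (illustrative only, not taken from the tutorial materials) that expresses the classic word count once against the low-level RDD API, using MapReduce-style functional operations, and once against the DataFrame API:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

    # RDD style: functional map/reduce, MapReduce's two core operations.
    lines = spark.sparkContext.parallelize(["to be or not to be", "to spark"])
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    print(counts.collect())

    # DataFrame style: the same computation on the distributed data frame.
    df = spark.createDataFrame([("to be or not to be",), ("to spark",)], ["line"])
    (df.select(F.explode(F.split("line", " ")).alias("word"))
       .groupBy("word").count()
       .show())

    spark.stop()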

This tutorial will provide an accessible introduction to large-scale distributed machine learning and data mining, and to Spark and its potential to revolutionize academic and commercial data science practices. It is divided into three parts: the first part covers fundamental Spark concepts, including Spark Core, functional programming à la MapReduce, RDDs/DataFrames/Datasets, and the Spark Shell; the second part introduces the specialist Spark packages: Spark Streaming, Spark SQL, MLlib (and its newer counterpart, ML), GraphFrames, and more; the third part focuses on hands-on algorithmic design and development with Spark, building algorithms from scratch such as linear regression via stochastic gradient descent, decision tree learning, association rule mining (Apriori), graph algorithms such as PageRank and shortest paths, gradient-descent-based learners such as support vector machines and matrix factorization, and deep learning.
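As a taste of the third part, here is a hedged, self-contained sketch of linear regression trained by gradient descent over an RDD. For simplicity it uses full-batch gradients each iteration rather than true SGD, and the data and names are illustrative assumptions, not the tutorial's actual code:

    import numpy as np
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("lr-gd-sketch").getOrCreate()
    sc = spark.sparkContext

    # Toy data: y = 2*x0 + 3*x1 plus noise, stored as (features, label) pairs.
    np.random.seed(0)
    X = np.random.normal(size=(1000, 2))
    y = X.dot(np.array([2.0, 3.0])) + np.random.normal(scale=0.1, size=1000)
    data = sc.parallelize(list(zip(X, y))).cache()

    w = np.zeros(2)   # model weights
    lr = 0.1          # learning rate
    n = float(data.count())

    for step in range(50):
        # Workers compute partial gradients of the squared error; the driver
        # sums them and takes a step (the current w ships with each job).
        grad = (data.map(lambda p: (p[0].dot(w) - p[1]) * p[0])
                    .reduce(lambda a, b: a + b))
        w -= lr * grad / n

    print("learned weights:", w)   # converges towards [2.0, 3.0]
    spark.stop()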

These homegrown implementations will help shed light on the internals of the MLlib packages (and on the difficulties of parallelizing some key machine learning algorithms). Industrial applications and deployments of Spark will also be presented. Example code will be made available as Python (PySpark) notebooks.