Aside: NumPy/Pandas Speed

CMPT 353, Fall 2019

Aside: NumPy/Pandas Speed

In Exercise 4, the Cities: Temperatures and Density question had very different running times, depending on how you approached the haversine calculation.

Why? You were doing the same basic computation either way.

The answer will lead nicely into problems we'll see again in the Big Data topic.

Why So Slow?

Here's a reduced version of the problem: we'll create this DataFrame:

import numpy as np
import pandas as pd

n = 100000000
df = pd.DataFrame({
    'a': np.random.randn(n),
    'b': np.random.randn(n),
    'c': np.random.randn(n),
})

… and (for some reason) we want to calculate

\[ \sin(a-1) + 1\,. \]

Why So Slow?

The underlying problem: DataFrames (in Pandas here, but also Spark) are an abstraction of what's really going on. Underneath, there's some memory being moved around and computation happening.

The abstraction is leaky, as they all are.

Having a sense of what's happening behind-the-scenes will help us use the tools effectively.

Why So Slow?

One fact to notice: each Pandas Series is stored as a NumPy array.

i.e. this is an array that is already in memory, so referring to it is basically free.

df['col'].values

This isn't in memory (in this form) and must be constructed:

df.iloc[0] # a row object
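A quick way to see the difference (a sketch: for a simple numeric column, to_numpy with its default arguments returns the underlying array without copying):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': np.random.randn(1000),
    'b': np.random.randn(1000),
})

# Asking for a column's values returns the array Pandas already
# holds in memory: no copy is made.
col = df['a'].to_numpy()
print(np.shares_memory(col, df['a'].to_numpy()))  # True: same memory each time

# A row has no such array sitting in memory: Pandas assembles a new
# Series out of one element from each column, every time it's asked for.
row = df.iloc[0]
print(type(row))  # a pandas Series, built on demand
```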

Why So Slow?

So, any time we operate on a Pandas series as a unit, it's probably going to be fast.

Pandas is column-oriented: it stores columns in contiguous memory.

NumPy Expression

The solution I was hoping for:

def do_work_numpy(a):
    return np.sin(a - 1) + 1

result = do_work_numpy(df['a'])

The arithmetic is done as single operations on NumPy arrays.

NumPy Expression

def do_work_numpy(a):
    return np.sin(a - 1) + 1

result = do_work_numpy(df['a'])

The np.sin and the +/- operations are done by NumPy at C speeds (with SSE instructions).

Running time: 1.30 s.

Applying to a Series

This wasn't an option on E4 because you needed two columns, but for this problem:

import math

def do_work(a):
    return math.sin(a - 1) + 1

result = df['a'].apply(do_work)

The do_work function gets called \(n\) times (once for each element in the series). Arithmetic done in Python.

Running time: 20.6 s.

Vectorizing

I saw something like this a few times:

def do_work(a):
    return math.sin(a - 1) + 1
do_work_vector = np.vectorize(do_work, otypes=[float])

result = do_work_vector(df['a'])

The do_work function still gets called \(n\) times, but it's hidden by vectorize, which makes it look like a NumPy function. Arithmetic still done in Python.

Running time: 17.1 s.

Applying By Row

If you used .apply() twice in E4, it was like:

def do_work_row(row):
    return math.sin(row['a'] - 1) + 1

result = df.apply(do_work_row, axis=1)

This is a by-row application: do_work_row is called on every row in the DataFrame. But the rows don't exist in memory, so each one must be constructed. Then the function is called, and the arithmetic is done in Python.

Running time: 736 s.

Using Python

This is what the no loops restriction prevented:

def do_work_python(a):
    result = np.empty(a.shape)
    for i in range(a.size):
        result[i] = math.sin(a[i] - 1) + 1
    return result

result = do_work_python(df['a'])

The loop is done in Python; the arithmetic is done in Python.

Running time: 1230 s.

With NumExpr

Let's look again at the best-so-far version:

def do_work_numpy(a):
    return np.sin(a - 1) + 1

result = do_work_numpy(df['a'])

NumPy has to calculate and store each intermediate result, which creates overhead. This is a limitation of Python & the NumPy API: NumPy calculates a - 1 and stores it, then calls np.sin on that result, then adds 1 to that result, allocating a full-size array at every step.
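Within plain NumPy you can trim some of that overhead yourself: ufuncs accept an out= argument that reuses one buffer instead of allocating a new array per step. This is only an illustrative sketch of the intermediate-array cost, not what NumExpr does:

```python
import numpy as np

def do_work_fewer_temps(a):
    # same math as np.sin(a - 1) + 1, but one allocation instead of three
    result = np.subtract(a, 1)     # the only new array
    np.sin(result, out=result)     # overwritten in place
    np.add(result, 1, out=result)  # and again
    return result
```

(For a Pandas column, pass df['a'].to_numpy() so the out= arguments operate on a plain array.)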

With NumExpr

The NumExpr package overcomes this: it has its own expression syntax that gets compiled internally. Then you can apply that expression (to the local variables).

import numexpr
def do_work_numexpr(a):
    expr = 'sin(a - 1) + 1'
    return numexpr.evaluate(expr, local_dict=locals())

result = do_work_numexpr(df['a'])

With NumExpr

This way, the whole expression can be calculated (on each element, in some C code somewhere, using the SSE instructions), and the result stored in a new array.

Running time: 0.193 s.

NumExpr also powers the Pandas eval function: pd.eval('sin(a - 1) + 1', engine='numexpr').

Summary

Method             Time      Relative Time
NumPy expression   1.30 s      1.00
Series.apply       20.6 s     15.85
Vectorized         17.1 s     13.10
DataFrame.apply     736 s    565.02
Python loop        1230 s    944.32
NumExpr           0.193 s      0.15
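The exact numbers will vary by machine and by n, but the standard-library timeit module is enough to reproduce the shape of the comparison. (The n here is deliberately small so it runs quickly; scale it up to watch the gaps grow.)

```python
import timeit
import numpy as np

a = np.random.randn(100_000)

def do_work_numpy(a):
    return np.sin(a - 1) + 1

# best-of-several repeats is the usual timeit advice: it filters out
# interference from other processes
best = min(timeit.repeat(lambda: do_work_numpy(a), number=10, repeat=3))
print(f'{best / 10:.2e} s per call')
```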

Summary

But are any of these fast?

The same operation in C code took 0.994 s (gcc -O3). Faster than NumPy, but several times slower than NumExpr.

It's not obvious, but NumExpr does the calculations in parallel by default. This was a six-core processor and it got a 6.74× speedup over plain NumPy.

Summary

Lessons:

  • The abstractions you're using need to be in the back of your head somewhere.
  • Moving data around in memory is expensive.
  • Python is still slow, but NumPy (and friends) do a good job insulating us from that.

Don't believe me? Notebook with the code.

Don't believe me even more? A Beginner’s Guide to Optimizing Pandas Code for Speed. [I did it first, I swear!]

Pandas vs Dask

Extra bonus late addition to these slides: a notebook that times Pandas vs Dask on haversine calculations.

For a large enough data set, Dask should speed things up. (8.8M rows in this test.)

Pandas vs Dask

Times on my office desktop (4 core/8 thread):

Method              Time     Relative Time
Pandas             1.24 s      1.00
NumExpr            2.03 s      1.64
Dask, single file  1.43 s      1.15
Dask, partitioned  0.76 s      0.61

Pandas vs Dask

Times in CSIL, giving Dask 10 workstations as workers:

Method              Time     Relative Time
Pandas             0.85 s      1.00
NumExpr            1.30 s      1.53
Dask, single file  0.60 s      0.71
Dask, partitioned  0.53 s      0.62

Pandas vs Dask

The visualization of the task graph that Dask generates (for the partitioned input case) gives a lot of info about what it's going to do.