
Optimizations and Improvements


In this chapter, we'll look at a few optimizations that we can make to create high-performance functional programs. We'll expand on the @lru_cache decorator from Chapter 10, The Functools Module. There are a number of ways to implement the memoization algorithm, and we'll also discuss how to write our own decorators. More importantly, we'll see how we can use a Callable object to cache memoized results.
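To make the idea of a Callable-based cache concrete, here is a minimal sketch, not the chapter's own implementation: the Memoized class name and its use of a plain dict keyed by the positional-argument tuple are assumptions for illustration.

from typing import Any, Callable, Dict, Tuple

class Memoized:
    """Wrap a pure function in a callable object that caches its results.

    Illustrative sketch only: works for hashable positional arguments.
    """
    def __init__(self, function: Callable[..., Any]) -> None:
        self.function = function
        self.cache: Dict[Tuple[Any, ...], Any] = {}

    def __call__(self, *args: Any) -> Any:
        # Compute each distinct argument tuple once; reuse it afterwards.
        if args not in self.cache:
            self.cache[args] = self.function(*args)
        return self.cache[args]

# Usage: recursive calls go back through the caching object,
# so each subvalue is evaluated only once.
fib = Memoized(lambda n: n if n < 2 else fib(n - 1) + fib(n - 2))
print(fib(30))  # 832040

Because the cache lives on an ordinary object, we can inspect or clear it directly, which is the main advantage of this approach over an opaque decorator.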


We'll also look at some optimization techniques that were presented in Chapter 6, Recursions and Reductions. We'll review the general approach to tail recursion optimization. For some algorithms, we can combine memoization with a recursive implementation and achieve good performance. For other algorithms, memoization isn't very helpful, and we have to look elsewhere for performance improvements.
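As a reminder of both techniques, the following sketch pairs a memoized recursion with a tail recursion rewritten as an explicit loop; the names fibc and facti are placeholders, and lru_cache(maxsize=None) is simply one way to request an unbounded cache.

from functools import lru_cache

# Memoization plus recursion: repeated subproblems are answered from
# the cache, so fibc(n) performs only O(n) distinct evaluations.
@lru_cache(maxsize=None)
def fibc(n: int) -> int:
    if n < 2:
        return n
    return fibc(n - 1) + fibc(n - 2)

# Tail recursion optimized by hand: Python doesn't eliminate tail calls,
# so the recursion is restated as an explicit loop over the same values.
def facti(n: int) -> int:
    product = 1
    for i in range(2, n + 1):
        product *= i
    return product

The first function shows where memoization pays off; the second shows the kind of algorithm where it doesn't, because no argument value ever recurs.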


In most cases, small changes to a program will lead to small improvements in
performance. Replacing a function with a lambda object will have a tiny impact on
performance. If we have a program that is unacceptably slow, we often have to locate
a completely new algorithm or data structure. Some algorithms have bad "big-O"
complexity; nothing will make them magically run faster.


One place to start is http://www.algorist.com, a resource that may help us locate better algorithms for a given problem.


Memoization and caching


As we saw in Chapter 10, The Functools Module, many algorithms can benefit from
memoization. We'll start with a review of some previous examples to characterize
the kinds of functions that can be helped with memoization.
