The Multiprocessing and Threading Modules
The most effective parallel processing occurs where there are no dependencies
among the tasks being performed. With some careful design, we can approach
the ideal of fully parallel processing. The biggest difficulty in
developing parallel programs is coordinating updates to shared resources.
When following functional design patterns and avoiding stateful programs, we can
also minimize concurrent updates to shared objects. If we can design software where
lazy, non-strict evaluation is central, we can also design software where concurrent
evaluation is possible.
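As a small illustration of why coordinated updates are hard, consider several
threads updating a shared counter. The read-modify-write in counter += 1 is not
atomic at the Python bytecode level, so without explicit coordination, here a
threading.Lock, concurrent updates can interleave and be lost. The increment
function and the counter name are invented for this sketch:
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write on counter could
        # interleave with another thread's update and lose a count.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: every update was serialized by the lock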
Programs will always have some strict dependencies where ordering of operations
matters. In the 2*(3+a) expression, the (3+a) subexpression must be evaluated
first. However, when working with a collection, we often have situations where the
processing order among items in the collection doesn't matter.
Consider the following two examples:
x = list(func(item) for item in y)
x = list(reversed([func(item) for item in y[::-1]]))
Both of these statements produce the same result, even though the second
evaluates the items in the reverse order.
Indeed, even the following snippet produces the same result:
import random
# Visit the indices of y in a random order.
indices = list(range(len(y)))
random.shuffle(indices)
x = [None] * len(y)
for k in indices:
    x[k] = func(y[k])
The evaluation order is random. As the evaluation of each item is independent, the
order of evaluation doesn't matter. This is the case with many algorithms that permit
non-strict evaluation.
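Because each func(y[k]) evaluation is independent, the items can just as well be
evaluated concurrently. The following is a minimal sketch, assuming func is a
pure, picklable function and y is a list; the concrete func shown here is only a
stand-in for the placeholder used above:
from multiprocessing import Pool

def func(item):
    # Stand-in for any pure, CPU-bound computation.
    return item * item

if __name__ == "__main__":
    y = list(range(10))
    with Pool() as pool:
        # The items may be evaluated in any order by the worker processes;
        # map() still returns the results in the original order.
        x = pool.map(func, y)
    print(x)
The Pool.map() method partitions the items among worker processes and
reassembles the results in their original positions, which is exactly the
property the shuffled-index example relies on.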
What concurrency really means
In a small computer, with a single processor and a single core, all evaluation is
serialized through that single core. The operating system interleaves
multiple processes and multiple threads through clever time-slicing arrangements.
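As a hedged illustration of that interleaving, the following sketch starts two
threads that each print a few steps; even on a single core, the operating system
alternates between them, so the output is typically interleaved. The worker
function and its labels are invented for this example:
import threading
import time

def worker(label):
    for i in range(3):
        # Sleeping yields the processor, giving the operating system
        # an opportunity to schedule the other thread.
        print(f"{label}: step {i}")
        time.sleep(0.01)

t1 = threading.Thread(target=worker, args=("thread-1",))
t2 = threading.Thread(target=worker, args=("thread-2",))
t1.start()
t2.start()
t1.join()
t2.join()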