What happens when a thread doesn’t just give up the processor? This could
easily happen if it just has a lot of work to do. Think of a thread performing
some kind of complex algorithm that involves billions of calculations. Such
code could take hours before relinquishing the CPU—and could theoretically
jam the entire system. To avoid such problems, operating systems use what's called preemptive scheduling, which means that threads are given a limited amount of time to run before they are interrupted.
Every thread is assigned a quantum, which is the maximum amount of time
the thread is allowed to run continuously. While a thread is running, the operating system uses a low-level hardware timer interrupt to monitor how long it's been running. Once the thread's quantum is up, it is temporarily interrupted, and the system allows other threads to run. If no other threads need the CPU, the thread is immediately resumed. The process of suspending and resuming the thread is completely transparent to the thread—the kernel stores the state of all CPU registers before suspending the thread and restores that state when the thread is resumed. This way the thread has no idea that it was ever interrupted.
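
The effect of preemptive scheduling is easy to observe from user mode. The following is a minimal sketch (a hypothetical illustration, not code from the book): it creates a compute-bound thread that never voluntarily relinquishes the CPU, yet the main thread keeps running because the kernel preempts the busy thread whenever its quantum expires.

#include <windows.h>
#include <stdio.h>

static volatile LONG g_done = 0;   /* illustrative flag, set by main() */

/* CPU-bound worker: performs pure computation and never calls a
   blocking API, so it never gives up the processor voluntarily. */
static DWORD WINAPI BusyThread(LPVOID param)
{
    volatile unsigned long long counter = 0;
    while (!g_done)
        counter++;
    return 0;
}

int main(void)
{
    HANDLE hBusy = CreateThread(NULL, 0, BusyThread, NULL, 0, NULL);

    /* Even on a single processor these messages are printed on time:
       once BusyThread's quantum is up, the kernel suspends it and
       dispatches the main thread. */
    for (int i = 0; i < 5; i++) {
        printf("main thread still running (%d)\n", i);
        Sleep(100);
    }

    InterlockedExchange(&g_done, 1);
    WaitForSingleObject(hBusy, INFINITE);
    CloseHandle(hBusy);
    return 0;
}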

Synchronization Objects


For software developers, the existence of threads is a mixed blessing. On one
hand, threads offer remarkable flexibility when developing a program; on the
other hand, synchronizing multiple threads within the same program is not easy, especially because they almost always share data structures between
them. Probably one of the most important aspects of designing multithreaded
software is how to properly design data structures and locking mechanisms
that will ensure data validity at all times.
The basic design of all synchronization objects is that they allow two or
more threads to compete for a single resource, and they help ensure that only
a controlled number of threads actually access the resource at any given
moment. Threads that are blocked are put in a special wait state by the kernel
and are not dispatched until that wait state is satisfied. This is the reason why
synchronization objects are implemented by the operating system; the sched-
uler must be aware of their existence in order to know when a wait state has
been satisfied and a specific thread can continue execution.
Windows supports several built-in synchronization objects, each suited to
specific types of data structures that need to be protected. The following are
the most commonly used ones:
Events An event is a simple Boolean synchronization object that can be
set to either True or False. An event is waited on by one of the standard
Win32 wait APIs such as WaitForSingleObject or WaitForMultipleObjects.
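
The following is a minimal sketch (a hypothetical illustration, not code from the book) of how an event is typically used: one thread blocks on the event with WaitForSingleObject, which places it in a wait state, and it is dispatched again only after another thread signals the event with SetEvent.

#include <windows.h>
#include <stdio.h>

static HANDLE g_hDataReady;   /* illustrative name */

static DWORD WINAPI WaiterThread(LPVOID param)
{
    printf("waiter: blocking on the event...\n");
    /* The kernel puts this thread in a wait state until the event is set. */
    WaitForSingleObject(g_hDataReady, INFINITE);
    printf("waiter: event was set, resuming\n");
    return 0;
}

int main(void)
{
    /* Auto-reset event, initially nonsignaled (False). */
    g_hDataReady = CreateEvent(NULL, FALSE, FALSE, NULL);

    HANDLE hWaiter = CreateThread(NULL, 0, WaiterThread, NULL, 0, NULL);

    Sleep(500);                 /* pretend to produce some data */
    SetEvent(g_hDataReady);     /* satisfies the wait; the waiter becomes runnable */

    WaitForSingleObject(hWaiter, INFINITE);
    CloseHandle(hWaiter);
    CloseHandle(g_hDataReady);
    return 0;
}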
