Quantities, and the ratios of quantities, which in any finite time converge continually to equality, and before the end of that time approach nearer to each other than by any given difference, become ultimately equal.
If you deny it, suppose them to be ultimately unequal, and let D be their ultimate difference. Therefore they cannot approach nearer to equality than by that given difference D; which is contrary to the supposition.

If only the phrase become ultimately equal had some clear meaning, as Newton
seemed to assume, this argument might have been convincing. As it is, it comes
close to being a definition of ultimately equal, or, as we would say, equal in the limit.
Newton came close to stating the modern concept of a limit when he described the "ultimate ratios" (derivatives) as "limits towards which the ratios of quantities decreasing without limit do always converge, and to which they approach nearer than by any given difference." Here one can almost see the "arbitrarily small ε"
that plays the central role in the concept of a limit.
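
In modern notation, which is ours and not Newton's, that description translates almost word for word into the present-day definition. Writing Q(t) for the ratio at time t and L for its limit, symbols introduced here only for illustration:

\[
\lim_{t \to 0^{+}} Q(t) = L
\quad\Longleftrightarrow\quad
\text{for every } \varepsilon > 0 \text{ there is a } \delta > 0 \text{ such that } 0 < t < \delta \text{ implies } |Q(t) - L| < \varepsilon.
\]

Newton's "given difference" is the ε, and "approach nearer than by any given difference" is precisely the inequality |Q(t) - L| < ε.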


2.2. Gottfried Wilhelm von Leibniz. Leibniz believed in the reality of infinites-
imals, quantities so small that any finite sum of them is still less than any assignable
positive number, but which are nevertheless not zero, so that one is allowed to di-
vide by them. The three kinds of numbers (finite, infinite, and infinitesimal) could,
in Leibniz' view, be multiplied by one another, and the result of multiplying an
infinite number by an infinitesimal might be any one of the three kinds. This po-
sition was rejected in the nineteenth century but was resurrected in the twentieth
century and made logically sound. It lies at the heart of what is called nonstandard
analysis, a subject that has not penetrated the undergraduate curriculum. The
radical step that must be taken in order to believe in infinitesimals is a rejection of
the Archimedean axiom that for any two positive quantities of the same kind a suf-
ficient number of bisections of one will lead to a quantity smaller than the second.
This principle was essential to the use of the method of exhaustion, which was one
of the crowning glories of Euclidean geometry. It is no wonder that mathematicians
were reluctant to give it up.
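
In modern symbols (a rendering of ours, not a formula of the period), the bisection form of the Archimedean axiom says that for any positive quantities a and b of the same kind

\[
\text{there exists } n \in \mathbb{N} \text{ such that } \frac{a}{2^{n}} < b,
\]

and a positive quantity dx is infinitesimal precisely when this fails for a = 1 and b = dx: no number of bisections of 1 ever falls below dx, so dx \le 1/2^{n} for every n even though dx > 0.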
Leibniz invented the expression dx to indicate the difference of two infinitely
close values of x, dy to indicate the difference of two infinitely close values of y,
and dy/dx to indicate the ratio of these two differences. This notation was beautifully
intuitive and is still the preferred notation for thinking about calculus. Its logical
basis at the time was questionable, since it avoided the objections listed above by
claiming that the two quantities have not vanished at all but have nevertheless become less than any assigned positive number. However, at the time, insisting on logical consistency would have been counterproductive in mathematics and science.
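
A standard illustration, our example rather than one drawn from Leibniz's papers, shows how the notation operated in practice. For y = x^2, two infinitely close values of x differ by dx, and

\[
dy = (x + dx)^{2} - x^{2} = 2x\,dx + (dx)^{2},
\qquad
\frac{dy}{dx} = 2x + dx.
\]

Since dx is less than any assignable number, it is discarded on the right, leaving dy/dx = 2x. The logical tension just described is plainly visible: dx must be nonzero in order to divide by it, yet negligible enough to drop at the end.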
The integral calculus and the fundamental theorem of calculus flowed very
naturally from Leibniz' approach. Leibniz could argue that the ordinates to the
points on a curve represent infinitesimal rectangles of height y and width dx, and
hence finding the area under the curve ("summing all the lines in the figure") amounted to summing infinitesimal differences in area dA, which collapsed to give the total area. Since it was obvious that on the infinitesimal level dA = y dx, the
fundamental theorem of calculus was an immediate consequence. Leibniz first set
it out in geometric form in a paper on quadratures in the 1693 Acta eruditorum.
There he considered two curves: one, which we denote y = f(x), with its graph above a horizontal axis; the other, which we denote z = F(x), with its graph below the horizontal axis.
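
In modern notation (our reconstruction, using the names f and F introduced above and assuming, as the construction requires, that the second curve is chosen so that dF = f\,dx), the argument runs

\[
A = \sum dA = \sum f(x)\,dx = \sum dF = F(b) - F(a),
\]

the sum of the differences dF collapsing, or telescoping, so that only the difference of the endpoint values survives; in modern terms, \int_{a}^{b} f(x)\,dx = F(b) - F(a).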
