Why Beauty is Truth


  The mathematics was impressive, but in Hamilton’s hands it led to immediate experimental payoff. Hamilton noticed that his method implied the existence of “conical refraction,” in which a single light ray hitting a suitable crystal would emerge as an entire cone of rays. In 1832 this prediction, which was a big surprise to everyone who worked in optics, was dramatically confirmed by Humphry Lloyd using a crystal of the mineral aragonite. Overnight, Hamilton became a household name in science.

  By 1830, Hamilton was thinking about settling down and considered marrying Ellen de Vere, telling Wordsworth he “admired her mind.” Again he resorted to sending her poems, and was just getting ready to propose when she told him she could never leave her home village of Curragh. He interpreted this as a tactful brush-off, and he may have been right, because a year later she married someone else and moved away.

  Eventually, he married Helen Bayly, a local lass who lived near the observatory. Hamilton described her as “not at all brilliant.” The honeymoon was a disaster; Hamilton worked on optics and Helen was ill. In 1834 they had a son, William Edwin. Then Helen went away for most of a year. A second son, Archibald Henry, followed in 1835, but the marriage was falling apart.

  Posterity holds Hamilton’s mechanico-optical analogy to have been his greatest discovery. But in his own mind, right up to his death and with increasing obsession, that honor was reserved for something very different: quaternions.

  Quaternions are an algebraic structure, a close relative of complex numbers. Hamilton was convinced they held the key to the deepest regions of physics; indeed, in his final years he believed they held the key to virtually everything. History seemed to disagree, and over the next century quaternions slowly faded from public view, becoming an obscure backwater of abstract algebra with few important applications.

  Very recently, however, quaternions have enjoyed a revival. While they may never measure up to Hamilton’s hopes, they are increasingly recognized as an important source of significant mathematical structures. Quaternions, it turns out, are very special beasts, and they are special in just the way modern theories of physics require.

  When first discovered, quaternions started a major revolution in algebra. They broke one of the important algebraic rules. Within twenty years, virtually all of the rules of algebra were routinely being broken, often bringing huge benefits, equally often leading to sterile dead ends. What the mathematicians of the mid-1850s had considered inviolable rules turned out to be merely convenient assumptions that made life simpler for algebraists but did not always match the deeper needs of mathematics itself.

  In the brave new post-Galois world, algebra was no longer just about using symbols for numbers in equations. It was about the deep structure of equations—not numbers but processes, transformations, symmetries. These radical innovations changed the face of mathematics. They made it more abstract, but also more general and more powerful. And the whole area had a weird, often baffling beauty.

  Until the Renaissance mathematicians of Bologna started wondering whether minus one could have a sensible square root, all of the numbers appearing in mathematics belonged to a single system. Even today, as a legacy of historical confusion about the relation of mathematics to reality, this system is known as the real numbers. The name is unfortunate, because it suggests that these numbers somehow belong to the fabric of the universe rather than being generated by human attempts to understand it. They are not. They are no more real than the other “number systems” invented by human imagination over the past 150 years. They do, however, bear a more direct relation to reality than most new systems. They correspond very closely to an idealized form of measurement.

  A real number, essentially, is a decimal. Not as regards that specific type of notation, which is merely a convenient way to write real numbers in a form suitable for calculations, but as regards the deeper properties that decimals possess. The real numbers were born from simpler, less ambitious ancestors. First, humanity stumbled its way towards the system of “natural numbers,” 0, 1, 2, 3, 4, and so on. I say “stumbled” because in the early stages, several of these numbers were not recognized as numbers at all. There was a time when ancient Greeks refused to consider 2 a number; it was too small to be typical of “numerosity.” Numbers began at 3. Eventually, they allowed that 2 was as much a number as 3, 4, or 5, but then they balked at 1. After all, if someone claimed to have “a number of cows,” and then you found that he owned just one cow, he was guilty of wild exaggeration. “Number” surely meant “plurality,” which ruled out the singular.

  But as notational systems developed, it became blindingly obvious that 1 was just as much a part of the computational system as its larger brethren. So it became a number—but a special, very small one. In some ways it was the most important number of all, because that’s where numbers started. Adding lots of 1’s together got you everything else—and for a time the notation did literally that, so that “seven” would be written as seven strokes, | | | | | | |.

  Much later, Indian mathematicians recognized that there was an even more important number that preceded 1. That wasn’t where numbers started, after all. They started at zero—now symbolized by 0. Later still, it turned out to be useful to throw negative numbers into the mix—numbers less than nothing. So the negative whole numbers joined the system, and humanity invented the integers: . . ., –3, –2, –1, 0, 1, 2, 3,. . . But it didn’t stop there.

  The problem with whole numbers is that they fail to represent many useful quantities. A farmer trading grain, for instance, might wish to specify a quantity of wheat somewhere between 1 sack and 2 sacks. If it seemed about midway between, then it constituted 1½ sacks. Maybe it was a bit less, 1⅓, or a bit more, 1⅔. And so fractions were invented, with a variety of notations. Fractions interpolated between the whole numbers. Sufficiently complicated fractions interpolated exceedingly finely, as we have already seen with Babylonian arithmetic. Surely any quantity could be represented by a fraction.

  Enter Pythagoras and his eponymous theorem. An immediate consequence is that the length of the diagonal of a unit square is a number whose square is exactly 2. That is to say, the diagonal has length equal to the square root of 2. Such a number must exist, because you can draw a square and it obviously has a diagonal, and the diagonal must have length. But as Hippasus realized to his sorrow, whatever the square root of 2 might be, it cannot be an exact fraction. It is irrational. So even more numbers were needed to fill invisible gaps in the system of fractions.
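
  The reasoning behind Hippasus's unwelcome discovery is worth pausing over. His own argument has not survived, but the standard modern sketch runs as follows.

```latex
% Suppose, for contradiction, that \sqrt{2} were a fraction p/q in lowest terms.
\sqrt{2} = \frac{p}{q} \;\Longrightarrow\; p^2 = 2q^2,
% so p^2 is even, hence p is even: write p = 2r. Substituting,
(2r)^2 = 2q^2 \;\Longrightarrow\; q^2 = 2r^2,
% so q is even as well -- contradicting "lowest terms". No such fraction exists.
```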

  Eventually, this process seemed to halt. The Greeks abandoned numerical schemes in favor of geometry, but in 1585, a Flemish mathematician and engineer named Simon Stevin, who lived in the town of Bruges, was appointed by William the Silent to tutor his son Maurice of Nassau. Stevin rose to become inspector of the dikes, quartermaster-general of the army, and minister of finance. These appointments, especially the last two, impressed on him the need for proper bookkeeping, and he borrowed the clerical systems of the Italians. Seeking a way of representing fractions that had the flexibility of Hindu-Arabic place notation and the fine precision of Babylonian sexagesimals, Stevin came up with a base-ten analogue of the Babylonian base-60 system: decimals.

  Stevin published an essay describing his new notational system. He was sufficiently alert to marketing issues to include a statement that the ideas had been subjected to “a thorough trial by practical men who found it so useful that they had voluntarily discarded the short cuts of their own invention to adopt this new one.” Further, he claimed that his decimal system “teaches how all computations that are met in business may be performed by integers alone without the aid of fractions.”

  Stevin’s notation does not use today’s decimal point, but it is directly related. Where we would write 3.1416, Stevin wrote 3⓪1①4②1③6④. The symbol ⓪ indicated a whole number, ① one tenth, ② one hundredth, and so on. As people got used to the system they dispensed with ①, ②, etc., and retained only ⓪, which mutated into the decimal point.

 
We can’t actually write the square root of two in decimals—not if we ever plan to stop. But neither can we write the fraction ⅓ in decimals. It is close to 0.33, but 0.333 is closer, and 0.3333 is closer still. An exact representation exists—to use that word in a novel way—only if we contemplate an infinitely long list of 3’s. But if that’s acceptable, we can in principle write down the square root of two exactly. There’s no evident pattern in the digits, but by taking enough of them we can get a number whose square is as close to 2 as we please. Conceptually, if we take all of them, we get a number whose square is exactly 2.
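
  To watch that convergence happening, here is a small illustration of my own (not from the text), using Python's standard decimal module:

```python
# Successive decimal approximations to the square root of two: the more digits
# we keep, the closer the square of the approximation gets to 2.
from decimal import Decimal, getcontext

for digits in (2, 4, 8, 16):
    getcontext().prec = digits
    approx = Decimal(2).sqrt()           # sqrt(2) rounded to `digits` significant figures
    getcontext().prec = 2 * digits + 2   # enough precision to square it faithfully
    print(approx, approx * approx)       # 1.4 -> 1.96, 1.414 -> 1.999396, and so on
```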

  With the acceptance of “infinite decimals,” the real number system was complete. It could represent any number required by a businessman or mathematician to any desired accuracy. Every conceivable measurement could be stated as a decimal. If it was useful to write down negative numbers, the decimal system handled them with ease. But no other kind of number could possibly be needed. There were no gaps left to fill.

  Except.

  That confounded cubic formula of Cardano’s seemed to be telling us something, but whatever it was, it was terribly obscure. If you started with an apparently harmless cubic—one where you knew a root—the formula did not give you that answer explicitly. Instead it offered a messy recipe requiring you to take cube roots of things that were even messier, and those things seemed to ask for the impossible, the square root of a negative number. The Pythagoreans had balked at the square root of two, but the square root of minus one was even more baffling.

  For several hundred years, the possibility of giving sensible meaning to the square root of minus one had flitted in and out of the collective mathematical consciousness. No one had any idea whether such a number could exist. But they began to realize that it would be extremely useful if it did.

  At first, such “imaginary” quantities had exactly one use: to indicate that a problem had no solutions. If you wanted to find a number whose square was minus one, the formal solution “square root of minus one” was imaginary, so no solution existed. No less a thinker than René Descartes made just this point. In 1637 he distinguished “real” numbers from “imaginary” ones, insisting that the presence of imaginaries signals the absence of solutions. Newton said the same thing. But both of these luminaries were reckoning without Bombelli, who had noticed decades earlier that sometimes imaginaries signal the presence of solutions. But the signal is hard to decipher.

  In 1673 the English mathematician John Wallis, who was born in Ashford, about fifteen miles from my hometown in the county of Kent, made a fantastic breakthrough. He found a simple way to represent imaginary numbers—even “complex” ones that combined real numbers with imaginaries—as points in the plane. The first step is the now-familiar concept of the real “number line,” a kind of ruler extending to infinity in both directions, with 0 in the middle, the positive real numbers wandering off to the right, and the negative ones to the left.

  Every real number can be located on the number line. Each successive decimal place requires a subdivision of the unit length into ten, a hundred, a thousand, etc., equal parts, but that is no problem. Numbers like √2 can be located as accurately as we wish, somewhere in between 1 and 2, a bit to the left of 1.5. The number π sits a little to the right of 3, and so on.

  The real number line.

  But where does i, the square root of minus one, go? There is no place for it on the real number line. It is neither positive nor negative; it can go neither to the right nor to the left of 0.

  So Wallis put it somewhere else. He introduced a second number line, to include the imaginaries—the multiples of i—and placed it at right angles to the real number line. It was literally a case of “lateral thinking.”

  Two copies of the real number line, placed at right angles.

  The two number lines, real and imaginary, have to cross at 0. It is very easy to prove that if numbers make sense at all, then 0 times i must equal 0, so the origins of the real and imaginary lines coincide.

  A complex number consists of two parts: one real, one imaginary. To locate this number in the plane, Wallis told his readers to measure off the real part along the horizontal “real” line, and then measure off the imaginary part vertically—parallel to the imaginary line.

  The complex plane according to Wallis.

  This proposal completely solved the problem of giving meaning to imaginary and complex numbers. It was simple but decisive, a true work of genius.

  It was totally ignored.

  Despite the lack of public recognition, Wallis’s breakthrough must have percolated into the mathematical consciousness, because mathematicians started to employ subconscious images that related directly to Wallis’s basic idea: there is no complex number line, there is a complex plane.

  As mathematics became more versatile, mathematicians started trying to calculate ever more complicated things. In 1702, Johann Bernoulli, trying to solve a calculus problem, found that he needed to evaluate the logarithm of a complex number. By 1712, Bernoulli and Leibniz were doing battle over a core issue: what is the logarithm of a negative number? If you could solve that, you could find the logarithm of any complex number because the logarithm of a number’s square root is just half the logarithm of that number. So the logarithm of i is half that of –1. But what is the logarithm of –1?

  The issue at stake was simple. Leibniz believed the logarithm of –1 had to be complex. Bernoulli said it had to be real. Bernoulli based his contention on a simple piece of calculus; Leibniz objected that neither the method nor the answer made sense. In 1749, Euler resolved the controversy, coming down heavily in favor of Leibniz. Bernoulli, he pointed out, had forgotten something. His calculus calculation was of a kind that involved the addition of an “arbitrary constant.” In his enthusiasm for complex calculus, Bernoulli had tacitly assumed that this constant was zero. It wasn’t. It was imaginary. This omission explained the discrepancy between Bernoulli’s answer and Leibniz’s.
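
  In modern notation (a compact restatement, not Euler's own symbolism), the resolution looks like this:

```latex
% The complex logarithm is multivalued; any two of its values differ by a
% whole multiple of 2\pi i -- exactly the "arbitrary constant" Bernoulli dropped.
\log(-1) = (2k+1)\,\pi i, \qquad k \in \mathbb{Z},
% so no value is real, as Leibniz maintained; and taking principal values,
\log(i) = \tfrac{1}{2}\log(-1) = \tfrac{\pi i}{2}.
```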

  The pace of “complexification” of mathematics was heating up. More and more ideas originating in the study of real numbers were being extended to complex ones. In 1797, a Norwegian named Caspar Wessel devised a method to represent complex numbers as points in a plane. Caspar came from a family of church ministers and was the sixth of fourteen children. At that time, Norway had no universities but was united with Denmark, so in 1761 he went to the University of Copenhagen. He and his brother Ole studied law, and Ole worked on the side as a surveyor, to stretch the family finances. Later, Caspar became Ole’s assistant.

  While working as a surveyor, Caspar invented a way to represent the geometry of the plane—especially its lines and their directions—in terms of complex numbers. Conversely, his ideas could be seen as representing complex numbers in terms of the geometry of the plane. He presented the work—his one and only research paper in mathematics—to the Royal Danish Academy in 1797.

  Hardly any leading mathematicians read Danish, and the work languished unread until it was translated into French a century later. Meanwhile, the French mathematician Jean-Robert Argand independently had the same idea and published it in 1806. By 1811 it had occurred to Gauss, independently again, that complex numbers could be viewed as the points of a plane. The terms “Argand diagram,” “Wessel plane,” and “Gauss plane” began to circulate. Different nationalities tended to employ different phrases.

  Hamilton took the final step. In 1837, almost three hundred years after Cardano’s formula had suggested that “imaginary” numbers might be useful, Hamilton removed the geometric element and reduced complex numbers to pure algebra. His idea was simple; it was implicit in Wallis’s proposal and in the equivalent ideas of Wessel, Argand, and Gauss. But no one had made it explicit.

  Algebraically, said Hamilton, a point in the plane can be identified with a pair of real numbers, its coordinates (x, y). If you look at Wallis’s diagram (or Wessel’s, Argand’s, or Gauss’s) you can see that x is the real part of the number and y is its imaginary part. A complex number x + iy is “really” just a
pair of real numbers, (x, y). You can even lay down rules for adding and multiplying these pairs, and the main step is to observe that since i corresponds to the pair (0, 1), then (0, 1) × (0, 1) must equal (–1, 0). At this point Gauss revealed, in a letter to the Hungarian geometer Wolfgang Bolyai, that exactly the same idea had occurred to him in 1831. Once again, the fox had covered his tracks—so completely that nothing had been visible.
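
  A minimal sketch of Hamilton's recipe in code (my own illustration in Python; the function names are hypothetical, not Hamilton's):

```python
# A complex number is just a pair of reals (a, b), standing for a + bi.
# The multiplication rule below is what encodes i * i = -1.

def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    return (z[0] * w[0] - z[1] * w[1],    # real part
            z[0] * w[1] + z[1] * w[0])    # imaginary part

i = (0, 1)
print(mul(i, i))             # (-1, 0): the pair that plays the role of -1
print(mul((1, 2), (3, 4)))   # (-5, 10), i.e. (1 + 2i)(3 + 4i) = -5 + 10i
```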

  Problem solved. A complex number is just a pair of real numbers, manipulated according to a short list of simple rules. Since a pair of real numbers is surely just as “real” as a single real number, real and complex numbers are equally closely related to reality, and “imaginary” is misleading.

  Today’s view is rather different: it is that “real” is what’s misleading. Both real and imaginary numbers are equally figments of human imagination.

  The reaction to Hamilton’s solution of a three-hundred-year-old conundrum was distinctly muted. Once mathematicians had woven the notion of complex numbers into a powerful coherent theory, fears about the existence of complex numbers became unimportant. But Hamilton’s use of pairs turned out nonetheless to be very significant. Even though the issue of complex numbers was no longer a source of excitement, the idea of building new number systems from old ones took root in the mathematical consciousness.

  Complex numbers, it turned out, were useful not only in algebra and basic calculus. They constituted a powerful method for solving problems about fluid flow, heat, gravity, sound, almost every area of mathematical physics. But they had one major limitation: they solved these problems in two-dimensional space, not the three that we live in. Some problems, such as the motion of the skin of a drum or the flow of a thin layer of fluid, can be reduced to two dimensions, so the news isn’t all bad. But mathematicians became increasingly irritated that their complex-number methods could not be extended from a plane to space of three dimensions.