History of Mathematics


Prehistoric Mathematics

Our prehistoric ancestors would have had a general sensibility about amounts, and would have instinctively known the difference between, say, one and two antelopes. The Ishango bone, a tally stick from central Africa, dates from about 20,000 years ago. But the intellectual leap from the concrete idea of two things to the invention of a symbol or word for the abstract idea of "two" took many ages to come about.

Even today, there are isolated hunter-gatherer tribes in Amazonia which only have words for "one", "two" and "many", and others which only have words for numbers up to five. In the absence of settled agriculture and trade, there is little need for a formal system of numbers.

Early man kept track of regular occurrences such as the phases of the moon and the seasons. Some of the very earliest evidence of mankind thinking about numbers is from notched bones in Africa dating back to 35,000 to 20,000 years ago. But this is really mere counting and tallying rather than mathematics as such.

Pre-dynastic Egyptians and Sumerians represented geometric designs on their artefacts as early as the 5th millennium BC, as did some megalithic societies in northern Europe in the 3rd millennium BC or before. But this is more art and decoration than the systematic treatment of figures, patterns, forms and quantities that has come to be considered as mathematics.

Mathematics proper initially developed largely as a response to bureaucratic needs when civilizations settled and developed agriculture - for the measurement of plots of land, the taxation of individuals, etc - and this first occurred in the Sumerian and Babylonian civilizations of Mesopotamia (roughly, modern Iraq) and in ancient Egypt.

According to some authorities, there is evidence of basic arithmetic and geometric notations on the petroglyphs at Knowth and Newgrange burial mounds in Ireland (dating from about 3500 BC and 3200 BC respectively). These utilize a repeated zig-zag glyph for counting, a system which continued to be used in Britain and Ireland into the 1st millennium BC. Stonehenge, a Neolithic ceremonial and astronomical monument in England, which dates from around 2300 BC, also arguably exhibits examples of the use of 60 and 360 in the circle measurements, a practice which presumably developed quite independently of the sexagesimal counting system of the ancient Sumerians and Babylonians.

Sumerian/Babylonian Mathematics

Sumer (a region of Mesopotamia, modern-day Iraq) was the birthplace of writing, the wheel, agriculture, the arch, the plow, irrigation and many other innovations, and is often referred to as the Cradle of Civilization. The Sumerians developed the earliest known writing system - a pictographic writing system known as cuneiform script, using wedge-shaped characters inscribed on baked clay tablets - and this has meant that we actually have more knowledge of ancient Sumerian and Babylonian mathematics than of early Egyptian mathematics. Indeed, we even have what appear to be school exercises in arithmetic and geometry.

As in Egypt, Sumerian mathematics initially developed largely as a response to bureaucratic needs when their civilization settled and developed agriculture (possibly as early as the 6th millennium BC) for the measurement of plots of land, the taxation of individuals, etc. In addition, the Sumerians and Babylonians needed to describe quite large numbers as they attempted to chart the course of the night sky and develop their sophisticated lunar calendar.

They were perhaps the first people to assign symbols to groups of objects in an attempt to make the description of larger numbers easier. They moved from using separate tokens or symbols to represent sheaves of wheat, jars of oil, etc, to the more abstract use of a symbol for specific numbers of anything. Starting as early as the 4th millennium BC, they began using a small clay cone to represent one, a clay ball for ten, and a large cone for sixty. Over the course of the third millennium, these objects were replaced by cuneiform equivalents so that numbers could be written with the same stylus that was being used for the words in the text. A rudimentary model of the abacus was probably in use in Sumeria from as early as 2700 - 2300 BC.

Babylonian Numerals

Sumerian and Babylonian mathematics was based on a sexagesimal, or base 60, numeric system, which could be counted physically using the twelve knuckles on one hand and the five fingers on the other hand. Unlike those of the Egyptians, Greeks and Romans, Babylonian numbers used a true place-value system, where digits written in the left column represented larger values, much as in the modern decimal system, although of course using base 60 not base 10. Thus, 1 1 1 in the Babylonian system represented 3,600 plus 60 plus 1, or 3,661. Also, to represent the numbers 1 - 59 within each place value, two distinct symbols were used, a unit symbol and a ten symbol, which were combined in a similar way to the familiar system of Roman numerals (e.g. 23 would be shown as two ten-symbols followed by three unit-symbols). Thus, 1 23 represents 60 plus 23, or 83. However, the number 60 was represented by the same symbol as the number 1 and, because they lacked an equivalent of the decimal point, the actual place value of a symbol often had to be inferred from the context.
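In modern terms, the place-value rule described above can be sketched as a short function (the name `from_sexagesimal` is illustrative, not an ancient notation):

```python
def from_sexagesimal(digits):
    """Interpret a list of base-60 digits, most significant first.
    Each position to the left is worth 60 times more."""
    value = 0
    for d in digits:
        value = value * 60 + d
    return value

print(from_sexagesimal([1, 1, 1]))  # 3661  (3600 + 60 + 1)
print(from_sexagesimal([1, 23]))    # 83    (60 + 23)
```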

It has been conjectured that Babylonian advances in mathematics were probably facilitated by the fact that 60 has many divisors (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30 and 60 - in fact, 60 is the smallest integer divisible by all integers from 1 to 6), and the continued modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 x 6) degrees in a circle, are all testaments to the ancient Babylonian system. It is for similar reasons that 12 (which has factors of 1, 2, 3, 4 and 6) has been such a popular multiple historically (e.g. 12 months, 12 inches, 12 pence, 2 x 12 hours, etc).
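The divisibility claim is easy to verify directly; this sketch enumerates the divisors of 60 and confirms that 60 is the least common multiple of 1 through 6:

```python
import math

def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]

# 60 is also the smallest number divisible by every integer from 1 to 6:
lcm_1_to_6 = 1
for k in range(2, 7):
    lcm_1_to_6 = lcm_1_to_6 * k // math.gcd(lcm_1_to_6, k)
print(lcm_1_to_6)  # 60
```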

The Babylonians also developed another revolutionary mathematical concept, something else that the Egyptians, Greeks and Romans did not have, a circle character for zero, although its symbol was really still more of a placeholder than a number in its own right.

We have evidence of the development of a complex system of metrology in Sumer from about 3000 BC, and multiplication and reciprocal (division) tables, tables of squares, square roots and cube roots, geometrical exercises and division problems from around 2600 BC onwards. Later Babylonian tablets dating from about 1800 to 1600 BC cover topics as varied as fractions, algebra, methods for solving linear, quadratic and even some cubic equations, and the calculation of regular reciprocal pairs (pairs of numbers which multiply together to give 60). One Babylonian tablet gives an approximation to √2 accurate to an astonishing five decimal places. Others list the squares of numbers up to 59, the cubes of numbers up to 32, as well as tables of compound interest. Yet another gives an estimate for π of 3 1/8 (3.125, a reasonable approximation of the true value of about 3.1416).
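The √2 tablet (widely identified as YBC 7289) records the sexagesimal digits 1;24,51,10; interpreting those digits in base 60 shows how good the approximation is:

```python
# 1;24,51,10 in base 60: one, plus 24/60, plus 51/60^2, plus 10/60^3
sqrt2_babylonian = 1 + 24/60 + 51/60**2 + 10/60**3
print(sqrt2_babylonian)  # 1.41421296...
print(2 ** 0.5)          # 1.41421356...: the two agree to five decimal places
```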

Babylonian clay tablets from c. 2100 BC showing a problem concerning the area of an irregular shape

The idea of square numbers and quadratic equations (where the unknown quantity is multiplied by itself, e.g. x²) naturally arose in the context of the measurement of land, and Babylonian mathematical tablets give us the first ever evidence of the solution of quadratic equations. The Babylonian approach to solving them usually revolved around a kind of geometric game of slicing up and rearranging shapes, although the use of algebra and quadratic equations also appears. At least some of the examples we have appear to indicate problem-solving for its own sake rather than in order to resolve a concrete practical problem.

The Babylonians used geometric shapes in their buildings and design and in dice for the leisure games which were so popular in their society, such as the ancient game of backgammon. Their geometry extended to the calculation of the areas of rectangles, triangles and trapezoids, as well as the volumes of simple shapes such as bricks and cylinders (although not pyramids).

The famous and controversial Plimpton 322 clay tablet, believed to date from around 1800 BC, suggests that the Babylonians may well have known the secret of right-angled triangles (that the square of the hypotenuse equals the sum of the squares of the other two sides) many centuries before the Greek Pythagoras. The tablet appears to list 15 perfect Pythagorean triangles with whole-number sides, although some claim that they were merely academic exercises, and not deliberate manifestations of Pythagorean triples.


Egyptian Mathematics

The early Egyptians settled along the fertile Nile valley as early as about 6000 BC, and they began to record the patterns of lunar phases and the seasons, both for agricultural and religious reasons. The Pharaoh’s surveyors used measurements based on body parts (a palm was the width of the hand, a cubit the measurement from elbow to fingertips) to measure land and buildings very early in Egyptian history, and a decimal numeric system was developed based on our ten fingers. The oldest mathematical text from ancient Egypt discovered so far, though, is the Moscow Papyrus, which dates from the Egyptian Middle Kingdom around 2000 - 1800 BC.

It is thought that the Egyptians introduced the earliest fully-developed base 10 numeration system at least as early as 2700 BC (and probably much earlier). Written numbers used a stroke for units, a heel-bone symbol for tens, a coil of rope for hundreds and a lotus plant for thousands, as well as other hieroglyphic symbols for higher powers of ten up to a million. However, there was no concept of place value, so larger numbers were rather unwieldy: although a million required just one character, a million minus one required fifty-four characters.
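The fifty-four-character figure follows from the additive notation: with one symbol per power of ten and no place value, a number needs as many symbols as the sum of its decimal digits. A quick sketch (the function name is illustrative):

```python
def hieroglyph_count(n):
    """Number of symbols needed in a purely additive base-10 notation:
    the sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

print(hieroglyph_count(1_000_000))  # 1  (a single "million" glyph)
print(hieroglyph_count(999_999))    # 54 (nine of each of six symbols)
```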

Ancient Egyptian method of multiplication

The Rhind Papyrus, dating from around 1650 BC, is a kind of instruction manual in arithmetic and geometry, and it gives us explicit demonstrations of how multiplication and division were carried out at that time. It also contains evidence of other mathematical knowledge, including unit fractions, composite and prime numbers, arithmetic, geometric and harmonic means, and how to solve first order linear equations as well as arithmetic and geometric series. The Berlin Papyrus, which dates from around 1300 BC, shows that ancient Egyptians could solve second-order algebraic (quadratic) equations.

Multiplication, for example, was achieved by a process of repeated doubling of the number to be multiplied on one side and of one on the other, essentially a kind of multiplication of binary factors similar to that used by modern computers (see the example at right). These corresponding blocks of counters could then be used as a kind of multiplication reference table: first, the combination of powers of two which add up to the number to be multiplied by was isolated, and then the corresponding blocks of counters on the other side yielded the answer. This effectively made use of the concept of binary numbers, over 3,000 years before Leibniz introduced it into the west, and many more years before the development of the computer was to fully explore its potential.
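The doubling procedure amounts to a binary decomposition of the multiplier, and can be sketched as (the function name is illustrative):

```python
def egyptian_multiply(a, b):
    """Multiply a by b using only doubling and addition, in the manner
    of the Rhind Papyrus: double a repeatedly, then add up the doublings
    whose power-of-two multipliers sum to b."""
    total = 0
    power, doubling = 1, a
    while power <= b:
        if b & power:          # this power of two is part of b
            total += doubling
        power <<= 1            # next power of two
        doubling += doubling   # the doubling step
    return total

print(egyptian_multiply(13, 21))  # 273, since 21 = 16 + 4 + 1
```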

Practical problems of trade and the market led to the development of a notation for fractions. The papyri which have come down to us demonstrate the use of unit fractions based on the symbol of the Eye of Horus, where each part of the eye represented a different fraction, each half of the previous one (i.e. half, quarter, eighth, sixteenth, thirty-second, sixty-fourth), so that the total was one-sixty-fourth short of a whole, the first known example of a geometric series.
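The Eye of Horus series can be checked with exact fractions:

```python
from fractions import Fraction

# Halving six times: 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64
total = sum(Fraction(1, 2 ** k) for k in range(1, 7))
print(total)  # 63/64, one sixty-fourth short of a whole
```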

Ancient Egyptian method of division

Unit fractions could also be used for simple division sums. For example, if they needed to divide 3 loaves among 5 people, they would first divide two of the loaves into thirds and the third loaf into fifths, then they would divide the left over third from the second loaf into five pieces. Thus, each person would receive one-third plus one-fifth plus one-fifteenth (which totals three-fifths, as we would expect).
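The loaf-sharing example works out exactly with unit fractions:

```python
from fractions import Fraction

# 3 loaves among 5 people: each gets a third, a fifth, and a fifteenth
share = Fraction(1, 3) + Fraction(1, 5) + Fraction(1, 15)
print(share)       # 3/5
print(share * 5)   # 3: five such shares use up exactly three loaves
```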

The Egyptians approximated the area of a circle by using shapes whose area they did know. They observed that the area of a circle of diameter 9 units, for example, was very close to the area of a square with sides of 8 units, so that the area of circles of other diameters could be obtained by multiplying the diameter by 8/9 and then squaring it. This gives an effective approximation of π accurate to within less than one percent.
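The rule can be checked against the true circle area (a modern sketch; the function name is illustrative):

```python
import math

def egyptian_circle_area(d):
    """Rhind Papyrus rule: take 8/9 of the diameter and square it."""
    return (8 * d / 9) ** 2

d = 9
approx = egyptian_circle_area(d)
exact = math.pi * (d / 2) ** 2
print(approx)                        # 64.0, the 8-by-8 square
print(exact)                         # 63.617...
print(abs(approx - exact) / exact)   # relative error under 1%
```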

The pyramids themselves are another indication of the sophistication of Egyptian mathematics. Setting aside claims that the pyramids are the first known structures to observe the golden ratio of 1 : 1.618 (which may have occurred for purely aesthetic, and not mathematical, reasons), there is certainly evidence that they knew the formula for the volume of a pyramid - 1/3 times the height times the length times the width - as well as of a truncated or clipped pyramid. They were also aware, long before Pythagoras, of the rule that a triangle with sides 3, 4 and 5 units yields a perfect right angle, and Egyptian builders used ropes knotted at intervals of 3, 4 and 5 units in order to ensure exact right angles for their stonework (in fact, the 3-4-5 right triangle is often called "Egyptian").


Greek Mathematics

As the Greek empire began to spread its sphere of influence into Asia Minor, Mesopotamia and beyond, the Greeks were smart enough to adopt and adapt useful elements from the societies they conquered. This was as true of their mathematics as anything else, and they adopted elements of mathematics from both the Babylonians and the Egyptians. But they soon started to make important contributions in their own right and, for the first time, we can acknowledge contributions by individuals. By the Hellenistic period, the Greeks had presided over one of the most dramatic and important revolutions in mathematical thought of all time.

The ancient Greek numeral system, known as Attic or Herodianic numerals, was fully developed by about 450 BC, and in regular use possibly as early as the 7th Century BC. It was a base 10 system similar to the earlier Egyptian one (and even more similar to the later Roman system), with symbols for 1, 5, 10, 50, 100, 500 and 1,000 repeated as many times as needed to represent the desired number. Addition was done by totalling separately the symbols (1s, 10s, 100s, etc) in the numbers to be added, and multiplication was a laborious process based on successive doublings (division was based on the inverse of this process).

Thales' Intercept Theorem

But most of Greek mathematics was based on geometry. Thales, one of the Seven Sages of Ancient Greece, who lived on the Ionian coast of Asia Minor in the first half of the 6th Century BC, is usually considered to have been the first to lay down guidelines for the abstract development of geometry, although what we know of his work (such as on similar and right triangles) now seems quite elementary.

Thales established what has become known as Thales' Theorem, whereby if a triangle is drawn within a circle with the long side as a diameter of the circle, then the opposite angle will always be a right angle (as well as some other related properties derived from this). He is also credited with another theorem, also known as Thales' Theorem or the Intercept Theorem, about the ratios of the line segments that are created if two intersecting lines are intercepted by a pair of parallels (and, by extension, the ratios of the sides of similar triangles).

To some extent, however, the legend of the 6th Century BC mathematician Pythagoras of Samos has become synonymous with the birth of Greek mathematics. Indeed, he is believed to have coined both the words "philosophy" ("love of wisdom") and "mathematics" ("that which is learned"). Pythagoras was perhaps the first to realize that a complete system of mathematics could be constructed, where geometric elements corresponded with numbers. Pythagoras’ Theorem (or the Pythagorean Theorem) is one of the best known of all mathematical theorems. But he remains a controversial figure, as we will see, and Greek mathematics was by no means limited to one man.

The Three Classical Problems

Three geometrical problems in particular, often referred to as the Three Classical Problems, and all to be solved by purely geometric means using only a straight edge and a compass, date back to the early days of Greek geometry: “the squaring (or quadrature) of the circle”, “the doubling (or duplicating) of the cube” and “the trisection of an angle”. These intransigent problems were profoundly influential on future geometry and led to many fruitful discoveries, although their actual solutions (or, as it turned out, the proofs of their impossibility) had to wait until the 19th Century.

Hippocrates of Chios (not to be confused with the great Greek physician Hippocrates of Kos) was one such Greek mathematician who applied himself to these problems during the 5th Century BC (his contribution to the “squaring the circle” problem is known as the Lune of Hippocrates). His influential book “The Elements”, dating to around 440 BC, was the first compilation of the elements of geometry, and his work was an important source for Euclid's later work.

Zeno's Paradox of Achilles and the Tortoise

It was the Greeks who first grappled with the idea of infinity, such as described in the well-known paradoxes attributed to the philosopher Zeno of Elea in the 5th Century BC. The most famous of his paradoxes is that of Achilles and the Tortoise, which describes a theoretical race between Achilles and a tortoise. Achilles gives the much slower tortoise a head start, but by the time Achilles reaches the tortoise's starting point, the tortoise has already moved ahead. By the time Achilles reaches that point, the tortoise has moved on again, etc, etc, so that in principle the swift Achilles can never catch up with the slow tortoise.

Paradoxes such as this one and Zeno's so-called Dichotomy Paradox are based on the infinite divisibility of space and time, and rest on the idea that a half plus a quarter plus an eighth plus a sixteenth, etc, etc, to infinity will never quite equal a whole. The paradox stems, however, from the false assumption that it is impossible to complete an infinite number of discrete dashes in a finite time, although it is extremely difficult to definitively prove the fallacy. The ancient Greek Aristotle was the first of many to try to disprove the paradoxes, particularly as he was a firm believer that infinity could only ever be potential and not real.

Democritus, most famous for his prescient ideas about all matter being composed of tiny atoms, was also a pioneer of mathematics and geometry in the 5th - 4th Century BC, and he produced works with titles like "On Numbers", "On Geometrics", "On Tangencies", "On Mapping" and "On Irrationals", although these works have not survived. We do know that he was among the first to observe that a cone (or pyramid) has one-third the volume of a cylinder (or prism) with the same base and height, and he is perhaps the first to have seriously considered the division of objects into an infinite number of cross-sections.

However, it is certainly true that Pythagoras in particular greatly influenced those who came after him, including Plato, who established his famous Academy in Athens in 387 BC, and his protégé Aristotle, whose work on logic was regarded as definitive for over two thousand years. Plato the mathematician is best known for his description of the five Platonic solids, but the value of his work as a teacher and popularizer of mathematics cannot be overstated.

Plato’s student Eudoxus of Cnidus is usually credited with the first implementation of the “method of exhaustion” (later developed by Archimedes), an early method of integration by successive approximations which he used for the calculation of the volume of the pyramid and cone. He also developed a general theory of proportion, which was applicable to incommensurable (irrational) magnitudes that cannot be expressed as a ratio of two whole numbers, as well as to commensurable (rational) magnitudes, thus extending Pythagoras’ incomplete ideas.

Perhaps the most important single contribution of the Greeks, though - and Pythagoras, Plato and Aristotle were all influential in this respect - was the idea of proof, and the deductive method of using logical steps to prove or disprove theorems from initial assumed axioms. Older cultures, like the Egyptians and the Babylonians, had relied on inductive reasoning, that is, using repeated observations to establish rules of thumb. It is this concept of proof that gives mathematics its power and ensures that proven theorems are as true today as they were two thousand years ago, and which laid the foundations for the systematic approach to mathematics of Euclid and those who came after him.


Pythagoras

It is sometimes claimed that we owe pure mathematics to Pythagoras, and he is often called the first "true" mathematician. But, although his contribution was clearly important, he nevertheless remains a controversial figure. He left no mathematical writings himself, and much of what we know about Pythagorean thought comes to us from the writings of Philolaus and other later Pythagorean scholars. Indeed, it is by no means clear whether many (or indeed any) of the theorems ascribed to him were in fact solved by Pythagoras personally or by his followers.

The school he established at Croton in southern Italy around 530 BC was the nucleus of a rather bizarre Pythagorean sect. Although Pythagorean thought was largely dominated by mathematics, it was also profoundly mystical, and Pythagoras imposed his quasi-religious philosophies, strict vegetarianism, communal living, secret rites and odd rules on all the members of his school (including bizarre and apparently random edicts about never urinating towards the sun, never marrying a woman who wears gold jewellery, never passing an ass lying in the street, never eating or even touching black fava beans, etc.).

The members were divided into the "mathematikoi" (or "learners"), who extended and developed the more mathematical and scientific work that Pythagoras himself began, and the "akousmatikoi" (or "listeners"), who focused on the more religious and ritualistic aspects of his teachings. There was always a certain amount of friction between the two groups and eventually the sect became caught up in some fierce local fighting and ultimately dispersed. Resentment built up against the secrecy and exclusiveness of the Pythagoreans and, in 460 BC, all their meeting places were burned and destroyed, with at least 50 members killed in Croton alone.

The over-riding dictum of Pythagoras's school was “All is number” or “God is number”, and the Pythagoreans effectively practised a kind of numerology or number-worship, and considered each number to have its own character and meaning. For example, the number one was the generator of all numbers; two represented opinion; three, harmony; four, justice; five, marriage; six, creation; seven, the seven planets or “wandering stars”; etc. Odd numbers were thought of as female and even numbers as male.

The Pythagorean Tetractys

The holiest number of all was "tetractys" or ten, a triangular number composed of the sum of one, two, three and four. It is a great tribute to the Pythagoreans' intellectual achievements that they deduced the special place of the number 10 from an abstract mathematical argument rather than from something as mundane as counting the fingers on two hands.

However, Pythagoras and his school - as well as a handful of other mathematicians of ancient Greece - were largely responsible for introducing a more rigorous mathematics than what had gone before, building from first principles using axioms and logic. Before Pythagoras, for example, geometry had been merely a collection of rules derived by empirical measurement. Pythagoras discovered that a complete system of mathematics could be constructed, where geometric elements corresponded with numbers, and where integers and their ratios were all that was necessary to establish an entire system of logic and truth.

He is mainly remembered for what has become known as Pythagoras’ Theorem (or the Pythagorean Theorem): that, for any right-angled triangle, the square of the length of the hypotenuse (the longest side, opposite the right angle) is equal to the sum of the squares of the other two sides (or “legs”). Written as an equation: a² + b² = c². What Pythagoras and his followers did not realize is that this also works for any shape: thus, the area of a pentagon on the hypotenuse is equal to the sum of the areas of the pentagons on the other two sides, as it does for a semi-circle or any other regular (or even irregular) shape.

Pythagoras' (Pythagorean) Theorem

The simplest and most commonly quoted example of a Pythagorean triangle is one with sides of 3, 4 and 5 units (3² + 4² = 5², as can be seen by drawing a grid of unit squares on each side as in the diagram at right), but there are potentially infinitely many other integer “Pythagorean triples”, starting with (5, 12, 13), (6, 8, 10), (7, 24, 25), (8, 15, 17), (9, 40, 41), etc. It should be noted, however, that (6, 8, 10) is not what is known as a “primitive” Pythagorean triple, because it is just a multiple of (3, 4, 5).
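All primitive triples can be generated by a formula recorded much later by Euclid: for coprime m > n > 0 of opposite parity, the sides m² - n², 2mn and m² + n² form a right triangle. A sketch (the function name is illustrative):

```python
from math import gcd

def primitive_triples(m_limit):
    """Generate primitive Pythagorean triples from Euclid's formula:
    a = m^2 - n^2, b = 2mn, c = m^2 + n^2, with m > n > 0 coprime
    and of opposite parity."""
    out = []
    for m in range(2, m_limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                out.append(tuple(sorted((a, b, c))))
    return sorted(out)

print(primitive_triples(5))
# [(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17)]
```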

Pythagoras’ Theorem and the properties of right-angled triangles seem to be the most ancient and widespread mathematical development after basic arithmetic and geometry, and they were touched on in some of the most ancient mathematical texts from Babylon and Egypt, dating from over a thousand years earlier. One of the simplest proofs comes from ancient China, and probably dates from well before Pythagoras' birth. It was Pythagoras, though, who gave the theorem its definitive form, although it is not clear whether Pythagoras himself definitively proved it or merely described it. Either way, it has become one of the best-known of all mathematical theorems, and as many as 400 different proofs now exist, some geometrical, some algebraic, some involving advanced differential equations, etc.

It soon became apparent, though, that non-integer solutions were also possible, so that an isosceles triangle with sides 1, 1 and √2, for example, also has a right angle, as the Babylonians had discovered centuries earlier. However, when Pythagoras’s student Hippasus tried to calculate the value of √2, he found that it was not possible to express it as a fraction, thereby indicating the potential existence of a whole new world of numbers, the irrational numbers (numbers that can not be expressed as simple fractions of integers). This discovery rather shattered the elegant mathematical world built up by Pythagoras and his followers, and the existence of a number that could not be expressed as the ratio of two of God's creations (which is how they thought of the integers) jeopardized the cult's entire belief system.

Poor Hippasus was apparently drowned by the secretive Pythagoreans for broadcasting this important discovery to the outside world. But the replacement of the idea of the divinity of the integers by the richer concept of the continuum, was an essential development in mathematics. It marked the real birth of Greek geometry, which deals with lines and planes and angles, all of which are continuous and not discrete.

Among his other achievements in geometry, Pythagoras (or at least his followers, the Pythagoreans) also realized that the sum of the angles of a triangle is equal to two right angles (180°), and probably also the generalization which states that the sum of the interior angles of a polygon with n sides is equal to (2n - 4) right angles, and that the sum of its exterior angles equals 4 right angles. They were able to construct figures of a given area, and to use simple geometrical algebra, for example to solve equations such as a(a - x) = x2 by geometrical means.
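These angle-sum rules are easy to check in modern units (a sketch; the function name is illustrative):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided polygon, measured in
    right angles: 2n - 4, as the Pythagoreans stated it."""
    return 2 * n - 4

print(interior_angle_sum(3) * 90)  # 180 degrees for a triangle
print(interior_angle_sum(4) * 90)  # 360 degrees for a quadrilateral
# The exterior angles of any convex polygon always total 4 right angles,
# i.e. 360 degrees, regardless of n.
```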

The Pythagoreans also established the foundations of number theory, with their investigations of triangular, square and also perfect numbers (numbers that are the sum of their divisors). They discovered several new properties of square numbers, such as that the square of a number n is equal to the sum of the first n odd numbers (e.g. 4² = 16 = 1 + 3 + 5 + 7). They also discovered at least the first pair of amicable numbers, 220 and 284 (amicable numbers are pairs of numbers for which the sum of the divisors of one number equals the other number, e.g. the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, of which the sum is 284; and the proper divisors of 284 are 1, 2, 4, 71, and 142, of which the sum is 220).
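Both number-theoretic claims can be verified directly (the helper name is illustrative):

```python
def proper_divisors(n):
    """Divisors of n excluding n itself."""
    return [d for d in range(1, n) if n % d == 0]

# The square of n is the sum of the first n odd numbers:
print(sum(range(1, 2 * 4, 2)))    # 16 = 1 + 3 + 5 + 7 = 4 squared

# 220 and 284 are amicable: each is the sum of the other's proper divisors.
print(sum(proper_divisors(220)))  # 284
print(sum(proper_divisors(284)))  # 220
```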

Pythagoras is credited with the discovery of the ratios between harmonious musical tones

Pythagoras is also credited with the discovery that the intervals between harmonious musical notes always have whole number ratios. For instance, playing half a length of a guitar string gives the same note as the open string, but an octave higher; a third of a length gives a different but harmonious note; etc. Non-whole number ratios, on the other hand, tend to give dissonant sounds. In this way, Pythagoras described the first four overtones which create the common intervals which have become the primary building blocks of musical harmony: the octave (2:1), the perfect fifth (3:2), the perfect fourth (4:3) and the major third (5:4). The oldest way of tuning the 12-note chromatic scale is known as Pythagorean tuning, and it is based on a stack of perfect fifths, each tuned in the ratio 3:2.
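Pythagorean tuning by stacked fifths can be sketched with exact ratios (a modern reconstruction of the idea, stacking ascending fifths only):

```python
from fractions import Fraction

# Stack perfect fifths (3:2) eleven times, folding each pitch back
# into a single octave so every ratio lies in [1, 2).
ratios = [Fraction(1)]
for _ in range(11):
    r = ratios[-1] * Fraction(3, 2)
    while r >= 2:
        r /= 2
    ratios.append(r)
ratios.sort()

print(len(ratios))               # 12 notes in the chromatic scale
print(Fraction(3, 2) in ratios)  # True: the perfect fifth
print(Fraction(9, 8) in ratios)  # True: the whole tone
```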

The mystical Pythagoras was so excited by this discovery that he became convinced that the whole universe was based on numbers, and that the planets and stars moved according to mathematical equations, which corresponded to musical notes, and thus produced a kind of symphony, the “Musica Universalis” or “Music of the Spheres”.


Plato

Although usually remembered today as a philosopher, Plato was also one of ancient Greece’s most important patrons of mathematics. Inspired by Pythagoras, he founded his Academy in Athens in 387 BC, where he stressed mathematics as a way of understanding more about reality. In particular, he was convinced that geometry was the key to unlocking the secrets of the universe. The sign above the Academy entrance read: “Let no-one ignorant of geometry enter here”.

Plato played an important role in encouraging and inspiring Greek intellectuals to study mathematics as well as philosophy. His Academy taught mathematics as a branch of philosophy, as Pythagoras had done, and the first 10 years of the 15-year course at the Academy involved the study of science and mathematics, including plane and solid geometry, astronomy and harmonics. Plato became known as the "maker of mathematicians", and his Academy boasted some of the most prominent mathematicians of the ancient world, including Eudoxus, Theaetetus and Archytas.

He demanded of his students accurate definitions, clearly stated assumptions, and logical deductive proof, and he insisted that geometric proofs be demonstrated with no aids other than a straight edge and a compass. Among the many mathematical problems Plato posed for his students’ investigation were the so-called Three Classical Problems (“squaring the circle”, “doubling the cube” and “trisecting the angle”) and to some extent these problems have become identified with Plato, although he was not the first to pose them.

Platonic Solids

Plato the mathematician is perhaps best known for his identification of 5 regular symmetrical 3-dimensional shapes, which he maintained were the basis for the whole universe, and which have become known as the Platonic Solids: the tetrahedron (constructed of 4 regular triangles, and which for Plato represented fire), the octahedron (composed of 8 triangles, representing air), the icosahedron (composed of 20 triangles, and representing water), the cube (composed of 6 squares, and representing earth), and the dodecahedron (made up of 12 pentagons, which Plato obscurely described as “the god used for arranging the constellations on the whole heaven”).

The tetrahedron, cube and dodecahedron were probably familiar to Pythagoras, and the octahedron and icosahedron were probably discovered by Theaetetus, a contemporary of Plato. Furthermore, it fell to Euclid, half a century later, to prove that these were the only possible convex regular polyhedra. But they nevertheless became popularly known as the Platonic Solids, and inspired mathematicians and geometers for many centuries to come. For example, around 1600, the German astronomer Johannes Kepler devised an ingenious system of nested Platonic solids and spheres to approximate quite well the distances of the known planets from the Sun (although he was enough of a scientist to abandon his elegant model when it proved to be not accurate enough).


By the 3rd Century BC, in the wake of the conquests of Alexander the Great, mathematical breakthroughs were also beginning to be made on the edges of the Greek Hellenistic empire.

The Sieve of Eratosthenes

In particular, Alexandria in Egypt became a great centre of learning under the beneficent rule of the Ptolemies, and its famous Library soon gained a reputation to rival that of the Athenian Academy. The patrons of the Library were arguably the first professional scientists, paid for their devotion to research. Among the best known and most influential mathematicians who studied and taught at Alexandria were Euclid, Archimedes, Eratosthenes, Heron, Menelaus and Diophantus.

During the late 4th and early 3rd Century BC, Euclid was the great chronicler of the mathematics of the time, and one of the most influential teachers in history. He virtually invented classical (Euclidean) geometry as we know it. Archimedes spent most of his life in Syracuse, Sicily, but also studied for a while in Alexandria. He is perhaps best known as an engineer and inventor but, in the light of recent discoveries, he is now considered one of the greatest pure mathematicians of all time. Eratosthenes of Alexandria was a near contemporary of Archimedes in the 3rd Century BC. A mathematician, astronomer and geographer, he devised the first system of latitude and longitude, and calculated the circumference of the earth to a remarkable degree of accuracy. As a mathematician, his greatest legacy is the “Sieve of Eratosthenes” algorithm for identifying prime numbers.
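
The “Sieve of Eratosthenes” survives essentially unchanged as a standard algorithm today; a minimal Python version of the idea (repeatedly striking out the multiples of each prime) might look like this:

```python
def sieve(limit):
    """Return all primes up to limit using the Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]           # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Cross off every multiple of n, starting at n squared
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, p in enumerate(is_prime) if p]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```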

Menelaus of Alexandria introduced the concept of spherical triangle

It is not known exactly when the great Library of Alexandria burned down, but Alexandria remained an important intellectual centre for some centuries. In the 1st Century AD, Heron (or Hero) was another great Alexandrian inventor, best known in mathematical circles for Heronian triangles (triangles with integer sides and integer area), Heron’s Formula for finding the area of a triangle from its side lengths, and Heron’s Method for iteratively computing a square root. He was also the first mathematician to confront at least the idea of √-1 (although he had no idea how to treat it, something which had to wait for Tartaglia and Cardano in the 16th Century).
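
Both of Heron’s mathematical legacies are easy to state in modern terms. The sketch below (in Python, purely for illustration) computes a triangle’s area from its side lengths via Heron’s Formula, and approximates a square root by Heron’s Method of repeatedly averaging a guess with the number divided by that guess:

```python
import math

def heron_area(a, b, c):
    """Heron's Formula: area of a triangle from its three side lengths."""
    s = (a + b + c) / 2                          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_sqrt(n, guess=1.0, iterations=10):
    """Heron's Method: refine a square-root guess by averaging it with n/guess."""
    x = guess
    for _ in range(iterations):
        x = (x + n / x) / 2
    return x

print(heron_area(3, 4, 5))   # 6.0 (a Heronian triangle: integer sides, integer area)
print(heron_sqrt(2))         # converges rapidly to 1.41421356...
```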

Menelaus of Alexandria, who lived in the 1st - 2nd Century AD, was the first to recognize geodesics on a curved surface as the natural analogues of straight lines on a flat plane. His book “Sphaerica” dealt with the geometry of the sphere and its application in astronomical measurements and calculations, and introduced the concept of spherical triangle (a figure formed of three great circle arcs, which he named "trilaterals").

In the 3rd Century AD, Diophantus of Alexandria was the first to recognize fractions as numbers, and is considered an early innovator in the field of what would later become known as algebra. He applied himself to some quite complex algebraic problems, including what is now known as Diophantine Analysis, which deals with finding integer solutions to kinds of problems that lead to equations in several unknowns (Diophantine equations). Diophantus’ “Arithmetica”, a collection of problems giving numerical solutions of both determinate and indeterminate equations, was the most prominent work on algebra in all Greek mathematics, and his problems exercised the minds of many of the world's best mathematicians for much of the next two millennia.

Conic sections of Apollonius

But Alexandria was not the only centre of learning in the Hellenistic Greek empire. Mention should also be made of Apollonius of Perga (a city in modern-day southern Turkey) whose late 3rd Century BC work on geometry (and, in particular, on conics and conic sections) was very influential on later European mathematicians. It was Apollonius who gave the ellipse, the parabola, and the hyperbola the names by which we know them, and showed how they could be derived from different sections through a cone.

Hipparchus, who was also from Hellenistic Anatolia and who lived in the 2nd Century BC, was perhaps the greatest of all ancient astronomers. He revived the use of arithmetic techniques first developed by the Chaldeans and Babylonians, and is usually credited with the beginnings of trigonometry. He calculated (with remarkable accuracy for the time) the distance of the moon from the earth by measuring the different parts of the moon visible at different locations and calculating the distance using the properties of triangles. He went on to create the first table of chords (side lengths corresponding to different angles of a triangle). By the time of the great Alexandrian astronomer Ptolemy in the 2nd Century AD, however, Greek mastery of numerical procedures had progressed to the point where Ptolemy was able to include in his “Almagest” a table of trigonometric chords in a circle for steps of ¼° which (although expressed sexagesimally in the Babylonian style) is accurate to about five decimal places.
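
A chord table of this kind relates an arc’s central angle to the length of the chord it subtends; in modern notation, crd(θ) = 2R·sin(θ⁄2). A small Python restatement (Ptolemy worked with a radius of 60, in keeping with the sexagesimal style):

```python
import math

def chord(angle_deg, radius=60):
    """Length of the chord subtended by a central angle in a circle.

    Modern formula: crd(theta) = 2 * R * sin(theta / 2).
    Ptolemy's table used radius 60 and stepped through angles in quarter-degrees.
    """
    return 2 * radius * math.sin(math.radians(angle_deg) / 2)

# The chord of 60 degrees equals the radius itself, and the chord of
# 180 degrees is the diameter, two facts the ancient tables exploit.
print(chord(60))    # ≈ 60
print(chord(180))   # ≈ 120
```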

By the middle of the 1st Century BC and thereafter, however, the Romans had tightened their grip on the old Greek empire, and its mathematical tradition went into decline. The final blow to the Hellenistic mathematical heritage at Alexandria might be seen in the figure of Hypatia, the first recorded female mathematician, and a renowned teacher who had written some respected commentaries on Diophantus and Apollonius. She was dragged to her death by a Christian mob in 415 AD.



By the middle of the 1st Century BC, the Romans had tightened their grip on the old Greek and Hellenistic empires, and the mathematical revolution of the Greeks ground to a halt. Despite all their advances in other respects, no mathematical innovations occurred under the Roman Republic and Empire, and there were no mathematicians of note. The Romans had no use for pure mathematics, only for its practical applications, and the Christian regime that followed (after Christianity became the official religion of the Roman empire) cared even less.

Roman arithmetic

Roman numerals are well known today, and were the dominant number system for trade and administration in most of Europe for the best part of a millennium. It was a decimal (base 10) system, but not directly positional, and it did not include a zero, so that, for arithmetic and mathematical purposes, it was a clumsy and inefficient system. It was based on letters of the Roman alphabet - I, V, X, L, C, D and M - combined to signify the sum of their values (e.g. VII = V + I + I = 7).

Later, a subtractive notation was also adopted, where VIIII, for example, was replaced by IX (10 - 1 = 9), which simplified the writing of numbers a little, but made calculation even more difficult, requiring conversion of the subtractive notation at the beginning of a sum and then its re-application at the end. Due to the difficulty of written arithmetic using Roman numeral notation, calculations were usually performed with an abacus, based on earlier Babylonian and Greek abaci.
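
The additive-plus-subtractive system can be captured in a short routine. The Python sketch below (a modern convenience, naturally, not a Roman method) converts an integer to Roman numerals, treating subtractive pairs like IX and CM as extra "digits":

```python
def to_roman(n):
    """Convert a positive integer to Roman numerals, subtractive notation included."""
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in pairs:      # greedily take the largest value that fits
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(7), to_roman(9), to_roman(1994))  # VII IX MCMXCIV
```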


The Mayan civilisation had settled in the region of Central America from about 2000 BC, although the so-called Classic Period stretches from about 250 AD to 900 AD. At its peak, it was one of the most densely populated and culturally dynamic societies in the world.

The importance of astronomy and calendar calculations in Mayan society required mathematics, and the Maya constructed quite early on a very sophisticated number system, possibly more advanced than any other in the world at the time (although the dating of developments is quite difficult).

The Mayan and other Mesoamerican cultures used a vigesimal number system, based on base 20 (and, to some extent, base 5), probably originally developed from counting on fingers and toes. The numerals consisted of only three symbols: zero, represented as a shell shape; one, a dot; and five, a bar. Thus, addition and subtraction were a relatively simple matter of adding up dots and bars. After the number 19, larger numbers were written in a kind of vertical place value format using powers of 20: 1, 20, 400, 8,000, 160,000, etc, although in their calendar calculations they gave the third position a value of 360 instead of 400 (higher positions revert to multiples of 20).
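
The pure vigesimal place-value scheme is easy to mimic. The following Python sketch decomposes a number into base-20 digits (it deliberately ignores the calendar variant in which the third position is worth 360 rather than 400):

```python
def to_vigesimal(n):
    """Base-20 digits of a non-negative integer, most significant first.

    Pure place value only; the Maya calendar variant, which gives the third
    position the value 360, is not modelled here.
    """
    digits = []
    while n > 0:
        digits.append(n % 20)
        n //= 20
    return digits[::-1] if digits else [0]

# 429 = 1 x 400 + 1 x 20 + 9
print(to_vigesimal(429))  # [1, 1, 9]
```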

The pre-classic Maya and their neighbours had independently developed the concept of zero by at least as early as 36 BC, and we have evidence of their working with sums up to the hundreds of millions, and with dates so large it took several lines just to represent them. Despite not possessing the concept of a fraction, they produced extremely accurate astronomical observations using no instruments other than sticks, and were able to measure the length of the solar year to a far higher degree of accuracy than that used in Europe (their calculations produced 365.242 days, compared to the modern value of 365.242198), as well as the length of the lunar month (their estimate was 29.5308 days, compared to the modern value of 29.53059).

However, due to the geographical disconnect, Mayan and Mesoamerican mathematics had absolutely no influence on Old World (European and Asian) numbering systems and mathematics.


Even as mathematical developments in the ancient Greek world were beginning to falter during the final centuries BC, the burgeoning trade empire of China was leading Chinese mathematics to ever greater heights.

The simple but efficient ancient Chinese numbering system, which dates back to at least the 2nd millennium BC, used small bamboo rods arranged to represent the numbers 1 to 9, which were then placed in columns representing units, tens, hundreds, thousands, etc. It was therefore a decimal place value system, very similar to the one we use today - indeed it was the first such number system, adopted by the Chinese over a thousand years before it was adopted in the West - and it made even quite complex calculations very quick and easy.

Written numbers, however, employed the slightly less efficient system of using a different symbol for tens, hundreds, thousands, etc. This was largely because there was no concept or symbol of zero, and it had the effect of limiting the usefulness of the written number in Chinese.

The use of the abacus is often thought of as a Chinese idea, although some type of abacus was in use in Mesopotamia, Egypt and Greece, probably much earlier than in China (the first Chinese abacus, or “suanpan”, we know of dates to about the 2nd Century BC).

Lo Shu magic square, with its traditional graphical representation

There was a pervasive fascination with numbers and mathematical patterns in ancient China, and different numbers were believed to have cosmic significance. In particular, magic squares - squares of numbers where each row, column and diagonal added up to the same total - were regarded as having great spiritual and religious significance.

The Lo Shu Square, an order-three square where each row, column and diagonal adds up to 15, is perhaps the earliest of these, dating back to around 650 BC (the legend of Emperor Yu’s discovery of the square on the back of a turtle is set as taking place in about 2800 BC). But soon, bigger magic squares were being constructed, with even greater magical and mathematical powers, culminating in the elaborate magic squares, circles and triangles of Yang Hui in the 13th Century (Yang Hui also produced a triangular representation of binomial coefficients identical to the later Pascal’s Triangle, and was perhaps the first to use decimal fractions in the modern form).

Early Chinese method of solving equations

But the main thrust of Chinese mathematics developed in response to the empire’s growing need for mathematically competent administrators. A textbook called “Jiuzhang Suanshu” or “Nine Chapters on the Mathematical Art” (written over a period of time from about 200 BC onwards, probably by a variety of authors) became an important tool in the education of such a civil service, covering hundreds of problems in practical areas such as trade, taxation, engineering and the payment of wages.

It was particularly important as a guide to how to solve equations - the deduction of an unknown number from other known information - using a sophisticated matrix-based method which did not appear in the West until Carl Friedrich Gauss re-discovered it at the beginning of the 19th Century (and which is now known as Gaussian elimination).
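
In modern form the array method can be sketched as Gaussian elimination; the Python below is an illustrative restatement (with partial pivoting, a modern numerical refinement), not a transcription of the ancient fangcheng procedure:

```python
def gauss_solve(a, b):
    """Solve the linear system A.x = b by Gaussian elimination."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]      # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the top
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):                   # eliminate below the pivot
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                    # back-substitution
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# A small made-up example: 3x + 2y = 7, 2x + 3y = 8, so x = 1, y = 2
print(gauss_solve([[3.0, 2.0], [2.0, 3.0]], [7.0, 8.0]))
```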

Among the greatest mathematicians of ancient China was Liu Hui, who produced a detailed commentary on the “Nine Chapters” in 263 AD. He was one of the first mathematicians known to leave roots unevaluated, giving more exact results instead of approximations. By approximating a circle with a regular polygon of 192 sides, he also formulated an algorithm which calculated the value of π as 3.14159 (correct to five decimal places), as well as developing very early forms of both integral and differential calculus.

The Chinese Remainder Theorem

The Chinese went on to solve far more complex equations using far larger numbers than those outlined in the “Nine Chapters”, though. They also started to pursue more abstract mathematical problems (although usually couched in rather artificial practical terms), including what has become known as the Chinese Remainder Theorem. This uses the remainders after dividing an unknown number by a succession of smaller numbers, such as 3, 5 and 7, in order to calculate the smallest value of the unknown number. A technique for solving such problems, initially posed by Sun Tzu in the 3rd Century AD and considered one of the jewels of mathematics, was being used to measure planetary movements by Chinese astronomers in the 6th Century AD, and even today it has practical uses, such as in Internet cryptography.
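
Sun Tzu's original puzzle asks for a number leaving remainder 2 when divided by 3, remainder 3 when divided by 5, and remainder 2 when divided by 7. For moduli this small, even a brute-force search illustrates the theorem (efficient implementations use modular inverses instead); a Python sketch:

```python
from math import prod

def crt_smallest(remainders, moduli):
    """Smallest non-negative x with x % m_i == r_i for each pair.

    Found here by brute-force search, which is fine for small classical
    examples (real implementations use modular inverses instead).
    """
    for x in range(prod(moduli)):
        if all(x % m == r for r, m in zip(remainders, moduli)):
            return x

# Sun Tzu's puzzle: remainder 2 mod 3, 3 mod 5, 2 mod 7
print(crt_smallest([2, 3, 2], [3, 5, 7]))  # 23
```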

By the 13th Century, the Golden Age of Chinese mathematics, there were over 30 prestigious mathematics schools scattered across China. Perhaps the most brilliant Chinese mathematician of this time was Qin Jiushao, a rather violent and corrupt imperial administrator and warrior, who explored solutions to quadratic and even cubic equations using a method of repeated approximations very similar to that later devised in the West by Sir Isaac Newton in the 17th Century. Qin even extended his technique to solve (albeit approximately) equations involving numbers up to the power of ten, extraordinarily complex mathematics for its time.


Despite developing quite independently of Chinese (and probably also of Babylonian) mathematics, some very advanced mathematical discoveries were made at a very early time in India.


Mantras from the early Vedic period (before 1000 BC) invoke powers of ten from a hundred all the way up to a trillion, and provide evidence of the use of arithmetic operations such as addition, subtraction, multiplication, fractions, squares, cubes and roots. A 4th Century AD Sanskrit text reports Buddha enumerating numbers up to 10⁵³, as well as describing six more numbering systems over and above these, leading to a number equivalent to 10⁴²¹. Given that there are an estimated 10⁸⁰ atoms in the whole universe, this is as close to infinity as anyone in the ancient world came. It also describes a series of iterations in decreasing size, in order to demonstrate the size of an atom, which comes remarkably close to the actual size of a carbon atom (about 70 trillionths of a metre).

As early as the 8th Century BC, long before Pythagoras, a text known as the “Sulba Sutras” (or "Sulva Sutras") listed several simple Pythagorean triples, as well as a statement of the simplified Pythagorean theorem for the sides of a square and for a rectangle (indeed, it seems quite likely that Pythagoras learned his basic geometry from the "Sulba Sutras"). The Sutras also contain geometric solutions of linear and quadratic equations in a single unknown, and give a remarkably accurate figure for the square root of 2, obtained as 1 + 1⁄3 + 1⁄(3 × 4) − 1⁄(3 × 4 × 34), which yields a value of 1.4142156, correct to 5 decimal places.
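
The Sutras' prescription is easily checked with modern arithmetic: the sum 1 + 1⁄3 + 1⁄(3 × 4) − 1⁄(3 × 4 × 34) does indeed agree with √2 to five decimal places.

```python
import math

# The Sulba Sutras' approximation to the square root of 2
approx = 1 + 1/3 + 1/(3 * 4) - 1/(3 * 4 * 34)

print(round(approx, 7))        # 1.4142157
print(round(math.sqrt(2), 7))  # 1.4142136
```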

As early as the 3rd or 2nd Century BC, Jain mathematicians recognized five different types of infinities: infinite in one direction, in two directions, in area, infinite everywhere and perpetually infinite. Ancient Buddhist literature also demonstrates a prescient awareness of indeterminate and infinite numbers, with numbers deemed to be of three types: countable, uncountable and infinite.

Like the Chinese, the Indians early discovered the benefits of a decimal place value number system, and were certainly using it before about the 3rd Century AD. They refined and perfected the system, particularly the written representation of the numerals, creating the ancestors of the nine numerals that (thanks to their dissemination by medieval Arabic mathematicians) we use across the world today, sometimes considered one of the greatest intellectual innovations of all time.

The earliest use of a circle character for the number zero was in India

The Indians were also responsible for another hugely important development in mathematics. The earliest recorded usage of a circle character for the number zero is usually attributed to a 9th Century engraving in a temple in Gwalior in central India. But the brilliant conceptual leap to include zero as a number in its own right (rather than merely as a placeholder, a blank or empty space within a number, as it had been treated until that time) is usually credited to the 7th Century Indian mathematician Brahmagupta - or possibly another Indian, Bhaskara I - even though it may well have been in practical use for centuries before that. The use of zero as a number which could be used in calculations and mathematical investigations would revolutionize mathematics.

Brahmagupta established the basic mathematical rules for dealing with zero: 1 + 0 = 1; 1 - 0 = 1; and 1 x 0 = 0 (the breakthrough which would make sense of the apparently non-sensical operation 1 ÷ 0 would also fall to an Indian, the 12th Century mathematician Bhaskara II). Brahmagupta also established rules for dealing with negative numbers, and pointed out that quadratic equations could in theory have two possible solutions, one of which could be negative. He even attempted to write down these rather abstract concepts, using the initials of the names of colours to represent unknowns in his equations, one of the earliest intimations of what we now know as algebra.

The so-called Golden Age of Indian mathematics can be said to extend from the 5th to 12th Centuries, and many of its mathematical discoveries predated similar discoveries in the West by several centuries, which has led to some claims of plagiarism by later European mathematicians, at least some of whom were probably aware of the earlier Indian work. Certainly, it seems that Indian contributions to mathematics have not been given due acknowledgement until very recently in modern history.

Indian astronomers used trigonometry tables to estimate the relative distance of the Earth to the Sun and Moon

Golden Age Indian mathematicians made fundamental advances in the theory of trigonometry, a method of linking geometry and numbers first developed by the Greeks. They used ideas like the sine, cosine and tangent functions (which relate the angles of a triangle to the relative lengths of its sides) to survey the land around them, navigate the seas and even chart the heavens. For instance, Indian astronomers used trigonometry to calculate the relative distances between the Earth and the Moon and the Earth and the Sun. They realized that, when the Moon is half full and directly opposite the Sun, then the Sun, Moon and Earth form a right-angled triangle, and were able to accurately measure the angle as 1⁄7°. Their sine tables gave a ratio for the sides of such a triangle as 400:1, indicating that the Sun is 400 times further away from the Earth than the Moon.

Although the Greeks had been able to calculate the sine function of some angles, the Indian astronomers wanted to be able to calculate the sine function of any given angle. A text called the “Surya Siddhanta”, by unknown authors and dating from around 400 AD, contains the roots of modern trigonometry, including the first real use of sines, cosines, inverse sines, tangents and secants.

As early as the 6th Century AD, the great Indian mathematician and astronomer Aryabhata produced categorical definitions of sine, cosine, versine and inverse sine, and specified complete sine and versine tables, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. Aryabhata also demonstrated solutions to simultaneous quadratic equations, and produced an approximation for the value of π equivalent to 3.1416, correct to four decimal places. He used this to estimate the circumference of the Earth, arriving at a figure of 24,835 miles, only 70 miles off its true value. But, perhaps even more astonishing, he seems to have been aware that π is an irrational number, and that any calculation can only ever be an approximation, something not proved in Europe until 1761.

Illustration of infinity as the reciprocal of zero

Bhaskara II, who lived in the 12th Century, was one of the most accomplished of all India’s great mathematicians. He is credited with explaining the previously misunderstood operation of division by zero. He noticed that dividing one into two pieces yields a half, so 1 ÷ ½ = 2. Similarly, 1 ÷ ⅓ = 3. So, dividing 1 by smaller and smaller fractions yields a larger and larger number of pieces. Ultimately, therefore, dividing one into pieces of zero size would yield infinitely many pieces, indicating that 1 ÷ 0 = ∞ (the symbol for infinity).

Bhaskara II also made important contributions to many different areas of mathematics, from solutions of quadratic, cubic and quartic equations (including negative and irrational solutions), to solutions of Diophantine equations of the second order, to preliminary concepts of infinitesimal calculus and mathematical analysis, to spherical trigonometry and other aspects of trigonometry. Some of his findings predate similar discoveries in Europe by several centuries, and he made important contributions in terms of the systemization of (then) current knowledge and improved methods for known solutions.

The Kerala School of Astronomy and Mathematics was founded in the late 14th Century by Madhava of Sangamagrama, sometimes called the greatest mathematician-astronomer of medieval India. He developed infinite series approximations for a range of trigonometric functions, such as sine and arctangent, as well as for π itself. Some of his contributions to geometry and algebra and his early forms of differentiation and integration for simple functions may have been transmitted to Europe via Jesuit missionaries, and it is possible that the later European development of calculus was influenced by his work to some extent.


The great 7th Century Indian mathematician and astronomer Brahmagupta wrote some important works on both mathematics and astronomy. He was from the state of Rajasthan of northwest India (he is often referred to as Bhillamalacarya, the teacher from Bhillamala), and later became the head of the astronomical observatory at Ujjain in central India. Most of his works are composed in elliptic verse, a common practice in Indian mathematics at the time, and consequently have something of a poetic ring to them.

It seems likely that Brahmagupta's works, especially his most famous text, the “Brahmasphutasiddhanta”, were brought by the 8th Century Abbasid caliph Al-Mansur to his newly founded centre of learning at Baghdad on the banks of the Tigris, providing an important link between Indian mathematics and astronomy and the nascent upsurge in science and mathematics in the Islamic world.

In his work on arithmetic, Brahmagupta explained how to find the cube and cube-root of an integer and gave rules facilitating the computation of squares and square roots. He also gave rules for dealing with five types of combinations of fractions. He gave the sum of the squares of the first n natural numbers as n(n + 1)(2n + 1)⁄6, and the sum of the cubes of the first n natural numbers as (n(n + 1)⁄2)².
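
Both closed forms are easy to verify against direct summation; a quick Python check (in modern notation, obviously, not Brahmagupta's verse):

```python
def sum_squares(n):
    """Brahmagupta's closed form for 1^2 + 2^2 + ... + n^2."""
    return n * (n + 1) * (2 * n + 1) // 6

def sum_cubes(n):
    """Brahmagupta's closed form for 1^3 + 2^3 + ... + n^3."""
    return (n * (n + 1) // 2) ** 2

# Compare against brute-force sums for the first 10 natural numbers
print(sum_squares(10), sum(k * k for k in range(1, 11)))   # 385 385
print(sum_cubes(10), sum(k ** 3 for k in range(1, 11)))    # 3025 3025
```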

Brahmagupta’s rules for dealing with zero and negative numbers

Brahmagupta’s genius, though, came in his treatment of the (then relatively new) concept of the number zero. Although often also attributed to the 7th Century Indian mathematician Bhaskara I, his “Brahmasphutasiddhanta” is probably the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit as was done by the Babylonians, or as a symbol for a lack of quantity as was done by the Greeks and Romans.

Brahmagupta established the basic mathematical rules for dealing with zero (1 + 0 = 1; 1 - 0 = 1; and 1 x 0 = 0), although his understanding of division by zero was incomplete (he thought that 1 ÷ 0 = 0). Almost 500 years later, in the 12th Century, another Indian mathematician, Bhaskara II, showed that the answer should be infinity, not zero (on the grounds that 1 can be divided into an infinite number of pieces of size zero), an answer that was considered correct for centuries. However, this logic does not explain why 2 ÷ 0, 7 ÷ 0, etc, should also be infinity - the modern view is that a number divided by zero is actually "undefined" (i.e. it doesn't make sense).

Brahmagupta’s view of numbers as abstract entities, rather than just for counting and measuring, allowed him to make yet another huge conceptual leap which would have profound consequences for future mathematics. Previously, the sum 3 - 4, for example, was considered to be either meaningless or, at best, just zero. Brahmagupta, however, realized that there could be such a thing as a negative number, which he referred to as “debt” as opposed to “property”. He expounded on the rules for dealing with negative numbers (e.g. a negative times a negative is a positive, a negative times a positive is a negative, etc).

Furthermore, he pointed out, quadratic equations (of the type x² + 2 = 11, for example) could in theory have two possible solutions, one of which could be negative, because 3² = 9 and (−3)² = 9. In addition to his work on solutions to general linear equations and quadratic equations, Brahmagupta went yet further by considering systems of simultaneous equations (sets of equations containing multiple variables), and solving quadratic equations with two unknowns, something which was not even considered in the West until a thousand years later, when Fermat was considering similar problems in 1657.

Brahmagupta’s Theorem on cyclic quadrilaterals

Brahmagupta even attempted to write down these rather abstract concepts, using the initials of the names of colours to represent unknowns in his equations, one of the earliest intimations of what we now know as algebra.

Brahmagupta dedicated a substantial portion of his work to geometry and trigonometry. He established √10 (3.162277) as a good practical approximation for π (3.141593), and gave a formula, now known as Brahmagupta's Formula, for the area of a cyclic quadrilateral, as well as a celebrated theorem on the diagonals of a cyclic quadrilateral, usually referred to as Brahmagupta's Theorem.


Madhava is sometimes called the greatest mathematician-astronomer of medieval India. He came from the town of Sangamagrama in Kerala, near the southern tip of India, and founded the Kerala School of Astronomy and Mathematics in the late 14th Century.

Although almost all of Madhava's original work is lost, he is referred to in the work of later Kerala mathematicians as the source for several infinite series expansions (including the sine, cosine, tangent and arctangent functions and the value of π), representing the first steps from the traditional finite processes of algebra to considerations of the infinite, with its implications for the future development of calculus and mathematical analysis.

Unlike most previous cultures, which had been rather nervous about the concept of infinity, Madhava was more than happy to play around with infinity, particularly infinite series. He showed how, although one can be approximated by adding a half plus a quarter plus an eighth plus a sixteenth, etc, (as even the ancient Egyptians and Greeks had known), the exact total of one can only be achieved by adding up infinitely many fractions.

Madhava’s method for approximating π by an infinite series of fractions

But Madhava went further and linked the idea of an infinite series with geometry and trigonometry. He realized that, by successively adding and subtracting different odd number fractions to infinity, he could home in on an exact formula for π (this was two centuries before Leibniz was to come to the same conclusion in Europe). Through his application of this series, Madhava obtained a value for π correct to an astonishing 13 decimal places.
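
The series in question is what is now called the Madhava–Leibniz series, π⁄4 = 1 − 1⁄3 + 1⁄5 − 1⁄7 + …. A bare Python sketch of its partial sums (without the correction terms Madhava used to accelerate convergence, so this plain version approaches π only slowly):

```python
def madhava_pi(terms):
    """Partial sum of the Madhava-Leibniz series: pi = 4*(1 - 1/3 + 1/5 - ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(madhava_pi(10))        # a crude value: partial sums alternate around pi
print(madhava_pi(1000000))   # ≈ 3.14159..., but convergence is very slow
```

The error after n terms is roughly 1⁄(2n), which is exactly why Madhava's correction terms were such an important refinement.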

He went on to use the same mathematics to obtain infinite series expressions for the sine formula, which could then be used to calculate the sine of any angle to any degree of accuracy, as well as for other trigonometric functions like cosine, tangent and arctangent. Perhaps even more remarkable, though, is that he also gave estimates of the error term or correction term, implying that he quite understood the limit nature of the infinite series.

Madhava’s use of infinite series to approximate a range of trigonometric functions, which were further developed by his successors at the Kerala School, effectively laid the foundations for the later development of calculus and analysis, and either he or his disciples developed an early form of integration for simple functions. Some historians have suggested that Madhava's work, through the writings of the Kerala School, may have been transmitted to Europe via Jesuit missionaries and traders who were active around the ancient port of Cochin (Kochi) at the time, and may have had an influence on later European developments in calculus.

Among his other contributions, Madhava discovered the solutions of some transcendental equations by a process of iteration, and found approximations for some transcendental numbers by continued fractions. In astronomy, he discovered a procedure to determine the positions of the Moon every 36 minutes, and methods to estimate the motions of the planets.


The Islamic Empire established across Persia, the Middle East, Central Asia, North Africa, Iberia and parts of India from the 8th Century onwards made significant contributions towards mathematics. Its scholars were able to draw on and fuse together the mathematical developments of both Greece and India.


One consequence of the Islamic prohibition on depicting the human form was the extensive use of complex geometric patterns to decorate their buildings, raising mathematics to the form of an art. In fact, over time, Muslim artists discovered all the different forms of symmetry that can be depicted on a 2-dimensional surface.

The Qur’an itself encouraged the accumulation of knowledge, and a Golden Age of Islamic science and mathematics flourished throughout the medieval period from the 9th to 15th Centuries. The House of Wisdom was set up in Baghdad around 810, and work started almost immediately on translating the major Greek and Indian mathematical and astronomy works into Arabic.

The outstanding Persian mathematician Muhammad Al-Khwarizmi was an early Director of the House of Wisdom in the 9th Century, and one of the greatest of early Muslim mathematicians. Perhaps Al-Khwarizmi’s most important contribution to mathematics was his strong advocacy of the Hindu numerical system (1 - 9 and 0), which he recognized as having the power and efficiency needed to revolutionize Islamic (and, later, Western) mathematics, and which was soon adopted by the entire Islamic world, and later by Europe as well.

Al-Khwarizmi's other important contribution was algebra, and he introduced the fundamental algebraic methods of “reduction” and “balancing” and provided an exhaustive account of solving polynomial equations up to the second degree. In this way, he helped create the powerful abstract mathematical language still used across the world today, and allowed a much more general way of analyzing problems other than just the specific problems previously considered by the Indians and Chinese.

Binomial Theorem


The 10th Century Persian mathematician Muhammad Al-Karaji worked to extend algebra still further, freeing it from its geometrical heritage, and introduced the theory of algebraic calculus. Al-Karaji was the first to use the method of proof by mathematical induction to prove his results, by proving that the first statement in an infinite sequence of statements is true, and then proving that, if any one statement in the sequence is true, then so is the next one.

Among other things, Al-Karaji used mathematical induction to prove the binomial theorem. A binomial is a simple type of algebraic expression which has just two terms which are operated on only by addition, subtraction, multiplication and positive whole-number exponents, such as (x + y)². The coefficients needed when a binomial is expanded form a symmetrical triangle, usually referred to as Pascal’s Triangle after the 17th Century French mathematician Blaise Pascal, although many other mathematicians had studied it centuries before him in India, Persia, China and Italy, including Al-Karaji.
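The additive rule behind the triangle, C(n, k) = C(n−1, k−1) + C(n−1, k), is exactly the kind of step-to-step statement that proof by induction handles. A small Python sketch building the rows:

```python
# The coefficients in the expansion of (x + y)^n form Pascal's Triangle
# (studied by Al-Karaji centuries before Pascal). Each row follows from the
# previous one by the rule C(n, k) = C(n-1, k-1) + C(n-1, k), which is also
# the inductive step in an Al-Karaji-style proof.

def pascal_rows(n):
    """Rows 0..n of Pascal's Triangle."""
    row = [1]
    rows = [row]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
        rows.append(row)
    return rows

for r in pascal_rows(4):
    print(r)
# The last row, [1, 4, 6, 4, 1], gives the coefficients of (x + y)^4.
```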

About a hundred years after Al-Karaji, Omar Khayyam (perhaps better known as a poet and the writer of the “Rubaiyat”, but an important mathematician and astronomer in his own right) generalized Indian methods for extracting square and cube roots to include fourth, fifth and higher roots in the early 12th Century. He carried out a systematic analysis of cubic problems, revealing that there were actually several different sorts of cubic equations. Although he did in fact succeed in solving cubic equations, and although he is usually credited with identifying the foundations of algebraic geometry, he was held back from further advances by his inability to separate the algebra from the geometry, and a purely algebraic method for the solution of cubic equations had to wait another 500 years and the Italian mathematicians del Ferro and Tartaglia.

Al-Tusi was a pioneer in the field of spherical trigonometry


The 13th Century Persian astronomer, scientist and mathematician Nasir Al-Din Al-Tusi was perhaps the first to treat trigonometry as a separate mathematical discipline, distinct from astronomy. Building on earlier work by Greek mathematicians such as Menelaus of Alexandria and Indian work on the sine function, he gave the first extensive exposition of spherical trigonometry, including listing the six distinct cases of a right triangle in spherical trigonometry. One of his major mathematical contributions was the formulation of the famous law of sines for plane triangles, a/(sin A) = b/(sin B) = c/(sin C), although the sine law for spherical triangles had been discovered earlier by the 10th Century Persians Abul Wafa Buzjani and Abu Nasr Mansur.

Other medieval Muslim mathematicians worthy of note include:

  • the 9th Century Arab Thabit ibn Qurra, who developed a general formula by which amicable numbers could be derived, re-discovered much later by both Fermat and Descartes (amicable numbers are pairs of numbers for which the sum of the divisors of one number equals the other number, e.g. the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, of which the sum is 284; and the proper divisors of 284 are 1, 2, 4, 71 and 142, of which the sum is 220);
  • the 10th Century Arab mathematician Abul Hasan al-Uqlidisi, who wrote the earliest surviving text showing the positional use of Arabic numerals, and particularly the use of decimals instead of fractions (e.g. 7.375 instead of 7 3/8);
  • the 10th Century Arab geometer Ibrahim ibn Sinan, who continued Archimedes' investigations of areas and volumes, as well as on tangents of a circle;
  • the 11th Century Persian Ibn al-Haytham (also known as Alhazen), who, in addition to his groundbreaking work on optics and physics, established the beginnings of the link between algebra and geometry, and devised what is now known as "Alhazen's problem" (he was the first mathematician to derive the formula for the sum of the fourth powers, using a method that is readily generalizable);
  • the 13th Century Persian Kamal al-Din al-Farisi, who applied the theory of conic sections to solve optical problems, as well as pursuing work in number theory such as on amicable numbers, factorization and combinatorial methods; and
  • the 13th Century Moroccan Ibn al-Banna al-Marrakushi, whose works included topics such as computing square roots and the theory of continued fractions, as well as the discovery of the first new pair of amicable numbers since ancient times (17,296 and 18,416, later re-discovered by Fermat) and the first use of algebraic notation since Brahmagupta.

With the stifling influence of the Turkish Ottoman Empire from the 14th or 15th Century onwards, Islamic mathematics stagnated, and further developments moved to Europe.
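The amicable pairs mentioned in the list above are easy to verify directly; a minimal Python sketch (the function names are illustrative):

```python
# Amicable numbers, as studied by Thabit ibn Qurra, al-Farisi and
# Ibn al-Banna: each number equals the sum of the proper divisors
# of the other.

def sum_proper_divisors(n):
    """Sum of the divisors of n, excluding n itself."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

def is_amicable(a, b):
    return a != b and sum_proper_divisors(a) == b and sum_proper_divisors(b) == a

print(is_amicable(220, 284))        # the classical pair
print(is_amicable(17296, 18416))    # Ibn al-Banna's pair
```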


    One of the first Directors of the House of Wisdom in Baghdad in the early 9th Century was an outstanding Persian mathematician called Muhammad Al-Khwarizmi. He oversaw the translation of the major Greek and Indian mathematical and astronomy works (including those of Brahmagupta) into Arabic, and produced original work which had a lasting influence on the advance of Muslim and (after his works spread to Europe through Latin translations in the 12th Century) later European mathematics.

    The word “algorithm” is derived from the Latinization of his name, and the word "algebra" is derived from the Latinization of "al-jabr", part of the title of his most famous book, in which he introduced the fundamental algebraic methods and techniques for solving equations.

    Perhaps his most important contribution to mathematics was his strong advocacy of the Hindu numerical system, which Al-Khwarizmi recognized as having the power and efficiency needed to revolutionize Islamic and Western mathematics. The Hindu numerals 1 - 9 and 0 - which have since become known as Hindu-Arabic numerals - were soon adopted by the entire Islamic world. Later, with translations of Al-Khwarizmi’s work into Latin by Adelard of Bath and others in the 12th Century, and with the influence of Fibonacci’s “Liber Abaci” they would be adopted throughout Europe as well.

    An example of Al-Khwarizmi’s “completing the square” method for solving quadratic equations


    Al-Khwarizmi’s other important contribution was algebra, a word derived from the title of a mathematical text he published in about 830 called “Al-Kitab al-mukhtasar fi hisab al-jabr wa'l-muqabala” (“The Compendious Book on Calculation by Completion and Balancing”). Al-Khwarizmi wanted to go from the specific problems considered by the Indians and Chinese to a more general way of analyzing problems, and in doing so he created an abstract mathematical language which is used across the world today.

    His book is considered the foundational text of modern algebra, although he did not employ the kind of algebraic notation used today (he used words to explain the problem, and diagrams to solve it). But the book provided an exhaustive account of solving polynomial equations up to the second degree, and introduced for the first time the fundamental algebraic methods of “reduction” (rewriting an expression in a simpler form), “completion” (moving a negative quantity from one side of the equation to the other side and changing its sign) and “balancing” (subtraction of the same quantity from both sides of an equation, and the cancellation of like terms on opposite sides).

    In particular, Al-Khwarizmi developed a formula for systematically solving quadratic equations (equations involving unknown numbers to the power of 2, or x²) by using the methods of completion and balancing to reduce any equation to one of six standard forms, which were then solvable. He described the standard forms in terms of "squares" (what would today be "x²"), "roots" (what would today be "x") and "numbers" (regular constants, like 42), and identified the six types as: squares equal roots (ax² = bx), squares equal number (ax² = c), roots equal number (bx = c), squares and roots equal number (ax² + bx = c), squares and number equal roots (ax² + c = bx), and roots and number equal squares (bx + c = ax²).
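In modern notation, the "squares and roots equal number" case can be solved by exactly this completing-the-square recipe. A Python sketch, using Al-Khwarizmi's own worked example x² + 10x = 39:

```python
import math

# Al-Khwarizmi's "squares and roots equal number" case, ax^2 + bx = c
# (with a, b, c positive), solved by completing the square: divide by a,
# add (b/2a)^2 to both sides, then take the square root.

def solve_squares_and_roots(a, b, c):
    """Positive root of ax^2 + bx = c, via completing the square."""
    half = b / (2 * a)                 # halve the number of "roots"
    completed = c / a + half ** 2      # (x + half)^2 = c/a + half^2
    return math.sqrt(completed) - half

# Al-Khwarizmi's worked example: x^2 + 10x = 39, giving x = 3.
print(solve_squares_and_roots(1, 10, 39))
```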

    Al-Khwarizmi is usually credited with the development of lattice (or sieve) multiplication method of multiplying large numbers, a method algorithmically equivalent to long multiplication. His lattice method was later introduced into Europe by Fibonacci.
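A sketch of the lattice idea in Python: each digit product fills a cell, and the cells are summed along diagonals with carries, which is why the method is algorithmically equivalent to long multiplication.

```python
# Lattice (sieve) multiplication for non-negative integers: each digit pair
# is multiplied into a cell, and cells are summed along diagonals with
# carries -- algorithmically the same as long multiplication.

def lattice_multiply(x, y):
    xs = [int(d) for d in str(x)]
    ys = [int(d) for d in str(y)]
    # diagonal index i + j collects the tens and units of each cell product
    diagonals = [0] * (len(xs) + len(ys))
    for i, a in enumerate(reversed(xs)):
        for j, b in enumerate(reversed(ys)):
            diagonals[i + j] += a * b
    # resolve carries along the diagonals, least significant first
    result, carry = 0, 0
    for place, total in enumerate(diagonals):
        total += carry
        result += (total % 10) * 10 ** place
        carry = total // 10
    return result + carry * 10 ** len(diagonals)

print(lattice_multiply(469, 37))  # 17353
```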

    In addition to his work in mathematics, Al-Khwarizmi made important contributions to astronomy, also largely based on methods from India, and he developed the first quadrant (an instrument used to determine time by observations of the Sun or stars), the second most widely used astronomical instrument during the Middle Ages after the astrolabe. He also produced a revised and completed version of Ptolemy's “Geography”, consisting of a list of 2,402 coordinates of cities throughout the known world.


    Abu al-Rehan Muhammad ibn Ahmed al-Beruni (born 5 September 973 in Kath, Khwarezm, now a region in Uzbekistan; died 13 December 1048 in Ghazni), known as Alberonius in Latin and Al-Biruni in English, was a Persian Chorasmian Muslim scholar and polymath of the 11th Century. Al-Biruni is regarded as one of the greatest scholars of the medieval Islamic era and was well versed in physics, mathematics, astronomy and the natural sciences, and also distinguished himself as a historian, chronologist and linguist.

    Diagram illustrating a method proposed and used by Al-Biruni to estimate the radius and circumference of the Earth

    Ninety-five of the 146 books known to have been written by Al-Biruni, about 65 percent, were devoted to astronomy, mathematics and related subjects like mathematical geography. His major work on astrology is primarily an astronomical and mathematical text; only the last chapter concerns astrological prognostication. His endorsement of astrology is limited, in so far as he condemns horary astrology as 'sorcery'. Al-Biruni wrote an extensive commentary on Indian astronomy in the Kitab ta'rikh al-Hind, in which he claims to have resolved the matter of the Earth's rotation in a work on astronomy that is no longer extant, his Miftah-ilm-alhai'a (Key to Astronomy):

    "The rotation of the earth does in no way impair the value of astronomy, as all appearances of an astronomic character can quite as well be explained according to this theory as to the other. There are, however, other reasons which make it impossible. This question is most difficult to solve. The most prominent of both modern and ancient astronomers have deeply studied the question of the moving of the earth, and tried to refute it. We, too, have composed a book on the subject called Miftah-ilm-alhai'a (Key to Astronomy), in which we think we have surpassed our predecessors, if not in the words, at all events in the matter."

    In his description of Sijzi's astrolabe, he hints at contemporary debates over the movement of the earth. He carried on a lengthy correspondence, and sometimes heated debate, with Ibn Sina, in which Biruni repeatedly attacks Aristotle's celestial physics: he argues by simple experiment that a vacuum must exist; he is "amazed" by the weakness of Aristotle's argument against elliptical orbits on the basis that they would create a vacuum; he attacks the immutability of the celestial spheres; and so on. In his major extant astronomical work, the Mas'ud Canon, he regards the heliocentric and geocentric hypotheses as mathematically equivalent but heliocentrism as physically impossible, yet approves of the theory that the earth rotates on its axis. He uses his observational data to disprove Ptolemy's immobile solar apogee. Biruni's eclipse data was later used by Dunthorne in 1749 to help determine the acceleration of the moon, and his observational data has entered the larger astronomical historical record and is still used today in geophysics and astronomy.


    Medieval abacus, based on the Roman/Greek model

    During the centuries in which the Chinese, Indian and Islamic mathematicians had been in the ascendancy, Europe had fallen into the Dark Ages, in which science, mathematics and almost all intellectual endeavour stagnated. Scholastic scholars only valued studies in the humanities, such as philosophy and literature, and spent much of their energies quarrelling over subtle subjects in metaphysics and theology, such as "How many angels can stand on the point of a needle?"

    From the 4th to 12th Centuries, European knowledge and study of arithmetic, geometry, astronomy and music was limited mainly to Boethius’ translations of some of the works of ancient Greek masters such as Nicomachus and Euclid. All trade and calculation was made using the clumsy and inefficient Roman numeral system, and with an abacus based on Greek and Roman models.

    By the 12th Century, though, Europe, and particularly Italy, was beginning to trade with the East, and Eastern knowledge gradually began to spread to the West. Robert of Chester translated Al-Khwarizmi's important book on algebra into Latin in the 12th Century, and the complete text of Euclid's “Elements” was translated in various versions by Adelard of Bath, Herman of Carinthia and Gerard of Cremona. The great expansion of trade and commerce in general created a growing practical need for mathematics, and arithmetic entered much more into the lives of common people and was no longer limited to the academic realm.

    The advent of the printing press in the mid-15th Century also had a huge impact. Numerous books on arithmetic were published for the purpose of teaching business people computational methods for their commercial needs and mathematics gradually began to acquire a more important position in education.

    Europe’s first great medieval mathematician was the Italian Leonardo of Pisa, better known by his nickname Fibonacci. Although best known for the so-called Fibonacci Sequence of numbers, perhaps his most important contribution to European mathematics was his role in spreading the use of the Hindu-Arabic numeral system throughout Europe early in the 13th Century, which soon made the Roman numeral system obsolete, and opened the way for great advances in European mathematics.

    Oresme was one of the first to use graphical analysis


    An important (but largely unknown and underrated) mathematician and scholar of the 14th Century was the Frenchman Nicole Oresme. He used a system of rectangular coordinates centuries before his countryman René Descartes popularized the idea, as well as perhaps the first time-speed-distance graph. Also, leading on from his research into musicology, he was the first to use fractional exponents, and also worked on infinite series, being the first to prove that the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 + ... is a divergent infinite series (i.e. not tending to a limit, other than infinity).
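Oresme's proof groups the terms of the harmonic series into blocks, each adding up to more than a half; in modern notation the argument can be sketched as:

```latex
% Oresme's grouping: each underbraced block exceeds 1/2, so the partial
% sums grow past 1 + 1/2 + 1/2 + 1/2 + ... , i.e. without bound.
\[
1 + \tfrac{1}{2}
  + \underbrace{\tfrac{1}{3} + \tfrac{1}{4}}_{>\,\tfrac{1}{2}}
  + \underbrace{\tfrac{1}{5} + \tfrac{1}{6} + \tfrac{1}{7} + \tfrac{1}{8}}_{>\,\tfrac{1}{2}}
  + \underbrace{\tfrac{1}{9} + \cdots + \tfrac{1}{16}}_{>\,\tfrac{1}{2}} + \cdots
\]
```

Since infinitely many blocks each contribute more than a half, the partial sums exceed any finite bound.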

    The German scholar Regiomontanus was perhaps the most capable mathematician of the 15th Century, his main contribution to mathematics being in the area of trigonometry. He helped separate trigonometry from astronomy, and it was largely through his efforts that trigonometry came to be considered an independent branch of mathematics. His book "De Triangulis", in which he described much of the basic trigonometric knowledge which is now taught in high school and college, was the first great book on trigonometry to appear in print.

    Mention should also be made of Nicholas of Cusa (or Nicolaus Cusanus), a 15th Century German philosopher, mathematician and astronomer, whose prescient ideas on the infinite and the infinitesimal directly influenced later mathematicians like Gottfried Leibniz and Georg Cantor. He also held some distinctly non-standard intuitive ideas about the universe and the Earth's position in it, and about the elliptical orbits of the planets and relative motion, which foreshadowed the later discoveries of Copernicus and Kepler.


    The 13th Century Italian Leonardo of Pisa, better known by his nickname Fibonacci, was perhaps the most talented Western mathematician of the Middle Ages. Little is known of his life except that he was the son of a customs official and, as a child, he travelled around North Africa with his father, where he learned about Arabic mathematics. On his return to Italy, he helped to disseminate this knowledge throughout Europe, thus setting in motion a rejuvenation in European mathematics, which had lain largely dormant for centuries during the Dark Ages.

    In particular, in 1202, he wrote a hugely influential book called “Liber Abaci” ("Book of Calculation"), in which he promoted the use of the Hindu-Arabic numeral system, describing its many benefits for merchants and mathematicians alike over the clumsy system of Roman numerals then in use in Europe. Despite its obvious advantages, uptake of the system in Europe was slow (this was after all during the time of the Crusades against Islam, a time in which anything Arabic was viewed with great suspicion), and Arabic numerals were even banned in the city of Florence in 1299 on the pretext that they were easier to falsify than Roman numerals. However, common sense eventually prevailed and the new system was adopted throughout Europe by the 15th century, making the Roman system obsolete. The horizontal bar notation for fractions was also first used in this work (although following the Arabic practice of placing the fraction to the left of the integer).

    The discovery of the famous Fibonacci sequence


    Fibonacci is best known, though, for his introduction into Europe of a particular number sequence, which has since become known as Fibonacci Numbers or the Fibonacci Sequence. He discovered the sequence - the first recursive number sequence known in Europe - while considering a practical problem in the “Liber Abaci” involving the growth of a hypothetical population of rabbits based on idealized assumptions. He noted that, after each monthly generation, the number of pairs of rabbits increased from 1 to 2 to 3 to 5 to 8 to 13, etc, and identified how the sequence progressed by adding the previous two terms (in mathematical terms, F(n) = F(n-1) + F(n-2)), a sequence which could in theory extend indefinitely.
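The recurrence is straightforward to state in code; a minimal Python sketch of the monthly pair counts:

```python
# The rabbit-population recurrence F(n) = F(n-1) + F(n-2), with the first
# two terms both 1, generating the pair counts Fibonacci described.

def fibonacci(n):
    """First n Fibonacci numbers."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```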

    The sequence, which had actually been known to Indian mathematicians since the 6th Century, has many interesting mathematical properties, and many of the implications and relationships of the sequence were not discovered until several centuries after Fibonacci's death. For instance, the sequence regenerates itself in some surprising ways: every third F-number is divisible by 2 (F(3) = 2), every fourth F-number is divisible by 3 (F(4) = 3), every fifth F-number is divisible by 5 (F(5) = 5), every sixth F-number is divisible by 8 (F(6) = 8), every seventh F-number is divisible by 13 (F(7) = 13), etc. The numbers of the sequence have also been found to be ubiquitous in nature: among other things, many species of flowering plants have numbers of petals in the Fibonacci Sequence; the spiral arrangements of pineapples occur in 5s and 8s, those of pinecones in 8s and 13s, and the seeds of sunflower heads in 21s, 34s, 55s or even higher terms in the sequence.

    The Golden Ratio φ can be derived from the Fibonacci Sequence


    In the 1750s, Robert Simson noted that the ratio of each term in the Fibonacci Sequence to the previous term approaches, with ever greater accuracy the higher the terms, a ratio of approximately 1 : 1.6180339887 (it is actually an irrational number equal to (1 + √5)/2, which has since been calculated to thousands of decimal places). This value is referred to as the Golden Ratio, also known as the Golden Mean, Golden Section, Divine Proportion, etc, and is usually denoted by the Greek letter phi φ (or sometimes the capital letter Phi Φ). Essentially, two quantities are in the Golden Ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller one. The Golden Ratio itself has many unique properties, such as 1/φ = φ - 1 (= 0.618...) and φ² = φ + 1 (= 2.618...), and there are countless examples of it to be found both in nature and in the human world.
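Simson's observation, and the two defining properties just quoted, can be checked numerically; a short Python sketch:

```python
# Ratios of consecutive Fibonacci terms approaching the Golden Ratio
# phi = (1 + sqrt(5)) / 2, as Robert Simson observed.

import math

phi = (1 + math.sqrt(5)) / 2

a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a, phi)  # the ratio agrees with phi to many decimal places

# The defining properties mentioned above:
print(abs(1 / phi - (phi - 1)))   # essentially zero
print(abs(phi ** 2 - (phi + 1)))  # essentially zero
```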

    A rectangle with sides in the ratio of 1 : φ is known as a Golden Rectangle, and many artists and architects throughout history (dating back to ancient Egypt and Greece, but particularly popular in the Renaissance art of Leonardo da Vinci and his contemporaries) have proportioned their works approximately using the Golden Ratio and Golden Rectangles, which are widely considered to be innately aesthetically pleasing. An arc connecting opposite points of ever smaller nested Golden Rectangles forms a logarithmic spiral, known as a Golden Spiral. The Golden Ratio and Golden Spiral can also be found in a surprising number of instances in Nature, from shells to flowers to animal horns to human bodies to storm systems to complete galaxies.

    It should be remembered, though, that the Fibonacci Sequence was actually only a very minor element in “Liber Abaci” - indeed, the sequence only received Fibonacci's name in 1877, when Édouard Lucas decided to pay tribute to him by naming the series after him - and that Fibonacci himself was not responsible for identifying any of the interesting mathematical properties of the sequence, its relationship to the Golden Mean and Golden Rectangles and Spirals, etc.

    Fibonacci introduced lattice multiplication to Europe


    However, the book's influence on medieval mathematics is undeniable, and it does also include discussions of a number of other mathematical problems such as the Chinese Remainder Theorem, perfect numbers and prime numbers, formulas for arithmetic series and for square pyramidal numbers, Euclidean geometric proofs, and a study of simultaneous linear equations along the lines of Diophantus and Al-Karaji. He also described the lattice (or sieve) multiplication method of multiplying large numbers, a method - originally pioneered by Islamic mathematicians like Al-Khwarizmi - algorithmically equivalent to long multiplication.

    Neither was “Liber Abaci” Fibonacci’s only book, although it was his most important one. His “Liber Quadratorum” (“The Book of Squares”), for example, is a book on algebra, published in 1225, in which appears a statement of what is now called Fibonacci's identity - sometimes also known as Brahmagupta’s identity after the much earlier Indian mathematician who also came to the same conclusions - that the product of two sums of two squares is itself a sum of two squares, e.g. (1² + 4²)(2² + 7²) = 26² + 15² = 30² + 1².
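The identity generalizes the numerical example above: (a² + b²)(c² + d²) = (ac + bd)² + (ad − bc)² = (ac − bd)² + (ad + bc)², which gives the two representations as sums of two squares. A small Python check:

```python
# Fibonacci's (Brahmagupta's) identity: a product of two sums of two
# squares is itself a sum of two squares, in two ways:
# (a^2+b^2)(c^2+d^2) = (ac+bd)^2 + (ad-bc)^2 = (ac-bd)^2 + (ad+bc)^2

def two_square_products(a, b, c, d):
    """Both two-square representations of (a^2 + b^2)(c^2 + d^2)."""
    return (a * c + b * d, abs(a * d - b * c)), (abs(a * c - b * d), a * d + b * c)

# The example from "Liber Quadratorum": (1^2 + 4^2)(2^2 + 7^2)
(p, q), (r, s) = two_square_products(1, 4, 2, 7)
print(p, q, r, s)                                      # 30 1 26 15
print(p ** 2 + q ** 2, r ** 2 + s ** 2, 17 * 53)       # all equal 901
```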


    The supermagic square shown in Albrecht Dürer's engraving Melencolia I

    The cultural, intellectual and artistic movement of the Renaissance, which saw a resurgence of learning based on classical sources, began in Italy around the 14th Century, and gradually spread across most of Europe over the next two centuries. Science and art were still very much interconnected and intermingled at this time, as exemplified by the work of artist/scientists such as Leonardo da Vinci, and it is no surprise that, just as in art, revolutionary work in the fields of philosophy and science was soon taking place.

    It is a tribute to the respect in which mathematics was held in Renaissance Europe that the famed German artist Albrecht Dürer included an order-4 magic square in his engraving "Melencolia I". In fact, it is a so-called "supermagic square", with many more lines of addition symmetry than a regular 4 x 4 magic square. The year of the work, 1514, is shown in the two bottom central squares.

    An important figure in the late 15th and early 16th Centuries is an Italian Franciscan friar called Luca Pacioli, who published a book on arithmetic, geometry and book-keeping at the end of the 15th Century which became quite popular for the mathematical puzzles it contained. It also introduced symbols for plus and minus for the first time in a printed book (although this is also sometimes attributed to Giel Vander Hoecke, Johannes Widmann and others), symbols that were to become standard notation. Pacioli also investigated the Golden Ratio of 1 : 1.618... (see the section on Fibonacci) in his 1509 book "The Divine Proportion", concluding that the number was a message from God and a source of secret knowledge about the inner beauty of things.

    Basic mathematical notation, with dates of first use


    During the 16th and early 17th Century, the equals, multiplication, division, radical (root), decimal and inequality symbols were gradually introduced and standardized. The use of decimal fractions and decimal arithmetic is usually attributed to the Flemish mathematician Simon Stevin in the late 16th Century, although the decimal point notation was not popularized until early in the 17th Century. Stevin was ahead of his time in enjoining that all types of numbers, whether fractions, negatives, real numbers or surds (such as √2), should be treated equally as numbers in their own right.

    In the Renaissance Italy of the early 16th Century, Bologna University in particular was famed for its intense public mathematics competitions. It was in just such a competition that the unlikely figure of the young, self-taught Niccolò Fontana Tartaglia revealed to the world the formula for solving first one type, and later all types, of cubic equations (equations with terms including x³), an achievement hitherto considered impossible and which had stumped the best mathematicians of China, India and the Islamic world.

    Building on Tartaglia’s work, another young Italian, Lodovico Ferrari, soon devised a similar method to solve quartic equations (equations with terms including x4) and both solutions were published by Gerolamo Cardano. Despite a decade-long fight over the publication, Tartaglia, Cardano and Ferrari between them demonstrated the first uses of what are now known as complex numbers, combinations of real and imaginary numbers (although it fell to another Bologna resident, Rafael Bombelli, to explain what imaginary numbers really were and how they could be used). Tartaglia went on to produce other important (although largely ignored) formulas and methods, and Cardano published perhaps the first systematic treatment of probability.

    With Hindu-Arabic numerals, standardized notation and the new language of algebra at their disposal, the stage was set for the European mathematical revolution of the 17th Century.


    In the Renaissance Italy of the early 16th Century, Bologna University in particular was famed for its intense public mathematics competitions. It was in just such a competition, in 1535, that the unlikely figure of the young Venetian Tartaglia first revealed a mathematical finding hitherto considered impossible, and which had stumped the best mathematicians of China, India and the Islamic world.

    Niccolò Fontana became known as Tartaglia (meaning “the stammerer”) for a speech defect he suffered due to an injury he received in a battle against the invading French army. He was a poor engineer known for designing fortifications, a surveyor of topography (seeking the best means of defence or offence in battles) and a bookkeeper in the Republic of Venice.

    But he was also a self-taught, but wildly ambitious, mathematician. He distinguished himself by producing, among other things, the first Italian translations of works by Archimedes and Euclid from uncorrupted Greek texts (for two centuries, Euclid's "Elements" had been taught from two Latin translations taken from an Arabic source, parts of which contained errors making them all but unusable), as well as an acclaimed compilation of mathematics of his own.

    Cubic equations were first solved algebraically by del Ferro and Tartaglia


    Tartaglia's greatest legacy to mathematical history, though, occurred when he won the 1535 Bologna University mathematics competition by demonstrating a general algebraic formula for solving cubic equations (equations with terms including x³), something which had come to be seen by this time as an impossibility, requiring as it does an understanding of the square roots of negative numbers. In the competition, he beat Scipione del Ferro (or at least del Ferro's assistant, Fior), who had coincidentally produced his own partial solution to the cubic equation problem not long before. Although del Ferro's solution perhaps predated Tartaglia’s, it was much more limited, and Tartaglia is usually credited with the first general solution. In the highly competitive and cut-throat environment of 16th Century Italy, Tartaglia even encoded his solution in the form of a poem in an attempt to make it more difficult for other mathematicians to steal it.

    Tartaglia’s definitive method was, however, leaked to Gerolamo Cardano (or Cardan), a rather eccentric and confrontational mathematician, doctor and Renaissance man, and author throughout his lifetime of some 131 books. Cardano published it himself in his 1545 book "Ars Magna" (despite having promised Tartaglia that he would not), along with the work of his own brilliant student Lodovico Ferrari. Ferrari, on seeing Tartaglia's cubic solution, had realized that he could use a similar method to solve quartic equations (equations with terms including x4).

    In this work, Tartaglia, Cardano and Ferrari between them demonstrated the first uses of what are now known as complex numbers, combinations of real and imaginary numbers of the type a + bi, where i is the imaginary unit √-1. It fell to another Bologna resident, Rafael Bombelli, to explain, at the end of the 1560's, exactly what imaginary numbers really were and how they could be used.

    Although both of the younger men were acknowledged in the foreword of Cardano's book, as well as in several places within its body, Tartaglia engaged Cardano in a decade-long fight over the publication. Cardano argued that, when he happened to see (some years after the 1535 competition) Scipione del Ferro's unpublished independent cubic equation solution, which was dated before Tartaglia's, he decided that his promise to Tartaglia could legitimately be broken, and he included Tartaglia's solution in his next publication, along with Ferrari's quartic solution.

    Ferrari eventually came to understand cubic and quartic equations much better than Tartaglia. When Ferrari challenged Tartaglia to another public debate, Tartaglia initially accepted, but then (perhaps wisely) decided not to show up, and Ferrari won by default. Tartaglia was thoroughly discredited and became effectively unemployable.

    Poor Tartaglia died penniless and unknown, despite having produced (in addition to his cubic equation solution) the first translation of Euclid’s “Elements” into a modern European language, formulated Tartaglia's Formula for the volume of a tetrahedron, devised a method to obtain binomial coefficients called Tartaglia's Triangle (an earlier version of Pascal's Triangle), and become the first to apply mathematics to the investigation of the paths of cannonballs (work which was later validated by Galileo's studies on falling bodies). Even today, the solution to cubic equations is usually known as Cardano’s Formula and not Tartaglia’s.

    Ferrari, on the other hand, obtained a prestigious teaching post while still in his teens after Cardano resigned from it and recommended him, and was eventually able to retire young and quite rich, despite having started out as Cardano’s servant.

    Cardano himself, an accomplished gambler and chess player, wrote a book called "Liber de ludo aleae" ("Book on Games of Chance") when he was just 25 years old, which contains perhaps the first systematic treatment of probability (as well as a section on effective cheating methods). The ancient Greeks, Romans and Indians had all been inveterate gamblers, but none of them had ever attempted to understand randomness as being governed by mathematical laws.

    The circles used to generate hypocycloids are known as Cardano Circles

    The book described the - now obvious, but then revolutionary - insight that, if a random event has several equally likely outcomes, the chance of any individual outcome is equal to the proportion of that outcome to all possible outcomes. The book was far ahead of its time, though, and it remained unpublished until 1663, nearly a century after his death. It was the only serious work on probability until Pascal's work in the 17th Century.

    Cardano was also the first to describe hypocycloids, the pointed plane curves generated by the trace of a fixed point on a small circle that rolls within a larger circle, and the generating circles were later named Cardano (or Cardanic) circles.

    The colourful Cardano remained notoriously short of money throughout his life, largely due to his gambling habits, and was accused of heresy in 1570 after publishing a horoscope of Jesus (apparently, his own son contributed to the prosecution, bribed by Tartaglia).


    In the wake of the Renaissance, the 17th Century saw an unprecedented explosion of mathematical and scientific ideas across Europe, a period sometimes called the Age of Reason. Hard on the heels of the “Copernican Revolution” of Nicolaus Copernicus in the 16th Century, scientists like Galileo Galilei, Tycho Brahe and Johannes Kepler were making equally revolutionary discoveries in the exploration of the Solar system, leading to Kepler’s formulation of mathematical laws of planetary motion.

    Logarithms were invented by John Napier, early in the 17th Century

    The invention of the logarithm in the early 17th Century by John Napier (and later improved by Napier and Henry Briggs) contributed to the advance of science, astronomy and mathematics by making some difficult calculations relatively easy. It was one of the most significant mathematical developments of the age, and 17th Century physicists like Kepler and Newton could never have performed the complex calculations needed for their innovations without it. The French astronomer and mathematician Pierre Simon Laplace remarked, almost two centuries later, that Napier, by halving the labours of astronomers, had doubled their lifetimes.

    The logarithm of a number is the exponent when that number is expressed as a power of 10 (or any other base). It is effectively the inverse of exponentiation. For example, the base 10 logarithm of 100 (usually written log₁₀ 100 or lg 100 or just log 100) is 2, because 10² = 100. The value of logarithms arises from the fact that multiplication of two or more numbers is equivalent to adding their logarithms, a much simpler operation. In the same way, division involves the subtraction of logarithms, squaring is as simple as multiplying the logarithm by two (or by three for cubing, etc), and finding square roots requires dividing the logarithm by 2 (or by 3 for cube roots, etc).
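The practical effect can be sketched in a few lines of Python (a modern stand-in, of course, for Briggs's printed tables):

```python
import math

# Multiplication becomes addition of logarithms (the principle behind log
# tables and slide rules); division becomes subtraction, and roots division.
x, y = 347.0, 29.0
product = 10 ** (math.log10(x) + math.log10(y))    # same as x * y
quotient = 10 ** (math.log10(x) - math.log10(y))   # same as x / y
root = 10 ** (math.log10(x) / 2)                   # same as math.sqrt(x)
print(product, quotient, root)
```

The results agree with direct multiplication, division and square-rooting up to floating-point rounding, which is the digital analogue of the rounding error in a printed log table.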

    Although base 10 is the most popular base, another common base for logarithms is the number e, which has a value of 2.7182818... and which has special properties that make it very useful for logarithmic calculations. These are known as natural logarithms, and are written logₑ or ln. Briggs produced extensive lookup tables of common (base 10) logarithms, and by 1622 William Oughtred had produced a logarithmic slide rule, an instrument which became indispensable in technological innovation for the next 300 years.

    Napier also improved Simon Stevin's decimal notation and popularized the use of the decimal point, and made lattice multiplication (originally developed by the Persian mathematician Al-Khwarizmi and introduced into Europe by Fibonacci) more convenient with the introduction of “Napier's Bones”, a multiplication tool using a set of numbered rods.

    Graph of the number of digits in the known Mersenne primes

    Although not principally a mathematician, the role of the Frenchman Marin Mersenne as a sort of clearing house and go-between for mathematical thought in France during this period was crucial. Mersenne is largely remembered in mathematics today in the term Mersenne primes - prime numbers that are one less than a power of 2, e.g. 3 (2²-1), 7 (2³-1), 31 (2⁵-1), 127 (2⁷-1), 8191 (2¹³-1), etc. In modern times, the largest known prime number has almost always been a Mersenne prime, but in actual fact, Mersenne’s real connection with the numbers was only to compile a none-too-accurate list of the smaller ones (when Edouard Lucas devised a method of checking them in the 19th Century, he pointed out that Mersenne had incorrectly included 2⁶⁷-1 and left out 2⁶¹-1, 2⁸⁹-1 and 2¹⁰⁷-1 from his list).
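The method Lucas devised survives, in refined form, as the Lucas-Lehmer test, which is short enough to sketch here; running it over part of Mersenne's list reproduces his errors:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime
    exactly when the repeated squaring below ends at zero."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 67 (wrongly included by Mersenne) fails; 61, 89 and 107 (wrongly omitted) pass.
exponents = [p for p in [31, 61, 67, 89, 107, 127] if lucas_lehmer(p)]
print(exponents)   # [31, 61, 89, 107, 127]
```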

    The Frenchman René Descartes is sometimes considered the first of the modern school of mathematics. His development of analytic geometry and Cartesian coordinates in the mid-17th Century soon allowed the orbits of the planets to be plotted on a graph, as well as laying the foundations for the later development of calculus (and much later multi-dimensional geometry). Descartes is also credited with the first use of superscripts for powers or exponents.

    Two other great French mathematicians were close contemporaries of Descartes: Pierre de Fermat and Blaise Pascal. Fermat formulated several theorems which greatly extended our knowledge of number theory, as well as contributing some early work on infinitesimal calculus. Pascal is most famous for Pascal’s Triangle of binomial coefficients, although similar figures had actually been produced by Chinese and Persian mathematicians long before him.

    It was an ongoing exchange of letters between Fermat and Pascal that led to the development of the concept of expected values and the field of probability theory. The first published work on probability theory, however, and the first to outline the concept of mathematical expectation, was by the Dutchman Christiaan Huygens in 1657, although it was largely based on the ideas in the letters of the two Frenchmen.

    Desargues’ perspective theorem

    The French mathematician and engineer Girard Desargues is considered one of the founders of the field of projective geometry, later developed further by Jean Victor Poncelet and Gaspard Monge. Projective geometry considers what happens to shapes when they are projected on to a non-parallel plane. For example, a circle may be projected into an ellipse or a hyperbola, and so these curves may all be regarded as equivalent in projective geometry. In particular, Desargues developed the pivotal concept of the “point at infinity” where parallels actually meet. His perspective theorem states that, when two triangles are in perspective, their corresponding sides meet at three points which all lie on a single line.

    By “standing on the shoulders of giants”, the Englishman Sir Isaac Newton was able to pin down the laws of physics in an unprecedented way, and he effectively laid the groundwork for all of classical mechanics, almost single-handedly. But his contribution to mathematics should never be underestimated, and nowadays he is often considered, along with Archimedes and Gauss, as one of the greatest mathematicians of all time.

    Newton and, independently, the German philosopher and mathematician Gottfried Leibniz, completely revolutionized mathematics (not to mention physics, engineering, economics and science in general) by the development of infinitesimal calculus, with its two main operations, differentiation and integration. Newton probably developed his work before Leibniz, but Leibniz published his first, leading to an extended and rancorous dispute. Whatever the truth behind the various claims, though, it is Leibniz’s calculus notation that is the one still in use today, and calculus of some sort is used extensively in everything from engineering to economics to medicine to astronomy.

    Both Newton and Leibniz also contributed greatly in other areas of mathematics, including Newton’s contributions to a generalized binomial theorem, the theory of finite differences and the use of infinite power series, and Leibniz’s development of a mechanical forerunner to the computer and the use of matrices to solve linear equations.

    However, credit should also be given to some earlier 17th Century mathematicians whose work partially anticipated, and to some extent paved the way for, the development of infinitesimal calculus. As early as the 1630s, the Italian mathematician Bonaventura Cavalieri developed a geometrical approach to calculus known as Cavalieri's principle, or the “method of indivisibles”. The Englishman John Wallis, who systematized and extended the methods of analysis of Descartes and Cavalieri, also made significant contributions towards the development of calculus, as well as originating the idea of the number line, introducing the symbol ∞ for infinity and the term “continued fraction”, and extending the standard notation for powers to include negative integers and rational numbers. Newton's teacher Isaac Barrow is usually credited with the discovery of (or at least the first rigorous statement of) the fundamental theorem of calculus, which essentially showed that integration and differentiation are inverse operations, and he also made complete translations of Euclid into Latin and English.


    René Descartes has been dubbed the "Father of Modern Philosophy", but he was also one of the key figures in the Scientific Revolution of the 17th Century, and is sometimes considered the first of the modern school of mathematics.

    As a young man, he found employment for a time as a soldier (essentially as a mercenary in the pay of various forces, both Catholic and Protestant). But, after a series of dreams or visions, and after meeting the Dutch philosopher and scientist Isaac Beeckman, who sparked his interest in mathematics and the New Physics, he concluded that his real path in life was the pursuit of true wisdom and science.

    Back in France, the young Descartes soon came to the conclusion that the key to philosophy, with all its uncertainties and ambiguity, was to build it on the indisputable facts of mathematics. To pursue his rather heretical ideas further, though, he moved from the restrictions of Catholic France to the more liberal environment of the Netherlands, where he spent most of his adult life, and where he worked on his dream of merging algebra and geometry.

    In 1637, he published his ground-breaking philosophical and mathematical treatise "Discours de la méthode" (the “Discourse on Method”), and one of its appendices in particular, "La Géométrie", is now considered a landmark in the history of mathematics. Following on from early movements towards the use of symbolic expressions in mathematics by Diophantus, Al-Khwarizmi and François Viète, "La Géométrie" introduced what has become known as the standard algebraic notation, using lowercase a, b and c for known quantities and x, y and z for unknown quantities. It was perhaps the first book to look like a modern mathematics textbook, full of a's and b's, x²'s, etc.

    Cartesian Coordinates

    It was in "La Géométrie" that Descartes first proposed that each point in two dimensions can be described by two numbers on a plane, one giving the point’s horizontal location and the other the vertical location, which have come to be known as Cartesian coordinates. He used perpendicular lines (or axes), crossing at a point called the origin, to measure the horizontal (x) and vertical (y) locations, both positive and negative, thus effectively dividing the plane up into four quadrants.

    Any equation can be represented on the plane by plotting on it the solution set of the equation. For example, the simple equation y = x yields a straight line linking together the points (0,0), (1,1), (2,2), (3,3), etc. The equation y = 2x yields a straight line linking together the points (0,0), (1,2), (2,4), (3,6), etc. More complex equations involving x2, x3, etc, plot various types of curves on the plane.

    As a point moves along a curve, then, its coordinates change, but an equation can be written to describe the change in the value of the coordinates at any point in the figure. Using this novel approach, it soon became clear that an equation like x² + y² = 4, for example, describes a circle; y² = 16x a curve called a parabola; x²/a² + y²/b² = 1 an ellipse; x²/a² - y²/b² = 1 a hyperbola; etc.
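What it means for an equation to "describe" a curve can be made concrete in a couple of lines: a point lies on the curve exactly when its coordinates satisfy the equation (the helper names below are made up for illustration):

```python
# Hypothetical helpers: test whether a point (x, y) satisfies a curve's equation.
def on_circle(x, y):          # x^2 + y^2 = 4, a circle of radius 2
    return x ** 2 + y ** 2 == 4

def on_parabola(x, y):        # y^2 = 16x
    return y ** 2 == 16 * x

print(on_circle(2, 0), on_circle(1, 1))      # True False
print(on_parabola(1, 4), on_parabola(2, 4))  # True False
```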

    Descartes’ ground-breaking work, usually referred to as analytic geometry or Cartesian geometry, had the effect of allowing the conversion of geometry into algebra (and vice versa). Thus, a pair of simultaneous equations could now be solved either algebraically or graphically (at the intersection of two lines). It allowed the development of Newton’s and Leibniz’s subsequent discoveries of calculus. It also unlocked the possibility of navigating geometries of higher dimensions, impossible to physically visualize - a concept which was to become central to modern technology and physics - thus transforming mathematics forever.

    Descartes' Rule of Signs

    Although analytic geometry was far and away Descartes’ most important contribution to mathematics, he also: developed a “rule of signs” technique for determining the number of positive or negative real roots of a polynomial; "invented" (or at least popularized) the superscript notation for showing powers or exponents (e.g. 2⁴ to show 2 x 2 x 2 x 2); and re-discovered Thabit ibn Qurra's general formula for amicable numbers, as well as the amicable pair 9,363,584 and 9,437,056 (which had also been discovered by another Islamic mathematician, Yazdi, almost a century earlier).
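Two numbers are amicable when each equals the sum of the other's proper divisors, so Descartes' pair can be verified directly (a brute-force sketch, not his method, which used Thabit's formula):

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (assumes n > 1)."""
    total, i = 1, 2
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i
        i += 1
    return total

a, b = 9363584, 9437056
print(aliquot_sum(a) == b and aliquot_sum(b) == a)   # True: the pair is amicable
```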

    For all his importance in the development of modern mathematics, though, Descartes is perhaps best known today as a philosopher who espoused rationalism and dualism. His philosophy consisted of a method of doubting everything, then rebuilding knowledge from the ground up, and he is particularly known for the often-quoted statement “Cogito ergo sum” (“I think, therefore I am”).

    He also had an influential rôle in the development of modern physics, a rôle which has been, until quite recently, generally under-appreciated and under-investigated. He provided the first distinctly modern formulation of laws of nature and a conservation principle of motion, made numerous advances in optics and the study of the reflection and refraction of light, and constructed what would become the most popular theory of planetary motion of the late 17th Century. His commitment to the scientific method was met with strident opposition by the church officials of the day.

    His revolutionary ideas made him a centre of controversy in his day, and he died in 1650 far from home in Stockholm, Sweden. 13 years later, his works were placed on the Catholic Church's "Index of Prohibited Books".


    Another Frenchman of the 17th Century, Pierre de Fermat, effectively invented modern number theory virtually single-handedly, despite being a small-town amateur mathematician. Stimulated and inspired by the “Arithmetica” of the Hellenistic mathematician Diophantus, he went on to discover several new patterns in numbers which had defeated mathematicians for centuries, and throughout his life he devised a wide range of conjectures and theorems. He is also given credit for early developments that led to modern calculus, and for early progress in probability theory.

    Although he showed an early interest in mathematics, he went on to study law at Orléans and received the title of councillor at the High Court of Judicature in Toulouse in 1631, which he held for the rest of his life. He was fluent in Latin, Greek, Italian and Spanish, was praised for his written verse in several languages, and his advice was eagerly sought on the emendation of Greek texts.

    Fermat's mathematical work was communicated mainly in letters to friends, often with little or no proof of his theorems. Although he himself claimed to have proved all his arithmetic theorems, few records of his proofs have survived, and many mathematicians have doubted some of his claims, especially given the difficulty of some of the problems and the limited mathematical tools available to Fermat.

    Fermat’s Theorem on Sums of Two Squares

    One example of his many theorems is the Two Square Theorem, which shows that any prime number which, when divided by 4, leaves a remainder of 1 (i.e. can be written in the form 4n + 1), can always be re-written as the sum of two square numbers (see image at right for examples).
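The theorem is easy to check by brute force (a naive search, with a hypothetical helper name):

```python
def two_squares(p):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    a = 0
    while a * a <= p:
        b = int((p - a * a) ** 0.5)
        if a * a + b * b == p:
            return (a, b)
        a += 1
    return None

# Primes leaving remainder 1 when divided by 4 always decompose...
for p in [5, 13, 29, 97]:
    print(p, two_squares(p))     # e.g. 13 -> (2, 3), since 4 + 9 = 13
# ...while primes leaving remainder 3 never do.
print(two_squares(7), two_squares(11))   # None None
```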

    His so-called Little Theorem is often used in the testing of large prime numbers, and is the basis of the codes which protect our credit cards in Internet transactions today. In simple (sic) terms, it says that if we have two numbers a and p, where p is a prime number and not a factor of a, then a multiplied by itself p-1 times and then divided by p will always leave a remainder of 1. In mathematical terms, this is written: aᵖ⁻¹ ≡ 1 (mod p). For example, if a = 7 and p = 3, then 7² ÷ 3 should leave a remainder of 1, and 49 ÷ 3 does in fact leave a remainder of 1.
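The theorem is easy to test with Python's built-in modular exponentiation, and the same check, with a large candidate n in place of p, is the Fermat primality test used to screen candidates (a single-witness sketch; composites can occasionally fool it, so real systems refine it):

```python
# a**(p-1) mod p == 1 whenever p is prime and does not divide a.
for a, p in [(7, 3), (2, 13), (10, 101)]:
    print(pow(a, p - 1, p))        # 1 each time

def fermat_test(n, a=2):
    """Single-witness Fermat test: 'probably prime' if it returns True."""
    return pow(a, n - 1, n) == 1

print(fermat_test(101), fermat_test(100))   # True False
```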

    Fermat identified a subset of numbers, now known as Fermat numbers, which are of the form of one more than 2 raised to a power of 2, or, written mathematically, 2^(2ⁿ) + 1. The first five such numbers are: 2¹ + 1 = 3; 2² + 1 = 5; 2⁴ + 1 = 17; 2⁸ + 1 = 257; and 2¹⁶ + 1 = 65,537. Interestingly, these are all prime numbers (and are known as Fermat primes), but all the higher Fermat numbers which have been painstakingly checked over the years are NOT prime numbers, which just goes to show the danger of drawing general conclusions in mathematics from a few examples, however suggestive.
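The numbers, and Euler's later refutation of the apparent pattern, are simple to reproduce:

```python
# The first six Fermat numbers 2**(2**n) + 1. F0..F4 are prime, but Euler
# showed in 1732 that F5 has the factor 641, breaking the pattern.
fermat = [2 ** (2 ** n) + 1 for n in range(6)]
print(fermat[:5])                        # [3, 5, 17, 257, 65537]
print(fermat[5], fermat[5] % 641 == 0)   # 4294967297 True
```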

    Fermat’s Last Theorem

    Fermat's pièce de résistance, though, was his famous Last Theorem, a conjecture left unproven at his death, and which puzzled mathematicians for over 350 years. The theorem, originally described in a scribbled note in the margin of his copy of Diophantus' “Arithmetica”, states that no three positive integers a, b and c can satisfy the equation aⁿ + bⁿ = cⁿ for any integer value of n greater than two. This seemingly simple conjecture has proved to be one of the world’s hardest mathematical problems to prove.

    There are clearly many solutions - indeed, an infinite number - when n = 2 (namely, all the Pythagorean triples), but no solution could be found for cubes or higher powers. Tantalizingly, Fermat himself claimed to have a proof, but wrote that “this margin is too small to contain it”. As far as we know from the papers which have come down to us, however, Fermat only managed to partially prove the theorem for the special case of n = 4, as did several other mathematicians who applied themselves to it (and indeed as had earlier mathematicians dating back to Fibonacci, albeit not with the same intent).

    Over the centuries, several mathematical and scientific academies offered substantial prizes for a proof of the theorem, and to some extent it single-handedly stimulated the development of algebraic number theory in the 19th and 20th Centuries. It was finally proved for ALL numbers only in 1995 (a proof usually attributed to British mathematician Andrew Wiles, although in reality it was a joint effort of several steps involving many mathematicians over several years). The final proof made use of complex modern mathematics, such as the modularity theorem for semi-stable elliptic curves, Galois representations and Ribet’s epsilon theorem, all of which were unavailable in Fermat’s time, so it seems clear that Fermat's claim to have solved his last theorem was almost certainly an exaggeration (or at least a misunderstanding).

    In addition to his work in number theory, Fermat anticipated the development of calculus to some extent, and his work in this field was invaluable later to Newton and Leibniz. While investigating a technique for finding the centres of gravity of various plane and solid figures, he developed a method for determining maxima, minima and tangents to various curves that was essentially equivalent to differentiation. Also, using an ingenious trick, he was able to reduce the integral of general power functions to the sums of geometric series.

    Fermat’s correspondence with his friend Pascal also helped mathematicians grasp a very important concept in basic probability which, although perhaps intuitive to us now, was revolutionary in 1654, namely the idea of equally probable outcomes and expected values.


    The Frenchman Blaise Pascal was a prominent 17th Century scientist, philosopher and mathematician. Like so many great mathematicians, he was a child prodigy and pursued many different avenues of intellectual endeavour throughout his life. Much of his early work was in the area of natural and applied sciences, and he has a physical law named after him (that “pressure exerted anywhere in a confined liquid is transmitted equally and undiminished in all directions throughout the liquid”), as well as the international unit for the measurement of pressure. In philosophy, Pascal’s Wager is his pragmatic approach to believing in God on the grounds that it is a better “bet” than not to.

    But Pascal was also a mathematician of the first order. At the age of sixteen, he wrote a significant treatise on the subject of projective geometry, known as Pascal's Theorem, which states that, if a hexagon is inscribed in a circle, then the three intersection points of opposite sides lie on a single line, called the Pascal line. As a young man, he built a functional calculating machine, able to perform additions and subtractions, to help his father with his tax calculations.

    The table of binomial coefficients known as Pascal’s Triangle

    He is best known, however, for Pascal’s Triangle, a convenient tabular presentation of binomial coefficients, where each number is the sum of the two numbers directly above it. A binomial is a simple type of algebraic expression which has just two terms operated on only by addition, subtraction, multiplication and positive whole-number exponents, such as (x + y)². The coefficients produced when a binomial is expanded form a symmetrical triangle (see image at right).
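The construction rule translates directly into code:

```python
def pascal_rows(n):
    """First n rows of Pascal's Triangle: each entry is the sum of the
    two entries directly above it."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

for row in pascal_rows(5):
    print(row)
# The last row printed, 1 4 6 4 1, gives the coefficients in the
# expansion of (x + y)**4.
```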

    Pascal was far from the first to study this triangle. The Persian mathematician Al-Karaji had produced something very similar as early as the 10th Century, and the Triangle is called Yang Hui's Triangle in China after the 13th Century Chinese mathematician, and Tartaglia’s Triangle in Italy after the eponymous 16th Century Italian. But he did contribute an elegant proof by defining the numbers by recursion, and he also discovered many useful and interesting patterns among the rows, columns and diagonals of the array of numbers. For instance, looking at the diagonals alone, after the outside "skin" of 1's, the next diagonal (1, 2, 3, 4, 5,...) is the natural numbers in order. The next diagonal within that (1, 3, 6, 10, 15,...) is the triangular numbers in order. The next (1, 4, 10, 20, 35,...) is the pyramidal triangular numbers, etc, etc. It is also possible to find prime numbers, Fibonacci numbers, Catalan numbers, and many other series, and even to find fractal patterns with it.

    Pascal also made the conceptual leap to use the Triangle to help solve problems in probability theory. In fact, it was through his collaboration and correspondence with his French contemporary Pierre de Fermat and the Dutchman Christiaan Huygens on the subject that the mathematical theory of probability was born. Before Pascal, there was no actual theory of probability - notwithstanding Gerolamo Cardano’s early exposition in the 16th Century - merely an understanding (of sorts) of how to compute “chances” in dice and card games by counting equally probable outcomes. Some apparently quite elementary problems in probability had eluded some of the best mathematicians, or given rise to incorrect solutions.

    It fell to Pascal (with Fermat's help) to bring together the separate threads of prior knowledge (including Cardano's early work) and to introduce entirely new mathematical techniques for the solution of problems that had hitherto resisted solution. Two such intransigent problems which Pascal and Fermat applied themselves to were the Gambler’s Ruin (determining the chances of winning for each of two men playing a particular dice game with very specific rules) and the Problem of Points (determining how a game's winnings should be divided between two equally skilled players if the game was ended prematurely). His work on the Problem of Points in particular, although unpublished at the time, was highly influential in the unfolding new field.

    Fermat and Pascal’s solution to the Problem of Points

    The Problem of Points at its simplest can be illustrated by a simple game of “winner take all” involving the tossing of a coin. The first of the two players (say, Fermat and Pascal) to achieve ten points or wins is to receive a pot of 100 francs. But, if the game is interrupted at the point where Fermat, say, is winning 8 points to 7, how is the 100 franc pot to be divided? Fermat claimed that, as he needed only two more points to win the game, and Pascal needed three, the game would have been over after four more tosses of the coin (because, if Pascal did not get the necessary 3 points for his victory over the four tosses, then Fermat must have gained the necessary 2 points for his victory, and vice versa). Fermat then exhaustively listed the possible outcomes of the four tosses, and concluded that he would win in 11 out of the 16 possible outcomes, so he suggested that the 100 francs be split 11/16 (0.6875) to him and 5/16 (0.3125) to Pascal.
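Fermat's exhaustive listing is easy to replicate by enumerating all 2⁴ = 16 possible sequences of four tosses:

```python
from itertools import product

# 'F' marks a toss won by Fermat, 'P' one won by Pascal; Fermat wins the
# interrupted game in any sequence containing at least 2 F's.
fermat_wins = sum(1 for tosses in product("FP", repeat=4)
                  if tosses.count("F") >= 2)
print(fermat_wins, "of", 2 ** 4)   # 11 of 16
```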

    Pascal then looked for a way of generalizing the problem that would avoid the tedious listing of possibilities, and realized that he could use rows from his triangle of coefficients to generate the numbers, no matter how many tosses of the coin remained. As Fermat needed 2 more points to win the game and Pascal needed 3, he went to the fifth (2 + 3) row of the triangle, i.e. 1, 4, 6, 4, 1. The first 3 terms added together (1 + 4 + 6 = 11) represented the outcomes where Fermat would win, and the last two terms (4 + 1 = 5) the outcomes where Pascal would win, out of the total number of outcomes represented by the sum of the whole row (1 + 4 + 6 + 4 + 1 = 16).
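Pascal's shortcut can be sketched with the modern binomial-coefficient function:

```python
from math import comb

# Row 2 + 3 = 5 of the triangle is (1, 4, 6, 4, 1); split it after the
# first 3 terms to apportion the pot.
row = [comb(4, k) for k in range(5)]
fermat_share = sum(row[:3]) / sum(row)   # 11/16 = 0.6875
pascal_share = sum(row[3:]) / sum(row)   # 5/16  = 0.3125
print(row, fermat_share, pascal_share)
```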

    Pascal and Fermat had grasped through their correspondence a very important concept that, though perhaps intuitive to us today, was all but revolutionary in 1654. This was the idea of equally probable outcomes, that the probability of something occurring could be computed by enumerating the number of equally likely ways it could occur, and dividing this by the total number of possible outcomes of the given situation. This allowed the use of fractions and ratios in the calculation of the likelihood of events, and the operation of multiplication and addition on these fractional probabilities. For example, the probability of throwing a 6 on each of two throws of a die is 1/6 x 1/6 = 1/36 ("and" works like multiplication); the probability of throwing either a 3 or a 6 on a single throw is 1/6 + 1/6 = 1/3 ("or" works like addition).

    Later in life, Pascal and his sister Jacqueline strongly identified with the extreme Catholic religious movement of Jansenism. Following the death of his father and a "mystical experience" in late 1654, he had his "second conversion" and abandoned his scientific work completely, devoting himself to philosophy and theology. His two most famous works, the "Lettres provinciales" and the "Pensées", date from this period, the latter left incomplete at his death in 1662. They remain Pascal’s best known legacy, and he is usually remembered today as one of the most important authors of the French Classical Period and one of the greatest masters of French prose, much more than for his contributions to mathematics.


    In the heady atmosphere of 17th Century England, with the expansion of the British empire in full swing, grand old universities like Oxford and Cambridge were producing many great scientists and mathematicians. But the greatest of them all was undoubtedly Sir Isaac Newton.

    Physicist, mathematician, astronomer, natural philosopher, alchemist and theologian, Newton is considered by many to be one of the most influential men in human history. His 1687 publication, the "Philosophiae Naturalis Principia Mathematica" (usually called simply the "Principia"), is considered to be among the most influential books in the history of science, and it dominated the scientific view of the physical universe for the next three centuries.

    Although largely synonymous in the minds of the general public today with gravity and the story of the apple tree, Newton remains a giant in the minds of mathematicians everywhere (on a par with the all-time greats like Archimedes and Gauss), and he greatly influenced the subsequent path of mathematical development.

    Over two miraculous years, during the time of the Great Plague of 1665-6, the young Newton developed a new theory of light, discovered and quantified gravitation, and pioneered a revolutionary new approach to mathematics: infinitesimal calculus. His theory of calculus built on earlier work by his fellow Englishmen John Wallis and Isaac Barrow, as well as on work of such Continental mathematicians as René Descartes, Pierre de Fermat, Bonaventura Cavalieri, Johann van Waveren Hudde and Gilles Personne de Roberval. Unlike the static geometry of the Greeks, calculus allowed mathematicians and engineers to make sense of the motion and dynamic change in the changing world around us, such as the orbits of planets, the motion of fluids, etc.

    Differentiation (derivative) approximates the slope of a curve as the interval approaches zero


    The initial problem Newton was confronting was that, although it was easy enough to represent and calculate the average slope of a curve (for example, the increasing speed of an object on a time-distance graph), the slope of a curve was constantly varying, and there was no method to give the exact slope at any one individual point on the curve, i.e. effectively the slope of a tangent line to the curve at that point.

    Intuitively, the slope at a particular point can be approximated by taking the average slope (“rise over run”) of ever smaller segments of the curve. As the segment of the curve being considered approaches zero in size (i.e. an infinitesimal change in x), then the calculation of the slope approaches closer and closer to the exact slope at a point (see image at right).

    Without going into too much complicated detail, Newton (and his contemporary Gottfried Leibniz independently) calculated a derivative function f ‘(x) which gives the slope at any point of a function f(x). This process of calculating the slope or derivative of a curve or function is called differential calculus or differentiation (or, in Newton’s terminology, the “method of fluxions” - he called the instantaneous rate of change at a particular point on a curve the "fluxion", and the changing values of x and y the "fluents"). For instance, the derivative of a straight line of the type f(x) = 4x is just 4; the derivative of a squared function f(x) = x^2 is 2x; the derivative of a cubic function f(x) = x^3 is 3x^2, etc. Generalizing, the derivative of any power function f(x) = x^r is rx^(r-1). Other derivative functions can be stated, according to certain rules, for exponential and logarithmic functions, trigonometric functions such as sin(x), cos(x), etc, so that a derivative function can be stated for any curve without discontinuities. For example, the derivative of the curve f(x) = x^4 - 5x^3 + sin(x^2) would be f ’(x) = 4x^3 - 15x^2 + 2x·cos(x^2).
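    The limiting idea behind differentiation can be demonstrated numerically. In this illustrative sketch (a modern approximation, not Newton's own method of fluxions), the slope over an ever-smaller interval around a point approaches the value given by the power rule:

```python
def derivative(f, x, h=1e-6):
    """Average slope ("rise over run") over a tiny interval around x."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Power rule: the derivative of x^3 is 3x^2, so the slope at x = 2 is 12.
slope = derivative(lambda x: x**3, 2.0)
```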

    Having established the derivative function for a particular curve, it is then an easy matter to calculate the slope at any particular point on that curve, just by inserting a value for x. In the case of a time-distance graph, for example, this slope represents the speed of the object at a particular point.

    Integration approximates the area under a curve as the size of the samples approaches zero


    The “opposite” of differentiation is integration or integral calculus (or, in Newton’s terminology, the “method of fluents”), and together differentiation and integration are the two main operations of calculus. Newton’s Fundamental Theorem of Calculus states that differentiation and integration are inverse operations, so that, if a function is first integrated and then differentiated (or vice versa), the original function is retrieved.

    The integral of a curve can be thought of as the formula for calculating the area bounded by the curve and the x axis between two defined boundaries. For example, on a graph of velocity against time, the area “under the curve” would represent the distance travelled. Essentially, integration is based on a limiting procedure which approximates the area of a curvilinear region by breaking it into infinitesimally thin vertical slabs or columns. In the same way as for differentiation, an integral function can be stated in general terms: the integral of any power f(x) = x^r is x^(r+1)/(r+1), and there are other integral functions for exponential and logarithmic functions, trigonometric functions, etc, so that the area under any continuous curve can be obtained between any two limits.
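    The slab-counting idea can likewise be illustrated numerically: summing the areas of many thin columns under f(x) = x^2 between 0 and 1 approaches the exact value 1/3 given by the integral power rule. (This is a modern numerical sketch, not Newton's own procedure.)

```python
def integrate(f, a, b, n=100_000):
    """Approximate the area under f between a and b with n thin slabs."""
    width = (b - a) / n
    # Midpoint rule: sample each slab at its centre.
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# Power rule in reverse: the integral of x^2 from 0 to 1 is 1/3.
area = integrate(lambda x: x**2, 0.0, 1.0)
```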

    Newton chose not to publish his revolutionary mathematics straight away, worried about being ridiculed for his unconventional ideas, and contented himself with circulating his thoughts among friends. After all, he had many other interests such as philosophy, alchemy and his work at the Royal Mint. However, in 1684, the German Leibniz published his own independent version of the theory, whereas Newton published nothing on the subject until 1693. Although the Royal Society, after due deliberation, gave credit for the first discovery to Newton (and credit for the first publication to Leibniz), something of a scandal arose when it was made public that the Royal Society’s subsequent accusation of plagiarism against Leibniz was actually authored by none other than Newton himself, causing an ongoing controversy which marred the careers of both men.

    Newton's Method for approximating the roots of a curve by successive iterations after an initial guess


    Despite being by far his best known contribution to mathematics, calculus was by no means Newton’s only contribution. He is credited with the generalized binomial theorem, which describes the algebraic expansion of powers of a binomial (an algebraic expression with two terms, such as a^2 - b^2); he made substantial contributions to the theory of finite differences (mathematical expressions of the form f(x + b) - f(x + a)); he was one of the first to use fractional exponents and coordinate geometry to derive solutions to Diophantine equations (algebraic equations with integer-only variables); he developed the so-called “Newton's method” for finding successively better approximations to the zeroes or roots of a function; he was the first to use infinite power series with any confidence; etc.
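    Newton's method itself is simple enough to state in a few lines of code. This sketch applies the standard iteration x → x - f(x)/f'(x) to f(x) = x^2 - 2 to approximate the square root of 2 (a modern restatement, not Newton's own notation):

```python
def newtons_method(f, f_prime, x, iterations=10):
    """Successively better approximations to a root of f after an initial guess x."""
    for _ in range(iterations):
        x = x - f(x) / f_prime(x)
    return x

# A root of f(x) = x^2 - 2 is the square root of 2; start from the guess x = 1.
root = newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```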

    In 1687, Newton published his “Principia” or “The Mathematical Principles of Natural Philosophy”, generally recognized as the greatest scientific book ever written. In it, he presented his theories of motion, gravity and mechanics, explained the eccentric orbits of comets, the tides and their variations, the precession of the Earth's axis and the motion of the Moon.

    Later in life, he wrote a number of religious tracts dealing with the literal interpretation of the Bible, devoted a great deal of time to alchemy, acted as Member of Parliament for some years, and became perhaps the best-known Master of the Royal Mint in 1699, a position he held until his death in 1727. In 1703, he was made President of the Royal Society and, in 1705, became the first scientist ever to be knighted. Mercury poisoning from his alchemical pursuits perhaps explained Newton's eccentricity in later life, and possibly also his eventual death.


    The German polymath Gottfried Wilhelm Leibniz occupies a grand place in the history of philosophy. He was, along with René Descartes and Baruch Spinoza, one of the three great 17th Century rationalists, and his work anticipated modern logic and analytic philosophy. Like many great thinkers before and after him, Leibniz was a child prodigy and a contributor in many different fields of endeavour.

    But, between his work on philosophy and logic and his day job as a politician and representative of the royal house of Hanover, Leibniz still found time to work on mathematics. He was perhaps the first to explicitly employ the mathematical notion of a function to denote geometric concepts derived from a curve, and he developed a system of infinitesimal calculus, independently of his contemporary Sir Isaac Newton. He also revived the ancient method of solving equations using matrices, invented a practical calculating machine and pioneered the use of the binary system.

    Like Newton, Leibniz was a member of the Royal Society in London, and was almost certainly aware of Newton’s work on calculus. During the 1670s (slightly later than Newton’s early work), Leibniz developed a very similar theory of calculus, apparently completely independently. Within the short period of about two months he had developed a complete theory of differential calculus and integral calculus (see the section on Newton for a brief description and explanation of the development of calculus).

    Leibniz’s and Newton’s notation for Calculus


    Unlike Newton, however, he was more than happy to publish his work, and so Europe first heard about calculus from Leibniz in 1684, and not from Newton (who published nothing on the subject until 1693). When the Royal Society was asked to adjudicate between the rival claims of the two men over the development of the theory of calculus, they gave credit for the first discovery to Newton, and credit for the first publication to Leibniz. However, the Royal Society, by then under the rather biased presidency of Newton himself, later also accused Leibniz of plagiarism, a slur from which Leibniz never really recovered.

    Ironically, it was Leibniz’s mathematics that eventually triumphed: it is his notation and his way of writing calculus, not Newton’s clumsier notation, that is still used in mathematics today.

    In addition to calculus, Leibniz re-discovered a method of arranging linear equations into an array, now called a matrix, which could then be manipulated to find a solution. A similar method had been pioneered by Chinese mathematicians almost two millennia earlier, but had long fallen into disuse. Leibniz paved the way for later work on matrices and linear algebra by Carl Friedrich Gauss. He also introduced notions of self-similarity and the principle of continuity which foreshadowed an area of mathematics which would come to be called topology.
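    The matrix method Leibniz revived amounts to systematic elimination on an array of coefficients. A minimal modern sketch of Gaussian elimination with exact fractions (an illustration of the idea, not Leibniz's own notation):

```python
from fractions import Fraction

def solve(matrix, rhs):
    """Solve a linear system by elimination on its coefficient array (matrix)."""
    n = len(rhs)
    # Build the augmented matrix [A | b] with exact rational arithmetic.
    a = [[Fraction(x) for x in row] + [Fraction(y)]
         for row, y in zip(matrix, rhs)]
    for col in range(n):
        # Find a row with a non-zero entry in this column and swap it up.
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and a[r][col] != 0:
                factor = a[r][col] / a[col][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

# x + 2y = 5 and 3x - y = 1 give x = 1, y = 2.
solution = solve([[1, 2], [3, -1]], [5, 1])
```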

    Binary Number System


    During the 1670s, Leibniz worked on the invention of a practical calculating machine, which used the binary system and was capable of multiplying, dividing and even extracting roots, a great improvement on Pascal’s rudimentary adding machine and a true forerunner of the computer. He is usually credited with the early development of the binary number system (base 2 counting, using only the digits 0 and 1), although he himself was aware of similar ideas dating back to the I Ching of Ancient China. Because of the ability of binary to be represented by the two phases "on" and "off", it would later become the foundation of virtually all modern computer systems, and Leibniz's documentation was essential in the development process.
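    Base-2 counting can be illustrated by the usual divide-by-two procedure, reading off the remainders; this is a modern sketch rather than Leibniz's own presentation:

```python
def to_binary(n):
    """Repeatedly divide by 2, collecting remainders, to get the base-2 digits."""
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits or "0"

# 13 = 8 + 4 + 0 + 1, so its binary representation is 1101.
b = to_binary(13)
```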

    Leibniz is also often considered the most important logician between Aristotle in Ancient Greece and George Boole and Augustus De Morgan in the 19th Century. Even though he actually published nothing on formal logic in his lifetime, he enunciated in his working drafts the principal properties of what we now call conjunction, disjunction, negation, identity, set inclusion and the empty set.


    Most of the late 17th Century and a good part of the early 18th were taken up by the work of disciples of Newton and Leibniz, who applied their ideas on calculus to solving a variety of problems in physics, astronomy and engineering.

    Calculus of variations

    The period was dominated, though, by one family, the Bernoullis of Basel in Switzerland, which boasted two or three generations of exceptional mathematicians, particularly the brothers, Jacob and Johann. They were largely responsible for further developing Leibniz’s infinitesimal calculus - particularly through the generalization and extension of calculus known as the "calculus of variations" - as well as Pascal and Fermat’s probability and number theory.

    Basel was also the home town of the greatest of the 18th Century mathematicians, Leonhard Euler, although, partly due to the difficulties in getting on in a city dominated by the Bernoulli family, Euler spent most of his time abroad, in Germany and St. Petersburg, Russia. He excelled in all aspects of mathematics, from geometry to calculus to trigonometry to algebra to number theory, and was able to find unexpected links between the different fields. He proved numerous theorems, pioneered new methods, standardized mathematical notation and wrote many influential textbooks throughout his long academic life.

    In a letter to Euler in 1742, the German mathematician Christian Goldbach proposed the Goldbach Conjecture, which states that every even integer greater than 2 can be expressed as the sum of two primes (e.g. 4 = 2 + 2; 8 = 3 + 5; 14 = 3 + 11 = 7 + 7; etc) or, in another equivalent version, every integer greater than 5 can be expressed as the sum of three primes. Yet another version is the so-called “weak” Goldbach Conjecture, that all odd numbers greater than 7 are the sum of three odd primes. They remain among the oldest unsolved problems in number theory (and in all of mathematics), although the weak form of the conjecture appears to be closer to resolution than the strong one. Goldbach also proved other theorems in number theory such as the Goldbach-Euler Theorem on perfect powers.
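    The strong conjecture is easy to test by brute force for small even numbers, which is of course evidence rather than proof:

```python
def is_prime(n):
    """Trial division primality test, adequate for small numbers."""
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def goldbach_pair(n):
    """Return primes p, q with p + q = n, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 1000 should have at least one such pair.
all_checked = all(goldbach_pair(n) is not None for n in range(4, 1001, 2))
```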

    Despite Euler’s and the Bernoullis’ dominance of 18th Century mathematics, many of the other important mathematicians were from France. In the early part of the century, Abraham de Moivre is perhaps best known for de Moivre's formula, (cos x + i sin x)^n = cos(nx) + i sin(nx), which links complex numbers and trigonometry. But he also generalized Newton’s famous binomial theorem into the multinomial theorem, pioneered the development of analytic geometry, and his work on the normal distribution (he gave the first statement of the formula for the normal distribution curve) and probability theory was of great importance.
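    De Moivre's formula can be spot-checked numerically with modern complex arithmetic (a verification at one point, not a proof):

```python
import math

def de_moivre_lhs(x, n):
    """Left-hand side: (cos x + i sin x) raised to the power n."""
    return (math.cos(x) + 1j * math.sin(x)) ** n

def de_moivre_rhs(x, n):
    """Right-hand side: cos(nx) + i sin(nx)."""
    return math.cos(n * x) + 1j * math.sin(n * x)

# The two sides should agree (up to floating-point error) for any x and n.
x, n = 0.7, 5
difference = abs(de_moivre_lhs(x, n) - de_moivre_rhs(x, n))
```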

    France became even more prominent towards the end of the century, and a handful of late 18th Century French mathematicians in particular deserve mention at this point, beginning with “the three L’s”.

    Joseph Louis Lagrange collaborated with Euler in an important joint work on the calculus of variations, but he also contributed to differential equations and number theory, and he is usually credited with originating the theory of groups, which would become so important in 19th and 20th Century mathematics. His name is given to an early theorem in group theory, which states that the number of elements of every sub-group of a finite group divides evenly into the number of elements of the original finite group.

    Lagrange’s Mean Value Theorem


    Lagrange is also credited with the four-square theorem, that any natural number can be represented as the sum of four squares (e.g. 3 = 1^2 + 1^2 + 1^2 + 0^2; 31 = 5^2 + 2^2 + 1^2 + 1^2; 310 = 17^2 + 4^2 + 2^2 + 1^2; etc), as well as another theorem, confusingly also known as Lagrange’s Theorem or Lagrange’s Mean Value Theorem, which states that, given a section of a smooth continuous (differentiable) curve, there is at least one point on that section at which the derivative (or slope) of the curve is equal (or parallel) to the average (or mean) derivative of the section. Lagrange’s 1788 treatise on analytical mechanics offered the most comprehensive treatment of classical mechanics since Newton, and formed a basis for the development of mathematical physics in the 19th Century.
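    The four-square theorem is easy to verify by exhaustive search for small numbers (a check of individual cases, not Lagrange's proof):

```python
from itertools import product

def four_squares(n):
    """Brute-force search for a, b, c, d with a^2 + b^2 + c^2 + d^2 = n."""
    limit = int(n**0.5) + 1
    for a, b, c, d in product(range(limit), repeat=4):
        if a*a + b*b + c*c + d*d == n:
            return a, b, c, d
    return None

# Lagrange's theorem says a decomposition exists for every natural number.
found = all(four_squares(n) is not None for n in range(1, 100))
```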

    Pierre-Simon Laplace, sometimes referred to as “the French Newton”, was an important mathematician and astronomer, whose monumental work “Celestial Mechanics” translated the geometric study of classical mechanics to one based on calculus, opening up a much broader range of problems. Although his early work was mainly on differential equations and finite differences, he was already starting to think about the mathematical and philosophical concepts of probability and statistics in the 1770s, and he developed his own version of the so-called Bayesian interpretation of probability independently of Thomas Bayes. Laplace is well known for his belief in complete scientific determinism, and he maintained that there should be a set of scientific laws that would allow us - at least in principle - to predict everything about the universe and how it works.

    The first six Legendre polynomials (solutions to Legendre’s differential equation)


    Adrien-Marie Legendre also made important contributions to statistics, number theory, abstract algebra and mathematical analysis in the late 18th and early 19th Centuries, although much of his work (such as the least squares method for curve-fitting and linear regression, the quadratic reciprocity law, the prime number theorem and his work on elliptic functions) was only brought to perfection - or at least to general notice - by others, particularly Gauss. His “Elements of Geometry”, a re-working of Euclid’s book, became the leading geometry textbook for almost 100 years, and his extremely accurate measurement of the terrestrial meridian inspired the creation, and almost universal adoption, of the metric system of measures and weights.

    Yet another Frenchman, Gaspard Monge was the inventor of descriptive geometry, a clever method of representing three-dimensional objects by projections on the two-dimensional plane using a specific set of procedures, a technique which would later become important in the fields of engineering, architecture and design. His orthographic projection became the graphical method used in almost all modern mechanical drawing.

    After many centuries of increasingly accurate approximations, Johann Lambert, a Swiss mathematician and prominent astronomer, finally provided a rigorous proof in 1761 that π is irrational, i.e. it cannot be expressed as a simple fraction using integers only or as a terminating or repeating decimal. This definitively proved that it would never be possible to calculate it exactly, although the obsession with obtaining more and more accurate approximations continues to this day. (Over a hundred years later, in 1882, Ferdinand von Lindemann would prove that π is also transcendental, i.e. it cannot be the root of any polynomial equation with rational coefficients). Lambert was also the first to introduce hyperbolic functions into trigonometry and made some prescient conjectures regarding non-Euclidean space and the properties of hyperbolic triangles.


    Unusually in the history of mathematics, a single family, the Bernoullis, produced half a dozen outstanding mathematicians over a couple of generations at the end of the 17th and start of the 18th Century.

    The Bernoulli family was a prosperous family of traders and scholars from the free city of Basel in Switzerland, which at that time was the great commercial hub of central Europe. The brothers, Jacob and Johann Bernoulli, however, flouted their father's wishes for them to take over the family spice business or to enter respectable professions like medicine or the ministry, and began studying mathematics together.

    After Johann graduated from Basel University, the two developed a rather jealous and competitive relationship. Johann in particular was jealous of the elder Jacob's position as professor at Basel University, and the two often attempted to outdo each other. After Jacob's early death from tuberculosis, Johann took over his brother's position, one of his young students being the great Swiss mathematician Leonhard Euler. However, Johann merely shifted his jealousy toward his own talented son, Daniel (at one point, Johann published a book based on Daniel's work, even changing the date to make it look as though his book had been published before his son's).

    Johann received a taste of his own medicine, though, when his student Guillaume de l'Hôpital published a book in his own name consisting almost entirely of Johann's lectures, including his now famous rule for evaluating limits of the indeterminate form 0 ÷ 0 (a problem which had dogged mathematicians since Brahmagupta's initial work on the rules for dealing with zero back in the 7th Century). This showed that a limit of the form 0 ÷ 0 does not automatically equal zero, does not equal 1, does not equal infinity, and is not even undefined, but is "indeterminate" (meaning it could work out to any number). The rule is still usually known as l'Hôpital's Rule, and not Bernoulli's Rule.
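    A modern numerical illustration of the rule: sin(x)/x takes the form 0 ÷ 0 at x = 0, yet l'Hôpital's Rule (differentiating numerator and denominator) gives the definite limit cos(0)/1 = 1, which the samples confirm:

```python
import math

# sin(x)/x is 0/0 at x = 0, but the ratio approaches a definite value.
samples = [math.sin(x) / x for x in (0.1, 0.01, 0.001, 1e-6)]

# l'Hopital's Rule: differentiate top and bottom, then evaluate at 0.
limit_by_rule = math.cos(0.0) / 1.0
```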

    Despite their competitive and combative personal relationship, though, the brothers both had a clear aptitude for mathematics at a high level, and constantly challenged and inspired each other. They established an early correspondence with Gottfried Leibniz, and were among the first mathematicians to not only study and understand infinitesimal calculus but to apply it to various problems. They became instrumental in disseminating the newly-discovered knowledge of calculus, and helping to make it the cornerstone of mathematics it has become today.

    The Bernoullis first derived the brachistochrone curve, using their calculus of variations method


    But they were more than just disciples of Leibniz, and they also made their own important contributions. One well known and topical problem of the day to which they applied themselves was that of designing a sloping ramp which would allow a ball to roll from the top to the bottom in the fastest possible time. Johann Bernoulli demonstrated through calculus that neither a straight ramp nor a curved ramp with a very steep initial slope was optimal, but that a less steep curved ramp known as a brachistochrone curve (a kind of upside-down cycloid, similar to the path followed by a point on a moving bicycle wheel) is the curve of fastest descent.

    This application was an example of the “calculus of variations”, a generalization of infinitesimal calculus that the Bernoulli brothers developed together, and has since proved useful in fields as diverse as engineering, financial investment, architecture and construction, and even space travel. Johann also derived the equation for a catenary curve, such as that formed by a chain hanging between two posts, a problem presented to him by his brother Jacob.

    Bernoulli Numbers


    Jacob Bernoulli’s book “The Art of Conjecture”, published posthumously in 1713, consolidated existing knowledge on probability theory and expected values, as well as adding personal contributions, such as his theory of permutations and combinations, Bernoulli trials and Bernoulli distribution, and some important elements of number theory, such as the Bernoulli Numbers sequence. He also published papers on transcendental curves, and became the first person to develop the technique for solving separable differential equations (the set of non-linear, but solvable, differential equations is now named after him). He invented polar coordinates (a method of describing the location of points in space using angles and distances) and was the first to use the word “integral” to refer to the area under a curve.
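    The Bernoulli Numbers themselves can be generated by the Akiyama-Tanigawa algorithm, a much later method than anything in "The Art of Conjecture", shown here purely as an illustration (using the convention B1 = +1/2):

```python
from fractions import Fraction

def bernoulli(n):
    """Akiyama-Tanigawa algorithm for the nth Bernoulli number (B1 = +1/2)."""
    a = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        a[m] = Fraction(1, m + 1)
        for j in range(m, 0, -1):
            a[j - 1] = j * (a[j - 1] - a[j])
    return a[0]

# The sequence begins 1, 1/2, 1/6, 0, -1/30, 0, ...
first_six = [bernoulli(n) for n in range(6)]
```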

    Jacob Bernoulli also discovered the approximate value of the irrational number e while exploring the compound interest on loans. When compounded at 100% interest annually, $1.00 becomes $2.00 after one year; when compounded semi-annually it produces $2.25; compounded quarterly $2.44; monthly $2.61; weekly $2.69; daily $2.71; etc. If it were to be compounded continuously, the $1.00 would tend towards a value of $2.7182818... after a year, a value which became known as e. Algebraically, it is the limit of the sequence (1 + 1/1)^1, (1 + 1/2)^2, (1 + 1/3)^3, (1 + 1/4)^4, ...
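    Jacob Bernoulli's compounding experiment is easy to reproduce in code:

```python
def compound(principal, periods):
    """Value of principal after one year at 100% interest, compounded n times."""
    return principal * (1 + 1 / periods) ** periods

yearly   = compound(1.0, 1)           # $2.00
monthly  = compound(1.0, 12)          # about $2.61
daily    = compound(1.0, 365)         # about $2.71
frequent = compound(1.0, 10_000_000)  # tends towards e = 2.7182818...
```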

    Johann’s sons Nicolaus, Daniel and Johann II, and even his grandchildren Jacob II and Johann III, were all accomplished mathematicians and teachers. Daniel Bernoulli, in particular, is well known for his work on fluid mechanics (especially Bernoulli’s Principle on the inverse relationship between the speed and pressure of a fluid or gas), as much as for his work on probability and statistics.


    Leonhard Euler was one of the giants of 18th Century mathematics. Like the Bernoullis, he was born in Basel, Switzerland, and he studied for a while under Johann Bernoulli at Basel University. But, partly due to the overwhelming dominance of the Bernoulli family in Swiss mathematics, and the difficulty of finding a good position and recognition in his hometown, he spent most of his academic life in Russia and Germany, especially in the burgeoning St. Petersburg of Peter the Great and Catherine the Great.

    Despite a long life and thirteen children, Euler had more than his fair share of tragedies and deaths, and even his blindness later in life did not slow his prodigious output - his collected works comprise nearly 900 books and, in the year 1775, he is said to have produced on average one mathematical paper every week - as he compensated for it with his mental calculation skills and photographic memory (for example, he could repeat the Aeneid of Virgil from beginning to end without hesitation, and for every page in the edition he could indicate which line was the first and which the last).

    Today, Euler is considered one of the greatest mathematicians of all time. His interests covered almost all aspects of mathematics, from geometry to calculus to trigonometry to algebra to number theory, as well as optics, astronomy, cartography, mechanics, weights and measures and even the theory of music.

    Mathematical notation created or popularized by Euler


    Much of the notation used by mathematicians today - including e, i, f(x), and the use of a, b and c as constants and x, y and z as unknowns - was either created, popularized or standardized by Euler. His efforts to standardize these and other symbols (including π and the trigonometric functions) helped to internationalize mathematics and to encourage collaboration on problems.

    He even managed to combine several of these together in an amazing feat of mathematical alchemy to produce one of the most beautiful of all mathematical equations, e^(iπ) = -1, sometimes known as Euler’s Identity. This equation combines arithmetic, calculus, trigonometry and complex analysis into what has been called "the most remarkable formula in mathematics", "uncanny and sublime" and "filled with cosmic beauty", among other descriptions. Another such discovery, often known simply as Euler’s Formula, is e^(ix) = cos x + i sin x. In fact, in a recent poll of mathematicians, three of the top five most beautiful formulae of all time were Euler’s. He seemed to have an instinctive ability to demonstrate the deep relationships between trigonometry, exponentials and complex numbers.
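    Both formulae can be verified numerically with modern complex arithmetic:

```python
import cmath
import math

# Euler's Formula: e^(ix) = cos x + i sin x, here checked at x = 1.
x = 1.0
formula_gap = abs(cmath.exp(1j * x) - (math.cos(x) + 1j * math.sin(x)))

# Euler's Identity is the special case x = pi: e^(i*pi) = -1.
identity_gap = abs(cmath.exp(1j * math.pi) - (-1))
```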

    The discovery that initially sealed Euler’s reputation was announced in 1735 and concerned the calculation of infinite sums. It was called the Basel problem after the Bernoullis had tried and failed to solve it, and asked what was the precise sum of the reciprocals of the squares of all the natural numbers to infinity, i.e. 1/1^2 + 1/2^2 + 1/3^2 + 1/4^2 ... (a zeta function using a zeta constant of 2). Euler’s friend Daniel Bernoulli had estimated the sum to be about 1 3/5, but Euler’s superior method yielded the exact but rather unexpected result of π^2/6. He also showed that the infinite series was equivalent to an infinite product of prime numbers, an identity which would later inspire Riemann’s investigation of complex zeta functions.
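    The convergence of the Basel sum towards π^2/6 can be observed directly by computing partial sums (a numerical illustration, not Euler's derivation):

```python
import math

def basel_partial_sum(n):
    """Sum of reciprocal squares 1/1^2 + 1/2^2 + ... + 1/n^2."""
    return sum(1 / k**2 for k in range(1, n + 1))

# One million terms already agree with pi^2/6 to about six decimal places.
approx = basel_partial_sum(1_000_000)
exact = math.pi**2 / 6
```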

    The Seven Bridges of Königsberg Problem


    Also in 1735, Euler solved an intransigent mathematical and logical problem, known as the Seven Bridges of Königsberg Problem, which had perplexed scholars for many years, and in doing so laid the foundations of graph theory and presaged the important mathematical idea of topology. The city of Königsberg in Prussia (modern-day Kaliningrad in Russia) was set on both sides of the Pregel River, and included two large islands which were connected to each other and the mainland by seven bridges. The problem was to find a route through the city that would cross each bridge once and only once.

    In fact, Euler proved that the problem has no solution, but in doing so he made the important conceptual leap of pointing out that the choice of route within each landmass is irrelevant and the only important feature is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms, replacing each land mass with an abstract node and each bridge with an abstract connection. This resulted in a mathematical structure called a “graph”, a pictorial representation made up of points (vertices) connected by non-intersecting curves (arcs), which may be distorted in any way without changing the graph itself. In this way, Euler was able to deduce that, because the four land masses in the original problem are touched by an odd number of bridges, the existence of a walk traversing each bridge once only inevitably leads to a contradiction. If Königsberg had had one fewer bridge, on the other hand, with an even number of bridges leading to each piece of land, then a solution would have been possible.
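    Euler's degree-parity argument is easy to express in code. The sketch below (with hypothetical labels N and S for the river banks and A and B for the islands) counts the vertices touched by an odd number of bridges; a walk crossing every bridge exactly once requires zero or two such vertices (assuming the graph is connected):

```python
def euler_walk_possible(edges):
    """A walk traversing every edge exactly once exists iff 0 or 2 vertices
    have odd degree (for a connected graph)."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    odd_vertices = sum(1 for d in degree.values() if d % 2 == 1)
    return odd_vertices in (0, 2)

# The seven bridges of Konigsberg: all four land masses have odd degree.
bridges = [("N", "A"), ("N", "A"), ("N", "B"),
           ("S", "A"), ("S", "A"), ("S", "B"), ("A", "B")]
konigsberg_solvable = euler_walk_possible(bridges)
```

Removing the island-to-island bridge, for example, leaves only two odd-degree vertices, and a walk becomes possible.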

    The Euler Characteristic


    The list of theorems and methods pioneered by Euler is immense, and largely outside the scope of an entry-level study such as this, but mention could be made of just some of them:

  • the demonstration of geometrical properties such as Euler’s Line and Euler’s Circle;
  • the definition of the Euler Characteristic χ (chi) for the surfaces of polyhedra, whereby the number of vertices minus the number of edges plus the number of faces always equals 2 (see table at right);
  • a new method for solving quartic equations;
  • the Prime Number Theorem, which describes the asymptotic distribution of the prime numbers;
  • proofs (and in some cases disproofs) of some of Fermat’s theorems and conjectures;
  • the discovery of over 60 amicable numbers (pairs of numbers for which the sum of the divisors of one number equals the other number), although some were actually incorrect;
  • a method of calculating integrals with complex limits (foreshadowing the development of modern complex analysis);
  • the calculus of variations, including its best-known result, the Euler-Lagrange equation;
  • a proof of the infinitude of primes, using the divergence of the harmonic series;
  • the integration of Leibniz's differential calculus with Newton's Method of Fluxions into a form of calculus we would recognize today, as well as the development of tools to make it easier to apply calculus to real physical problems;
  • etc, etc.
    In 1766, Euler accepted an invitation from Catherine the Great to return to the St. Petersburg Academy, and spent the rest of his life in Russia. However, his second stay in the country was marred by tragedy, including a fire in 1771 which cost him his home (and almost his life), and the loss in 1773 of his dear wife of 40 years, Katharina. He later married Katharina's half-sister, Salome Abigail, and this marriage would last until his death from a brain hemorrhage in 1783.
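    The Euler Characteristic for the surfaces of polyhedra can be checked directly by counting vertices, edges and faces:

```python
# (vertices, edges, faces) for some simple convex polyhedra.
polyhedra = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
}

def euler_characteristic(v, e, f):
    """Vertices minus edges plus faces."""
    return v - e + f

# For every convex polyhedron the result is always 2.
characteristics = {name: euler_characteristic(*vef)
                   for name, vef in polyhedra.items()}
```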


    The 19th Century saw an unprecedented increase in the breadth and complexity of mathematical concepts. Both France and Germany were caught up in the age of revolution which swept Europe in the late 18th Century, but the two countries treated mathematics quite differently.

    Approximation of a periodic function by the Fourier Series

    After the French Revolution, Napoleon emphasized the practical usefulness of mathematics, and his reforms and military ambitions gave French mathematics a big boost, as exemplified by “the three L’s”, Lagrange, Laplace and Legendre (see the section on 18th Century Mathematics), as well as by Fourier and Galois.

    Joseph Fourier's study, at the beginning of the 19th Century, of infinite sums in which the terms are trigonometric functions was another important advance in mathematical analysis. Periodic functions that can be expressed as the sum of an infinite series of sines and cosines are known today as Fourier Series, and they are still powerful tools in pure and applied mathematics. Fourier (following Leibniz, Euler, Lagrange and others) also contributed towards defining exactly what is meant by a function, although the definition that is found in texts today - defining it in terms of a correspondence between elements of the domain and the range - is usually attributed to the 19th Century German mathematician Peter Dirichlet.
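    A partial Fourier Series is easy to sketch in code. The example below is a standard textbook illustration (not tied to any particular source): the square wave of amplitude 1 has the series (4/π) Σ sin((2k+1)x)/(2k+1), and taking more terms gives a better approximation:

```python
import math

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier series of a square wave of amplitude 1:
    f(x) = (4/pi) * sum_{k=0}^{n_terms-1} sin((2k+1)x) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# The true square wave equals 1 on (0, pi); the partial sums converge to it.
x = math.pi / 2
for n in (1, 10, 100):
    print(n, square_wave_partial_sum(x, n))
```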

    In 1806, Jean-Robert Argand published his paper on how complex numbers (of the form a + bi, where i is √-1) could be represented on geometric diagrams and manipulated using trigonometry and vectors. Even though the Dane Caspar Wessel had produced a very similar paper at the end of the 18th Century, and even though it was Gauss who popularized the practice, they are still known today as Argand Diagrams.

    The Frenchman Évariste Galois proved in the late 1820s that there is no general algebraic method for solving polynomial equations of any degree greater than four, going further than the Norwegian Niels Henrik Abel who had, just a few years earlier, shown the impossibility of solving quintic equations, and breaking an impasse which had existed for centuries. Galois' work also laid the groundwork for further developments such as the beginnings of the field of abstract algebra, including areas like algebraic geometry, group theory, rings, fields, modules, vector spaces and non-commutative algebra.

    Germany, on the other hand, under the influence of the great educationalist Wilhelm von Humboldt, took a rather different approach, supporting pure mathematics for its own sake, detached from the demands of the state and military. It was in this environment that the young German prodigy Carl Friedrich Gauss, sometimes called the “Prince of Mathematics”, received his education at the prestigious University of Göttingen. Some of Gauss’ ideas were a hundred years ahead of their time, and touched on many different parts of the mathematical world, including geometry, number theory, calculus, algebra and probability. He is widely regarded as one of the three greatest mathematicians of all time, along with Archimedes and Newton.

    Euclidean, hyperbolic and elliptic geometry


    Later in life, Gauss also claimed to have investigated a kind of non-Euclidean geometry using curved space but, unwilling to court controversy, he decided not to pursue or publish any of these avant-garde ideas. This left the field open for János Bolyai and Nikolai Lobachevsky (respectively, a Hungarian and a Russian) who both independently explored the potential of hyperbolic geometry and curved spaces.

    The German Bernhard Riemann worked on a different kind of non-Euclidean geometry called elliptic geometry, as well as on a generalized theory of all the different types of geometry. Riemann, however, soon took this even further, breaking away completely from all the limitations of 2 and 3 dimensional geometry, whether flat or curved, and began to think in higher dimensions. His exploration of the zeta function of a complex variable revealed an unexpected link with the distribution of prime numbers, and his famous Riemann Hypothesis, still unproven after 150 years, remains one of the world’s great unsolved mathematical mysteries and the testing ground for new generations of mathematicians.

    British mathematics also saw something of a resurgence in the early and mid-19th century. Although the roots of the computer go back to the geared calculators of Pascal and Leibniz in the 17th Century, it was Charles Babbage in 19th Century England who designed a machine that could automatically perform computations based on a program of instructions stored on cards or tape. His large "difference engine" of 1823 was designed to calculate logarithms and trigonometric functions, and was the true forerunner of the modern electronic computer. Although the machine was never actually built in his lifetime, one was constructed to his specifications almost 200 years later and worked perfectly. He also designed a much more sophisticated machine he called the "analytical engine", complete with punched cards, printer and computational abilities commensurate with modern computers.

    Another 19th Century Englishman, George Peacock, is usually credited with the invention of symbolic algebra, and the extension of the scope of algebra beyond the ordinary systems of numbers. This recognition of the possible existence of non-arithmetical algebras was an important stepping stone toward future developments in abstract algebra.

    In the mid-19th Century, the British mathematician George Boole devised an algebra (now called Boolean algebra or Boolean logic), in which the only operators were AND, OR and NOT, and which could be applied to the solution of logical problems and mathematical functions. He also described a kind of binary system which used just two objects, "on" and "off" (or "true" and "false", 0 and 1, etc), in which, famously, 1 + 1 = 1. Boolean algebra was the starting point of modern mathematical logic and ultimately led to the development of computer science.
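    Boole's two-valued operators map directly onto the bitwise operations of modern programming languages; a minimal illustration (the helper names are just for this sketch):

```python
# Boolean algebra: "addition" is logical OR, "multiplication" is logical AND,
# and the only values are 0 (false) and 1 (true).
def b_or(a, b):   # Boolean "+"
    return a | b

def b_and(a, b):  # Boolean "x"
    return a & b

def b_not(a):     # Boolean NOT
    return 1 - a

print(b_or(1, 1))   # 1 -- famously, 1 + 1 = 1 in Boolean algebra
print(b_and(1, 0))  # 0
print(b_not(0))     # 1
```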

    Hamilton’s quaternion


    The concept of number and algebra was further extended by the Irish mathematician William Hamilton, who in 1843 introduced the theory of quaternions (a 4-dimensional number system, in which a quantity representing a 3-dimensional rotation can be described by just an angle and a vector). Quaternions, and their later generalization by Hermann Grassmann, provided the first example of a non-commutative algebra (i.e. one in which a x b does not always equal b x a), and showed that several different consistent algebras may be derived by choosing different sets of axioms.
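    Hamilton's multiplication rules (i² = j² = k² = ijk = -1) make the non-commutativity concrete. The small helper below is an illustrative sketch, not a standard library:

```python
def qmul(p, q):
    """Hamilton's quaternion product; p and q are (w, x, y, z) tuples
    representing the quaternion w + x*i + y*j + z*k."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

print(qmul(i, j))  # (0, 0, 0, 1)  i.e.  k
print(qmul(j, i))  # (0, 0, 0, -1) i.e. -k: i*j != j*i
```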

    The Englishman Arthur Cayley extended Hamilton's quaternions and developed the octonions. But Cayley was one of the most prolific mathematicians in history, and was a pioneer of modern group theory, matrix algebra, the theory of higher singularities, and higher dimensional geometry (anticipating the later ideas of Klein), as well as the theory of invariants.

    Throughout the 19th Century, mathematics in general became ever more complex and abstract. But it also saw a re-visiting of some older methods and an emphasis on mathematical rigour. In the first decades of the century, the Bohemian priest Bernhard Bolzano was one of the earliest mathematicians to begin instilling rigour into mathematical analysis, as well as giving the first purely analytic proof of both the fundamental theorem of algebra and the intermediate value theorem, and early consideration of sets (collections of objects defined by a common property, such as "all the numbers greater than 7" or "all right triangles", etc). When the German mathematician Karl Weierstrass discovered the theoretical existence of a continuous function having no derivative (in other words, a continuous curve possessing no tangent at any of its points), he saw the need for a rigorous “arithmetization” of calculus, from which all the basic concepts of analysis could be derived.

    Along with Riemann and, particularly, the Frenchman Augustin-Louis Cauchy, Weierstrass completely reformulated calculus in an even more rigorous fashion, leading to the development of mathematical analysis, a branch of pure mathematics largely concerned with the notion of limits (whether it be the limit of a sequence or the limit of a function) and with the theories of differentiation, integration, infinite series and analytic functions. In 1845, Cauchy also proved Cauchy's theorem, a fundamental theorem of group theory, which he discovered while examining permutation groups. Carl Jacobi also made important contributions to analysis, determinants and matrices, and especially his theory of periodic functions and elliptic functions and their relation to the elliptic theta function.

    Non-orientable surfaces with no identifiable 'inner' and 'outer' sides


    August Ferdinand Möbius is best known for his 1858 discovery of the Möbius strip, a non-orientable two-dimensional surface which has only one side when embedded in three-dimensional Euclidean space (actually another German, Johann Benedict Listing, devised the same object just a couple of months before Möbius, but it has come to bear Möbius' name). Many other concepts are also named after him, including the Möbius configuration, Möbius transformations, the Möbius transform of number theory, the Möbius function and the Möbius inversion formula. He also introduced homogeneous coordinates and discussed geometric and projective transformations.

    Felix Klein also pursued more developments in non-Euclidean geometry, including the Klein bottle, a one-sided closed surface which cannot be embedded in three-dimensional Euclidean space, only in four or more dimensions. It can be best visualized as a cylinder looped back through itself to join with its other end from the "inside". Klein’s 1872 Erlangen Program, which classified geometries by their underlying symmetry groups (or their groups of transformations), was a hugely influential synthesis of much of the mathematics of the day, and his work was very important in the later development of group theory and function theory.

    The Norwegian mathematician Marius Sophus Lie also applied algebra to the study of geometry. He largely created the theory of continuous symmetry, and applied it to the geometric theory of differential equations by means of continuous groups of transformations known as Lie groups.

    In an unusual occurrence in 1866, an unknown 16-year old Italian, Niccolò Paganini, discovered the second smallest pair of amicable numbers (1,184 and 1,210), which had been completely overlooked by some of the greatest mathematicians in history (including Euler, who had identified over 60 such numbers in the 18th Century, some of them huge).
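    Paganini's pair is easy to verify: the proper divisors of each number must sum to the other. A minimal sketch:

```python
def sum_proper_divisors(n):
    """Sum of the divisors of n, excluding n itself."""
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

# Paganini's amicable pair: each number's proper divisors sum to the other.
a, b = 1184, 1210
print(sum_proper_divisors(a))  # 1210
print(sum_proper_divisors(b))  # 1184
```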

    In the later 19th Century, Georg Cantor established the first foundations of set theory, which enabled the rigorous treatment of the notion of infinity, and which has since become the common language of nearly all mathematics. In the face of fierce resistance from most of his contemporaries and his own battle against mental illness, Cantor explored new mathematical worlds where there were many different infinities, some of which were larger than others.

    Venn diagram


    Cantor’s work on set theory was extended by another German, Richard Dedekind, who defined concepts such as similar sets and infinite sets. Dedekind also came up with the notion now called a Dedekind cut, which is now a standard definition of the real numbers. He showed that any irrational number divides the rational numbers into two classes or sets, the upper class being strictly greater than all the members of the other lower class. Thus, every location on the number line continuum contains either a rational or an irrational number, with no empty locations, gaps or discontinuities. In 1881, the Englishman John Venn introduced his “Venn diagrams”, which became useful and ubiquitous tools in set theory.

    Building on Riemann’s deep ideas on the distribution of prime numbers, the year 1896 saw two independent proofs of the asymptotic law of the distribution of prime numbers (known as the Prime Number Theorem), one by Jacques Hadamard and one by Charles de la Vallée Poussin, which showed that the number of primes occurring up to any number x is asymptotic to (or tends towards) x/log x.
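    The theorem can be checked empirically with a simple prime sieve: the ratio of the prime count π(x) to x/log x drifts slowly towards 1 as x grows. A minimal sketch:

```python
import math

def count_primes_upto(n):
    """Count primes <= n with a simple sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return sum(is_prime)

# Prime Number Theorem: pi(x) ~ x / log(x), so the ratio tends to 1 (slowly).
for x in (10**3, 10**4, 10**5):
    pi_x = count_primes_upto(x)
    print(x, pi_x, round(pi_x / (x / math.log(x)), 3))
```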

    Minkowski space-time


    Hermann Minkowski, a great friend of David Hilbert and teacher of the young Albert Einstein, developed a branch of number theory called the "geometry of numbers" late in the 19th Century as a geometrical method in multi-dimensional space for solving number theory problems, involving complex concepts such as convex sets, lattice points and vector space. Later, in 1907, it was Minkowski who realized that Einstein’s 1905 special theory of relativity could be best understood in a four-dimensional space, often referred to as Minkowski space-time.

    Gottlob Frege’s 1879 “Begriffsschrift” (roughly translated as “Concept-Script”) broke new ground in the field of logic, including a rigorous treatment of the ideas of functions and variables. In his attempt to show that mathematics grows out of logic, he devised techniques that took him far beyond the logical traditions of Aristotle (and even of George Boole). He was the first to explicitly introduce the notion of variables in logical statements, as well as the notions of quantifiers, universals and existentials. He extended Boole's "propositional logic" into a new "predicate logic" and, in so doing, set the stage for the radical advances of Giuseppe Peano, Bertrand Russell and David Hilbert in the early 20th Century.

    Henri Poincaré came to prominence in the latter part of the 19th Century with at least a partial solution to the “three body problem”, a deceptively simple problem which had stubbornly resisted resolution since the time of Newton, over two hundred years earlier. Although his solution actually proved to be erroneous, its implications led to the early intimations of what would later become known as chaos theory. In between his important work in theoretical physics, he also greatly extended the theory of mathematical topology, leaving behind a knotty problem known as the Poincaré conjecture which remained unsolved until 2002.

    Poincaré was also an engineer and a polymath, and perhaps the last of the great mathematicians to adhere to an older conception of mathematics, which championed a faith in human intuition over rigour and formalism. He is sometimes referred to as the “Last Universalist” as he was perhaps the last mathematician able to shine in almost all of the various aspects of what had become by now a huge, encyclopedic and incredibly complex subject. The 20th Century would belong to the specialists.


    Évariste Galois was a radical republican and a tragic figure in French mathematical history. He died in a duel at the young age of 20, but the work he published shortly before his death made his name in mathematical circles, and would go on to allow proofs by later mathematicians of problems which had been impossible for many centuries. It also laid the groundwork for many later developments in mathematics, particularly the beginnings of the important fields of abstract algebra and group theory.

    Despite his lacklustre performance at school (he twice failed entrance exams to the École Polytechnique), the young Galois devoured the work of Legendre and Lagrange in his spare time. At the tender age of 17, he began making fundamental discoveries in the theory of polynomial equations (equations constructed from variables and constants, using only the operations of addition, subtraction, multiplication and non-negative whole-number exponents, such as x² - 4x + 7 = 0). He effectively proved that there can be no general formula for solving quintic equations (polynomials including a term of x⁵), just as the young Norwegian Niels Henrik Abel had a few years earlier, although by a different method. But he was also able to prove the more general, and more powerful, idea that there is no general algebraic method for solving polynomial equations of any degree greater than four.

    An example of Galois’ rather undisciplined notes


    Galois achieved this general proof by looking at whether or not the “permutation group” of its roots (now known as its Galois group) had a certain structure. He was the first to use the term “group” in its modern mathematical sense of a group of permutations (foreshadowing the modern field of group theory), and his fertile approach, now known as Galois theory, was adapted by later mathematicians to many other fields of mathematics besides the theory of equations.

    Galois’ breakthrough in turn led to definitive proofs (or rather disproofs) later in the century of the so-called “Three Classical Problems”, which had been first formulated by Plato and others back in ancient Greece: the doubling of the cube and the trisection of an angle (both were proved impossible in 1837), and the squaring of the circle (also proved impossible, in 1882).

    Galois was a hot-headed political firebrand (he was arrested several times for political acts), and his political affiliations and activities as a staunch republican during the rule of Louis-Philippe continually distracted him from his mathematical work. He was killed in a duel in 1832, under rather shady circumstances, but he had spent the whole of the previous night outlining his mathematical ideas in a detailed letter to his friend Auguste Chevalier, as though convinced of his impending death.

    Ironically, his young contemporary Abel also had a promising career cut short. He died in poverty of tuberculosis at the age of just 26, although his legacy lives on in the term “abelian” (usually written with a small "a"), which has since become commonplace in discussing concepts such as the abelian group, abelian category and abelian variety.


    Carl Friedrich Gauss is sometimes referred to as the "Prince of Mathematicians" and the "greatest mathematician since antiquity". He has had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians.

    Gauss was a child prodigy. There are many anecdotes concerning his precocity as a child, and he made his first ground-breaking mathematical discoveries while still a teenager.

    At just three years old, he corrected an error in his father’s payroll calculations, and he was looking after his father’s accounts on a regular basis by the age of 5. At the age of 7, he is reported to have amazed his teachers by summing the integers from 1 to 100 almost instantly (having quickly spotted that the sum was actually 50 pairs of numbers, with each pair summing to 101, for a total of 5,050). By the age of 12, he was already attending gymnasium and criticizing Euclid’s geometry.
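    The pairing trick is simple enough to verify directly:

```python
# Gauss's trick: pair 1..100 as (1,100), (2,99), ..., (50,51) -- 50 pairs,
# each summing to 101.
n = 100
pair_sum = n + 1        # each pair sums to 101
num_pairs = n // 2      # 50 pairs
print(pair_sum * num_pairs)    # 5050
print(sum(range(1, n + 1)))    # 5050 -- same result by direct summation
```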

    Although his family was poor and working class, Gauss' intellectual abilities attracted the attention of the Duke of Brunswick, who sent him to the Collegium Carolinum at 15, and then to the prestigious University of Göttingen (which he attended from 1795 to 1798). It was as a teenager attending university that Gauss discovered (or independently rediscovered) several important theorems.

    Graphs of the density of prime numbers


    At 15, Gauss was the first to find any kind of a pattern in the occurrence of prime numbers, a problem which had exercised the minds of the best mathematicians since ancient times. Although the occurrence of prime numbers appeared to be almost completely random, Gauss approached the problem from a different angle by graphing the incidence of primes as the numbers increased. He noticed a rough pattern or trend: with each tenfold increase in the numbers, the primes thinned out in a regular way, the odds against any given number being prime lengthening by about 2 each time (e.g. there is a 1 in 4 chance of getting a prime in the numbers from 1 to 100, a 1 in 6 chance of a prime in the numbers from 1 to 1,000, a 1 in 8 chance from 1 to 10,000, 1 in 10 from 1 to 100,000, etc). However, he was quite aware that his method merely yielded an approximation and, as he could not definitively prove his findings, he kept them secret until much later in life.
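    Gauss's empirical observation is easy to reproduce; the sketch below (trial division suffices at this scale, and is not Gauss's own procedure) recovers the 1-in-4, 1-in-6, 1-in-8 pattern:

```python
def is_prime(n):
    """Trial division -- fine for the small ranges used here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# With each extra power of 10, primes thin out and the odds against hitting
# a prime lengthen by roughly 2.
for limit in (100, 1000, 10000):
    count = sum(is_prime(n) for n in range(limit + 1))
    print(f"up to {limit}: about 1 prime in every {round(limit / count)} numbers")
```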

    17-sided heptadecagon constructed by Gauss


    In Gauss’s annus mirabilis of 1796, at just 19 years of age, he constructed a hitherto unknown regular seventeen-sided figure using only a ruler and compass (the first major advance in this field since the time of Greek mathematics), formulated his prime number theorem on the distribution of prime numbers among the integers, and proved that every positive integer is representable as a sum of at most three triangular numbers.

    Although he made contributions in almost all fields of mathematics, number theory was always Gauss’ favourite area, and he asserted that “mathematics is the queen of the sciences, and the theory of numbers is the queen of mathematics”. An example of how Gauss revolutionized number theory can be seen in his work with complex numbers (combinations of real and imaginary numbers).

    Representation of complex numbers


    Gauss gave the first clear exposition of complex numbers and of the investigation of functions of complex variables in the early 19th Century. Although imaginary numbers involving i (the imaginary unit, equal to the square root of -1) had been used since as early as the 16th Century to solve equations that could not be solved in any other way, and despite Euler’s ground-breaking work on imaginary and complex numbers in the 18th Century, there was still no clear picture of how imaginary numbers connected with real numbers until the early 19th Century. Gauss was not the first to interpret complex numbers graphically (Jean-Robert Argand produced his Argand diagrams in 1806, and the Dane Caspar Wessel had described similar ideas even before the turn of the century), but Gauss was certainly responsible for popularizing the practice and also formally introduced the standard notation a + bi for complex numbers. As a result, the theory of complex numbers received a notable expansion, and its full potential began to be unleashed.

    At the age of just 22, he proved what is now known as the Fundamental Theorem of Algebra (although it was not really about algebra). The theorem states that every non-constant single-variable polynomial over the complex numbers has at least one root (although his initial proof was not rigorous, he improved on it later in life). What it also showed was that the field of complex numbers is algebraically "closed" (unlike the field of real numbers, where a polynomial with real coefficients can have roots which lie outside the real numbers, in the complex field).

    Then, in 1801, at 24 years of age, he published his book “Disquisitiones Arithmeticae”, which is regarded today as one of the most influential mathematics books ever written, and which laid the foundations for modern number theory. Among many other things, the book contained a clear presentation of Gauss’ method of modular arithmetic, and the first proof of the law of quadratic reciprocity (first conjectured by Euler and Legendre).

    Line of best fit by Gauss’ least squares method


    For much of his life, Gauss also retained a strong interest in theoretical astronomy, and he held the post of Director of the astronomical observatory in Göttingen for many years. When the planetoid Ceres was in the process of being identified in the first years of the 19th Century, Gauss made a prediction of its position which varied greatly from the predictions of most other astronomers of the time. But, when Ceres was finally tracked down in 1801, it was almost exactly where Gauss had predicted. Although he did not explain his methods at the time, this was one of the first applications of the least squares approximation method, usually attributed to Gauss, although also claimed by the Frenchman Legendre. Gauss claimed to have done the logarithmic calculations in his head.
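    For fitting a straight line, the least squares method has a simple closed form (minimize the sum of squared residuals). The sketch below uses made-up noisy data scattered around y = 2x + 1; the data and helper name are purely illustrative:

```python
def least_squares_line(xs, ys):
    """Fit y = a + b*x by minimizing the sum of squared residuals
    (the closed-form solution for a straight line)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Noisy observations scattered around y = 2x + 1 (illustrative data).
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = least_squares_line(xs, ys)
print(round(a, 2), round(b, 2))  # close to the true intercept 1 and slope 2
```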

    As Gauss’ fame spread, though, and he became known throughout Europe as the go-to man for complex mathematical questions, his character deteriorated and he became increasingly arrogant, bitter, dismissive and unpleasant, rather than just shy. There are many stories of the way in which Gauss had dismissed the ideas of young mathematicians or, in some cases, claimed them as his own.

    Gaussian, or normal, probability curve


    In the area of probability and statistics, Gauss introduced what is now known as Gaussian distribution, the Gaussian function and the Gaussian error curve. He showed how probability could be represented by a bell-shaped or “normal” curve, which peaks around the mean or expected value and quickly falls off towards plus/minus infinity, which is basic to descriptions of statistically distributed data.

    He also made the first systematic study of modular arithmetic - using integer division and the modulus - which now has applications in number theory, abstract algebra, computer science, cryptography, and even in visual and musical art.
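    Modular arithmetic is built into most programming languages; a brief illustration (Gauss introduced the congruence notation a ≡ b (mod n) in the Disquisitiones Arithmeticae):

```python
# Two integers are congruent mod n if they leave the same remainder on
# division by n -- "clock arithmetic".
print(17 % 12)          # 5 -- 17 o'clock is 5 o'clock
print((9 + 5) % 12)     # 2 -- five hours after 9 o'clock
print(pow(7, 128, 13))  # 3 -- fast modular exponentiation, as used in cryptography
```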

    While engaged on a rather banal surveying job for the Royal House of Hanover in the years after 1818, Gauss was also looking into the shape of the Earth, and starting to speculate on revolutionary ideas like the shape of space itself. This led him to question one of the central tenets of the whole of mathematics, Euclidean geometry, which was clearly premised on a flat, and not a curved, universe. He later claimed to have considered a non-Euclidean geometry (in which Euclid's parallel axiom, for example, does not apply), which was internally consistent and free of contradiction, as early as 1800. Unwilling to court controversy, however, Gauss decided not to pursue or publish any of his avant-garde ideas in this area, leaving the field open to Bolyai and Lobachevsky, although he is still considered by some to be a pioneer of non-Euclidean geometry.

    Gaussian curvature


    The Hanover survey work also fuelled Gauss' interest in differential geometry (a field of mathematics dealing with curves and surfaces) and what has come to be known as Gaussian curvature (an intrinsic measure of curvature, dependent only on how distances are measured on the surface, not on the way it is embedded in space). All in all, despite the rather pedestrian nature of his employment, the responsibilities of caring for his sick mother and the constant arguments with his wife Minna (who desperately wanted to move to Berlin), this was a very fruitful period of his academic life, and he published over 70 papers between 1820 and 1830.

    Gauss’ achievements were not limited to pure mathematics, however. During his surveying years, he invented the heliotrope, an instrument that uses a mirror to reflect sunlight over great distances to mark positions in a land survey. In later years, he collaborated with Wilhelm Weber on measurements of the Earth's magnetic field, and built one of the first electromagnetic telegraphs. In recognition of his contributions to the theory of electromagnetism, the cgs unit of magnetic induction is known as the gauss.


    János Bolyai was a Hungarian mathematician who spent most of his life in a little-known backwater of the Hapsburg Empire, in the wilds of the Transylvanian mountains of modern-day Romania, far from the mainstream mathematical communities of Germany, France and England. No original portrait of Bolyai survives, and the picture that appears in many encyclopedias and on a Hungarian postage stamp is known to be unauthentic.

    His father and teacher, Farkas Bolyai, was himself an accomplished mathematician and had been a student of the great German mathematician Gauss for a time, but the cantankerous Gauss refused to take on the young prodigy János as a student. So János was forced to join the army in order to earn a living and support his family, although he persevered with his mathematics in his spare time. He was also a talented linguist, speaking nine foreign languages, including Chinese and Tibetan.

    Euclid's parallel postulate


    In particular, Bolyai became obsessed with Euclid's fifth postulate (often referred to as the parallel postulate), a fundamental principle of geometry for over two millennia, which essentially states that only one line can be drawn through a given point so that the line is parallel to a given line that does not contain the point, along with its corollary that the interior angles of a triangle sum to 180° or two right angles. In fact, he became obsessed to such an extent that his father warned him that it may take up all his time and deprive him of his "health, peace of mind and happiness in life", a tragic irony given the unfolding of subsequent events.

    Bolyai, however, persisted in his quest, and eventually came to the radical conclusion that it was in fact possible to have consistent geometries that were independent of the parallel postulate. In the early 1820s, Bolyai explored what he called “imaginary geometry” (now known as hyperbolic geometry), the geometry of curved spaces on a saddle-shaped plane, where the angles of a triangle did NOT add up to 180° and apparently parallel lines were NOT actually parallel. In curved space, the shortest distance between two points a and b is actually a curve, or geodesic, and not a straight line. Thus, the angles of a triangle in hyperbolic space sum to less than 180°, and two parallel lines in hyperbolic space actually diverge from each other. In a letter to his father, Bolyai marvelled, “Out of nothing I have created a strange new universe”.

    Although it is easy to visualize a flat surface and a surface with positive curvature (e.g. a sphere, such as the Earth), it is impossible to visualize a hyperbolic surface with negative curvature, other than just over a small localized area, where it would look like a saddle or a Pringle. So the very concept of a hyperbolic surface appeared to go against all sense of reality. It certainly represented a radical departure from Euclidean geometry, and the first step along the road which would lead to Einstein’s Theory of Relativity among other applications (although it still fell well short of the multi-dimensional geometry which was to be later realized by Riemann). Between 1820 and 1823, Bolyai prepared, but did not immediately publish, a treatise on a complete system of non-Euclidean geometry.

    His work was, however, only published in 1832, and then only as a short exposition in the appendix of a textbook by his father. On reading this, Gauss clearly recognized the genius of the younger Bolyai’s ideas, but he refused to encourage the young man, and even tried to claim his ideas as his own. Further disheartened by the news that the Russian mathematician Lobachevsky had published something quite similar two years before his own paper, Bolyai became a recluse and gradually went insane. He died in obscurity in 1860. Although he only ever published the 24 pages of the appendix, Bolyai left more than 20,000 pages of mathematical manuscripts when he died (including the development of a rigorous geometric concept of complex numbers as ordered pairs of real numbers).

    Hyperbolic Bolyai-Lobachevskian geometry


    Completely independent from Bolyai, in the distant provincial Russian city of Kazan, Nikolai Ivanovich Lobachevsky had also been working, along very similar lines as Bolyai, to develop a geometry in which Euclid’s fifth postulate did not apply. His work on hyperbolic geometry was first reported in 1826 and published in 1830, although it did not have general circulation until some time later.

    This early non-Euclidean geometry is now often referred to as Lobachevskian geometry or Bolyai-Lobachevskian geometry, thus sharing the credit. Gauss’ claims to have originated, but not published, the ideas are difficult to judge in retrospect. Other much earlier claims are credited to the 11th Century Persian mathematician Omar Khayyam, and to the early 18th Century Italian priest Giovanni Saccheri, but their work was much more speculative and inconclusive in nature.

    Lobachevsky also died in poverty and obscurity, nearly blind and unable to walk. Among his other mathematical achievements, largely unknown during his lifetime, was the development of a method for approximating the roots of algebraic equations (a method now known as the Dandelin-Gräffe method, named after two other mathematicians who discovered it independently), and the definition of a function as a correspondence between two sets of real numbers (usually credited to Dirichlet, who gave the same definition independently soon after Lobachevsky).


    Bernhard Riemann was another mathematical giant hailing from northern Germany. Poor, shy, sickly and devoutly religious, the young Riemann constantly amazed his teachers and exhibited exceptional mathematical skills (such as fantastic mental calculation abilities) from an early age, but suffered from timidity and a fear of speaking in public. He was, however, given the free run of the school library by an astute teacher, where he devoured mathematical texts by Legendre and others, and gradually groomed himself into an excellent mathematician. He also continued to study the Bible intensively, and at one point even tried to prove mathematically the correctness of the Book of Genesis.

    Although he started studying philology and theology in order to become a priest and help with his family's finances, Riemann's father eventually managed to gather enough money to send him to study mathematics at the renowned University of Göttingen in 1846, where he first met, and attended the lectures of, Carl Friedrich Gauss. Indeed, he was one of the very few who benefited from the support and patronage of Gauss, and he gradually worked his way up the University's hierarchy to become a professor and, eventually, head of the mathematics department at Göttingen.

    Elliptic geometry

    Riemann developed a type of non-Euclidean geometry, different from the hyperbolic geometry of Bolyai and Lobachevsky, which has come to be known as elliptic geometry. Unlike in hyperbolic geometry, where infinitely many parallels can be drawn through a point, in elliptic geometry there is no such thing as parallel lines, and the angles of a triangle sum to more than 180° (in hyperbolic geometry, by contrast, they sum to less than 180°). He went on to develop Riemannian geometry, which unified and vastly generalized the three types of geometry, as well as the concept of a manifold or mathematical space, which generalized the ideas of curves and surfaces.
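As a rough illustration of the elliptic case, the angle sum of a triangle drawn on a sphere can be computed directly. The sketch below (illustrative Python, with the triangle chosen for convenience) takes the triangle covering one octant of the unit sphere, whose three interior angles are all right angles:

```python
import math

def tangent_at(a, b):
    """Unit tangent vector at point a (on the unit sphere) along the great-circle arc toward b."""
    dot = sum(x * y for x, y in zip(a, b))
    t = [y - dot * x for x, y in zip(a, b)]  # component of b perpendicular to a
    norm = math.sqrt(sum(x * x for x in t))
    return [x / norm for x in t]

def vertex_angle(a, b, c):
    """Angle of the spherical triangle abc at vertex a, in degrees."""
    u, v = tangent_at(a, b), tangent_at(a, c)
    return math.degrees(math.acos(sum(x * y for x, y in zip(u, v))))

# Three mutually perpendicular vertices: one octant of the sphere.
A, B, C = (1, 0, 0), (0, 1, 0), (0, 0, 1)
total = vertex_angle(A, B, C) + vertex_angle(B, A, C) + vertex_angle(C, A, B)
print(total)  # approximately 270 -- well over the Euclidean 180 degrees
```

Smaller spherical triangles have angle sums closer to (but always above) 180°; the excess is proportional to the triangle's area.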

    A turning point in his career occurred in 1854 when, at the age of 27, he gave a lecture on the foundations of geometry and outlined his vision of a mathematics of many different kinds of space, only one of which was the flat, Euclidean space which we appear to inhabit. He also introduced one-dimensional complex manifolds known as Riemann surfaces. Although it was not widely understood at the time, Riemann’s mathematics changed how we look at the world, and opened the way to higher dimensional geometry, a potential which had existed, unrealized, since the time of Descartes.

    2-D representation of Riemann’s zeta function

    With his “Riemann metric”, Riemann completely broke away from all the limitations of 2 and 3 dimensional geometry, even the geometry of curved spaces of Bolyai and Lobachevsky, and began to think in higher dimensions, extending the differential geometry of surfaces into n dimensions. His conception of multi-dimensional space (known as Riemannian space or Riemannian manifold or simply “hyperspace”) enabled the later development of general relativity, and is at the heart of much of today’s mathematics, in geometry, number theory and other branches of mathematics.

    He introduced a collection of numbers (known as a tensor) at every point in space, which would describe how much it was bent or curved. For instance, in four spatial dimensions, a collection of ten numbers is needed at each point to describe the properties of the mathematical space or manifold, no matter how distorted it may be.

    Riemann’s big breakthrough occurred while working on a function in the complex plane called the Riemann zeta function (an extension of the simpler zeta function first explored by Euler in the previous century). He realized that he could use it to build a kind of 3-dimensional landscape, and furthermore that the contours of that imaginary landscape might be able to unlock the Holy Grail of mathematics, the age-old secret of prime numbers.

    3-D representation of Riemann’s zeta function and Riemann’s Hypothesis

    Riemann noticed that, at key places, the surface of his 3-dimensional graph dipped down to height zero (known simply as “the zeroes”) and was able to show that at least the first ten zeroes inexplicably appeared to line up in a straight line through the 3-dimensional landscape of the zeta-function, known as the critical line, where the real part of the value is equal to ½.

    With a huge imaginative leap, Riemann realized that these zeroes had a completely unexpected connection with the way the prime numbers are distributed. It began to seem that they could be used to correct Gauss’ inspired guesswork regarding the number of primes as one counts higher and higher.

    The famous Riemann Hypothesis, which remains unproven, suggests that ALL the zeroes would be on the same straight line. Although he never provided a definitive proof of this hypothesis, Riemann’s work did at least show that the 15-year-old Gauss’ initial approximations of the incidence of prime numbers were perhaps more accurate than even he could have known, and that the primes were in fact distributed over the universe of numbers in a regular, balanced and beautiful way.

    The discovery of the Riemann zeta function and the relationship of its zeroes to the prime numbers brought Riemann instant fame when it was published in 1859. He too, though, died young at just 39 years of age, in 1866, and many of his loose papers were accidentally destroyed after his death, so we will never know just how close he was to proving his own hypothesis. Over 150 years later, the Riemann Hypothesis is still considered one of the fundamental questions of number theory, and indeed of all mathematics, and a prize of $1 million has been offered for the final solution.


    The British mathematician and philosopher George Boole, along with his near contemporary and countryman Augustus de Morgan, was one of the few since Leibniz to give any serious thought to logic and its mathematical implications. Unlike Leibniz, though, Boole came to see logic as principally a discipline of mathematics, rather than of philosophy.

    His extraordinary mathematical talents did not manifest themselves in early life. He received his early lessons in mathematics from his father, a tradesman with an amateur interest in mathematics and logic, but his favourite subject at school was classics. He was a quiet, serious and modest young man from a humble working class background, and largely self-taught in his mathematics (he would borrow mathematical journals from his local Mechanics Institute).

    It was only at university and afterwards that his mathematical skills began to be fully realized, although, even then, he was all but unknown in his own time, other than for a few insightful but rather abstruse papers on differential equations and the calculus of finite differences. By the age of 34, though, he was well respected enough in his field to be appointed as the first professor of mathematics of Queen's College (now University College) in Cork, Ireland.

    But it was his contributions to the algebra of logic which were later to be viewed as immensely important and influential. Boole began to see the possibilities for applying his algebra to the solution of logical problems, and he pointed out a deep analogy between the symbols of algebra and those that can be made to represent logical forms and syllogisms. In fact, his ambitions stretched to a desire to devise and develop a system of algebraic logic that would systematically define and model the function of the human brain. His novel views of logical method were due to his profound confidence in symbolic reasoning, and he speculated on what he called a “calculus of reason” during the 1840s and 1850s.

    Boolean logic

    Determined to find a way to encode logical arguments into a language that could be manipulated and solved mathematically, he came up with a type of linguistic algebra, now known as Boolean algebra. The three most basic operations of this algebra were AND, OR and NOT, which Boole saw as the only operations necessary to perform comparisons of sets of things, as well as basic mathematical functions.

    Boole’s use of symbols and connectives allowed for the simplification of logical expressions, including such important algebraic identities as: (X or Y) = (Y or X); not(not X) = X; not(X and Y) = (not X) or (not Y); etc.

    He also developed a novel approach based on a binary system, processing only two objects (“yes-no”, “true-false”, “on-off”, “zero-one”). Therefore, if “true” is represented by 1 and “false” is represented by 0, and two propositions are both true, then under Boolean algebra 1 + 1 can equal 1 (the “+” here being an alternative representation of the OR operator).
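These identities can be checked mechanically. A minimal sketch in Python, using capped addition, multiplication and complement as one standard model of Boole's two-valued algebra:

```python
from itertools import product

# One concrete model of Boole's algebra on {0, 1}:
# OR as capped addition (so 1 + 1 = 1), AND as multiplication, NOT as complement.
OR  = lambda x, y: min(x + y, 1)
AND = lambda x, y: x * y
NOT = lambda x: 1 - x

# Check the identities for every combination of truth values.
for x, y in product([0, 1], repeat=2):
    assert OR(x, y) == OR(y, x)                  # (X or Y) = (Y or X)
    assert NOT(NOT(x)) == x                      # not(not X) = X
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))  # not(X and Y) = (not X) or (not Y)

print(OR(1, 1))  # 1 -- in Boolean algebra, 1 + 1 equals 1
```

Because the domain has only two values, exhaustively checking all combinations constitutes a complete proof of each identity.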

    Despite the standing he had won in the academic community by that time, Boole’s revolutionary ideas were largely criticized or just ignored, until the American logician Charles Sanders Peirce (among others) explained and elaborated on them some years after Boole’s death in 1864.

    Almost seventy years later, Claude Shannon made a major breakthrough in realizing that Boole's work could form the basis of mechanisms and processes in the real world, and particularly that electromechanical relay circuits could be used to solve Boolean algebra problems. The use of electrical switches to process logic is the basic concept that underlies all modern electronic digital computers, and so Boole is regarded in hindsight as a founder of the field of computer science, and his work led to the development of applications he could never have imagined.


    Cantor’s procedure of bijection or one-to-one correspondence to compare infinite sets

    Cantor's starting point was to say that, if it was possible to add 1 and 1, or 25 and 25, etc, then it ought to be possible to add infinity and infinity. He realized that it was actually possible to add and subtract infinities, and that beyond what was normally thought of as infinity existed another, larger infinity, and then other infinities beyond that. In fact, he showed that there may be infinitely many sets of infinite numbers - an infinity of infinities - some bigger than others, a concept which clearly has philosophical, as well as just mathematical, significance. The sheer audacity of Cantor’s theory set off a quiet revolution in the mathematical community, and changed forever the way mathematics is approached.

    His first intimations of all this came in the early 1870s when he considered an infinite series of natural numbers (1, 2, 3, 4, 5, ...), and then an infinite series of multiples of ten (10, 20 , 30, 40, 50, ...). He realized that, even though the multiples of ten were clearly a subset of the natural numbers, the two series could be paired up on a one-to-one basis (1 with 10, 2 with 20, 3 with 30, etc) - a process known as bijection - to show that they were the same “sizes” of infinite sets, in that they had the same number of elements.

    This clearly also applies to other subsets of the natural numbers, such as the even numbers 2, 4, 6, 8, 10, etc, or the squares 1, 4, 9, 16, 25, etc, and even to the set of negative numbers and integers. In fact, Cantor realized that he could, in the same way, even pair up all the fractions (or rational numbers) with all the whole numbers, thus showing that rational numbers were also the same sort of infinity as the natural numbers, despite the intuitive feeling that there must be more fractions than whole numbers.
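The pairing itself is easy to sketch. The following illustrative Python uses lazy infinite sequences so that each natural number is matched with exactly one multiple of ten:

```python
from itertools import count, islice

naturals = count(1)                    # 1, 2, 3, 4, ...
tens     = (10 * n for n in count(1))  # 10, 20, 30, 40, ...

# zip pairs the two endless sequences term by term: every natural number is
# matched with exactly one multiple of ten, and nothing in either set is left over.
first_pairs = list(islice(zip(naturals, tens), 5))
print(first_pairs)  # [(1, 10), (2, 20), (3, 30), (4, 40), (5, 50)]
```

The same one-line pairing `n ↔ 10n` works for every natural number, which is exactly Cantor's criterion for the two sets having the same cardinality.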

    Cantor’s diagonal argument for the existence of uncountable sets

    However, when Cantor considered an infinite series of decimal numbers, which includes irrational numbers like π, e and √2, this method broke down. He used several clever arguments (one being the "diagonal argument" explained in the box on the right) to show how it was always possible to construct a new decimal number that was missing from the original list, and so proved that the infinity of decimal numbers (or, technically, real numbers) was in fact bigger than the infinity of natural numbers.
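The diagonal construction is mechanical. As a sketch, the Python below builds, for any finite listing of digit strings, a string that differs from the n-th entry in its n-th digit (the replacement digits 5 and 6 are an arbitrary choice, which also sidesteps the 0.4999... = 0.5000... subtlety, and the sample listing is made up for illustration):

```python
def diagonal_missing(decimals):
    """Build a digit string that differs from the n-th listed expansion
    in its n-th digit, so it cannot appear anywhere in the list."""
    new = []
    for n, d in enumerate(decimals):
        new.append('5' if d[n] != '5' else '6')  # any digit other than d[n]
    return ''.join(new)

listing = ["1415926", "7182818", "4142135", "5775156", "3333333", "1234567", "9999999"]
missing = diagonal_missing(listing)
print(missing)  # differs from listing[n] in position n, for every n
for n, d in enumerate(listing):
    assert missing[n] != d[n]
```

Applied to any alleged complete list of real numbers, the same recipe always produces a number the list missed, which is the contradiction at the heart of Cantor's proof.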

    He also showed that they were “non-denumerable” or "uncountable" (i.e. contained more elements than could ever be counted), as opposed to the set of rational numbers which he had shown were technically (even if not practically) “denumerable” or "countable". In fact, it can be argued that there are an infinite number of irrational numbers in between each and every rational number. The patternless decimals of irrational numbers fill the "spaces" between the patterns of the rational numbers.

    Cantor coined the new word “transfinite” in an attempt to distinguish these various levels of infinite numbers from an absolute infinity, which the religious Cantor effectively equated with God (he saw no contradiction between his mathematics and the traditional concept of God). Although the cardinality (or size) of a finite set is just a natural number indicating the number of elements in the set, he also needed a new notation to describe the sizes of infinite sets, and he used the Hebrew letter aleph (ℵ). He defined ℵ0 (aleph-null or aleph-nought) as the cardinality of the countably infinite set of natural numbers, ℵ1 (aleph-one) as the next larger cardinality, that of the uncountable set of ordinal numbers, and so on. Because of the unique properties of infinite sets, he showed that ℵ0 + ℵ0 = ℵ0, and also that ℵ0 × ℵ0 = ℵ0.
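The identity ℵ0 + ℵ0 = ℵ0 amounts to saying that the union of two countable sets can itself be counted. A small Python sketch of the standard interleaving argument, using the evens and the odds as the two sets:

```python
from itertools import count, islice

evens = (2 * n for n in count(1))      # 2, 4, 6, ...  (one countable set)
odds  = (2 * n - 1 for n in count(1))  # 1, 3, 5, ...  (another countable set)

def interleave(a, b):
    """Alternate between two endless sequences, counting their union."""
    while True:
        yield next(a)
        yield next(b)

# The n-th natural number indexes one element of the combined set, so the
# union of two countably infinite sets is still countably infinite.
merged = list(islice(interleave(evens, odds), 8))
print(merged)  # [2, 1, 4, 3, 6, 5, 8, 7]
```

The analogous argument for ℵ0 × ℵ0 = ℵ0 counts the infinite grid of pairs along successive diagonals rather than by alternation.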

    All of this represented a revolutionary step, and opened up new possibilities in mathematics. However, it also opened up the possibility of other infinities, for instance an infinity - or even many infinities - between the infinity of the whole numbers and the larger infinity of the decimal numbers. This idea is known as the continuum hypothesis, and Cantor believed (but could not actually prove) that there was NO such intermediate infinite set. The continuum hypothesis was one of the 23 important open problems identified by David Hilbert in his famous 1900 Paris lecture, and it remained unresolved - and indeed appeared to be unprovable - for over half a century, until the work of Gödel in the 1940s and Cohen in the 1960s showed that it can neither be proved nor disproved from the standard axioms of set theory.

    Basic set theory notation

    Just as importantly, though, this work of Cantor's between 1874 and 1884 marks the real origin of set theory, which has since become a fundamental part of modern mathematics, and its basic concepts are used throughout all the various branches of mathematics. Although the concept of a set had been used implicitly since the beginnings of mathematics, dating back to the ideas of Aristotle, this was limited to everyday finite sets. In contradistinction, the “infinite” was kept quite separate, and was largely considered a topic for philosophical, rather than mathematical, discussion. Cantor, however, showed that, just as there were different finite sets, there could be infinite sets of different sizes, some of which are countable and some of which are uncountable.

    Throughout the 1880s and 1890s, he refined his set theory, defining well-ordered sets and power sets and introducing the concepts of ordinality and cardinality and the arithmetic of infinite sets. What is now known as Cantor's theorem states generally that, for any set A, the power set of A (i.e. the set of all subsets of A) has a strictly greater cardinality than A itself. More specifically, the power set of a countably infinite set is uncountably infinite.
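For finite sets the theorem can at least be spot-checked, since the power set of an n-element set always has 2ⁿ elements. A short illustrative Python sketch:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

A = {1, 2, 3}
P = power_set(A)
print(len(P))  # 8, i.e. 2**3 -- strictly greater than |A| = 3
```

Cantor's achievement was to show, by a diagonal-style argument, that this strict inequality survives the passage to infinite sets, which is what generates his endless tower of ever-larger infinities.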

    Despite the central position of set theory in modern mathematics, it was often deeply mistrusted and misunderstood by other mathematicians of the day. One quote, usually attributed to Henri Poincaré, claimed that "later generations will regard Mengenlehre (set theory) as a disease from which one has recovered". Others, however, were quick to see the value and potential of the method, and David Hilbert declared in 1926 that "no one shall expel us from the Paradise that Cantor has created".

    Cantor had few other mathematicians with whom he could discuss his ground-breaking work, and most were distinctly unnerved by his contemplation of the infinite. During the 1880s, he encountered resistance, sometimes fierce resistance, from mathematical contemporaries such as his old professor Leopold Kronecker and Henri Poincaré, as well as from philosophers like Ludwig Wittgenstein and even from some Christian theologians, who saw Cantor's work as a challenge to their view of the nature of God. Cantor himself, a deeply religious man, noted some annoying paradoxes thrown up by his own work, but some went further and saw it as the wilful destruction of the comprehensible and logical base on which the whole of mathematics was based.

    As he aged, Cantor suffered from more and more recurrences of mental illness, which some have directly linked to his constant contemplation of such complex, abstract and paradoxical concepts. In the last decades of his life, he did no mathematical work at all, but wrote extensively on his two obsessions: that Shakespeare’s plays were actually written by the English philosopher Sir Francis Bacon, and that Christ was the natural son of Joseph of Arimathea. He spent long periods in the Halle sanatorium recovering from attacks of manic depression and paranoia, and it was there, alone in his room, that he finally died in 1918, his great project still unfinished.


    Paris was a great centre for world mathematics towards the end of the 19th Century, and Henri Poincaré was one of its leading lights in almost all fields - geometry, algebra, analysis - for which he is sometimes called the “Last Universalist”.

    Even as a youth at the Lycée in Nancy, he showed himself to be a polymath, and he proved to be one of the top students in every topic he studied. He continued to excel after he entered the École Polytechnique to study mathematics in 1873, and, for his doctoral thesis, he devised a new way of studying the properties of differential equations. Beginning in 1881, he taught at the Sorbonne in Paris, where he would spend the rest of his illustrious career. He was elected to the French Academy of Sciences at the young age of 32, became its president in 1906, and was elected to the Académie française in 1909.

    Poincaré deliberately cultivated a work habit that has been compared to a bee flying from flower to flower. He observed a strict work regime of two hours of work in the morning and two hours in the early evening, with the intervening time left for his subconscious to carry on working on the problem in the hope of a flash of inspiration. He was a great believer in intuition, and claimed that "it is by logic that we prove, but by intuition that we discover".

    It was one such flash of inspiration that earned Poincaré a generous prize from the King of Sweden in 1887 for his partial solution to the “three-body problem”, a problem that had defeated mathematicians of the stature of Euler, Lagrange and Laplace. Newton had long ago proved that the paths of two planets orbiting around each other would remain stable, but even the addition of just one more orbiting body to this already simplified solar system resulted in the involvement of as many as 18 different variables (such as position and velocity in each direction), making it mathematically too complex to prove or disprove the stability of the orbits. Poincaré’s solution to the “three-body problem”, using a series of approximations of the orbits, although admittedly only a partial solution, was sophisticated enough to win him the prize.

    Computer representation of the paths generated by Poincaré’s analysis of the three body problem

    But he soon realized that he had actually made a mistake, and that his simplifications did not indicate a stable orbit after all. In fact, he realized that even a very small change in his initial conditions would lead to vastly different orbits. This serendipitous discovery, born from a mistake, led indirectly to what we now know as chaos theory, a burgeoning field of mathematics most familiar to the general public from the common example of the flap of a butterfly’s wings leading to a tornado on the other side of the world. It was the first indication that three is the minimum threshold for chaotic behaviour.
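The phenomenon Poincaré stumbled upon - tiny changes in initial conditions producing wildly different outcomes - is easiest to demonstrate not with the three-body problem itself but with a much simpler standard example of chaos, the logistic map, sketched here in illustrative Python:

```python
# The logistic map x -> 4x(1 - x), iterated from two starting points that
# differ only in the tenth decimal place.
def orbit(x, steps):
    xs = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.2, 50)
b = orbit(0.2 + 1e-10, 50)

print(abs(a[1] - b[1]))   # still tiny after one step
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # later, the orbits bear no resemblance
```

The initial discrepancy roughly doubles at each step, so after a few dozen iterations it has grown from one part in ten billion to the full size of the interval - the same qualitative behaviour that wrecked Poincaré's orbital predictions.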

    Paradoxically, owning up to his mistake only served to enhance Poincaré’s reputation, if anything, and he continued to produce a wide range of work throughout his life, as well as several popular books extolling the importance of mathematics.

    Poincaré also developed the science of topology, which Leonhard Euler had heralded with his solution to the famous Seven Bridges of Königsberg problem. Topology is a kind of geometry concerned with the properties of a space that are preserved under continuous deformation. It is sometimes referred to as “bendy geometry” or “rubber sheet geometry” because, in topology, two shapes are the same if one can be bent or morphed into the other without cutting or gluing. For example, a banana and a football are topologically equivalent, as are a donut (with its hole in the middle) and a teacup (with its handle); but a football and a donut are topologically different because there is no way to morph one into the other. In the same way, a traditional pretzel, with its two holes, is topologically different from all of these examples.

    A 2-dimensional representation of the 3-dimensional problem in the Poincaré conjecture

    In the late 19th Century, Poincaré described all the possible 2-dimensional topological surfaces but, faced with the challenge of describing the shape of our 3-dimensional universe, he came up with the famous Poincaré conjecture, which became one of the most important open questions in mathematics for almost a century. The conjecture looks at a space that, locally, looks like ordinary 3-dimensional space but is connected, finite in size and lacks any boundary (technically known as a closed 3-manifold or 3-sphere). It asserts that, if a loop in that space can be continuously tightened to a point, in the same way as a loop drawn on a 2-dimensional sphere can, then the space is just a three-dimensional sphere. The problem remained unsolved until 2002, when an extremely complex solution was provided by the eccentric and reclusive Russian mathematician Grigori Perelman, involving the ways in which 3-dimensional shapes can be “wrapped up” in higher dimensions.

    Poincaré’s work in theoretical physics was also of great significance, and his symmetrical presentation of the Lorentz transformations in 1905 was an important and necessary step in the formulation of Einstein’s theory of special relativity (some even hold that Poincaré and Lorentz were the true discoverers of relativity). He also made important contributions in a whole host of other areas of physics including fluid mechanics, optics, electricity, telegraphy, capillarity, elasticity, thermodynamics, potential theory, quantum theory and cosmology.


    The 20th Century continued the trend of the 19th towards increasing generalization and abstraction in mathematics, in which the notion of axioms as “self-evident truths” was largely discarded in favour of an emphasis on such logical concepts as consistency and completeness.

    Fields of Mathematics

    It also saw mathematics become a major profession, involving thousands of new Ph.D.s each year and jobs in both teaching and industry, and the development of hundreds of specialized areas and fields of study, such as group theory, knot theory, sheaf theory, topology, graph theory, functional analysis, singularity theory, catastrophe theory, chaos theory, model theory, category theory, game theory, complexity theory and many more.

    The eccentric British mathematician G.H. Hardy and his young Indian protégé Srinivasa Ramanujan, were just two of the great mathematicians of the early 20th Century who applied themselves in earnest to solving problems of the previous century, such as the Riemann hypothesis. Although they came close, they too were defeated by that most intractable of problems, but Hardy is credited with reforming British mathematics, which had sunk to something of a low ebb at that time, and Ramanujan proved himself to be one of the most brilliant (if somewhat undisciplined and unstable) minds of the century.

    Others followed techniques dating back millennia but taken to a 20th Century level of complexity. In 1904, Johann Gustav Hermes completed his construction of a regular polygon with 65,537 sides (2¹⁶ + 1), using just a compass and straight edge as Euclid would have done, a feat that took him over ten years.

    The early 20th Century also saw the beginnings of the rise of the field of mathematical logic, building on the earlier advances of Gottlob Frege, which came to fruition in the hands of Giuseppe Peano, L.E.J. Brouwer, David Hilbert and, particularly, Bertrand Russell and A.N. Whitehead, whose monumental joint work the “Principia Mathematica” was so influential in mathematical and philosophical logicism.

    Part of the transcript of Hilbert’s 1900 Paris lecture, in which he set out his 23 problems

    The century began with a historic convention at the Sorbonne in Paris in the summer of 1900 which is largely remembered for a lecture by the young German mathematician David Hilbert in which he set out what he saw as the 23 greatest unsolved mathematical problems of the day. These “Hilbert problems” effectively set the agenda for 20th Century mathematics, and laid down the gauntlet for generations of mathematicians to come. Of these original 23 problems, 10 have now been solved, 7 are partially solved, and 2 (the Riemann hypothesis and the extension of the Kronecker-Weber theorem on abelian extensions to arbitrary number fields) are still open, with the remaining 4 being too loosely formulated to be stated as solved or not.

    Hilbert was himself a brilliant mathematician, responsible for several theorems and some entirely new mathematical concepts, as well as overseeing the development of what amounted to a whole new style of abstract mathematical thinking. Hilbert's approach signalled the shift to the modern axiomatic method, where axioms are not taken to be self-evident truths. He was unfailingly optimistic about the future of mathematics, famously declaring in a 1930 radio interview “We must know. We will know!”, and was a well-loved leader of the mathematical community during the first part of the century.

    However, the Austrian Kurt Gödel was soon to put some very severe constraints on what could and could not be solved, and turned mathematics on its head with his famous incompleteness theorem, which proved the unthinkable - that there could be mathematical statements which were true but which could never be proved.

    Alan Turing, perhaps best known for his war-time work in breaking the German enigma code, spent his pre-war years trying to clarify and simplify Gödel’s rather abstract proof. His methods led to some conclusions that were perhaps even more devastating than Gödel’s, including the idea that there was no way of telling beforehand which problems were provable and which unprovable. But, as a spin-off, his work also led to the development of computers and the first considerations of such concepts as artificial intelligence.

    With the gradual and wilful destruction of the mathematics community of Germany and Austria by the anti-Jewish Nazi regime in the 1930s and 1940s, the focus of world mathematics moved to America, particularly to the Institute for Advanced Study in Princeton, which attempted to reproduce the collegiate atmosphere of the old European universities in rural New Jersey. Many of the brightest European mathematicians, including Hermann Weyl, John von Neumann, Kurt Gödel and Albert Einstein, fled the Nazis to this safe haven.

    Von Neumann’s computer architecture design

    John von Neumann is considered one of the foremost mathematicians in modern history, another mathematical child prodigy who went on to make major contributions to a vast range of fields. In addition to his work in quantum theory and his role in the Manhattan Project and the development of nuclear physics and the hydrogen bomb, he is particularly remembered as a pioneer of game theory, and for his design model for a stored-program digital computer that uses a processing unit and a separate storage structure to hold both instructions and data, a general architecture that most electronic computers follow even today.

    André Weil was another refugee from the war in Europe, after narrowly avoiding death on a couple of occasions. His theorems, which allowed connections to be made between number theory, algebra, geometry and topology, are considered among the greatest achievements of modern mathematics. He was also responsible for setting up a group of French mathematicians who, under the secret nom-de-plume of Nicolas Bourbaki, wrote many influential books on the mathematics of the 20th Century.

    Perhaps the greatest heir to Weil’s legacy was Alexander Grothendieck, a charismatic and beloved figure in 20th Century French mathematics. Grothendieck was a structuralist, interested in the hidden structures beneath all mathematics, and in the 1950s he created a powerful new language which enabled mathematical structures to be seen in a new way, thus allowing new solutions in number theory, geometry, even in fundamental physics. His “theory of schemes” allowed certain of Weil's number theory conjectures to be solved, and his “theory of topoi” is highly relevant to mathematical logic. In addition, he gave an algebraic proof of the Riemann-Roch theorem, and provided an algebraic definition of the fundamental group of a curve. Although, after the 1960s, Grothendieck all but abandoned mathematics for radical politics, his achievements in algebraic geometry have fundamentally transformed the mathematical landscape, perhaps no less than those of Cantor, Gödel and Hilbert, and he is considered by some to be one of the dominant figures of the whole of 20th Century mathematics.

    Paul Erdös was another inspired but distinctly non-establishment figure of 20th Century mathematics. The immensely prolific and famously eccentric Hungarian mathematician worked with hundreds of different collaborators on problems in combinatorics, graph theory, number theory, classical analysis, approximation theory, set theory, and probability theory. As a humorous tribute, an "Erdös number" is given to mathematicians according to their collaborative proximity to him. He was also known for offering small prizes for solutions to various unresolved problems (such as the Erdös conjecture on arithmetic progressions), some of which are still active after his death.

    The Mandelbrot set, the most famous example of a fractal

    The field of complex dynamics (which is defined by the iteration of functions on complex number spaces) was developed by two Frenchmen, Pierre Fatou and Gaston Julia, early in the 20th Century. But it only really gained much attention in the 1970s and 1980s with the beautiful computer plottings of Julia sets and, particularly, of the Mandelbrot sets of yet another French mathematician, Benoît Mandelbrot. Julia and Mandelbrot fractals are closely related, and it was Mandelbrot who coined the term fractal, and who became known as the father of fractal geometry.

    The Mandelbrot set involves repeated iterations of complex quadratic polynomial equations of the form zₙ₊₁ = zₙ² + c (where z is a number in the complex plane of the form x + iy). The iterations produce a form of feedback based on recursion, in which smaller parts exhibit approximate reduced-size copies of the whole, and which are infinitely complex (so that, however much one zooms in and magnifies a part, it exhibits just as much complexity).
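    The escape-time computation behind such plots can be sketched very simply (a minimal illustration, not Mandelbrot's own code): a point c of the complex plane is taken to belong to the set if the iteration never escapes a disc of radius 2.

```python
# Escape-time test for membership of the Mandelbrot set: iterate
# z -> z**2 + c from z = 0 and count how long |z| stays within radius 2.

def mandelbrot_iterations(c, max_iter=100):
    """Iterations before |z| exceeds 2 (max_iter means 'assumed in the set')."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

print(mandelbrot_iterations(0j))       # never escapes: in the set
print(mandelbrot_iterations(1 + 0j))   # escapes quickly: outside the set
```

    The familiar coloured plots of the set simply map each pixel's escape count to a colour.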

    Paul Cohen is an example of a second-generation Jewish immigrant who followed the American dream to fame and success. His work rocked the mathematical world in the 1960s, when he proved that Cantor's continuum hypothesis about the possible sizes of infinite sets (one of Hilbert’s original 23 problems) can be neither proved nor disproved from the standard axioms of set theory, so that there are effectively two completely separate but valid mathematical worlds, one in which the continuum hypothesis is true and one in which it is not. Since this result, any mathematical proof that relies on the continuum hypothesis must state that it does so.

    Another of Hilbert’s problems was finally resolved in 1970, when the young Russian Yuri Matiyasevich proved that Hilbert’s tenth problem is unsolvable, i.e. that there is no general method for determining whether a polynomial equation has a solution in whole numbers. In arriving at his proof, Matiyasevich built on decades of work by the American mathematician Julia Robinson, in a great show of internationalism at the height of the Cold War.

    In addition to complex dynamics, another field that benefited greatly from the advent of the electronic computer, and particularly from its ability to carry out a huge number of repeated iterations of simple mathematical formulas which would be impractical to do by hand, was chaos theory. Chaos theory tells us that some systems seem to exhibit random behaviour even though they are not random at all, and conversely some systems may have roughly predictable behaviour but are fundamentally unpredictable in any detail. The possible behaviours that a chaotic system may have can also be mapped graphically, and it was discovered that these mappings, known as "strange attractors", are fractal in nature (the more you zoom in, the more detail can be seen, although the overall pattern remains the same).

    An early pioneer in modern chaos theory was Edward Lorenz, whose interest in chaos came about accidentally through his work on weather prediction. Lorenz's discovery came in 1961, when he restarted a computer weather model from a printout that rounded its numbers to three decimal places rather than the six digits he had been working with, and this tiny rounding difference produced dramatically different results. He discovered that small changes in initial conditions can produce large changes in the long-term outcome - a phenomenon he described by the term “butterfly effect” - and he demonstrated this with his Lorenz attractor, a fractal structure corresponding to the behaviour of the Lorenz oscillator (a 3-dimensional dynamical system that exhibits chaotic flow).
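    The butterfly effect is easy to reproduce (an illustrative re-creation using a crude Euler integrator and standard parameter values, not Lorenz's original model run): two trajectories of the Lorenz system whose starting points differ by one part in a million end up in very different states.

```python
# The Lorenz system dx/dt = s(y - x), dy/dt = x(r - z) - y,
# dz/dt = xy - bz, integrated with a simple Euler step to illustrate
# sensitivity to initial conditions.

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    dx = s * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=2000):
    """Integrate 2000 steps (20 time units) from (x0, 1, 1)."""
    state = (x0, 1.0, 1.0)
    for _ in range(steps):
        state = lorenz_step(*state)
    return state

end1 = trajectory(1.0)
end2 = trajectory(1.000001)   # a one-part-in-a-million change at the start
print(end1)
print(end2)                   # ends up far from the first trajectory
```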

    Example of a four-colour map

    1976 saw a proof of the four colour theorem by Kenneth Appel and Wolfgang Haken, the first major theorem to be proved using a computer. The four colour conjecture was first proposed in 1852 by Francis Guthrie (a student of Augustus De Morgan), and states that, in any given separation of a plane into contiguous regions (called a “map”), the regions can be coloured using at most four colours so that no two adjacent regions have the same colour. A proof was given by Alfred Kempe in 1879, but Percy Heawood showed it to be incorrect in 1890 in the course of proving the five colour theorem. The eventual proof that only four colours suffice turned out to be significantly harder. Appel and Haken’s solution required some 1,200 hours of computer time to examine around 1,500 configurations.

    Also in the 1970s, origami became recognized as a serious mathematical method, in some cases more powerful than Euclidean geometry. In 1936, Margherita Piazzola Beloch had shown how a length of paper could be folded to give the cube root of its length, but it was not until 1980 that an origami method was used to solve the "doubling the cube" problem which had defeated ancient Greek geometers. An origami proof of the equally intractable "trisecting the angle" problem followed in 1986. The Japanese origami expert Kazuo Haga has at least three mathematical theorems to his name, and his unconventional folding techniques have demonstrated many unexpected geometrical results.

    The British mathematician Andrew Wiles finally proved Fermat’s Last Theorem for ALL numbers in 1995, some 350 years after Fermat’s initial posing. It was an achievement Wiles had set his sights on early in life and pursued doggedly for many years. In reality, though, it was a joint effort of several steps involving many mathematicians over several years, including Goro Shimura, Yutaka Taniyama, Gerhard Frey, Jean-Pierre Serre and Ken Ribet, with Wiles providing the links and the final synthesis and, specifically, the final proof of the Taniyama-Shimura Conjecture for semi-stable elliptic curves. The proof itself is over 100 pages long.

    The most recent of the great conjectures to be proved was the Poincaré Conjecture, which was solved in 2002 (over 100 years after Poincaré first posed it) by the eccentric and reclusive Russian mathematician Grigori Perelman. However, Perelman, who lives a frugal life with his mother in a suburb of St. Petersburg, turned down the $1 million prize, claiming that "if the proof is correct then no other recognition is needed". The conjecture, now a theorem, states that, if a loop in connected, finite boundaryless 3-dimensional space can be continuously tightened to a point, in the same way as a loop drawn on a 2-dimensional sphere can, then the space is a three-dimensional sphere. Perelman provided an elegant but extremely complex solution involving the ways in which 3-dimensional shapes can be “wrapped up” in even higher dimensions. Perelman has also made landmark contributions to Riemannian geometry and geometric topology.

    John Nash, the American economist and mathematician whose battle against paranoid schizophrenia was popularized by the Hollywood movie “A Beautiful Mind”, did important work in game theory, differential geometry and partial differential equations which has provided insight into the forces that govern chance and events inside complex systems in daily life, such as in market economics, computing, artificial intelligence, accounting and military theory.

    The Englishman John Horton Conway established the rules for the so-called "Game of Life" in 1970, an early example of a "cellular automaton" in which patterns of cells evolve and grow in a grid, which became extremely popular among computer scientists. He has made important contributions to many branches of pure mathematics, such as game theory, group theory, number theory and geometry, and has also come up with some wonderful-sounding concepts like surreal numbers, the grand antiprism and monstrous moonshine, as well as mathematical games such as Sprouts, Philosopher's Football and the Soma Cube.
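    The rules of the Game of Life are simple enough to sketch in a few lines (an illustrative sparse-set implementation, one of several common representations): a live cell with two or three live neighbours survives, and a dead cell with exactly three live neighbours is born.

```python
# One generation of Conway's Game of Life, with the grid represented
# as a set of live (x, y) cells.

from collections import Counter

def step(live):
    """Advance one generation of the Game of Life."""
    # Count, for every cell, how many of its eight neighbours are live.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))          # the vertical bar {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))    # back to the original horizontal bar
```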

    Other mathematics-based recreational puzzles became even more popular among the general public, including Rubik's Cube (1974) and Sudoku (1980), both of which developed into full-blown crazes on a scale only previously seen with the 19th Century fads of Tangrams (1817) and the Fifteen puzzle (1879). In their turn, they generated attention from serious mathematicians interested in exploring the theoretical limits and underpinnings of the games.

    Computers continue to aid in the identification of phenomena such as Mersenne primes (a prime number that is one less than a power of two - see the section on 17th Century Mathematics). In 1952, an early computer known as SWAC identified 2^521 − 1 as the 13th Mersenne prime, the first new one to be found in 75 years, before going on to identify several more, even larger, ones.
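    The test behind these searches is the Lucas-Lehmer test, which machines like SWAC (and, in highly optimized form, modern searches) use to decide whether 2^p − 1 is prime: for an odd prime p, iterate s → s² − 2 modulo 2^p − 1 starting from s = 4, and the number is prime exactly when the (p − 2)th iterate is zero. A minimal sketch:

```python
# Minimal sketch of the Lucas-Lehmer primality test for Mersenne
# numbers 2^p - 1 (valid when the exponent p is itself prime).

def is_prime(n):
    """Plain trial division, adequate for small exponents."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def lucas_lehmer(p):
    """True if 2^p - 1 is prime, for prime p."""
    if p == 2:
        return True              # 2^2 - 1 = 3 is prime
    m = (1 << p) - 1             # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents p <= 127 for which 2^p - 1 is a Mersenne prime; the last,
# 2^127 - 1, was the record Lucas verified by hand in 1876.
mersenne_exponents = [p for p in range(2, 128) if is_prime(p) and lucas_lehmer(p)]
print(mersenne_exponents)   # [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127]
```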

    Approximations for π

    With the advent of the Internet in the 1990s, the Great Internet Mersenne Prime Search (GIMPS), a collaborative project of volunteers who use freely available computer software to search for Mersenne primes, has led to another leap in the discovery rate. Currently, the 13 largest Mersenne primes were all discovered in this way, and the largest (the 45th Mersenne prime number and also the largest known prime number of any kind) was discovered in 2008 and contains nearly 13 million digits. The search also continues for ever more accurate computer approximations for the irrational number π, with the current record standing at over 5 trillion decimal places.

    The P versus NP problem, introduced in 1971 by the American-Canadian Stephen Cook, is a major unsolved problem in computer science and the burgeoning field of complexity theory, and is another of the Clay Mathematics Institute's million-dollar Millennium Prize problems. At its simplest, it asks whether every problem whose solution can be efficiently checked by a computer can also be efficiently solved by a computer (or, put another way, whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure). In the same 1971 paper, Cook proved that Boolean satisfiability is NP-complete (a result now known as Cook's Theorem or the Cook-Levin Theorem), but the P versus NP question itself has eluded mathematicians and computer scientists for 40 years. A claimed solution by Vinay Deolalikar in 2010, purporting to prove that P is not equal to NP (and thus that such insoluble-but-easily-checked problems do exist), attracted much attention but has not been accepted by the computer science community.


    The eccentric British mathematician G.H. Hardy is known for his achievements in number theory and mathematical analysis. But he is perhaps even better known for his adoption and mentoring of the self-taught Indian mathematical genius, Srinivasa Ramanujan.

    Hardy himself was a prodigy from a young age, and stories are told about how he would write numbers up to millions at just two years of age, and how he would amuse himself in church by factorizing the hymn numbers. He graduated with honours from Cambridge University, where he was to spend most of the rest of his academic career.

    Hardy is sometimes credited with reforming British mathematics in the early 20th Century by bringing to it a Continental rigour, more characteristic of the French, Swiss and German mathematics he so much admired. He introduced into Britain a new tradition of pure mathematics (as opposed to the traditional British forte of applied mathematics in the shadow of Newton), and he proudly declared that nothing he had ever done had any commercial or military usefulness (he was also an outspoken pacifist).

    Just before the First World War, Hardy (who was given to flamboyant gestures) made mathematical headlines when he claimed to have proved the Riemann Hypothesis. In fact, he was able to prove that there were infinitely many zeroes on the critical line, but was not able to prove that there did not exist other zeroes that were NOT on the line (or even infinitely many off the line, given the nature of infinity).

    Meanwhile, in 1913, Srinivasa Ramanujan, a 23-year old shipping clerk from Madras, India, wrote to Hardy (and other academics at Cambridge), claiming, among other things, to have devised a formula that calculated the number of primes up to a hundred million with generally no error. The self-taught and obsessive Ramanujan had managed to prove all of Riemann’s results and more with almost no knowledge of developments in the Western world and no formal tuition. He claimed that most of his ideas came to him in dreams.

    Hardy was one of the first to recognize Ramanujan's genius; he brought him to Cambridge University and was his friend and mentor for many years. The two collaborated on many mathematical problems, although the Riemann Hypothesis continued to defy even their joint efforts.

    Hardy-Ramanujan "taxicab numbers"

    A common anecdote about Ramanujan during this time relates how Hardy arrived at Ramanujan's house in a cab numbered 1729, a number he claimed to be totally uninteresting. Ramanujan is said to have stated on the spot that, on the contrary, it was actually a very interesting number mathematically, being the smallest number representable in two different ways as a sum of two cubes. Such numbers are now sometimes referred to as "taxicab numbers".
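    Ramanujan's observation is easy to check by brute force (an illustrative search, not a historical method): tabulate sums of two positive cubes and find the smallest that occurs in two different ways.

```python
# Find the smallest number expressible as a sum of two positive cubes
# in two different ways, by tabulating all sums a^3 + b^3 with a <= b.

from itertools import combinations_with_replacement

def smallest_taxicab(limit=20):
    sums = {}
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        sums.setdefault(a ** 3 + b ** 3, []).append((a, b))
    # Keep only sums that arise from at least two distinct pairs.
    return min(n for n, pairs in sums.items() if len(pairs) >= 2)

print(smallest_taxicab())   # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```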

    It is estimated that Ramanujan conjectured or proved over 3,000 theorems, identities and equations, including properties of highly composite numbers, the partition function and its asymptotics and mock theta functions. He also carried out major investigations in the areas of gamma functions, modular forms, divergent series, hypergeometric series and prime number theory.

    Among his other achievements, Ramanujan identified several efficient and rapidly converging infinite series for the calculation of the value of π, some of which could compute 8 additional decimal places of π with each term in the series. These series (and variations on them) have become the basis for the fastest algorithms used by modern computers to compute π to ever increasing levels of accuracy (currently to about 5 trillion decimal places).
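    One of these series, published by Ramanujan in 1914, gives 1/π = (2√2/9801) Σ (4k)!(1103 + 26390k) / ((k!)⁴ 396^(4k)), summed over k ≥ 0, with each term contributing roughly eight further correct digits. A minimal sketch using Python's arbitrary-precision decimals (the precision and term count are illustrative choices):

```python
# Ramanujan's 1914 series for 1/pi, evaluated with the decimal module;
# three terms already give pi to over 20 decimal places.

from decimal import Decimal, getcontext
from math import factorial

def ramanujan_pi(terms=3, digits=30):
    getcontext().prec = digits + 10   # working precision with headroom
    total = Decimal(0)
    for k in range(terms):
        num = Decimal(factorial(4 * k)) * (1103 + 26390 * k)
        den = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        total += num / den
    factor = 2 * Decimal(2).sqrt() / 9801
    return 1 / (factor * total)        # the sum approximates 1/pi

print(ramanujan_pi())   # 3.14159265358979...
```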

    Eventually, though, the frustrated Ramanujan spiralled into depression and illness, even attempting suicide at one time. After a period in a sanatorium and a brief return to his family in India, he died in 1920 at the tragically young age of 32. Some of his original and highly unconventional results, such as the Ramanujan prime and the Ramanujan theta function, have inspired vast amounts of further research and have found applications in fields as diverse as crystallography and string theory.

    Hardy lived on for some 27 years after Ramanujan’s death, to the ripe old age of 70. When asked in an interview what his greatest contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan, and even called their collaboration "the one romantic incident in my life". However, Hardy too became depressed later in life and attempted suicide by an overdose at one point. Some have blamed the Riemann Hypothesis for Ramanujan and Hardy's instabilities, giving it something of the reputation of a curse.


    Bertrand Russell and Alfred North Whitehead were British mathematicians, logicians and philosophers, who were in the vanguard of the British revolt against Continental idealism in the early 20th Century and, between them, they made important contributions in the fields of mathematical logic and set theory.

    Whitehead was the elder of the two and came from a more pure mathematics background. He became Russell’s tutor at Trinity College, Cambridge in the 1890s, and then collaborated with his more celebrated ex-student in the first decade of the 20th Century on their monumental work, the “Principia Mathematica”. After the First World War, though, much of which Russell spent in prison due to his pacifist activities, the collaboration petered out, and Whitehead’s academic career remained ever after in the shadow of that of the more flamboyant Russell. He emigrated to the United States in the 1920s, and spent the rest of his life there.

    Russell was born into a wealthy family of the British aristocracy, although his parents were extremely liberal and radical for the times. His parents died when Russell was quite young and he was largely brought up by his staunchly Victorian (although quite progressive) grandmother. His adolescence was very lonely and he suffered from bouts of depression, later claiming that it was only his love of mathematics that kept him from suicide. He studied mathematics and philosophy at Cambridge University under G.E. Moore and A.N. Whitehead, where he developed into an innovative philosopher, a prolific writer on many subjects, a committed atheist and an inspired mathematician and logician. Today, he is considered one of the founders of analytic philosophy, but he wrote on almost every major area of philosophy, particularly metaphysics, ethics, epistemology, the philosophy of mathematics and the philosophy of language.

    Russell was a committed and high-profile political activist throughout his long life. He was a prominent anti-war activist during the First World War, championed free trade and anti-imperialism, and later became a strident campaigner for nuclear disarmament and socialism, and against Adolf Hitler, Soviet totalitarianism and the USA’s involvement in the Vietnam War.

    Russell’s Paradox

    Russell's mathematics was greatly influenced by the set theory and logicism Gottlob Frege had developed in the wake of Cantor's groundbreaking early work on sets. In his 1903 "The Principles of Mathematics", though, he identified what has come to be known as Russell's Paradox (the set of all sets that are not members of themselves, which must apparently be a member of itself if, and only if, it is not), which showed that Frege's naive set theory could in fact lead to contradictions. The paradox is sometimes illustrated by this simplistic example: "If a barber shaves all and only those men in the village who do not shave themselves, does he shave himself?"

    The paradox seemed to imply that the very foundations of the whole of mathematics could no longer be trusted, and that, even in mathematics, the truth could never be known absolutely (Gödel's and Turing's later work would only make this worse). Russell's criticism was enough to rock Frege’s confidence in the entire edifice of logicism, and he was gracious enough to admit this openly in a hastily written appendix to Volume II of his "Basic Laws of Arithmetic".

    But Russell's magnum opus was the monolithic “Principia Mathematica”, published in three volumes in 1910, 1912 and 1913. The first volume was co-written by Whitehead, although the later two were almost all Russell’s work. The aspiration of this ambitious work was nothing less than an attempt to derive all of mathematics from purely logical axioms, while avoiding the kinds of paradoxes and contradictions found in Frege’s earlier work on set theory. Russell achieved this by employing a theory or system of “types”, whereby each mathematical entity is assigned to a type within a hierarchy of types, so that objects of a given type are built exclusively from objects of preceding types lower in the hierarchy, thus preventing loops. Each set of elements, then, is of a different type than each of its elements, so that one cannot speak of the "set of all sets" and similar constructs, which lead to paradoxes.

    However, the “Principia" required, in addition to the basic axioms of type theory, three further axioms that seemed to not be true as mere matters of logic, namely the “axiom of infinity” (which guarantees the existence of at least one infinite set, namely the set of all natural numbers), the “axiom of choice” (which ensures that, given any collection of “bins”, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if there are infinitely many bins, and that there is no "rule" for which object to pick from each) and Russell’s own “axiom of reducibility” (which states that any propositional truth function can be expressed by a formally equivalent predicative truth function).

    During the ten years or so that Russell and Whitehead spent on the "Principia", draft after draft was begun and abandoned as Russell constantly re-thought his basic premises. Russell and his wife Alys even moved in with the Whiteheads in order to expedite the work, although his own marriage suffered as Russell became infatuated with Whitehead's young wife, Evelyn. Eventually, Whitehead insisted on publication of the work, even if it was not (and might never be) complete, although they were forced to publish it at their own expense as no commercial publishers would touch it.

    A small part of the long proof that 1 + 1 = 2 in the “Principia Mathematica”

    Some idea of the scope and comprehensiveness of the “Principia” can be gleaned from the fact that it takes over 360 pages to prove definitively that 1 + 1 = 2. Today, it is widely considered to be one of the most important and seminal works in logic since Aristotle's "Organon". It seemed remarkably successful and resilient in its ambitious aims, and soon gained world fame for Russell and Whitehead. Indeed, it was only Gödel's 1931 incompleteness theorem that finally showed that the “Principia” could not be both consistent and complete.

    Russell was awarded the Order of Merit in 1949 and the Nobel Prize in Literature in the following year. His fame continued to grow, even outside of academic circles, and he became something of a household name in later life, although largely as a result of his philosophical contributions and his political and social activism, which he continued until the end of his long life. He died of influenza in his beloved Wales at the grand old age of 97.


    David Hilbert was a great leader and spokesperson for the discipline of mathematics in the early 20th Century. But he was an extremely important and respected mathematician in his own right.

    Like so many great German mathematicians before him, Hilbert was another product of the University of Göttingen, at that time the mathematical centre of the world, and he spent most of his working life there. His formative years, though, were spent at the University of Königsberg, where he developed an intense and fruitful scientific exchange with fellow mathematicians Hermann Minkowski and Adolf Hurwitz.

    Sociable, democratic and well-loved both as a student and as a teacher, and often seen as bucking the trend of the formal and elitist system of German mathematics, Hilbert’s mathematical genius nevertheless spoke for itself. He has many mathematical terms named after him, including Hilbert space (an infinite dimensional Euclidean space), Hilbert curves, the Hilbert classification and the Hilbert inequality, as well as several theorems, and he gradually established himself as the most famous mathematician of his time.

    His pithy enumeration of the 23 most important open mathematical questions at the 1900 Paris conference of the International Congress of Mathematicians at the Sorbonne set the stage for almost the whole of 20th Century mathematics. The details of some of these individual problems are highly technical; some are very precise, while some are quite vague and subject to interpretation; several problems have now already been solved, or at least partially solved, while some may be forever unresolvable as stated; some relate to rather abstruse backwaters of mathematical thought, while some deal with more mainstream and well-known issues such as the Riemann hypothesis, the continuum hypothesis, group theory, theories of quadratic forms, real algebraic curves, etc.

    Hilbert’s algorithm for space-filling curves

    As a young man, Hilbert began by pulling together all of the many strands of number theory and abstract algebra, before changing field completely to pursue studies in integral equations, where he revolutionized the then current practices. In the early 1890s, he developed continuous fractal space-filling curves in multiple dimensions, building on earlier work by Giuseppe Peano. As early as 1899, he proposed a whole new formal set of geometrical axioms, known as Hilbert's axioms, to replace the traditional axioms of Euclid.
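    The recursive construction of Hilbert's curve is often expressed as a conversion between a position d along the curve and (x, y) coordinates on a 2^k × 2^k grid; the sketch below uses the standard rotate-and-flip bit-manipulation formulation (variable names are illustrative):

```python
# Convert a position d along the Hilbert curve into (x, y) coordinates
# on an n x n grid, where n is a power of two.

def d2xy(n, d):
    x = y = 0
    s = 1       # current sub-square size
    t = d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:             # rotate/reflect the quadrant if needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The 16 cells of a 4x4 grid, visited in Hilbert-curve order; each
# point is one grid step away from the previous one.
points = [d2xy(4, d) for d in range(16)]
print(points)
```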

    But perhaps his greatest legacy is his work on equations, often referred to as his finiteness theorem. He showed that although there were an infinite number of possible equations, it was nevertheless possible to split them up into a finite number of types of equations which could then be used, almost like a set of building blocks, to produce all the other equations.

    Interestingly, though, Hilbert could not actually construct this finite set of equations, just prove that it must exist (sometimes referred to as an existence proof, rather than constructive proof). At the time, some critics passed this off as mere theology or smoke-and-mirrors, but it effectively marked the beginnings of a whole new style of abstract mathematics.

    Among other things, Hilbert space can be used to study the harmonics of vibrating strings

    This use of an existence proof rather than constructive proof was also implicit in his development, during the first decade of the 20th Century, of the mathematical concept of what came to be known as Hilbert space. Hilbert space is a generalization of the notion of Euclidean space which extends the methods of vector algebra and calculus to spaces with any finite (or even infinite) number of dimensions. Hilbert space provided the basis for important contributions to the mathematics of physics over the following decades, and may still offer one of the best mathematical formulations of quantum mechanics.

    Hilbert was unfailingly optimistic about the future of mathematics, never doubting that his 23 problems would soon be solved. In fact, he went so far as to claim that there are absolutely no unsolvable problems - a famous quote of his (dating from 1930, and also engraved on his tombstone) proclaimed, “We must know! We will know!” - and he was convinced that the whole of mathematics could, and ultimately would, be put on unshakable logical foundations. Another of his rallying cries was “in mathematics there is no ignorabimus”, a reference to the traditional position on the limits of scientific knowledge.

    Unlike Russell, Hilbert’s formalism was premised on the idea that the ultimate base of mathematics lies, not in logic itself, but in a simpler system of pre-logical symbols which can be collected together in strings or axioms and manipulated according to a set of “rules of inference”. His ambitious program to find a complete and consistent set of axioms for all of mathematics (which became known as Hilbert’s Program), received a severe set-back, however, with the incompleteness theorems of Kurt Gödel in the early 1930s. Nevertheless, Hilbert's work had started logic on a course of clarification, and the need to understand Gödel's work then led to the development of recursion theory and mathematical logic as an autonomous discipline in the 1930s, and later provided the basis for theoretical computer science.

    For a time, Hilbert bravely spoke out against the Nazi repression of his Jewish mathematician friends in Germany and Austria in the mid 1930s. But, after mass evictions, several suicides, many deaths in concentration camps, and even direct assassinations, he too eventually lapsed into silence, and could only watch as one of the greatest mathematical centres of all time was systematically destroyed. By the time of his death in 1943, little remained of the great mathematics community at Göttingen, and Hilbert was buried in relative obscurity, his funeral attended by fewer than a dozen people and hardly reported in the press.


    Kurt Gödel grew up a rather strange, sickly child in Vienna. From an early age his parents took to referring to him as “Herr Warum”, Mr Why, for his insatiable curiosity. At the University of Vienna, Gödel first studied number theory, but soon turned his attention to mathematical logic, which was to consume him for most of the rest of his life. As a young man, he was, like Hilbert, optimistic and convinced that mathematics could be made whole again, and would recover from the uncertainties introduced by the work of Cantor and Riemann.

    Between the wars, Gödel joined in the cafe discussions of a group of intense intellectuals and philosophers known as the Vienna Circle, which included logical positivists such as Moritz Schlick, Hans Hahn and Rudolf Carnap, who rejected metaphysics as meaningless and sought to codify all knowledge in a single standard language of science.

    Although Gödel did not necessarily share the positivistic philosophical outlook of the Vienna Circle, it was in this environment that Gödel pursued his dream of solving the second, and perhaps most overarching, of Hilbert’s 23 problems, which sought to find a logical foundation for all of mathematics. The ideas he came up with would revolutionize mathematics, as he effectively proved, mathematically and philosophically, that Hilbert’s (and his own) optimism was unfounded and that such a foundation was just not possible.

    His first achievement, which actually served to advance Hilbert's Program, was his completeness theorem, which showed that all valid statements in Frege's "first order logic" can be proved from a set of simple axioms. However, he then turned his attention to "second order logic", i.e. a logic powerful enough to support arithmetic and more complex mathematical theories (essentially, one able to accept sets as values of variables).

    Gödel’s incompleteness theorem (technically "incompleteness theorems", plural, as there were actually two separate theorems, although they are usually spoken of together) of 1931 showed that, within any logical system for mathematics (or at least in any system that is powerful and complex enough to be able to describe the arithmetic of the natural numbers, and therefore to be interesting to most mathematicians), there will be some statements about numbers which are true but which can NEVER be proved. This was enough to prompt John von Neumann to comment that "it's all over".

    Gödel’s Incompleteness Theorem

    His approach began with a plain-language assertion such as “this statement cannot be proved”, a version of the ancient “liar paradox”, and a statement which itself must be either true or false. If the statement is false, then that means that the statement can be proved, suggesting that it is actually true, thus generating a contradiction. For this to have implications in mathematics, though, Gödel needed to convert the statement into a "formal language" (i.e. a pure statement of arithmetic). He did this using a clever code based on prime numbers, where strings of primes play the roles of natural numbers, operators, grammatical rules and all the other requirements of a formal language. The resulting mathematical statement therefore appears, like its natural language equivalent, to be true but unprovable, and must therefore remain undecided.
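    The coding trick can be illustrated with a toy version (a deliberately simplified sketch, not Gödel's actual scheme): assign each symbol a number, encode a whole string as 2^c1 · 3^c2 · 5^c3 ···, and recover the string by factorizing, which unique factorization guarantees is always possible.

```python
# Toy Gödel numbering: encode a string of symbols as a product of prime
# powers, where the exponent of the i-th prime is the code (here, the
# character code) of the i-th symbol.

def first_primes(k):
    """The first k prime numbers, by trial division."""
    ps = []
    n = 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def godel_encode(symbols):
    n = 1
    for p, c in zip(first_primes(len(symbols)), (ord(ch) for ch in symbols)):
        n *= p ** c
    return n

def godel_decode(n):
    chars = []
    i = 0
    while n > 1:
        p = first_primes(i + 1)[-1]   # the (i+1)-th prime
        c = 0
        while n % p == 0:             # read off its exponent
            n //= p
            c += 1
        chars.append(chr(c))
        i += 1
    return "".join(chars)

g = godel_encode("x=y")
print(g)                    # one (very large) natural number
print(godel_decode(g))      # the original string, "x=y"
```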

    The incompleteness theorem - surely a mathematician’s worst nightmare - led to something of a crisis in the mathematical community, raising the spectre of a problem which may turn out to be true but is still unprovable, something which had not been even considered in the whole two millennia plus history of mathematics. Gödel effectively put paid, at a stroke, to the ambitions of mathematicians like Bertrand Russell and David Hilbert who sought to find a complete and consistent set of axioms for all of mathematics. His work PROVED that any system of logic or numbers that mathematicians ever come up with will always rest on at least a few unprovable assumptions. His conclusions also imply that not all mathematical questions are even computable, and that it is impossible, even in principle, to create a machine or computer that will be able to do all that a human mind can do.

    Representation of the Gödel Metric, an exact solution to Einstein's field equations

    Unfortunately, the theorems also led to a personal crisis for Gödel. In the mid 1930s, he suffered a series of mental breakdowns and spent some significant time in a sanatorium. Nevertheless, he threw himself into the same problem that had destroyed the mental well-being of Georg Cantor during the previous century, the continuum hypothesis. In fact, he made an important step in the resolution of that notoriously difficult problem (by proving that the axiom of choice and the continuum hypothesis are consistent with the standard axioms of set theory), without which Paul Cohen would probably never have been able to come to his definitive solution. Like Cantor and others after him, though, Gödel too suffered a gradual deterioration in his mental and physical health.

    He was only kept afloat at all by the love of his life, Adele Numbursky. Together, they witnessed the virtual destruction of the German and Austrian mathematics community by the Nazi regime. Eventually, along with many other eminent European mathematicians and scholars, Gödel fled the Nazis to the safety of Princeton in the USA, where he became a close friend of fellow exile Albert Einstein, contributing some demonstrations of paradoxical solutions to Einstein's field equations in general relativity (including his celebrated Gödel metric of 1949).

    But, even in the USA, he was not able to escape his demons, and was dogged by depression and paranoia, suffering several more nervous breakdowns. Eventually, he would only eat food that had been tested by his wife Adele, and, when Adele herself was hospitalized in 1977, Gödel simply refused to eat and starved himself to death.

    Gödel’s legacy is ambivalent. Although he is recognized as one of the great logicians of all time, many were just not prepared to accept the almost nihilistic consequences of his conclusions, and his explosion of the traditional formalist view of mathematics. Worse news was still to come, though, as the mathematical community (including, as we will see, Alan Turing) struggled to come to grips with Gödel’s findings.


    The British mathematician Alan Turing is perhaps most famous for his war-time work at the British code-breaking centre at Bletchley Park where his work led to the breaking of the German enigma code (according to some, shortening the Second World War at a stroke, and potentially saving thousands of lives). But he was also responsible for making Gödel’s already devastating incompleteness theorem even more bleak and discouraging, and it is mainly on this - and the development of computer science that his work gave rise to - that Turing’s mathematical legacy rests.

    Despite attending an expensive private school which strongly emphasized the classics rather than the sciences, Turing showed early signs of the genius which was to become more prominent later, solving advanced problems as a teenager without having even studied elementary calculus, and immersing himself in the complex mathematics of Albert Einstein's work. He became a confirmed atheist after the death of his close friend and fellow Cambridge student Christopher Morcom, and throughout his life he was an accomplished and committed long-distance runner.

    In the years following the publication of Gödel’s incompleteness theorem, Turing desperately wanted to clarify and simplify Gödel’s rather abstract and abstruse theorem, and to make it more concrete. But his solution - which was published in 1936 and which, he later claimed, had come to him in a vision - effectively involved the invention of something that has come to shape the entire modern world, the computer.

    Representation of a Turing Machine

    During the 1930s, Turing recast incompleteness in terms of computers (or, more specifically, a theoretical device that manipulates symbols, known as a Turing machine), replacing Gödel's universal arithmetic-based formal language with this formal and simple device. He first proved that such a machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He then went on to show that, even for such a logical machine, essentially driven by arithmetic, there would always be some problems it would never be able to solve, and that a machine fed such a problem would never stop trying to solve it, but would never succeed (known as the “halting problem”).
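The device Turing described really is this simple: a tape of symbols, a read/write head, and a fixed table of state transitions. The sketch below is an illustrative simulator, not Turing's own notation; the transition-table format and the FLIP example machine are invented for the demonstration. Note the `max_steps` escape hatch: the halting problem says precisely that no general test can tell us in advance whether a given machine will ever stop on its own.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right).  The machine halts when no
    transition is defined for the current (state, symbol) pair -- or
    possibly never, which is why max_steps exists.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    else:
        raise RuntimeError("gave up: the machine may never halt")
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# An example machine that flips every bit, then halts at the first blank cell.
FLIP = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

Running the FLIP machine on the tape "0110" produces "1001". Despite its austerity, Turing proved that this model captures everything any algorithm can compute.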

    In the process, he also proved that there was no way of telling beforehand which problems were the unprovable ones, thus providing a negative proof to the so-called Entscheidungsproblem or “decision problem“, posed by David Hilbert in 1928. This was a further slap in the face for a mathematics community still reeling from Gödel’s crushing incompleteness theorem.

    After the war, Turing continued the work he had begun, and worked on the development of early computers such as ACE (Automatic Computing Engine) and the Manchester Mark 1. Although the computer he developed was a very basic and limited machine by modern standards, Turing clearly saw its potential, and dreamed that one day computers would be more than machines, capable of learning, thinking and communicating. He was the first to develop ideas for a chess-playing computer program, and saw mastery in the game as one of the goals that designers of intelligent machines should strive for.

    Turing test

    Indeed, he was the first to address the problem of artificial intelligence, and proposed an experiment now known as the Turing Test in an attempt to define a standard for a machine to be called "intelligent". By this test, a computer could be said to "think" if it could fool a human interrogator into thinking that the conversation was with a human. This showed remarkable foresight at a time long before the Internet, when the only available computers were the size of a room and less powerful than a modern pocket calculator.

    Turing’s personal philosophy was to be free from hypocrisy, compromise and deceit. He was, for example, a homosexual at a time when it was both illegal and even dangerous, yet he never hid it nor made it an issue. Unlike Gödel (who strongly believed in the power of intuition, and who was convinced that the human mind was capable of going beyond the limitations of the systems he described), Turing clearly felt a certain affinity with computers and, to some extent, he saw them as embodying this admirable absence of lies or hypocrisy.

    After the war, he was kept under surveillance as a potential security risk by the authorities and eventually, in 1952, he was arrested, charged and found guilty of engaging in a homosexual act. As a result, he was chemically castrated by an injection of the female hormone estrogen, which caused him to grow breasts and also affected his mind. In 1954, Turing was found dead, having committed suicide with cyanide.


    André Weil was a very influential French mathematician around the middle of the 20th Century. Born into a prosperous Jewish family in Paris, he was brother to the well-known philosopher and writer Simone Weil, and both were child prodigies. He was passionately addicted to mathematics by the age of ten, but he also loved to travel and study languages (by the age of sixteen he had read the "Bhagavad Gita" in the original Sanskrit).

    He studied (and later taught) in Paris, Rome, Göttingen and elsewhere, as well as at the Aligarh Muslim University in Uttar Pradesh, India, where he further explored what would become a life-long interest in Hinduism and Sanskrit literature.

    Even as a young man, Weil made substantial contributions in many areas of mathematics, and was particularly animated by the idea of discovering profound connections between algebraic geometry and number theory. His fascination with Diophantine equations led to his first substantial piece of mathematical research on the theory of algebraic curves. During the 1930s, he introduced the adele ring, a topological ring in algebraic number theory and topological algebra, which is built on the field of rational numbers.

    Weil was an early leader of the Bourbaki group who published many influential textbooks on modern mathematics

    It was also at this time that he became a founding member, and the de facto early leader, of the so-called Bourbaki group of French mathematicians. This influential group published many textbooks on advanced 20th Century mathematics under the assumed name of Nicolas Bourbaki, in an attempt to give a unified description of all mathematics founded on set theory. Bourbaki has the distinction of having been refused membership of the American Mathematical Society for being non-existent (although he was a member of the Mathematical Society of France!).

    When the Second World War broke out, Weil, a committed conscientious objector, fled to Finland, where he was mistakenly arrested as a possible spy. Having made his way back to France, he was again arrested and imprisoned for refusing to report for military service. In his trial, he cited the Bhagavad Gita to justify his stand, arguing that his true dharma was the pursuit of mathematics, not assisting in the war effort, however just the cause. Given the choice of five more years in prison or joining a French combat unit, though, he chose the latter, an especially lucky decision given that the prison was blown up shortly afterwards.

    But it was in 1940, in a prison near Rouen, that Weil did the work that really made his reputation (although his full proofs had to wait until 1948, and even more rigorous proofs were supplied by Pierre Deligne in 1973). Building on the prescient work of his countryman Évariste Galois in the previous century, Weil picked up the idea of using geometry to analyze equations, and developed algebraic geometry, a whole new language for understanding solutions to equations.

    An illustration of the "cycle évanescent" or "vanishing cycle" described in Deligne's proof of the Weil conjectures

    The Weil conjectures on local zeta-functions effectively proved the Riemann hypothesis for curves over finite fields, by counting the number of points on algebraic varieties over finite fields. In the process, he introduced for the first time the notion of an abstract algebraic variety and thereby laid the foundations for abstract algebraic geometry and the modern theory of abelian varieties, as well as the theory of modular forms, automorphic functions and automorphic representations. His work on algebraic curves has influenced a wide variety of areas, including some outside of mathematics, such as elementary particle physics and string theory.

    In 1941, Weil and his wife took the opportunity to sail for the United States, where they spent the rest of the War and the rest of their lives. In the late 1950s, Weil formulated another important conjecture, this time on Tamagawa numbers, which remained resistant to proof until 1989. He was instrumental in the formulation of the so-called Shimura-Taniyama-Weil conjecture on elliptic curves which was used by Andrew Wiles as a link in the proof of Fermat’s Last Theorem. He also developed the Weil representation, an infinite-dimensional linear representation of theta functions which gave a contemporary framework for understanding the classical theory of quadratic forms.

    Over his lifetime, Weil received many honorary memberships, including the London Mathematical Society, the Royal Society of London, the French Academy of Sciences and the American National Academy of Sciences. He remained active as professor emeritus at the Institute for Advanced Study at Princeton until a few years before his death.


    Paul Cohen was one of a new generation of American mathematicians inspired by the influx of European exiles during the War years. He himself was a second-generation Jewish immigrant, but he was dauntingly intelligent and extremely ambitious. By sheer intelligence and force of will, he went on to garner for himself fame, riches and the top mathematical prizes.

    He was educated at New York, Brooklyn and the University of Chicago, before working his way up to a professorship at Stanford University. He went on to win the prestigious Fields Medal in mathematics, as well as the National Medal of Science and the Bôcher Memorial Prize in mathematical analysis. His mathematical interests were very broad, ranging from mathematical analysis and differential equations to mathematical logic and number theory.

    In the early 1960s, he earnestly applied himself to the first problem on Hilbert’s list of 23 open problems, Cantor’s continuum hypothesis: whether or not there exists a set of numbers bigger than the set of all natural (or whole) numbers but smaller than the set of real (or decimal) numbers. Cantor was convinced that the answer was “no” but was not able to prove it satisfactorily, and neither was anyone else who had applied themselves to the problem since.

    One of several alternative formulations of the Zermelo-Fraenkel Axioms and Axiom of Choice

    Some progress had been made since Cantor. The Zermelo-Fraenkel set theory, as modified by the Axiom of Choice (commonly abbreviated together as ZFC), developed between about 1908 and 1922, had become accepted as the standard form of axiomatic set theory and the most common foundation of mathematics.

    Kurt Gödel had demonstrated in 1940 that the continuum hypothesis is consistent with ZFC (more specifically, that the continuum hypothesis cannot be disproved from the standard Zermelo-Fraenkel set theory, even if the axiom of choice is adopted). Cohen’s task, then, was to show that the continuum hypothesis was independent of ZFC (or not), and specifically to prove the independence of the axiom of choice.

    Cohen’s extraordinary and daring conclusion, arrived at using a new technique he developed himself called "forcing", was that both answers could be true, i.e. that the continuum hypothesis and the axiom of choice were completely independent from ZFC set theory. Thus, there could be two different, internally consistent mathematics: one in which the continuum hypothesis was true and no such set of numbers existed, and one in which the hypothesis was false and such a set did exist. The proof seemed to be correct, but Cohen’s methods (particularly his new technique of “forcing”) were so new that no-one was really quite sure until Gödel finally gave his stamp of approval in 1963.

    His findings were as revolutionary as Gödel’s own. Since that time, mathematicians have built up two different mathematical worlds, one in which the continuum hypothesis applies and one in which it does not, and modern mathematical proofs must insert a statement declaring whether or not the result depends on the continuum hypothesis.

    Cohen’s paradigm-changing proof brought him fame, riches and mathematical prizes galore, and he became a top professor at Stanford and Princeton. Flushed with success, he decided to tackle the Holy Grail of modern mathematics, Hilbert’s eighth problem, the Riemann hypothesis. However, he ended up spending the last 40 years of his life, until his death in 2007, on the problem, still with no resolution (although his approach has given new hope to others, including his brilliant student, Peter Sarnak).


    In a field almost completely dominated by men, Julia Robinson was one of the very few women to have made a serious impact on mathematics - others who merit mention are Sophie Germain and Sofia Kovalevskaya in the 19th Century, and Alicia Stott and Emmy Noether in the 20th - and she became the first woman to be elected as president of the American Mathematical Society.

    Brought up in the deserts of Arizona, Robinson was a shy and sickly child but showed an innate love for, and facility with, numbers from an early age. She had to overcome many obstacles and to fight to be allowed to continue studying mathematics, but she persevered, obtained her PhD at Berkeley and married a mathematician, her Berkeley professor, Raphael Robinson.

    She spent most of her career pursuing computability and “decision problems”, questions in formal systems with “yes” or “no” answers, depending on the values of some input parameters. Her particular passion was Hilbert’s tenth problem, and she applied herself to it obsessively. The problem was to ascertain whether there was any way of telling whether or not any particular Diophantine equation (a polynomial equation whose variables can only be integers) had whole number solutions. The growing belief was that no such universal method was possible, but it seemed very difficult to actually prove that it would NEVER be possible to come up with such a method.
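The asymmetry at the heart of Hilbert's tenth problem can be made concrete with a naive search (a hypothetical sketch; the function and parameter names below are invented for illustration). If a Diophantine equation has a solution, trying integer tuples in ever-larger ranges will eventually find one; but when the search comes back empty-handed, that proves nothing, since a solution might lurk just beyond any bound we pick.

```python
from itertools import product

def search_integer_solutions(poly, num_vars, bound):
    """Brute-force search for integer roots of a polynomial equation poly = 0.

    poly takes num_vars integers and returns the polynomial's value; we try
    every tuple with entries in [-bound, bound] looking for one where it
    vanishes.  If a solution exists, some large enough bound will find it,
    but no finite bound can ever certify that NO solution exists -- the
    asymmetry that Robinson and her colleagues set out to prove unbridgeable.
    """
    values = range(-bound, bound + 1)
    for candidate in product(values, repeat=num_vars):
        if poly(*candidate) == 0:
            return candidate
    return None  # not found within the bound; in general this proves nothing

# x^2 + y^2 - 25 = 0 has integer solutions, e.g. (3, 4) or (-5, 0):
sol = search_integer_solutions(lambda x, y: x*x + y*y - 25, 2, 10)
```

So the solvable equations can be confirmed mechanically one by one; what Robinson, Davis, Putnam and Matiyasevich eventually showed is that no single procedure can settle every equation either way.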

    Throughout the 1950s and 1960s, Robinson, along with her colleagues Martin Davis and Hilary Putnam, doggedly pursued the problem, and eventually developed what became known as the Robinson hypothesis, which suggested that, in order to show that no such method existed, all that was needed was to construct one equation whose solution was a very specific set of numbers, one which grew exponentially.

    The problem had obsessed Robinson for over twenty years and she confessed to a desperate desire to see its solution before she died, whoever might achieve it. In order to progress further, though, she needed input from the young Russian mathematician, Yuri Matiyasevich.

    Born and educated in Leningrad (St. Petersburg), Matiyasevich had already distinguished himself as a mathematical prodigy, and won numerous prizes in mathematics. He turned to Hilbert’s tenth problem as the subject of his doctoral thesis at Leningrad State University, and began to correspond with Robinson about her progress, and to search for a way forward.

    After pursuing the problem during the late 1960s, Matiyasevich finally discovered the final missing piece of the jigsaw in 1970, when he was just 22 years old. He saw how he could capture the famous Fibonacci sequence of numbers using the equations that were at the heart of Hilbert’s tenth problem, and so, building on Robinson’s earlier work, it was finally proved that it is in fact impossible to devise a process by which it can be determined in a finite number of operations whether Diophantine equations are solvable in rational integers.
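One concrete ingredient of that final step is a classical identity characterizing consecutive Fibonacci numbers, which can be checked directly. The sketch below is a simplification that ignores sign and boundary conditions: for positive x < y, the pair (x, y) consists of consecutive Fibonacci numbers exactly when y² − xy − x² = ±1, a purely polynomial (Diophantine) condition that pins down an exponentially growing sequence.

```python
def fib_pairs(count):
    """Yield the first `count` pairs (F_n, F_{n+1}) of consecutive Fibonacci numbers."""
    a, b = 1, 1
    for _ in range(count):
        yield a, b
        a, b = b, a + b

def is_fib_pair(x, y):
    """Test the Diophantine condition y^2 - x*y - x^2 = +/-1.

    (A simplified form of the identity Matiyasevich exploited; boundary
    cases such as x = y = 1 are glossed over here.)
    """
    return (y * y - x * y - x * x) ** 2 == 1
```

For instance (3, 5) and (5, 8) pass the test while (4, 6) fails. Capturing exponential growth inside a polynomial equation was exactly what the Robinson hypothesis demanded.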

    Matiyasevich-Stechkin visual sieve for prime numbers

    In a poignant example of the internationalism of mathematics at the height of the Cold War, Matiyasevich freely acknowledged his debt to Robinson’s work, and the two went on to work together on other problems until Robinson’s death in 1984.

    Among his other achievements, Matiyasevich and his colleague Boris Stechkin also developed an interesting “visual sieve” for prime numbers, which effectively “crosses out” all the composite numbers, leaving only the primes. He has a theorem on recursively enumerable sets named after him, as well as a polynomial related to the colourings of triangulations of spheres. He is head of the Laboratory of Mathematical Logic at the St. Petersburg Department of the Steklov Institute of Mathematics of the Russian Academy of Sciences, and is a member of several mathematical societies and boards.
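The Matiyasevich-Stechkin sieve is a geometric construction, crossing out composites with chords drawn between points on a parabola. As a stand-in for that visual device (not their construction itself), the same "crossing out" idea can be sketched arithmetically with the classical sieve of Eratosthenes:

```python
def sieve(limit):
    """Return all primes up to limit by crossing out every composite.

    (The Matiyasevich-Stechkin sieve does the crossing out geometrically
    with chords of a parabola; this is the classical arithmetic version
    of the same idea.)
    """
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False  # crossed out: composite
    return [n for n, p in enumerate(is_prime) if p]
```

Running `sieve(30)` leaves exactly 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29 uncrossed.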