Addition is the most basic operation of arithmetic. In its simplest form, addition combines two numbers, the addends, into a single number, the sum. Adding more than two numbers can be viewed as repeated addition; this procedure is known as summation and includes ways to add infinitely many numbers in an infinite series.
Addition can also be defined for mathematical objects other than numbers — for example, matrices or polynomials. Regardless of the nature and number of objects being added, the individual constituents of a sum typically are called summands or terms. (This is to be distinguished from factors, which are multiplied.)
- 1 + 1 = 2
- 2 + 2 = 4
- 5 + 4 + 2 = 11 (see "associativity" below)
- 3 + 3 + 3 + 3 = 12 (see "multiplication" below)
There are also situations where addition is "understood" even though no symbol appears:
- A column of numbers, with the last number in the column underlined, usually (but not always) indicates that the numbers in the column are to be added, with the sum written below the underlined number.
- A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number. For example,
- 3 1⁄2 = 3 + 1⁄2 = 3.5.
- This notation can cause confusion, since in most other contexts, juxtaposition denotes multiplication instead.
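The sum reading of a mixed number can be checked with Python's standard fractions module; the following is a minimal sketch:

```python
from fractions import Fraction

# The mixed number 3 1/2 denotes a sum, not a juxtaposed product:
mixed = 3 + Fraction(1, 2)

print(mixed)         # 7/2
print(float(mixed))  # 3.5
```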
Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.
Possibly the most fundamental interpretation of addition lies in combining sets:
- When two or more collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections.
This interpretation is well suited to quick proofs of the properties of natural-number addition, and it is easy to visualize, with little danger of ambiguity. However, it is not obvious how this version of addition should be extended to include fractional or negative numbers; making sense of sets with "fractional cardinality" requires considerable sophistication.
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than just combining collections of segments, rods can be joined end-to-end.
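The combining-sets interpretation itself is easy to sketch: for disjoint collections, the size of the combined collection is the sum of the sizes of the parts.

```python
apples = {"apple1", "apple2", "apple3"}
pears = {"pear1", "pear2"}

# The collections share no members, so the union is disjoint and
# its size is the sum of the individual sizes.
combined = apples | pears
assert len(combined) == len(apples) + len(pears) == 5
```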
Extending a measure
- When an original measure is extended by a given amount, the final measure is the sum of the original measure and the measure of the extension.
Under this interpretation, the parts of a sum a + b play asymmetric roles; instead of calling both a and b addends, it is more appropriate to call a the augend, since a plays a passive role. In geometry, a might be a point and b a vector; their sum is then another point, the translation of a by b. In analytic geometry, a and b might both be represented by ordered pairs of numbers, but they remain conceptually different.
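The asymmetry can be seen in coordinates; in this hypothetical sketch, the augend is a point and the addend is a translation vector, even though both are represented as pairs of numbers:

```python
point = (2.0, 3.0)    # the augend a: a point in the plane
vector = (1.0, -1.0)  # the addend b: a displacement

# The sum is another point: the translation of a by b.
translated = (point[0] + vector[0], point[1] + vector[1])
assert translated == (3.0, 2.0)
```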
Here, the addition operation is not so much a binary operation as a family of unary operations; the function (+b) acts on a. The unary and binary views are formally equivalent: if X is the set of all possible augends and Y is the set of all possible addends, there is a natural identification of sets of functions
- Hom(X × Y, X) ≅ Hom(Y, Hom(X, X)),
under which each binary operation on the left corresponds to a family of unary operations on the right. This identification is a special case of a law of exponentiation, X^(X × Y) = (X^X)^Y, which may be more familiar for numbers in the form a^(bc) = (a^b)^c.
The unary view is useful, for example, when discussing subtraction. Addition and subtraction are not inverses as binary operations, but they are inverses as families of unary operations.
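The unary view can be sketched in Python: currying binary addition yields the family of functions (+b), and each (+b) is inverted by the corresponding unary subtraction (−b):

```python
def plus(b):
    """The unary operation (+b): translate an augend by b."""
    return lambda a: a + b

def minus(b):
    """The unary operation (-b), the inverse of (+b)."""
    return lambda a: a - b

add3, sub3 = plus(3), minus(3)
assert sub3(add3(10)) == 10   # (-3) undoes (+3)
assert add3(sub3(10)) == 10   # (+3) undoes (-3)
```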
Combining motions
- When two motions are performed in succession, the measure of the resulting motion is the sum of the measures of the original motions.
Addition is commutative, meaning that the two terms in a sum can be exchanged without changing the result. Symbolically, if a and b are any two numbers, then
- a + b = b + a.
The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law".
A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression
- "a + b + c"
be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that
- (a + b) + c = a + (b + c).
Not all operations are associative, so in expressions with operations other than addition, it is important to specify the order of operations.
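A quick numerical check, using the numbers from the earlier example and contrasting subtraction, which is not associative:

```python
a, b, c = 5, 4, 2

# Addition: both groupings agree.
assert (a + b) + c == a + (b + c) == 11

# Subtraction: the groupings disagree, so the order of
# operations must be specified.
assert (a - b) - c == -1
assert a - (b - c) == 3
```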
Zero and one

When zero is added to any number, the quantity does not change; zero is the identity element for addition, also known as the additive identity. In symbols, for any number a,
- a + 0 = 0 + a = a.
The sum of any number and its additive inverse (in contexts where such a thing exists) is zero.
In the context of integers, addition of one plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a.
In order to add certain types of numbers numerically, such as vulgar fractions and physical quantities with units, they must first be expressed in terms of a common unit; for fractions, this means finding a common denominator. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is another name for 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis.
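Both cases can be sketched in Python: Fraction finds the common denominator automatically, while the lengths are converted to a common unit by hand:

```python
from fractions import Fraction

# Fractions: 1/2 and 1/3 are rewritten over the common denominator 6.
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)

# Lengths: express 5 feet and 2 inches in the common unit of inches.
INCHES_PER_FOOT = 12
total_inches = 5 * INCHES_PER_FOOT + 2
assert total_inches == 62
```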
- There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains... —Alexander Bogomolny
Addition is first defined on the natural numbers. In set theory, addition is then extended to larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route.) In turn, real addition extends to addition operations on even larger sets, such as the set of complex numbers or a many-dimensional vector space in linear algebra.
There are many more sets that support an operation called addition.
There are already infinitely many natural numbers, and the set of real numbers is even larger. It is also useful to study addition on smaller sets, even finite ones. In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as "exclusive or".
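Both finite examples are easy to compute; this sketch uses Python's % operator for modular reduction and ^ for exclusive or:

```python
# Integers modulo 12: transposing pitch class 9 up 5 semitones
# wraps around to pitch class 2, as in musical set theory.
assert (9 + 5) % 12 == 2

# Integers modulo 2: addition coincides with exclusive or.
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b
```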
The ideas of extending and compacting sets can be combined. In geometry, the sum of two angles is often taken to be their sum as two real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori.
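Angle addition on the circle can be sketched the same way, reducing a real sum modulo 2π (a hypothetical helper using Python's math module):

```python
import math

def add_angles(a, b):
    """Sum of two angles: real addition reduced modulo 2*pi."""
    return (a + b) % (2 * math.pi)

# Three quarter-turns plus a half-turn wraps around to a quarter-turn.
total = add_angles(1.5 * math.pi, math.pi)
assert math.isclose(total, 0.5 * math.pi)
```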
A general form of addition occurs in abstract algebra, where addition may be almost any well-defined binary operation on a set. For an operation to be called "addition" in abstract algebra, it is required to be associative and commutative.
Addition of sets
One extraordinary generalization of the addition of natural numbers is the addition of ordinal numbers. Unlike most addition operations, ordinal addition is not commutative. However, passing to the "smaller" class of cardinal numbers, we recover a commutative operation. Cardinal addition is closely related to the disjoint union of two sets. In category theory, the disjoint union is a kind of coproduct, so coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts are named to evoke their connection with addition; see Direct sum and Wedge sum.
- Incrementation, also known as the successor operation, is the addition of 1 to a number. In formal treatments of addition, such as the Peano axioms, the successor is an elementary operation, and addition is defined from successors through recursion.
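The recursive definition can be sketched in Python, with ordinary integers standing in for the natural numbers:

```python
def succ(n):
    """The successor operation: the addition of 1."""
    return n + 1

def add(a, b):
    """Peano-style addition: a + 0 = a, and a + succ(b) = succ(a + b)."""
    if b == 0:
        return a
    return succ(add(a, b - 1))

assert add(5, 4) == 9
assert add(7, 0) == 7
```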
- Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is 0. An infinite summation is known as a series.
- Counting is an intuitive procedure that can be formalized as the summation of 1 over some finite domain. In everyday counting, the domain is typically a small set of physical objects; in mathematics it may be large and abstract, as it is for the prime counting function.
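For illustration, the prime counting function can be written as exactly such a summation of 1 over a finite domain (a straightforward, unoptimized sketch):

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_pi(x):
    """The prime counting function pi(x): a summation of 1."""
    return sum(1 for n in range(2, x + 1) if is_prime(n))

assert prime_pi(10) == 4    # counts 2, 3, 5, 7
assert prime_pi(100) == 25
```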
- Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation.
- Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions.
- Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In many contexts, multiplication can be transformed into addition, and vice versa, through exponentials and logarithms. Multiplication is generally required to distribute over addition; this is a defining property of a ring.
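Both halves of this observation are easy to check: a product as an n-fold sum, and a product recovered from a sum of logarithms:

```python
import math

def times(n, x):
    """The product of a natural number n and x, as an n-fold sum."""
    total = 0
    for _ in range(n):
        total += x
    return total

assert times(4, 3) == 3 + 3 + 3 + 3 == 12

# Logarithms transform multiplication into addition:
# log(a*b) = log(a) + log(b), so exp recovers the product.
a, b = 6.0, 7.0
assert math.isclose(math.exp(math.log(a) + math.log(b)), a * b)
```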
- Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.
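A small sketch of a mixed strategy, with hypothetical weights and payoffs: the weights form a convex combination, so they must obey the normalization rule of summing to 1, which plain addition of two strategies would violate:

```python
weights = [0.5, 0.3, 0.2]   # probabilities of three pure strategies
payoffs = [4.0, 1.0, 2.0]   # hypothetical payoff of each strategy

# The normalization rule: the weights sum to 1.
assert abs(sum(weights) - 1.0) < 1e-9

# The expected payoff is a linear combination of the payoffs.
expected = sum(w * p for w, p in zip(weights, payoffs))
assert abs(expected - 2.7) < 1e-9
```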
- Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
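For discrete distributions the integral becomes a sum; the following sketch convolves the distributions of two fair dice, combining addition of values with multiplication of probabilities:

```python
from collections import defaultdict

def convolve(p, q):
    """Distribution of X + Y for independent X ~ p and Y ~ q."""
    out = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] += px * qy
    return dict(out)

die = {face: 1 / 6 for face in range(1, 7)}
two_dice = convolve(die, die)

# 7 is the most likely total, with probability 6/36.
assert abs(two_dice[7] - 6 / 36) < 1e-12
assert abs(sum(two_dice.values()) - 1.0) < 1e-12
```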