# Transfer Principle Non-Standard Analysis Essay

The history of calculus is fraught with philosophical debates about the meaning and logical validity of fluxions or infinitesimal numbers. The standard way to resolve these debates is to define the operations of calculus using epsilon–delta procedures rather than infinitesimals. **Non-standard analysis**^{[1]}^{[2]}^{[3]} instead reformulates the calculus using a logically rigorous notion of infinitesimal numbers.

Non-standard analysis originated in the early 1960s with the work of the mathematician Abraham Robinson.^{[4]}^{[5]} He wrote:

[...] the idea of infinitely small or infinitesimal quantities seems to appeal naturally to our intuition. At any rate, the use of infinitesimals was widespread during the formative stages of the Differential and Integral Calculus. As for the objection [...] that the distance between two distinct real numbers cannot be infinitely small, Gottfried Wilhelm Leibniz argued that the theory of infinitesimals implies the introduction of ideal numbers which might be infinitely small or infinitely large compared with the real numbers but which were to possess the same properties as the latter.

Robinson argued that this law of continuity of Leibniz's is a precursor of the transfer principle. Robinson continued:

However, neither he nor his disciples and successors were able to give a rational development leading up to a system of this sort. As a result, the theory of infinitesimals gradually fell into disrepute and was replaced eventually by the classical theory of limits.^{[6]}

Robinson continues:

It is shown in this book that Leibniz's ideas can be fully vindicated and that they lead to a novel and fruitful approach to classical Analysis and to many other branches of mathematics. The key to our method is provided by the detailed analysis of the relation between mathematical languages and mathematical structures which lies at the bottom of contemporary model theory.

In 1973, the intuitionist Arend Heyting praised non-standard analysis as "a standard model of important mathematical research".^{[7]}

## Introduction

A non-zero element of an ordered field is infinitesimal if and only if its absolute value is smaller than every element of the form 1/*n*, for *n* a standard natural number. Ordered fields that have infinitesimal elements are also called non-Archimedean. More generally, non-standard analysis is any form of mathematics that relies on non-standard models and the transfer principle. A field that satisfies the transfer principle for real numbers is a hyperreal field, and non-standard real analysis uses these fields as *non-standard models* of the real numbers.

Robinson's original approach was based on these non-standard models of the field of real numbers. His classic foundational book on the subject, *Non-standard Analysis*, was published in 1966 and is still in print.^{[8]} On page 88, Robinson writes:

The existence of non-standard models of arithmetic was discovered by Thoralf Skolem (1934). Skolem's method foreshadows the ultrapower construction [...]

Several technical issues must be addressed to develop a calculus of infinitesimals. For example, it is not enough to construct an ordered field with infinitesimals. See the article on hyperreal numbers for a discussion of some of the relevant ideas.

## Basic definitions

In this section we outline one of the simplest approaches to defining a hyperreal field ***R**. Let **R** be the field of real numbers, and let **N** be the semiring of natural numbers. Denote by **R**^{**N**} the set of sequences of real numbers. A field ***R** is defined as a suitable quotient of **R**^{**N**}, as follows. Take a nonprincipal ultrafilter *F* ⊆ ℘(**N**). In particular, *F* contains the Fréchet filter of cofinite sets. Consider a pair of sequences

*u* = (*u*_{n})_{n ∈ **N**}, *v* = (*v*_{n})_{n ∈ **N**}.

We say that *u* and *v* are equivalent if they coincide on a set of indices which is a member of the ultrafilter, or in formulas:

{*n* ∈ **N** : *u*_{n} = *v*_{n}} ∈ *F*.

The quotient of **R**^{**N**} by the resulting equivalence relation is a hyperreal field ***R**, a situation summarized by the formula ***R** = **R**^{**N**}/*F*.
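A nonprincipal ultrafilter cannot be exhibited explicitly, but the arithmetic of the construction can still be sketched on sequence representatives. Since the ultrafilter contains the Fréchet filter, any property that holds on a cofinite set of indices holds in the quotient; the sketch below (with hypothetical helper names, and a finite window standing in for "cofinitely many") uses that weaker test to check, for example, that the class of (1/*n*) behaves as a positive infinitesimal.

```python
# Sketch of hyperreal arithmetic on sequence representatives.
# A genuine ultrafilter is non-constructive; we only use the cofinite
# (Frechet) test, verified on a finite window -- an assumption, but one
# that every nonprincipal ultrafilter would decide the same way.

def add(u, v):
    """Elementwise sum of two representatives (functions N -> R)."""
    return lambda n: u(n) + v(n)

def mul(u, v):
    """Elementwise product of two representatives."""
    return lambda n: u(n) * v(n)

def eventually(pred, start=1, horizon=10_000):
    """Heuristic check that pred(n) holds for all n in [start, horizon):
    a cofinite property inspected on a finite window (an assumption)."""
    return all(pred(n) for n in range(start, horizon))

# The sequence (1/n) represents an infinitesimal: it is eventually
# smaller than every standard positive real we test.
eps = lambda n: 1.0 / n
for theta in (0.5, 0.01, 0.001):
    assert eventually(lambda n: 0 < eps(n) < theta, start=int(2 / theta))

# eps * eps is again infinitesimal, and even smaller than eps itself.
eps2 = mul(eps, eps)
assert eventually(lambda n: eps2(n) < eps(n), start=2)
```

The same elementwise operations define the full field structure once one passes to the ultrafilter quotient.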

## Motivation

There are at least three reasons to consider non-standard analysis: historical, pedagogical, and technical.

### Historical

Much of the earliest development of the infinitesimal calculus by Newton and Leibniz was formulated using expressions such as *infinitesimal number* and *vanishing quantity*. As noted in the article on hyperreal numbers, these formulations were widely criticized by George Berkeley and others. It was a challenge to develop a consistent theory of analysis using infinitesimals and the first person to do this in a satisfactory way was Abraham Robinson.^{[6]}

In 1958 Curt Schmieden and Detlef Laugwitz published the article "Eine Erweiterung der Infinitesimalrechnung"^{[9]} ("An Extension of Infinitesimal Calculus"), which proposed a construction of a ring containing infinitesimals. The ring was constructed from sequences of real numbers: two sequences were considered equivalent if they differed in only a finite number of elements, and arithmetic operations were defined elementwise. However, the ring constructed in this way contains zero divisors and thus cannot be a field.
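The zero-divisor problem is easy to see concretely. In a sketch (illustrative names, with a finite window standing in for "all but finitely many"), take two interleaved indicator sequences: neither is equivalent to zero, yet their elementwise product vanishes identically.

```python
# Schmieden-Laugwitz ring sketch: sequences of reals with elementwise
# operations, where two sequences are identified when they differ in
# only finitely many places.

def is_zero(u, horizon=10_000):
    """u is equivalent to 0 iff u(n) == 0 for all but finitely many n;
    heuristically: no nonzero entries in the tail of an inspected window
    (an assumption standing in for the cofinite condition)."""
    return all(u(n) == 0 for n in range(horizon // 2, horizon))

a = lambda n: n % 2        # 0, 1, 0, 1, ...
b = lambda n: 1 - n % 2    # 1, 0, 1, 0, ...
prod = lambda n: a(n) * b(n)

assert not is_zero(a) and not is_zero(b)
assert is_zero(prod)  # a * b = 0 although a != 0 and b != 0: zero divisors
```

Passing to a quotient by a nonprincipal ultrafilter, as in Robinson's construction, forces exactly one of *a*, *b* to become zero, which is how the ultrapower escapes this obstruction.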

### Pedagogical

H. Jerome Keisler, David Tall, and other educators maintain that the use of infinitesimals is more intuitive and more easily grasped by students than the "epsilon-delta" approach to analytic concepts.^{[10]} This approach can sometimes provide easier proofs of results than the corresponding epsilon-delta formulation of the proof. Much of the simplification comes from applying very easy rules of nonstandard arithmetic, as follows:

- infinitesimal × bounded = infinitesimal

- infinitesimal + infinitesimal = infinitesimal

together with the transfer principle mentioned below.
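These two rules already drive many computations. One way to see them in miniature, outside the hyperreals proper, is the ring of dual numbers *a* + *b*ε with ε² = 0, a deliberately simplified infinitesimal model (this sketch illustrates the arithmetic rules only; it is not Robinson's construction).

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0: a crude stand-in for a
    real number plus an infinitesimal part."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

eps = Dual(0.0, 1.0)

# infinitesimal * bounded = infinitesimal: the standard part stays 0.
x = eps * Dual(7.0)
assert x.a == 0.0 and x.b == 7.0

# infinitesimal + infinitesimal = infinitesimal.
y = eps + eps
assert y.a == 0.0

# The same arithmetic differentiates polynomials: for f(t) = t*t,
# f(3 + eps) has infinitesimal coefficient f'(3) = 6.
f = lambda t: t * t
assert f(Dual(3.0) + eps).b == 6.0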

Another pedagogical application of non-standard analysis is Edward Nelson's treatment of the theory of stochastic processes.^{[11]}

### Technical

Some recent work has been done in analysis using concepts from non-standard analysis, particularly in investigating limiting processes of statistics and mathematical physics. Sergio Albeverio et al.^{[12]} discuss some of these applications.

## Approaches to non-standard analysis

There are two very different approaches to non-standard analysis: the semantic or model-theoretic approach and the syntactic approach. Both these approaches apply to other areas of mathematics beyond analysis, including number theory, algebra and topology.

Robinson's original formulation of non-standard analysis falls into the category of the *semantic approach*. As developed by him in his papers, it is based on studying models (in particular saturated models) of a theory. Since Robinson's work first appeared, a simpler semantic approach (due to Elias Zakon) has been developed using purely set-theoretic objects called superstructures. In this approach *a model of a theory* is replaced by an object called a *superstructure* *V*(*S*) over a set *S*. Starting from a superstructure *V*(*S*) one constructs another object **V*(*S*) using the ultrapower construction, together with a mapping *V*(*S*) → **V*(*S*) that satisfies the transfer principle. The map * relates formal properties of *V*(*S*) and **V*(*S*). Moreover, it is possible to consider a simpler form of saturation called countable saturation. This simplified approach is also more suitable for use by mathematicians who are not specialists in model theory or logic.

The *syntactic approach* requires much less logic and model theory to understand and use. This approach was developed in the mid-1970s by the mathematician Edward Nelson. Nelson introduced an entirely axiomatic formulation of non-standard analysis that he called Internal Set Theory (IST).^{[13]} IST is an extension of Zermelo–Fraenkel set theory with choice (ZFC): alongside the basic binary membership relation ∈, it introduces a new unary predicate *standard*, applicable to elements of the mathematical universe, together with some axioms for reasoning with this new predicate.

Syntactic non-standard analysis requires a great deal of care in applying the principle of set formation (formally known as the axiom of comprehension), which mathematicians usually take for granted. As Nelson points out, a fallacy in reasoning in IST is that of *illegal set formation*. For instance, there is no set in IST whose elements are precisely the standard integers (here *standard* is understood in the sense of the new predicate). To avoid illegal set formation, one must only use predicates of ZFC to define subsets.^{[13]}

Another example of the syntactic approach is Vopěnka's Alternative Set Theory,^{[14]} which seeks set-theoretic axioms more compatible with non-standard analysis than the axioms of ZF.

## Robinson's book

Abraham Robinson's book *Non-standard analysis* was published in 1966. Some of the topics developed in the book were already present in his 1961 article by the same title (Robinson 1961). In addition to containing the first full treatment of non-standard analysis, the book contains a detailed historical section where Robinson challenges some of the received opinions on the history of mathematics based on the pre–non-standard analysis perception of infinitesimals as inconsistent entities. Thus, Robinson challenges the idea that Augustin-Louis Cauchy's "sum theorem" in Cours d'Analyse concerning the convergence of a series of continuous functions was incorrect, and proposes an infinitesimal-based interpretation of its hypothesis that results in a correct theorem.

## Invariant subspace problem

Abraham Robinson and Allen Bernstein used non-standard analysis to prove that every polynomially compact linear operator on a Hilbert space has an invariant subspace.^{[15]}

Given an operator *T* on Hilbert space *H*, consider the orbit of a point *v* in *H* under the iterates of *T*. Applying Gram–Schmidt one obtains an orthonormal basis (*e*_{i}) for *H*. Let (*H*_{i}) be the corresponding nested sequence of "coordinate" subspaces of *H*. The matrix *a*_{i,j} expressing *T* with respect to (*e*_{i}) is almost upper triangular, in the sense that the coefficients *a*_{i+1,i} are the only nonzero sub-diagonal coefficients. Bernstein and Robinson show that if *T* is polynomially compact, then there is a hyperfinite index *w* such that the matrix coefficient *a*_{w+1,w} is infinitesimal. Next, consider the subspace *H*_{w} of **H*. If *y* in *H*_{w} has finite norm, then *T*(*y*) is infinitely close to *H*_{w}.

Now let *T*_{w} be the operator *P*_{w} ∘ *T* ∘ *P*_{w} acting on *H*_{w}, where *P*_{w} is the orthogonal projection to *H*_{w}. Denote by *q* the polynomial such that *q*(*T*) is compact. The subspace *H*_{w} is internal of hyperfinite dimension. By transferring the upper triangularisation of operators on finite-dimensional complex vector spaces, there is an internal orthonormal Hilbert space basis (*e*_{k}) for *H*_{w}, where *k* runs from 1 to *w*, such that each of the corresponding *k*-dimensional subspaces *E*_{k} is *T*-invariant. Denote by Π_{k} the projection to the subspace *E*_{k}. For a nonzero vector *x* of finite norm in *H*, one can assume that *q*(*T*)(*x*) is nonzero, or |*q*(*T*)(*x*)| > 1 to fix ideas. Since *q*(*T*) is a compact operator, (*q*(*T*_{w}))(*x*) is infinitely close to *q*(*T*)(*x*), and therefore one has also |*q*(*T*_{w})(*x*)| > 1. Now let *j* be the greatest index such that ‖Π_{j}(*x*)‖ < 1/2. Then the space of all standard elements infinitely close to *E*_{j} is the desired invariant subspace.

Upon reading a preprint of the Bernstein–Robinson paper, Paul Halmos reinterpreted their proof using standard techniques.^{[16]} Both papers appeared back-to-back in the same issue of the *Pacific Journal of Mathematics*. Some of the ideas used in Halmos's proof reappeared many years later in Halmos's own work on quasi-triangular operators.
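In finite dimensions, the "almost upper triangular" matrix of the argument is just the Hessenberg form produced by Gram–Schmidt on the Krylov orbit *v*, *Tv*, *T*²*v*, …; the hyperfinite case transfers this picture. A small numerical sketch (illustrative code only, assuming the orbit is nondegenerate; this is not the Bernstein–Robinson proof itself):

```python
def matvec(T, x):
    """Multiply the matrix T (list of rows) by the vector x."""
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

def gram_schmidt_orbit(T, v):
    """Orthonormalize the orbit v, Tv, T^2 v, ... (assumed full rank)."""
    basis = []
    w = v
    for _ in range(len(v)):
        for e in basis:  # remove components along earlier basis vectors
            c = dot(w, e)
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = dot(w, w) ** 0.5
        basis.append([wi / norm for wi in w])
        w = matvec(T, basis[-1])  # next orbit vector to orthonormalize
    return basis

T = [[2.0, 1.0, 0.0, 0.0],
     [0.0, 2.0, 1.0, 0.0],
     [1.0, 0.0, 3.0, 1.0],
     [0.0, 0.0, 0.0, 4.0]]
e = gram_schmidt_orbit(T, [1.0, 1.0, 1.0, 1.0])

# a[i][j] = <e_i, T e_j> is almost upper triangular: only the first
# sub-diagonal entries a[j+1][j] can be nonzero below the diagonal.
a = [[dot(e[i], matvec(T, e[j])) for j in range(4)] for i in range(4)]
assert all(abs(a[i][j]) < 1e-8 for i in range(4) for j in range(4) if i > j + 1)
```

In the Bernstein–Robinson argument, the corresponding sub-diagonal entry at a hyperfinite index *w* is not merely small but infinitesimal, which is what makes the truncation *T*_{w} useful.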

## Other applications

Other results were obtained along the lines of reinterpreting or reproving previously known results. Of particular interest is Kamae's proof^{[17]} of the individual ergodic theorem and van den Dries and Wilkie's treatment^{[18]} of Gromov's theorem on groups of polynomial growth. Nonstandard analysis was used by Larry Manevitz and Shmuel Weinberger to prove a result in algebraic topology.^{[19]}

The real contributions of non-standard analysis lie, however, in the concepts and theorems that utilize the new extended language of non-standard set theory. Among the new applications in mathematics are new approaches to probability theory,^{[11]} hydrodynamics,^{[20]} measure theory,^{[21]} and nonsmooth and harmonic analysis.^{[22]}

There are also applications of non-standard analysis to the theory of stochastic processes, particularly constructions of Brownian motion as random walks. Albeverio et al.^{[12]} provide an excellent introduction to this area of research.

### Applications to calculus

As an application to mathematical education, H. Jerome Keisler wrote *Elementary Calculus: An Infinitesimal Approach*.^{[10]} Covering non-standard calculus, it develops differential and integral calculus using the hyperreal numbers, which include infinitesimal elements. These applications of non-standard analysis depend on the existence of the *standard part* of a finite hyperreal r. The standard part of r, denoted st(*r*), is a standard real number infinitely close to r. One of the visualization devices Keisler uses is that of an imaginary infinite-magnification microscope to distinguish points infinitely close together. Keisler's book is now out of print, but is freely available from his website; see references below.

## Critique

Despite the elegance and appeal of some aspects of non-standard analysis, criticisms have been voiced, as well, such as those by E. Bishop, A. Connes, and P. Halmos, as documented at criticism of non-standard analysis.

## Logical framework

Given any set *S*, the *superstructure* over *S* is the set *V*(*S*) defined by the conditions

*V*_{0}(*S*) = *S*,

*V*_{n+1}(*S*) = *V*_{n}(*S*) ∪ ℘(*V*_{n}(*S*)),

*V*(*S*) = ⋃_{n ∈ **N**} *V*_{n}(*S*).

Thus the superstructure over *S* is obtained by starting from *S* and iterating the operation of adjoining the power set and taking the union of the resulting sequence. The superstructure over the real numbers includes a wealth of mathematical structures: for instance, it contains isomorphic copies of all separable metric spaces and metrizable topological vector spaces. Virtually all of mathematics that interests an analyst goes on within *V*(**R**).
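For a finite base set the first levels of this iteration can be computed directly. A sketch (with *S* = {0, 1}, levels represented as Python frozensets, and urelements kept as bare integers):

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def superstructure_levels(S, depth):
    """V_0 = S, V_{n+1} = V_n union P(V_n), computed up to `depth`."""
    levels = [frozenset(S)]
    for _ in range(depth):
        vn = levels[-1]
        levels.append(vn | frozenset(powerset(vn)))
    return levels

v = superstructure_levels({0, 1}, 2)
# |V_0| = 2; |V_1| = 2 + 4 = 6; V_2 adds the 64 subsets of V_1, of which
# the 4 subsets of V_0 are already present, so |V_2| = 6 + 64 - 4 = 66.
assert [len(x) for x in v] == [2, 6, 66]
```

The doubly exponential growth visible here is why the full superstructure over **R** already contains copies of essentially every structure an analyst needs.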

The working view of nonstandard analysis is a set ***R** and a mapping * : *V*(**R**) → *V*(***R**) which satisfies some additional properties. To formulate these principles we first state some definitions.

A formula has *bounded quantification* if and only if the only quantifiers which occur in the formula have range restricted over sets, that is, are all of the form:

∀*x* ∈ *A*, Φ(*x*, *α*_{1}, ..., *α*_{n})

∃*x* ∈ *A*, Φ(*x*, *α*_{1}, ..., *α*_{n})

For example, the formula

∀*x* ∈ *A*, ∃*y* ∈ 2^{*B*}, *x* ∈ *y*

has bounded quantification: the universally quantified variable *x* ranges over *A*, and the existentially quantified variable *y* ranges over the powerset of *B*. On the other hand,

∀*x* ∈ *A*, ∃*y*, *x* ∈ *y*

does not have bounded quantification because the quantification of *y* is unrestricted.
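Over finite sets, a bounded formula is directly machine-checkable, precisely because each quantifier only ranges over an explicitly listed set. A sketch evaluating the example formula ∀*x* ∈ *A*, ∃*y* ∈ 2^{*B*}, *x* ∈ *y* (which holds exactly when *A* ⊆ *B*):

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as plain sets."""
    xs = list(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def bounded_example(A, B):
    """Evaluate:  forall x in A, exists y in P(B): x in y.
    Every quantifier ranges over an explicit finite set, which is what
    makes the formula 'bounded' and mechanically decidable here."""
    return all(any(x in y for y in powerset(B)) for x in A)

assert bounded_example({1, 2}, {1, 2, 3})      # A is a subset of B: holds
assert not bounded_example({1, 4}, {1, 2, 3})  # 4 lies in no subset of B
```

The unbounded variant has no such evaluator: with *y* unrestricted, there is no finite collection to enumerate.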

## Internal sets

A set *x* is *internal* if and only if *x* is an element of **A* for some element *A* of *V*(**R**). **A* itself is internal if *A* belongs to *V*(**R**).

We now formulate the basic logical framework of nonstandard analysis:

- *Extension principle*: The mapping * is the identity on **R**.

- *Transfer principle*: For any formula *P*(*x*_{1}, ..., *x*_{n}) with bounded quantification and with free variables *x*_{1}, ..., *x*_{n}, and for any elements *A*_{1}, ..., *A*_{n} of *V*(**R**), the following equivalence holds:

  *P*(*A*_{1}, ..., *A*_{n}) ⟺ **P*(**A*_{1}, ..., **A*_{n})

- *Countable saturation*: If {*A*_{k}}_{k ∈ **N**} is a decreasing sequence of nonempty internal sets, with *k* ranging over the natural numbers, then

  ⋂_{k ∈ **N**} *A*_{k} ≠ ∅

One can show using ultraproducts that such a map * exists. Elements of *V*(**R**) are called *standard*. Elements of ***R** are called hyperreal numbers.

## First consequences

The symbol ***N** denotes the nonstandard natural numbers. By the extension principle, this is a superset of **N**. The set ***N** − **N** is nonempty. To see this, apply countable saturation to the sequence of internal sets

*A*_{n} = {*k* ∈ ***N** : *k* ≥ *n*}

The sequence {*A*_{n}}_{n ∈ **N**} has a nonempty intersection, and any element of this intersection is a nonstandard natural number, proving the result.

We begin with some definitions: hyperreals *r*, *s* are *infinitely close*, written *r* ≅ *s*, if and only if |*r* − *s*| < θ for every standard positive real θ.

A hyperreal r is *infinitesimal* if and only if it is infinitely close to 0. For example, if n is a hyperinteger, i.e. an element of ***N** − **N**, then 1/*n* is an infinitesimal. A hyperreal r is *limited* (or *finite*) if and only if its absolute value is dominated by (less than) a standard integer. The limited hyperreals form a subring of ***R** containing the reals. In this ring, the infinitesimal hyperreals are an ideal.

The set of limited hyperreals or the set of infinitesimal hyperreals are *external* subsets of *V*(***R**); what this means in practice is that bounded quantification, where the bound is an internal set, never ranges over these sets.

**Example**: The plane (*x*, *y*) with x and y ranging over ***R** is internal, and is a model of plane Euclidean geometry. The plane with x and y restricted to limited values (analogous to the Dehn plane) is external, and in this limited plane the parallel postulate is violated. For example, any line passing through the point (0, 1) on the y-axis and having infinitesimal slope is parallel to the x-axis.

**Theorem.** For any limited hyperreal r there is a unique standard real denoted st(*r*) infinitely close to r. The mapping st is a ring homomorphism from the ring of limited hyperreals to **R**.

The mapping st is also external.

One way of thinking of the standard part of a hyperreal, is in terms of Dedekind cuts; any limited hyperreal s defines a cut by considering the pair of sets (*L*, *U*) where L is the set of standard rationals a less than s and U is the set of standard rationals b greater than s. The real number corresponding to (*L*, *U*) can be seen to satisfy the condition of being the standard part of s.
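The cut description suggests a computation. In the sequence model, a hedged sketch: decide "*a* < *s*" by inspecting a tail of a representative (a finite heuristic standing in for membership in the ultrafilter), then bisect with rationals to approximate st(*s*). All helper names here are illustrative.

```python
from fractions import Fraction

def eventually_below(a, s, start=1_000, horizon=2_000):
    """Heuristic for 'a < s in the hyperreals': a < s(n) on an inspected
    tail of the representative (an assumption, not an ultrafilter)."""
    return all(a < s(n) for n in range(start, horizon))

def standard_part(s, lo=Fraction(-100), hi=Fraction(100), steps=40):
    """Bisect the Dedekind cut (L, U) determined by s: rationals in L
    are eventually below s, rationals in U are not."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if eventually_below(mid, s):
            lo = mid
        else:
            hi = mid
    return float(lo)

# s = 3 + 1/n is a limited hyperreal infinitely close to 3; the finite
# tail window limits the achievable accuracy to about 1/2000.
s = lambda n: 3 + Fraction(1, n)
assert abs(standard_part(s) - 3.0) < 1e-3
```

With a genuine ultrafilter in place of the finite tail test, the bisection would converge to the exact standard part.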

One intuitive characterization of continuity is as follows:

**Theorem.** A real-valued function *f* on the interval [*a*, *b*] is continuous if and only if for every hyperreal *x* in the interval *[*a*, *b*], we have **f*(*x*) ≅ **f*(st(*x*)).

(see microcontinuity for more details). Similarly,

**Theorem.** A real-valued function *f* is differentiable at the real value *x* if and only if for every nonzero infinitesimal hyperreal number *h*, the value

st((**f*(*x* + *h*) − **f*(*x*)) / *h*)

exists and is independent of *h*. In this case *f*′(*x*) is this common value, a real number, and is the derivative of *f* at *x*.
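In the sequence model this theorem becomes a computation: take the infinitesimal *h* represented by (1/*n*), form the difference-quotient sequence, and read off its standard part as the limit of a convergent representative. A hedged numeric sketch (illustrative names; evaluating at one large index stands in for taking the standard part):

```python
def difference_quotient(f, x, h):
    """Sequence representative of (f(x + h) - f(x)) / h, where the
    infinitesimal h is given by a sequence representative."""
    return lambda n: (f(x + h(n)) - f(x)) / h(n)

def approx_standard_part(s, n=10**6):
    """For a representative that converges, st is its limit; we
    approximate it by evaluating at one large index (an assumption)."""
    return s(n)

f = lambda t: t * t
h = lambda n: 1.0 / n            # an infinitesimal: (1, 1/2, 1/3, ...)
q = difference_quotient(f, 3.0, h)

# q(n) = 6 + 1/n, so st(q) = 6 = f'(3).
assert abs(approx_standard_part(q) - 6.0) < 1e-3
```

Independence of *h* corresponds to the fact that every null sequence chosen for *h* yields a quotient sequence with the same limit.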

## κ-saturation

It is possible to "improve" the saturation by allowing collections of higher cardinality to be intersected. A model is *κ-saturated* if whenever {*A*_{i}}_{i ∈ I} is a collection of internal sets with the finite intersection property and |*I*| < κ, then ⋂_{i ∈ I} *A*_{i} ≠ ∅.

This is useful, for instance, in a topological space X, where we may want |2^{X}|-saturation to ensure the intersection of a standard neighborhood base is nonempty.^{[23]}

For any cardinal κ, a κ-saturated extension can be constructed.^{[24]}



## References

1. Francine Diener, Marc Diener (eds.): *Nonstandard Analysis in Practice*. Springer, 1995.
2. Vladimir Grigorevich Kanovei, Michael Reeken: *Nonstandard Analysis, Axiomatically*. Springer, 2004.
3. Peter A. Loeb, Manfred P. H. Wolff (eds.): *Nonstandard Analysis for the Working Mathematician*. Springer, 2000.
4. Abraham Robinson: *Non-standard Analysis*. Princeton University Press, 1974.
5. Joseph W. Dauben: *Abraham Robinson and Nonstandard Analysis: History, Philosophy, and Foundations of Mathematics*. www.mcps.umn.edu.
6. Robinson, A.: *Non-standard Analysis*. North-Holland Publishing Co., Amsterdam, 1966.
7. Heyting, A. (1973): Address to Professor A. Robinson, at the occasion of the Brouwer memorial lecture given by Prof. A. Robinson on 26 April 1973. Nieuw Arch. Wisk. (3) 21, pp. 134–137.
8. Robinson, Abraham (1996): *Non-standard Analysis* (revised ed.). Princeton University Press. ISBN 0-691-04490-2.
9. Curt Schmieden, Detlef Laugwitz: "Eine Erweiterung der Infinitesimalrechnung", Mathematische Zeitschrift 69 (1958), pp. 1–39.
10. H. Jerome Keisler: *Elementary Calculus: An Infinitesimal Approach*. First edition 1976; second edition 1986; full text of the second edition available online.
11. Edward Nelson: *Radically Elementary Probability Theory*. Princeton University Press, 1987.
12. Sergio Albeverio, Jens Erik Fenstad, Raphael Høegh-Krohn, Tom Lindstrøm: *Nonstandard Methods in Stochastic Analysis and Mathematical Physics*. Academic Press, 1986.
13. Edward Nelson: "Internal Set Theory: A New Approach to Nonstandard Analysis", Bulletin of the American Mathematical Society, Vol. 83, No. 6, November 1977. A chapter on Internal Set Theory is available at http://www.math.princeton.edu/~nelson/books/1.pdf
14. Vopěnka, P.: *Mathematics in the Alternative Set Theory*. Teubner, Leipzig, 1979.
15. Allen Bernstein, Abraham Robinson: "Solution of an invariant subspace problem of K. T. Smith and P. R. Halmos", Pacific Journal of Mathematics 16:3 (1966), pp. 421–431.
16. P. Halmos: "Invariant Subspaces for Polynomially Compact Operators", Pacific Journal of Mathematics 16:3 (1966), pp. 433–437.
17. T. Kamae: "A simple proof of the ergodic theorem using nonstandard analysis", Israel Journal of Mathematics, Vol. 42, No. 4, 1982.
18. L. van den Dries, A. J. Wilkie: "Gromov's Theorem on Groups of Polynomial Growth and Elementary Logic", Journal of Algebra, Vol. 89, 1984.
19. Manevitz, Larry M.; Weinberger, Shmuel: "Discrete circle actions: a note using non-standard analysis", Israel Journal of Mathematics 94 (1996), pp. 147–155.
20. Capiński, M.; Cutland, N. J.: *Nonstandard Methods for Stochastic Fluid Mechanics*. World Scientific, Singapore, 1995.
21. Cutland, N.: *Loeb Measures in Practice: Recent Advances*. Springer, Berlin, 2001.
22. Gordon, E. I.; Kutateladze, S. S.; Kusraev, A. G.: *Infinitesimal Analysis*. Kluwer Academic Publishers, Dordrecht, 2002.
23. Salbany, S.; Todorov, T.: *Nonstandard Analysis in Point-Set Topology*. Erwin Schrödinger Institute for Mathematical Physics.
24. Chang, C. C.; Keisler, H. J.: *Model Theory*, third edition. Studies in Logic and the Foundations of Mathematics, 73. North-Holland, Amsterdam, 1990. ISBN 0-444-88054-2.

## Further remarks

Let me add a few points.

First, with a historical perspective, all the early fundamental theorems of calculus were first proved via methods using infinitesimals, rather than by methods using epsilon-delta arguments, since those methods did not appear until the nineteenth century. Calculus proceeded for centuries on the infinitesimal foundation, and the early arguments, whatever their level of rigor, are closer to their modern analogues in nonstandard analysis than to their modern analogues in epsilon-delta methods. In this sense, one could reasonably answer your question by pointing to any of these early fundamental theorems.

To be sure, the epsilon-delta methods arose in part because mathematicians became unsure of the foundational validity of infinitesimals. But since nonstandard analysis exactly provides the missing legitimacy, the original motivation for adopting epsilon-delta arguments appears to fall away.

Second, while it is true that almost any application of nonstandard analysis in analysis can be carried out using standard methods, the converse is also true. That is, epsilon-delta arguments can often also be translated into nonstandard analysis. Furthermore, someone raised with nonstandard analysis in their mathematical childhood would likely prefer things this way. In this sense, the preference between the two methods may be a cultural matter of upbringing.

For example, H. Jerome Keisler wrote an introductory calculus textbook called *Elementary Calculus: An Infinitesimal Approach*, and this text was used for many years as the main calculus textbook at the University of Wisconsin, Madison. I encourage you to take a look at this interesting text, which looks at first like an ordinary calculus textbook, except that in the inside cover, next to the various formulas for derivatives and integrals, the various rules for manipulating infinitesimals are also listed; these rules are used throughout the text. Keisler writes:

This is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. Robinson's modern infinitesimal approach puts the intuitive ideas of the founders of the calculus on a mathematically sound footing, and is easier for beginners to understand than the more common approach via limits.

Finally, third, some may take your question to presume that a central purpose of nonstandard analysis is to provide applications in analysis. But this is not correct. The concept of nonstandard models of arithmetic, of analysis and of set theory arose in mathematical logic and has grown into an entire field, with hundreds of articles and many books, with its own problems and questions and methods, quite divorced from any application of the methods in other parts of mathematics. For example, the subject of Models of Arithmetic is focused on understanding the nonstandard models of the first order Peano Axioms, and it makes little sense to analyze these models using only standard methods.

To mention just a few fascinating classical theorems: every countable nonstandard model of arithmetic is isomorphic to a proper initial segment of itself (H. Friedman). Under the Continuum Hypothesis, every Scott set (a family of sets of natural numbers closed under Boolean operations and Turing reducibility and satisfying König's lemma) is the collection of definable sets of natural numbers of some nonstandard model of arithmetic (D. Scott and others). There is no nonstandard model of arithmetic for which either addition or multiplication is computable (S. Tennenbaum). Nonstandard models of arithmetic were also used to prove several fascinating independence results over PA, such as the results on Goodstein sequences, as well as the Paris–Harrington theorem on the independence over PA of a strong Ramsey theorem. Another interesting result shows that various forms of the pigeonhole principle are not equivalent over weak base theories; for example, the weak pigeonhole principle that there is no bijection of *n* to 2*n* is not provable over the base theory from the weaker principle that there is no bijection of *n* with *n*^{2}. These proofs all make fundamental use of nonstandard methods, which it would seem difficult or impossible to omit or to translate to standard methods.
