Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution.
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.
If a closed-form expression for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
History
Differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum,[1] Isaac Newton listed three kinds of differential equations:
He solves these examples and others using infinite series and discusses the non-uniqueness of solutions.
Jacob Bernoulli proposed the Bernoulli differential equation in 1695.[2] This is an ordinary differential equation of the form

$y' + P(x)y = Q(x)y^{n},$

for which Leibniz obtained solutions the following year by simplifying it.[3]
Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.[4][5][6][7] In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.[8]
The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics.
In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat),[9] in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics.
Example
For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time.
In some cases, this differential equation (called an equation of motion) may be solved explicitly.
An example of modelling a real world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the acceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity.
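As an illustrative sketch (not from the article), the falling-ball model with linear drag, $m\,dv/dt = mg - kv$ with $v(0) = 0$, has the closed-form solution $v(t) = (mg/k)(1 - e^{-kt/m})$. The mass and drag values below are made up for the example; a crude numerical integration confirms the formula:

```python
import math

def velocity(t, m=0.5, k=0.1, g=9.81):
    """Closed-form solution of m dv/dt = m g - k v with v(0) = 0."""
    return (m * g / k) * (1.0 - math.exp(-k * t / m))

def velocity_euler(t, steps=100000, m=0.5, k=0.1, g=9.81):
    """Numerical cross-check: integrate dv/dt = g - (k/m) v with Euler steps."""
    v, h = 0.0, t / steps
    for _ in range(steps):
        v += h * (g - (k / m) * v)
    return v

# The two agree, and both approach the terminal velocity m g / k.
print(abs(velocity(3.0) - velocity_euler(3.0)) < 1e-3)  # True
```

The terminal velocity $mg/k$ appears naturally: it is the value at which the drag acceleration exactly cancels gravity, so the derivative vanishes.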
Types
Differential equations can be divided into several types. Apart from describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is: Ordinary/Partial, Linear/Non-linear, and Homogeneous/Inhomogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts.
Ordinary differential equations
An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term 'ordinary' is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.
Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. Their theory is well developed, and, in many cases, one may express their solutions in terms of integrals.
Most ODEs that are encountered in physics are linear, and, therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function).
As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer.
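As a minimal sketch of such a numerical method (the function and problem below are illustrative, not from the article), the forward Euler method advances an approximate solution of $y' = f(t, y)$ one small step at a time:

```python
import math

def euler(f, y0, t0, t1, n):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, using n forward Euler steps."""
    t, y, h = t0, y0, (t1 - t0) / n
    for _ in range(n):
        y += h * f(t, y)   # follow the tangent line for one step of width h
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e^t; the Euler estimate converges to it.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
print(abs(approx - math.e) < 1e-4)  # True
```

Euler's method is only first-order accurate; practical solvers use higher-order schemes (Runge–Kutta, multistep methods), but the step-by-step idea is the same.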
Partial differential equations
A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model.
PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.
Non-linear differential equations
Non-linear differential equations are those that are not linear in the unknown function and its derivatives; products or powers of the unknown function and its derivatives may appear. There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. Nonlinear differential equations can exhibit very complicated behavior over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems, and their resolution in special cases is considered a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution.[10]
Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations (see below).
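The small-amplitude claim can be checked numerically. The sketch below (illustrative; the integration scheme and parameter values are chosen for the example) integrates both the nonlinear pendulum $\theta'' = -(g/L)\sin\theta$ and its linearization $\theta'' = -(g/L)\theta$ from the same initial angle:

```python
import math

def simulate(theta0, n=200000, t_end=10.0, g=9.81, L=1.0, linear=False):
    """Integrate theta'' = -(g/L) * (theta if linear else sin(theta))
    from rest at theta0, using the symplectic Euler-Cromer scheme."""
    h = t_end / n
    theta, omega = theta0, 0.0
    for _ in range(n):
        acc = -(g / L) * (theta if linear else math.sin(theta))
        omega += h * acc
        theta += h * omega
    return theta

# At a small amplitude (0.05 rad) the two models nearly coincide;
# at a large amplitude (1.5 rad) they drift visibly apart.
small = abs(simulate(0.05) - simulate(0.05, linear=True))
large = abs(simulate(1.5) - simulate(1.5, linear=True))
print(small < 0.01 < large)  # True
```

The divergence at large amplitude comes from the pendulum's amplitude-dependent period, which the harmonic approximation cannot capture.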
Equation order
Differential equations are described by their order, determined by the term with the highest derivatives. An equation containing only first derivatives is a first-order differential equation, an equation containing the second derivative is a second-order differential equation, and so on.[11][12] Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin film equation, which is a fourth order partial differential equation.
Examples
In the first group of examples, u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous and inhomogeneous ones.
- Inhomogeneous first-order linear constant coefficient ordinary differential equation: $\frac{du}{dx} = cu + x^{2}.$
- Homogeneous second-order linear ordinary differential equation: $\frac{d^{2}u}{dx^{2}} - x\frac{du}{dx} + u = 0.$
- Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: $\frac{d^{2}u}{dx^{2}} + \omega^{2}u = 0.$
- Inhomogeneous first-order nonlinear ordinary differential equation: $\frac{du}{dx} = u^{2} + 4.$
- Second-order nonlinear (due to the sine function) ordinary differential equation describing the motion of a pendulum of length L: $L\frac{d^{2}u}{dx^{2}} + g\sin u = 0.$
In the next group of examples, the unknown function u depends on two variables x and t or x and y.
- Homogeneous first-order linear partial differential equation: $\frac{\partial u}{\partial t} + t\frac{\partial u}{\partial x} = 0.$
- Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: $\frac{\partial^{2}u}{\partial x^{2}} + \frac{\partial^{2}u}{\partial y^{2}} = 0.$
- Homogeneous third-order non-linear partial differential equation, the Korteweg–de Vries equation: $\frac{\partial u}{\partial t} = 6u\frac{\partial u}{\partial x} - \frac{\partial^{3}u}{\partial x^{3}}.$
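A candidate solution can be verified directly: for the harmonic oscillator above, $u(x) = \sin(\omega x)$ should make $u'' + \omega^{2}u$ vanish everywhere. The sketch below (illustrative, using a finite-difference estimate of the second derivative) checks this numerically:

```python
import math

def second_derivative(f, x, h=1e-5):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

omega = 2.0
u = lambda x: math.sin(omega * x)   # candidate solution of u'' + omega^2 u = 0

# The residual u'' + omega^2 u should be (numerically) zero at every sample point.
residual = max(abs(second_derivative(u, x) + omega ** 2 * u(x))
               for x in [0.1 * k for k in range(1, 50)])
print(residual < 1e-4)  # True
```

The same substitute-and-check procedure works for any of the listed equations, which is often the easiest way to confirm a proposed solution.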
Existence of solutions
Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest.
For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point $(a, b)$ in the xy-plane, define some rectangular region $Z$, such that $Z = [l, m] \times [n, p]$ and $(a, b)$ is in the interior of $Z$. If we are given a differential equation $\frac{dy}{dx} = g(x, y)$ and the condition that $y = b$ when $x = a$, then there is locally a solution to this problem if $g(x, y)$ and $\frac{\partial g}{\partial x}$ are both continuous on $Z$. This solution exists on some interval with its center at $a$. The solution may not be unique. (See Ordinary differential equation for other results.)
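Non-uniqueness is easy to exhibit. A textbook example (chosen here for illustration) is $y' = y^{2/3}$ with $y(0) = 0$: the right-hand side is continuous, so Peano guarantees existence, but it is not Lipschitz at $y = 0$, and both $y = 0$ and $y = (x/3)^{3}$ solve the problem. The sketch below verifies both:

```python
def f(x, y):
    """Right-hand side of y' = y**(2/3): continuous, so Peano gives existence,
    but not Lipschitz at y = 0, so uniqueness can fail."""
    return abs(y) ** (2.0 / 3.0)

def residual(y, dy, x):
    """How far a candidate solution is from satisfying y' = f(x, y) at x."""
    return abs(dy(x) - f(x, y(x)))

# Two different solutions through the same initial point (0, 0):
y1, dy1 = (lambda x: 0.0), (lambda x: 0.0)
y2, dy2 = (lambda x: (x / 3.0) ** 3), (lambda x: (x / 3.0) ** 2)

checks = [residual(y1, dy1, x) + residual(y2, dy2, x) for x in (0.5, 1.0, 2.0)]
print(max(checks) < 1e-12)  # True: both satisfy the same initial value problem
```

This is exactly the situation the theorem's caveat describes: existence without uniqueness.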
However, this only helps us with first order initial value problems. Suppose we had a linear initial value problem of the nth order:

$f_{n}(x)\frac{d^{n}y}{dx^{n}} + \cdots + f_{1}(x)\frac{dy}{dx} + f_{0}(x)y = g(x)$

such that

$y(x_{0}) = y_{0},\quad y'(x_{0}) = y'_{0},\quad \ldots,\quad y^{(n-1)}(x_{0}) = y^{(n-1)}_{0}.$

For any nonzero $f_{n}(x)$, if $\{f_{0}, f_{1}, \ldots, f_{n}\}$ and $g$ are continuous on some interval containing $x_{0}$, then $y$ exists and is unique.[13]
Related concepts
- A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times.
- A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations.
- A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form.
Connection to difference equations
The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation.
Applications
The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of methods for approximating solutions. Differential equations play an important role in modelling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. The differential equations that arise in real-life problems are not necessarily directly solvable, i.e. they do not have closed-form solutions; instead, solutions can be approximated using numerical methods.
Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation.
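The heat equation mentioned above is a good concrete case. The sketch below (illustrative; the grid sizes are arbitrary) solves the one-dimensional heat equation $u_t = u_{xx}$ on $[0, 1]$ with fixed zero temperature at both ends by explicit finite differences, and compares against the known exact decay of a sine profile:

```python
import math

def heat_solve(nx=50, nt=2000, t_end=0.1):
    """Explicit finite-difference solution of u_t = u_xx on [0, 1]
    with u = 0 at both ends and initial profile u(x, 0) = sin(pi x)."""
    dx, dt = 1.0 / nx, t_end / nt
    r = dt / dx ** 2          # must stay <= 0.5 for this scheme to be stable
    assert r <= 0.5
    u = [math.sin(math.pi * i * dx) for i in range(nx + 1)]
    for _ in range(nt):
        u = ([0.0] +
             [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1]) for i in range(1, nx)] +
             [0.0])
    return u

# Exact solution: u(x, t) = sin(pi x) * exp(-pi^2 t); check the midpoint.
exact_mid = math.exp(-math.pi ** 2 * 0.1)
print(abs(heat_solve()[25] - exact_mid) < 1e-2)  # True
```

The same replace-derivatives-by-differences idea underlies most computational treatments of the wave equation, the diffusion equation, and their relatives.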
Physics
- Euler–Lagrange equation in classical mechanics
- Hamilton's equations in classical mechanics
- Radioactive decay in nuclear physics
- Newton's law of cooling in thermodynamics
- The wave equation
- The heat equation in thermodynamics
- Laplace's equation, which defines harmonic functions
- The geodesic equation
- The Navier–Stokes equations in fluid dynamics
- The Diffusion equation in stochastic processes
- The Convection–diffusion equation in fluid dynamics
- The Cauchy–Riemann equations in complex analysis
- The Poisson–Boltzmann equation in molecular dynamics
- The shallow water equations
- The Lorenz equations whose solutions exhibit chaotic flow.
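Radioactive decay, from the list above, is perhaps the simplest of these: $N' = -\lambda N$ has the solution $N(t) = N_0 e^{-\lambda t}$, with half-life $\ln 2 / \lambda$. A minimal sketch (using the carbon-14 half-life purely as an illustrative value):

```python
import math

halflife = 5730.0                 # carbon-14 half-life in years (illustrative)
lam = math.log(2) / halflife      # decay constant lambda from N' = -lam * N

def remaining(n0, t):
    """Amount left after time t: the solution N(t) = N0 * exp(-lam * t)."""
    return n0 * math.exp(-lam * t)

# After one half-life, exactly half the sample remains.
print(abs(remaining(1.0, halflife) - 0.5) < 1e-12)  # True
```

Newton's law of cooling has the same mathematical form, with temperature difference in place of particle count: one equation, two physical phenomena.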
Classical mechanics
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
Electrodynamics
Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862.
General relativity
The Einstein field equations (EFE; also known as 'Einstein's equations') are a set of ten partial differential equations in Albert Einstein's general theory of relativity which describe the fundamental interaction of gravitation as a result of spacetime being curved by matter and energy.[14] First published by Einstein in 1915[15] as a tensor equation, the EFE equate local spacetime curvature (expressed by the Einstein tensor) with the local energy and momentum within that spacetime (expressed by the stress–energy tensor).[16]
Quantum mechanics
In quantum mechanics, the analogue of Newton's law is Schrödinger's equation (a partial differential equation) for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a 'state function').[17]
Biology
- Verhulst equation – biological population growth
- von Bertalanffy model – biological individual growth
- Replicator dynamics – found in theoretical biology
- Hodgkin–Huxley model – neural action potentials
Predator–prey equations
The Lotka–Volterra equations, also known as the predator–prey equations, are a pair of first-order, non-linear, differential equations frequently used to describe the population dynamics of two species that interact, one as a predator and the other as prey.
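The Lotka–Volterra system $x' = ax - bxy$, $y' = -cy + dxy$ (prey $x$, predator $y$) rarely has a useful closed form, so it is usually simulated. The sketch below uses small Euler steps; the parameter values are made up for illustration, not taken from any study:

```python
def lotka_volterra(x0, y0, steps=100000, t_end=10.0,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """Integrate x' = a x - b x y, y' = -c y + d x y with small Euler steps.
    Parameter values here are illustrative only."""
    h = t_end / steps
    x, y = x0, y0
    for _ in range(steps):
        x, y = (x + h * (a * x - b * x * y),
                y + h * (-c * y + d * x * y))
    return x, y

x, y = lotka_volterra(10.0, 5.0)
print(x > 0 and y > 0)  # True: the populations oscillate but remain positive
```

The characteristic behavior, cyclic rises and falls with the predator peak lagging the prey peak, emerges directly from the coupling terms $\pm xy$.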
Chemistry
The rate law or rate equation for a chemical reaction is a differential equation that links the reaction rate with concentrations or pressures of reactants and constant parameters (normally rate coefficients and partial reaction orders).[18] To determine the rate equation for a particular system one combines the reaction rate with a mass balance for the system.[19] In addition, a range of differential equations are present in the study of thermodynamics and quantum mechanics.
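As one concrete case (chosen for illustration, not from the article), a second-order rate law $d[A]/dt = -k[A]^{2}$ integrates to $1/[A] = 1/[A]_0 + kt$. The sketch below checks the integrated form against direct numerical integration, with made-up values for $k$ and $[A]_0$:

```python
def conc_second_order(a0, k, t):
    """Integrated second-order rate law for d[A]/dt = -k [A]^2:
    1/[A] = 1/[A]0 + k t, solved for [A]."""
    return 1.0 / (1.0 / a0 + k * t)

def conc_numeric(a0, k, t, steps=100000):
    """Euler integration of the same rate equation, as a cross-check."""
    a, h = a0, t / steps
    for _ in range(steps):
        a -= h * k * a * a
    return a

print(abs(conc_second_order(1.0, 0.5, 4.0) - conc_numeric(1.0, 0.5, 4.0)) < 1e-4)
```

Plotting $1/[A]$ against $t$ and checking for a straight line is the standard experimental test that a reaction really is second order.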
Economics
- The key equation of the Solow–Swan model is $\frac{dk(t)}{dt} = s[k(t)]^{\alpha} - \delta k(t).$
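The Solow–Swan capital equation $k' = sk^{\alpha} - \delta k$ has a steady state where savings exactly offset depreciation, $k^{*} = (s/\delta)^{1/(1-\alpha)}$. The sketch below (with made-up parameter values) integrates the equation and confirms convergence to that steady state:

```python
def solow_path(k0, s=0.3, alpha=0.5, delta=0.1, t_end=200.0, steps=200000):
    """Euler integration of the Solow-Swan capital accumulation equation
    k'(t) = s * k^alpha - delta * k. Parameter values are illustrative."""
    k, h = k0, t_end / steps
    for _ in range(steps):
        k += h * (s * k ** alpha - delta * k)
    return k

# Steady state: k* = (s / delta)^(1 / (1 - alpha)) = 3^2 = 9 for these values.
k_star = (0.3 / 0.1) ** (1.0 / (1.0 - 0.5))
print(abs(solow_path(1.0) - k_star) < 1e-3)  # True
```

Whatever the initial capital stock, the path approaches $k^{*}$, which is the model's central qualitative prediction.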
See also
- Picard–Lindelöf theorem on existence and uniqueness of solutions
- Recurrence relation, also known as 'difference equation'
References
- ^ Newton, Isaac (c. 1671). Methodus Fluxionum et Serierum Infinitarum (The Method of Fluxions and Infinite Series); published 1736 [Opuscula, 1744, Vol. I, p. 66].
- ^ Bernoulli, Jacob (1695), 'Explicationes, Annotationes & Additiones ad ea, quae in Actis sup. de Curva Elastica, Isochrona Paracentrica, & Velaria, hinc inde memorata, & paratim controversa legundur; ubi de Linea mediarum directionum, alliisque novis', Acta Eruditorum.
- ^ Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993). Solving Ordinary Differential Equations I: Nonstiff Problems. Berlin, New York: Springer-Verlag. ISBN 978-3-540-56670-0.
- ^ Cannon, John T.; Dostrovsky, Sigalia (1981). The Evolution of Dynamics, Vibration Theory from 1687 to 1742. Studies in the History of Mathematics and Physical Sciences, vol. 6. New York: Springer-Verlag. ix + 184 pp. ISBN 0-387-90626-6. See also: Gray, J. W. (July 1983). 'Book Reviews'. Bulletin (New Series) of the American Mathematical Society. 9 (1). (Retrieved 13 Nov 2012.)
- ^ Wheeler, Gerard F.; Crummett, William P. (1987). 'The Vibrating String Controversy'. Am. J. Phys. 55 (1): 33–37. Bibcode:1987AmJPh..55...33W. doi:10.1119/1.15311.
- ^ For a special collection of the 9 groundbreaking papers by the three authors, see First Appearance of the wave equation: D'Alembert, Leonhard Euler, Daniel Bernoulli - the controversy about vibrating strings. Herman H. J. Lynge and Son. (Retrieved 13 Nov 2012.)
- ^ For Lagrange's contributions to the acoustic wave equation, consult Pierce, Allan D. Acoustics: An Introduction to Its Physical Principles and Applications. Acoustical Society of America, 1989, p. 18. (Retrieved 9 Dec 2012.)
- ^ Speiser, David. Discovering the Principles of Mechanics 1600–1800. Basel: Birkhäuser, 2008, p. 191.
- ^ Fourier, Joseph (1822). Théorie analytique de la chaleur (in French). Paris: Firmin Didot Père et Fils. OCLC 2688081.
- ^Boyce, William E.; DiPrima, Richard C. (1967). Elementary Differential Equations and Boundary Value Problems (4th ed.). John Wiley & Sons. p. 3.
- ^Weisstein, Eric W. 'Ordinary Differential Equation Order.' From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/OrdinaryDifferentialEquationOrder.html
- ^Order and degree of a differential equation, accessed Dec 2015.
- ^ Zill, Dennis G. A First Course in Differential Equations (5th ed.). Brooks/Cole. ISBN 0-534-37388-7.
- ^ Einstein, Albert (1916). 'The Foundation of the General Theory of Relativity'. Annalen der Physik. 354 (7): 769. Bibcode:1916AnP...354..769E. doi:10.1002/andp.19163540702. Archived from the original (PDF) on 2006-08-29.
- ^Einstein, Albert (November 25, 1915). 'Die Feldgleichungen der Gravitation'. Sitzungsberichte der Preussischen Akademie der Wissenschaften zu Berlin: 844–847. Retrieved 2006-09-12.
- ^ Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0. Chapter 34, p. 916.
- ^ Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. pp. 1–2. ISBN 0-13-111892-7.
- ^ IUPAC Gold Book definition of rate law. See also: IUPAC Compendium of Chemical Terminology.
- ^Kenneth A. Connors Chemical Kinetics, the study of reaction rates in solution, 1991, VCH Publishers.
Further reading
- Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277.
- Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson.
- Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill.
- Ince, E. L. (1956). Ordinary Differential Equations. Dover.
- Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection
- Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN1-58488-297-2.
- Porter, R. I. (1978). 'XIX Differential Equations'. Further Elementary Analysis.
- Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN978-0-8218-8328-0.
- Zwillinger, D. (1997). Handbook of Differential Equations (3rd ed.). Boston: Academic Press.
External links
Wikiquote has quotations related to: Differential equation
Wikibooks has a book on the topic of: Ordinary Differential Equations
Wikiversity has learning resources about Differential equations
Wikisource has the text of the 1911 Encyclopædia Britannica article Differential Equation.
- Media related to Differential equations at Wikimedia Commons
- Lectures on Differential Equations, MIT OpenCourseWare videos
- Online Notes / Differential Equations, Paul Dawkins, Lamar University
- Differential Equations, S.O.S. Mathematics
- Introduction to modeling via differential equations: introduction to modeling by means of differential equations, with critical remarks.
- Mathematical Assistant on Web Symbolic ODE tool, using Maxima
- Collection of ODE and DAE models of physical systems MATLAB models
- Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC
- Khan Academy Video playlist on differential equations Topics covered in a first year course in differential equations.
Calculus
Calculus, originally called infinitesimal calculus or 'the calculus of infinitesimals', is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.
It has two major branches, differential calculus[1] and integral calculus.[2] Differential calculus concerns instantaneous rates of change and the slopes of curves. Integral calculus concerns accumulation of quantities and the areas under and between curves. These two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.
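The fundamental theorem can be checked numerically. As a rough illustration (the accumulation function and sample point below are arbitrary choices), define $F(x) = \int_0^x \cos t\,dt$ by quadrature and verify that its derivative recovers $\cos x$:

```python
import math

def integral(f, a, b, n=10000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def F(x):
    """Accumulation function F(x) = integral of cos from 0 to x."""
    return integral(math.cos, 0.0, x)

# Fundamental theorem: F'(x) should equal cos(x). Check with a central difference.
x, h = 1.0, 1e-4
deriv = (F(x + h) - F(x - h)) / (2 * h)
print(abs(deriv - math.cos(1.0)) < 1e-4)  # True
```

Differentiating the accumulated area returns the original function, which is exactly the link between the two branches the text describes.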
Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz.[3] Today, calculus has widespread uses in science, engineering, and economics.[4][better source needed]
In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus (plural calculi) is a Latin word, meaning originally 'small pebble' (this meaning is kept in medicine, see Calculus (medicine)). Because such pebbles were used for calculation, the meaning of the word has evolved to mean a method of computation. It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus.
History
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it appeared in ancient Greece, then in China and the Middle East, and still later again in medieval Europe and in India.
Ancient
Archimedes used the method of exhaustion to calculate the area under a parabola.
The ancient period introduced some of the ideas that led to integral calculus, but does not seem to have developed these ideas in a rigorous and systematic way. Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (13th dynasty, c. 1820 BC), but the formulas are simple instructions, with no indication as to method, and some of them lack major components.[5]
From the age of Greek mathematics, Eudoxus (c. 408–355 BC) used the method of exhaustion, which foreshadows the concept of the limit, to calculate areas and volumes, while Archimedes (c. 287–212 BC) developed this idea further, inventing heuristics which resemble the methods of integral calculus.[6]
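A loose modern analogue of exhaustion (not a reconstruction of Archimedes' own argument) is to trap the area under $y = x^{2}$ on $[0, 1]$ between ever-finer rectangular sums, which squeeze toward the exact value $1/3$:

```python
def parabola_area(n):
    """Riemann-sum approximation of the area under y = x^2 on [0, 1]
    using n midpoint rectangles; finer subdivisions approach the true area."""
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** 2 for i in range(n)) * h

# In the spirit of exhaustion: each refinement leaves a strictly smaller error.
estimates = [parabola_area(n) for n in (10, 100, 1000, 10000)]
errors = [abs(e - 1.0 / 3.0) for e in estimates]
print(all(a > b for a, b in zip(errors, errors[1:])))  # True: errors shrink
```

Eudoxus and Archimedes worked with inscribed and circumscribed polygons rather than rectangles, but the underlying idea, exhausting a region by approximations of known area, is the same.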
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle.[7] In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method[8][9] that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval
Alhazen, 11th century Arab mathematician and physicist
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 CE), derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.[10]
In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics thereby stated components of calculus. A complete theory encompassing these components is now well known in the Western world as the Taylor series or infinite series approximations.[11] However, they were not able to 'combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today'.[10]
Modern
'The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.'
—John von Neumann[12]
In Europe, the foundational work was a treatise written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century, and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term.[13] The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving the second fundamental theorem of calculus around 1670.
Isaac Newton developed the use of calculus in his laws of motion and gravitation.
The product rule and chain rule,[14] the notions of higher derivatives and Taylor series,[15] and of analytic functions[citation needed] were introduced by Isaac Newton in an idiosyncratic notation which he used to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
Gottfried Wilhelm Leibniz was the first to state clearly the rules of calculus.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton.[16] He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics and Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, second and higher derivatives, and the notion of an approximating polynomial series. By Newton's time, the fundamental theorem of calculus was known.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his 'Nova Methodus pro Maximis et Minimis' first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics.[citation needed] A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus 'the science of fluxions'.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.[17][18]
Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus the use of infinitesimal quantities was thought unrigorous, and was fiercely criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as 'the ghosts of departed quantities' in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere 'notions' of infinitely small quantities.[19] The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation.[20] In his work Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can actually validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called 'infinitesimal calculus'. Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to Euclidean space and the complex plane.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher power infinitesimals during derivations.
Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Isaac Newton and Gottfried Wilhelm Leibniz built on the work of earlier mathematicians to introduce its basic principles. The development of calculus was built on earlier concepts of instantaneous motion and area underneath curves.
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
Principles
Limits and infinitesimals
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, 'infinitely small'. For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dy and dx were taken to be infinitesimal, and the derivative dy/dx was simply their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. However, the concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the value of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior in the context of the real number system. In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by very small numbers, and the infinitely small behavior of the function is found by taking the limiting behavior for smaller and smaller numbers. Limits were thought to provide a more rigorous foundation for calculus, and for this reason they became the standard approach during the twentieth century.
Differential calculus
Tangent line at (x, f(x)). The derivative f′(x) of a curve at a point is the slope (rise over run) of the line tangent to that curve at that point.
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by deriving the squaring function turns out to be the doubling function.
In more explicit terms the 'doubling function' may be denoted by g(x) = 2x and the 'squaring function' by f(x) = x². The 'derivative' now takes the function f(x), defined by the expression 'x²', as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, g(x) = 2x, as it turns out.
The most common symbol for a derivative is an apostrophe-like mark called prime. Thus, the derivative of a function called f is denoted by f′, pronounced 'f prime'. For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above). This notation is known as Lagrange's notation.
If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and: m = Δy/Δx = (change in y)/(change in x).
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is m = (f(a + h) − f(a))/((a + h) − a) = (f(a + h) − f(a))/h.
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero: f′(a) = lim_(h→0) (f(a + h) − f(a))/h.
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function.
The derivative f′(x) of a curve at a point is the slope of the line tangent to that curve at that point. This slope is determined by considering the limiting value of the slopes of secant lines. Here the function involved (drawn in red) is f(x) = x3 − x. The tangent line (in green) which passes through the point (−3/2, −15/8) has a slope of 23/4. Note that the vertical and horizontal scales in this image are different.
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function, or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
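The limiting process just described can be imitated numerically. The following Python sketch (illustrative only, not part of the original article) computes secant slopes of the squaring function at a = 3 for shrinking values of h:

```python
# Secant slopes of the squaring function f(x) = x^2 at a = 3.
# As h shrinks, the difference quotients approach the tangent slope 6.

def f(x):
    return x * x  # the squaring function

a = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    slope = (f(a + h) - f(a)) / h  # difference quotient
    print(h, slope)
# The slopes (7.0, 6.1, 6.01..., 6.001...) approach 6,
# the slope of the tangent line at (3, 9).
```

Setting h = 0 directly would divide by zero; the limit is what extracts the consistent value 6.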
Leibniz notation
A common notation, introduced by Leibniz, for the derivative in the example above is dy/dx = 2x, where y = f(x) = x².
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example: d/dx (x²) = 2x.
In this usage, the dx in the denominator is read as 'with respect to x'. Another example of correct notation could be:
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
Integral calculus
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. In technical language, integral calculus studies two related linear operators.
The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.)
The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.
A motivating example is the distance traveled in a given time.
If the speed is constant, only multiplication is needed, but if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
Constant velocity
Integration can be thought of as measuring the area under a curve, defined by f(x), between two points (here a and b).
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given time period. If f(x) in the diagram on the right represents speed as it varies over time, the distance traveled (between the times represented by a and b) is the area of the shaded region s.
To approximate that area, an intuitive method would be to divide up the distance between a and b into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer we need to take a limit as Δx approaches zero.
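The rectangle construction just described can be sketched in Python. This is an illustrative sketch, not from the original text; the function f(x) = x² and the interval [0, 3] are chosen only as an example (the exact area under that curve is 9):

```python
# Left-endpoint Riemann sum approximating the area under
# f(x) = x^2 between a = 0 and b = 3 (exact area: 9).

def riemann_sum(f, a, b, n):
    dx = (b - a) / n           # width of each segment (the Δx above)
    total = 0.0
    for i in range(n):
        h = f(a + i * dx)      # one value of f chosen on the segment
        total += h * dx        # rectangle area: base Δx times height h
    return total

f = lambda x: x * x
for n in [10, 100, 1000]:
    print(n, riemann_sum(f, 0.0, 3.0, n))
# As Δx shrinks (n grows), the sums approach the exact area 9.
```

A smaller Δx gives more rectangles and a better approximation; the exact answer is the limit as Δx approaches zero.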
The symbol of integration is ∫, an elongated S (the S stands for 'sum'). The definite integral is written as: ∫_a^b f(x) dx
and is read 'the integral from a to b of f-of-x with respect to x.' The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx. In a formulation of the calculus based on limits, the notation ∫_a^b … dx
is to be understood as an operator that takes a function as an input and gives a number, the area, as an output. The terminating differential, dx, is not a number, and is not being multiplied by f(x), although, serving as a reminder of the Δx limit definition, it can be treated as such in symbolic manipulations of the integral. Formally, the differential indicates the variable over which the function is integrated and serves as a closing bracket for the integration operator.
The indefinite integral, or antiderivative, is written: ∫ f(x) dx = F(x) + C.
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is actually a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by: ∫ 2x dx = x² + C.
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.
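That the constant C drops out under differentiation is easy to check numerically. The following Python sketch is illustrative (the sample point x = 1.5 and the list of constants are arbitrary choices, not from the article):

```python
# Functions differing only by a constant share the same derivative:
# y = x^2 + C has slope 2x = 3 at x = 1.5 for every C.

def numeric_derivative(g, x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2 * h)  # central difference quotient

x = 1.5
for C in [0.0, 1.0, -7.0, 100.0]:
    y = lambda t, C=C: t * t + C  # bind this loop's C as a default arg
    print(C, numeric_derivative(y, x))
# Each line prints a value near 3, regardless of C.
```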
Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then ∫_a^b f(x) dx = F(b) − F(a).
Furthermore, for every x in the interval (a, b), d/dx ∫_a^x f(t) dt = f(x).
This realization, made by both Newton and Leibniz, who based their results on earlier work by Isaac Barrow, was key to the proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences.
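The theorem can be illustrated numerically: a Riemann sum of f should agree with the difference of antiderivative values at the endpoints. In this Python sketch (illustrative; the function and interval are chosen for the example), f(x) = 2x has antiderivative F(x) = x², so the integral over [1, 4] should be F(4) − F(1) = 15:

```python
# Numerical check of the fundamental theorem of calculus for f(x) = 2x.

def F(x):
    return x * x       # an antiderivative of f below

def f(x):
    return 2 * x       # f is the derivative of F

a, b, n = 1.0, 4.0, 100000
dx = (b - a) / n
# Midpoint Riemann sum approximating the definite integral of f over [a, b]
integral = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
print(integral)        # close to 15.0
print(F(b) - F(a))     # 15.0, with no limit process needed
```

The antiderivative route avoids the limit process entirely, which is exactly the practical value the theorem provides.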
Applications
The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus.
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, as well as the total energy of an object within a conservative field can be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion: as historically stated it expressly uses the term 'change of motion', which implies the derivative, saying The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × acceleration, it implies differential calculus because acceleration is the time derivative of velocity or the second time derivative of trajectory or spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.
Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the 'best fit' linear approximation for a set of points in a domain. Or it can be used in probability theory to determine the probability of a continuous random variable from an assumed density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points.
Green's Theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
Discrete Green's Theorem, which gives the relationship between a double integral of a function around a simple closed rectangular curve C and a linear combination of the antiderivative's values at corner points along the edge of the curve, allows fast calculation of sums of values in rectangular domains. For example, it can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.
Calculus is also used to find approximate solutions to equations; in practice it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero gravity environments.
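Of the methods named above, Newton's method is the most familiar: it repeatedly improves a guess by following the tangent line (a linear approximation) down to the axis. The following Python sketch is illustrative, not from the article; the function names, tolerance, and the example equation x² − 2 = 0 (whose positive root is √2) are all chosen here:

```python
# A minimal sketch of Newton's method for root finding,
# applied to x^2 - 2 = 0, i.e. computing the square root of 2.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # correction from the tangent-line (linear) approximation
        x -= step
        if abs(step) < tol:       # stop once successive guesses agree
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately 1.41421356...
```

Each iteration uses the derivative f′(x) to replace the curve by its tangent line, which is why root finding is counted as an application of differential calculus.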
Varieties
Over the years, many reformulations of calculus have been investigated for different purposes.
Non-standard calculus
Imprecise calculations with infinitesimals were widely replaced with the rigorous (ε, δ)-definition of limit starting in the 1870s. Meanwhile, calculations with infinitesimals persisted and often led to correct results. This led Abraham Robinson to investigate if it were possible to develop a number system with infinitesimal quantities over which the theorems of calculus were still valid. In 1960, building upon the work of Edwin Hewitt and Jerzy Łoś, he succeeded in developing non-standard analysis. The theory of non-standard analysis is rich enough to be applied in many branches of mathematics. As such, books and articles dedicated solely to the traditional theorems of calculus often go by the title non-standard calculus.
Smooth infinitesimal analysis
This is another reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold.
Constructive analysis
Constructive mathematics is a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. As such, constructive mathematics also rejects the law of excluded middle. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
See also
Other related topics
- Precalculus (mathematical education)
References
- ^'Differential Calculus - Definition of Differential calculus by Merriam-Webster'. Retrieved 15 September 2017.
- ^'Integral Calculus - Definition of Integral calculus by Merriam-Webster'. Retrieved 15 September 2017.
- ^Eves, Howard (1976). An Introduction to the History of Mathematics (4th ed.). New York: Holt, Rinehart and Winston. p. 305. ISBN 978-0-03-089539-5.
- ^Fisher, Irving (1897). A brief introduction to the infinitesimal calculus. New York: The Macmillan Company.
- ^Morris Kline, Mathematical thought from ancient to modern times, Vol. I
- ^Archimedes, Method, in The Works of Archimedes. ISBN 978-0-521-66160-7
- ^Dun, Liu; Fan, Dainian; Cohen, Robert Sonné (1966). A comparison of Archimedes' and Liu Hui's studies of circles. Chinese studies in the history and philosophy of science and technology. 130. Springer. p. 279. ISBN 978-0-7923-3463-7, pp. 279ff
- ^Katz, Victor J. (2008). A history of mathematics (3rd ed.). Boston, MA: Addison-Wesley. p. 203. ISBN 978-0-321-38700-4.
- ^Zill, Dennis G.; Wright, Scott; Wright, Warren S. (2009). Calculus: Early Transcendentals (3rd ed.). Jones & Bartlett Learning. p. xxvii. ISBN 978-0-7637-5995-7.
- ^ a b Katz, V.J. 1995. 'Ideas of Calculus in Islam and India.' Mathematics Magazine (Mathematical Association of America), 68(3):163–174.
- ^'Indian mathematics'.
- ^von Neumann, J., 'The Mathematician', in Heywood, R.B., ed., The Works of the Mind, University of Chicago Press, 1947, pp. 180–196. Reprinted in Bródy, F., Vámos, T., eds., The Neumann Compendium, World Scientific Publishing Co. Pte. Ltd., 1995, ISBN 981-02-2201-7, pp. 618–626.
- ^André Weil: Number theory. An approach through history. From Hammurapi to Legendre. Birkhauser Boston, Inc., Boston, MA, 1984, ISBN 0-8176-4565-9, p. 28.
- ^Blank, Brian E.; Krantz, Steven George (2006). Calculus: Single Variable, Volume 1 (illustrated ed.). Springer Science & Business Media. p. 248. ISBN 978-1-931914-59-8.
- ^Ferraro, Giovanni (2007). The Rise and Development of the Theory of Series up to the Early 1820s (illustrated ed.). Springer Science & Business Media. p. 87. ISBN 978-0-387-73468-2.
- ^Leibniz, Gottfried Wilhelm. The Early Mathematical Manuscripts of Leibniz. Cosimo, Inc., 2008. p. 228. Copy
- ^Allaire, Patricia R. (2007). Foreword. A Biography of Maria Gaetana Agnesi, an Eighteenth-century Woman Mathematician. By Cupillari, Antonella (illustrated ed.). Edwin Mellen Press. p. iii. ISBN 978-0-7734-5226-8.
- ^Unlu, Elif (April 1995). 'Maria Gaetana Agnesi'. Agnes Scott College.
- ^Russell, Bertrand (1946). History of Western Philosophy. London: George Allen & Unwin Ltd. p. 857.
The great mathematicians of the seventeenth century were optimistic and anxious for quick results; consequently they left the foundations of analytical geometry and the infinitesimal calculus insecure. Leibniz believed in actual infinitesimals, but although this belief suited his metaphysics it had no sound basis in mathematics. Weierstrass, soon after the middle of the nineteenth century, showed how to establish the calculus without infinitesimals, and thus at last made it logically secure. Next came Georg Cantor, who developed the theory of continuity and infinite number. 'Continuity' had been, until he defined it, a vague word, convenient for philosophers like Hegel, who wished to introduce metaphysical muddles into mathematics. Cantor gave a precise significance to the word, and showed that continuity, as he defined it, was the concept needed by mathematicians and physicists. By this means a great deal of mysticism, such as that of Bergson, was rendered antiquated.
- ^Grabiner, Judith V. (1981). The Origins of Cauchy's Rigorous Calculus. Cambridge: MIT Press. ISBN 978-0-387-90527-3.
Further reading
Books
- Boyer, Carl Benjamin (1949). The History of the Calculus and its Conceptual Development. Hafner. Dover edition 1959, ISBN0-486-60509-4
- Courant, Richard. Introduction to Calculus and Analysis 1. ISBN 978-3-540-65058-4.
- Edmund Landau. Differential and Integral Calculus, American Mathematical Society. ISBN 0-8218-2830-4.
- Robert A. Adams. (1999). Calculus: A Complete Course. ISBN 978-0-201-39607-2.
- Albers, Donald J.; Richard D. Anderson and Don O. Loftsgaarden, ed. (1986) Undergraduate Programs in the Mathematics and Computer Sciences: The 1985–1986 Survey, Mathematical Association of America No. 7.
- John Lane Bell: A Primer of Infinitesimal Analysis, Cambridge University Press, 1998. ISBN 978-0-521-62401-5. Uses synthetic differential geometry and nilpotent infinitesimals.
- Florian Cajori, 'The History of Notations of the Calculus.' Annals of Mathematics, 2nd Ser., Vol. 25, No. 1 (Sep. 1923), pp. 1–46.
- Leonid P. Lebedev and Michael J. Cloud: 'Approximating Perfection: a Mathematician's Journey into the World of Mechanics, Ch. 1: The Tools of Calculus', Princeton Univ. Press, 2004.
- Cliff Pickover. (2003). Calculus and Pizza: A Math Cookbook for the Hungry Mind. ISBN 978-0-471-26987-8.
- Michael Spivak. (September 1994). Calculus. Publish or Perish publishing. ISBN 978-0-914098-89-8.
- Tom M. Apostol. (1967). Calculus, Volume 1, One-Variable Calculus with an Introduction to Linear Algebra. Wiley. ISBN 978-0-471-00005-1.
- Tom M. Apostol. (1969). Calculus, Volume 2, Multi-Variable Calculus and Linear Algebra with Applications. Wiley. ISBN 978-0-471-00007-5.
- Silvanus P. Thompson and Martin Gardner. (1998). Calculus Made Easy. ISBN 978-0-312-18548-0.
- Mathematical Association of America. (1988). Calculus for a New Century; A Pump, Not a Filter, The Association, Stony Brook, NY. ED 300 252.
- Thomas/Finney. (1996). Calculus and Analytic Geometry, 9th ed., Addison Wesley. ISBN 978-0-201-53174-9.
- Weisstein, Eric W. 'Second Fundamental Theorem of Calculus.' From MathWorld—A Wolfram Web Resource.
- Howard Anton, Irl Bivens, Stephen Davis: Calculus, John Wiley and Sons Pte. Ltd., 2002. ISBN 978-81-265-1259-1
- Larson, Ron, Bruce H. Edwards (2010). Calculus, 9th ed., Brooks Cole Cengage Learning. ISBN 978-0-547-16702-2
- McQuarrie, Donald A. (2003). Mathematical Methods for Scientists and Engineers, University Science Books. ISBN 978-1-891389-24-5
- Salas, Saturnino L.; Hille, Einar; Etgen, Garret J. (2007). Calculus: One and Several Variables (10th ed.). Wiley. ISBN 978-0-471-69804-3.
- Stewart, James (2012). Calculus: Early Transcendentals, 7th ed., Brooks Cole Cengage Learning. ISBN 978-0-538-49790-9
- Thomas, George B., Maurice D. Weir, Joel Hass, Frank R. Giordano (2008), Calculus, 11th ed., Addison-Wesley. ISBN 0-321-48987-X
Online books
- Boelkins, M. (2012). Active Calculus: a free, open text(PDF). Archived from the original on 30 May 2013. Retrieved 1 February 2013.
- Crowell, B. (2003). 'Calculus'. Light and Matter, Fullerton. Retrieved 6 May 2007 from http://www.lightandmatter.com/calc/calc.pdf
- Garrett, P. (2006). 'Notes on first year calculus'. University of Minnesota. Retrieved 6 May 2007 from http://www.math.umn.edu/~garrett/calculus/first_year/notes.pdf
- Faraz, H. (2006). 'Understanding Calculus'. Retrieved 6 May 2007 from UnderstandingCalculus.com, URL http://www.understandingcalculus.com (HTML only)
- Keisler, H.J. (2000). 'Elementary Calculus: An Approach Using Infinitesimals'. Retrieved 29 August 2010 from http://www.math.wisc.edu/~keisler/calc.html
- Mauch, S. (2004). 'Sean's Applied Math Book' (PDF). California Institute of Technology. Retrieved 6 May 2007 from https://web.archive.org/web/20070614183657/http://www.cacr.caltech.edu/~sean/applied_math.pdf
- Sloughter, Dan (2000). 'Difference Equations to Differential Equations: An introduction to calculus'. Retrieved 17 March 2009 from http://synechism.org/drupal/de2de/
- Stroyan, K.D. (2004). 'A brief introduction to infinitesimal calculus'. University of Iowa. Retrieved 6 May 2007 from https://web.archive.org/web/20050911104158/http://www.math.uiowa.edu/~stroyan/InfsmlCalculus/InfsmlCalc.htm (HTML only)
- Strang, G. (1991). 'Calculus'. Massachusetts Institute of Technology. Retrieved 6 May 2007 from http://ocw.mit.edu/ans7870/resources/Strang/strangtext.htm
- Smith, William V. (2001). 'The Calculus'. Retrieved 4 July 2008 [1] (HTML only).
External links
- Hazewinkel, Michiel, ed. (2001) [1994], 'Calculus', Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers. ISBN 978-1-55608-010-4.
- Weisstein, Eric W. 'Calculus'. MathWorld.
- Topics on Calculus at PlanetMath.org.
- Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
- Calculus on In Our Time at the BBC
- Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
- COW: Calculus on the Web at Temple University – contains resources starting from pre-calculus and associated algebra
- Online Integrator (WebMathematica) from Wolfram Research
- The Role of Calculus in College Mathematics from ERICDigests.org
- OpenCourseWare Calculus from the Massachusetts Institute of Technology
- Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel.
- Daniel Kleitman, MIT. 'Calculus for Beginners and Artists'.
- Calculus Problems and Solutions by D.A. Kouba
- The Excursion of Calculus, 1772 (in English and Arabic)