39 Differential equations
39.1 Dynamical systems
In this Block, we take on an important application of derivatives: the representation of dynamical systems.
“Dynamical systems” (but not under that name) were developed initially in the 1600s to relate planetary motion to the force of gravity. Nowadays, they are used to describe all sorts of physical systems from oscillations in electrical circuits to the ecology of interacting species to the spread of contagious disease.
As examples of dynamical systems, consider a ball thrown through the air or a rocket being launched to deploy a satellite. At each instant of time, the ball has a position—a point in three-dimensional space. It also has a velocity. Taken together, these instantaneous quantities make up the state of the system.
The “dynamical” in “dynamical systems” refers to the change in state. For the ball, the state changes under the influence of mechanisms such as gravity and air resistance. The mathematical representation of a dynamical system codifies how the state changes as a function of the instantaneous state. For example, if the instantaneous state is a single quantity called $x$, the dynamics are specified by a function $f(x)$ that gives the instantaneous rate of change of $x$.
To say that $x$ changes over time is to say that $x$ is a function of time, which we write $x(t)$.
The dynamical system describing the motion of $x(t)$ is written as a differential equation: $$\partial_t x(t) = f(x(t)).$$
Notice that the function $f()$ takes the state $x$ as its input, not time $t$.
It is essential that you train yourself to distinguish two very different statements:
- anti-differentiation problems like $\partial_t x(t) = f(t)$, which has $t$ as both the with-respect-to variable and as the argument to the function $f()$,
and
- dynamical systems like $\partial_t x(t) = f(x(t))$, where the argument to the function $f()$ is the state $x$ rather than time.
This is one place where Leibniz’s notation for derivatives can be useful: the anti-differentiation problem reads $\frac{dx}{dt} = f(t)$, while the dynamical system reads $\frac{dx}{dt} = f(x)$, and the difference in the argument of $f()$ is plain to see.
Dynamical systems with multiple state quantities are written mathematically as sets of differential equations, one equation per state quantity. For instance, with two state quantities $x$ and $y$:
$$\partial_t x = f(x, y)$$
$$\partial_t y = g(x, y)$$
Let’s illustrate the idea of a dynamical system with a children’s game: “Chutes and Ladders”. Since hardly any children have studied calculus, the game isn’t presented as differential equations, but as a simple board and the rules for the movement along the board.
A player’s state in this game is shown by the position of a token, but we will define the state to be the number of the square that the player’s token is on. In Chutes and Ladders, the state is one of the integers from 1 to 100. In contrast, the dynamical systems that we will study with calculus have a state that is a point on the number line, in the coordinate plane, or in a higher-dimensional space. Our calculus dynamical systems describe the change of state using derivatives with respect to time, whereas in Chutes and Ladders the state jumps from one value to the next.
The game board displays not only the set of possible states but also the rule for changing state: how the state jumps from one square to another.
In the real game, players roll a die to determine how many steps to take to the next state. But we will play a simpler game: Just move one step forward on each turn, except … from place to place there are ladders that connect two squares. When the state reaches a square holding the foot of a ladder, the state is swept up to the higher-numbered square at the top of the ladder. Similarly, there are chutes. These work much like the ladders but carry the state from a higher-numbered square to a lower-numbered square.
The small drawings on the board are not part of the action of the game. Rather, they represent the idea that good deeds lead the player to progress, while wrong-doing produces regression. Thus, the productive gardener in square 1 is rewarded by being moved upward to the harvest in square 38. In square 64 a brat is pulling on his sister’s braids. This misdeed results in punishment: he is moved back to square 60.
Our dice-free version of Chutes and Ladders is an example of a discrete-time, discrete-state dynamical system. Since there is no randomness involved, the movement of the state is deterministic. (With dice, the movement would be stochastic.)
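To make the discrete-time, deterministic update rule concrete, here is a minimal sketch in R. It encodes only the one-square-forward move plus the two jumps named above (the ladder from square 1 to square 38 and the chute from square 64 to square 60); the real board has many more chutes and ladders, so the specific behavior below is a toy illustration, not the actual game.

```r
# Toy version of the dice-free Chutes and Ladders update rule.
# Only the two jumps mentioned in the text are included; the real board has more.
next_state <- function(s) {
  s <- s + 1                              # move one square forward
  jumps <- c("1" = 38, "64" = 60)         # ladder at 1 -> 38, chute at 64 -> 60
  if (as.character(s) %in% names(jumps)) {
    s <- jumps[[as.character(s)]]         # swept up the ladder or down the chute
  }
  s
}

# Follow the deterministic flow for ten turns, starting from square 58:
state <- 58
for (turn in 1:10) {
  state <- next_state(state)
  cat("turn", turn, ": square", state, "\n")
}
```

On this toy board, the state quickly falls into the cycle 60, 61, 62, 63, 64, 60, … created by the chute, which illustrates how a deterministic update rule can trap the state in a loop.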
The differential equations of a dynamical system correspond to a continuous-time, continuous-space system. This continuity is the reason we use derivatives to describe the motion of the state. The movement in the systems we will explore is also deterministic. (In a later chapter we will briefly encounter some instances of stochastic systems.)
39.2 State
The mathematical language of differential equations and dynamical systems is able to describe a stunning range of systems, for example:
- physics
  - swing of a pendulum
  - bobbing of a mass hanging from a spring
  - a rocket shooting up from the launch pad
- commerce
  - investment growth
  - growth in equity in a house as a mortgage is paid off ("equity" is the amount of the value of the house that belongs to you)
- biology
  - growth of animal populations, including predator and prey
  - spread of an infectious disease
  - growth of an organism or a crop
All these systems involve a state that describes the configuration of the system at a given instant in time. For the growth of a crop, the state would be, say, the amount of biomass per unit area. For the spread of infectious disease, the state would be the fraction of people who are infectious and the fraction who are susceptible to infection. “State” here is used in the sense of “the state of affairs,” or “his mental state,” or “the state of their finances.”
Since we are interested in how the state changes over time, sometimes we refer to it as the dynamical state.
One of the things you learn when you study a field such as physics or epidemiology or engineering is what constitutes a useful description of the dynamical state for different situations. In the crop and infectious disease examples above, the state mentioned is a strong simplification of reality: a model. Often, the modeling cycle leads the modeler to include more components in the state. For instance, some models of crop growth include the density of crop-eating insects. For infectious disease, a model might include the fraction of people who are incubating the disease but not yet contagious.
Consider the relatively simple physical system of a pendulum, swinging back and forth under the influence of gravity. In physics, you learn the essential dynamical elements of the pendulum system: the current angle the pendulum makes to the vertical, and the rate at which that angle changes. There are also fixed elements of the system, for instance the length of the pendulum’s rod and the local gravitational acceleration. Although such fixed characteristics may be important in describing the system, they are not elements of the dynamical state. Instead, they might appear as parameters in the functions on the right-hand side of the differential equations.
To be complete, the dynamical state of a system has to include all those changing aspects of the system that allow you to calculate from the state at this instant what the state will be at the next instant. For example, the angle of the pendulum at an instant tells you a lot about what the angle will be at the next instant, but not everything. You also need to know which way the pendulum is swinging and how fast.
Figuring out what constitutes the dynamical state requires knowledge of the mechanics of the system, e.g. the action of gravity, the constraint imposed by the pivot of the pendulum. You get that knowledge by studying the relevant field: electrical engineering, economics, epidemiology, etc. You also learn what aspects of the system are fixed or change slowly enough that they can be considered fixed. (Sometimes you find out that something your intuition tells you is important to the dynamics is, in fact, not. An example is the mass of the pendulum.)
39.3 State space
The state of a dynamical system tells you the configuration of the system at any instant in time. It is appropriate to think about the instantaneous state as a single point in a state space, a coordinate system with an axis for each component of state. As the system configuration changes with time—say, the pendulum loses velocity as it swings to the left—the instantaneous state moves along a path in the state space. Such a path is called a trajectory of the dynamical system.
In this book, we will work almost exclusively with systems that have a one- or two-dimensional state. Consequently, the state space will be either the number line or the coordinate plane. The methods you learn will be broadly applicable to systems with higher-dimensional state.
For the deterministic dynamical systems we will be working with, a basic principle is that a trajectory can never cross itself. This can be demonstrated by contradiction. Suppose a trajectory did cross itself. This would mean that the motion from the crossing point could possibly go in either of two directions; the state might follow one branch of the cross or the other. Such a system would not be deterministic. Determinism implies that from each point in state space the flow goes in only one direction.
The dimension of the state space is the same as the number of components of the state: one axis of state space for every component of the state. That dimension has important implications for the type of motion that can exist.
- If the state space is one-dimensional, the state as a function of time must be monotonic. Otherwise, the trajectory would cross itself, which is not permitted.
- A state space that is two- or higher-dimensional can support motion that oscillates back and forth. Such a trajectory does not cross itself; instead, it goes round and round in a spiral or a closed loop.
For many decades, it was assumed that all dynamical systems produce either monotonic behavior or spiral or loop behavior. In the 1960s, scientists working on a highly simplified model of the atmosphere discovered numerically that there is a third type of behavior, the irregular and practically unpredictable behavior called chaos. To display chaos, the state space of the system must be at least three-dimensional.
That calculus is the language of change can be seen in the words used in this section. For instance, instantaneous, continuous, and monotonic are all words introduced in Block 1 of this book.
39.4 Dynamics
The dynamics of a system is a description of how the individual components of the state change as a function of the entire set of components of the state.
At any instant in time, the state is a set of quantities. We will use symbols such as $x(t)$, $y(t)$, and $z(t)$ to stand for the individual quantities.
The differential equations describing the dynamics of a system with a three-component state have the form
$$\partial_t x(t) = f(x(t), y(t), z(t))$$
$$\partial_t y(t) = g(x(t), y(t), z(t))$$
$$\partial_t z(t) = h(x(t), y(t), z(t))$$
The way these equations are written is practically impossible to read: the expression $x(t)$ (and likewise $y(t)$ and $z(t)$) appears over and over again. To reduce the clutter, we will usually leave the dependence on time implicit and write simply
$$\partial_t x = f(x, y, z), \qquad \partial_t y = g(x, y, z), \qquad \partial_t z = h(x, y, z).$$
This more concise way of writing the differential equations makes it easier to describe how to interpret them. Formally, the left side of each equation, for instance $\partial_t x$, can be pronounced “the way $x$ changes with time is ….”
On the right side of each equation is a function that takes the state quantities as inputs. Each individual equation can be interpreted as completing the elliptical sentence (that is, ending in “…”) in the previous paragraph, so that the whole equation reads like, “The way the $x$ component of the state changes with time is given by the function $f()$ applied to the current values of $x$, $y$, and $z$.”
Remember that each of the dynamical functions $f()$, $g()$, and $h()$ takes the entire state as input, even though each one describes the change in only a single component of the state.
Mathematically, a dynamical system consists of two things:
- The state variables, which are a set of quantities that vary in time.
- The dynamics, which is the set of dynamical functions, one function for each of the state variables.
A simple example is the dynamics of retirement-account interest. In a retirement account, you put aside money—this is called “contributing”—each month. The value $V$ of the account grows in two ways: from the interest earned and from the ongoing contributions. As a differential equation,
$$\partial_t V = r\, V + M,$$
where $r$ is the interest rate and $M$ is the rate of contribution.
The left-hand side of this equation is boilerplate for “the way the value $V$ changes with time is ….” The right-hand side says that the change has two parts: interest earned in proportion to the current value, plus the ongoing contributions.
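As a sketch of how to follow this state forward in time numerically, the R code below takes small Euler steps with the dynamical function $r V + M$. The interest rate, contribution rate, and time span are hypothetical values chosen only for illustration; they do not come from the text.

```r
# Euler-step simulation of  dV/dt = r*V + M  (made-up parameter values)
r  <- 0.05          # interest rate per year (hypothetical)
M  <- 6000          # contribution rate, dollars per year (hypothetical: $500/month)
dt <- 1/12          # time step: one month, measured in years
n  <- 30 * 12       # number of steps: 30 years of monthly steps

V <- 0              # initial account value
for (step in 1:n) {
  dV_dt <- r * V + M          # dynamical function evaluated at the current state
  V     <- V + dV_dt * dt     # step the state forward by dt
}
V                             # approximate balance after 30 years
```

The same stepping idea works for any dynamical system: evaluate the dynamical function at the current state, multiply by a small time increment, and add the result to the state.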
Remember that the dynamical function is something that the modeler constructs from her knowledge of the system. To model the dynamics of a pendulum requires some knowledge of physics. Without getting involved with the physics, we note that the oscillatory nature of pendulum movement means that there must be at least two state variables. A physicist learns that a good way to describe the motion uses these two quantities: the angle $\theta$ the pendulum makes with the vertical and the velocity $v$ at which that angle is changing. One of the two differential equations comes from the physics of gravity acting on the pendulum:
$$\partial_t v = -\frac{g}{L} \sin(\theta)$$
The other equation comes from the definition that the derivative of the position $\theta$ is the velocity $v$:
$$\partial_t \theta = v$$
Consider the population of two interacting species, say rabbits and foxes. As you know, the relationship between rabbits and foxes is rather unhappy from the rabbits’ point of view even if it is fulfilling for the foxes.
Many people assume that such populations are more or less fixed: that the rabbits are in a steady balance with the foxes. In fact, as any gardener can tell you, some years there are lots of rabbits and others not: an oscillation. Just from this fact, we know that the dynamical state must have at least two components.
In a simple, but informative, model, the two components of the dynamical state are the size of the rabbit population, $r$, and the size of the fox population, $f$. Left on their own, with plenty to eat and no predators, the rabbits multiply in proportion to how many there already are.
Similarly, in the absence of food (rabbits are fox food), the foxes will starve or emigrate, so the dynamical equation for foxes is very similar, but with the population shrinking rather than growing.
Of course, in real ecosystems there are many other quantities that change and that are relevant. For instance, foxes eat not only rabbits, but also birds and frogs and earthworms and berries. And rabbits eat not just weeds and grass (which are generally in plentiful supply), but also the gardener’s flowers and carrots (and other vegetables). Growth in the rabbit population leads to a decrease in available flowers and vegetables, which in turn leads to slower growth (or even population decline) for the rabbits.
In the spirit of illustrating dynamics, we will leave out these important complexities and imagine that the state consists of just two numbers: how many rabbits there are and how many foxes. The dynamics therefore involve two equations, one for the rabbits and one for the foxes:
$$\partial_t r = \alpha\, r - \beta\, r f$$
$$\partial_t f = \gamma\, r f - \delta\, f$$
The quantities $\alpha$, $\beta$, $\gamma$, and $\delta$ are parameters: fixed numbers that set the rates of rabbit reproduction, predation, fox growth, and fox starvation.
How are you supposed to know that these are sensible forms for the dynamical functions?
Coming up with this description of dynamics requires knowing something about rabbits and foxes. The particular forms used, for instance the interaction term $r f$ that appears in both equations, come from the modeler’s knowledge of the system and are themselves modeling choices open to refinement.
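As a sketch, the two dynamical functions can be written directly as R functions of the state $(r, f)$. The parameter values below are placeholders chosen only for illustration; they are not values given in the text.

```r
# Dynamical functions for the rabbit/fox model (illustrative parameter values only)
alpha <- 0.66;  beta  <- 1.33   # rabbit growth and predation rates (hypothetical)
gamma <- 1.00;  delta <- 1.00   # fox growth-from-prey and starvation rates (hypothetical)

dr_dt <- function(r, f) alpha * r - beta * r * f    # rate of change of the rabbit population
df_dt <- function(r, f) gamma * r * f - delta * f   # rate of change of the fox population

# Evaluate the flow at one point in state space:
dr_dt(r = 2, f = 1)   # rabbits per unit time at the state (r = 2, f = 1)
df_dt(r = 2, f = 1)   # foxes per unit time at the same state
```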
39.5 State space and flow field
For the purpose of developing intuition it is helpful to represent the instantaneous state as a point in a graphical frame and the dynamics as a field of vectors showing how, for each possible state, the state changes. For instance, in the Rabbit-Fox dynamics, the state is the pair $(r, f)$, which can be drawn as a single point in a coordinate plane with $r$ on one axis and $f$ on the other.
The present state of the system might be any point in the state space. But once we know the present state, the dynamical functions evaluated at the present state tell us how the state changes over a small increment in time. The step over a small increment of time can be represented by a vector.
Let’s illustrate with the Rabbit-Fox system, whose dynamical equations are given above. The dynamical functions take a position in state space as input. Each of the functions returns a scalar.
To make a plot, we need numerical values for all the parameters in those equations.
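Here is one way such a flow field could be drawn, sketched in base R with the hypothetical dynamical functions and parameter values from the previous code block: evaluate the rate of change at each point of a grid of states and draw a short arrow there. This is only an illustration of the idea, not the code behind the book’s figures.

```r
# Sketch: draw a flow field for the rabbit/fox system with base R graphics.
# Uses the dr_dt(), df_dt() functions and parameter values defined above.
r_grid <- seq(0.2, 4, by = 0.4)
f_grid <- seq(0.2, 2, by = 0.2)
grid   <- expand.grid(r = r_grid, f = f_grid)   # every combination of r and f

dr <- dr_dt(grid$r, grid$f)     # horizontal component of the flow at each grid point
df <- df_dt(grid$r, grid$f)     # vertical component of the flow at each grid point

scale <- 0.05                   # shrink the arrows so they fit between grid points
plot(grid$r, grid$f, pch = 20, cex = 0.3,
     xlab = "rabbits (r)", ylab = "foxes (f)", main = "Flow field (sketch)")
arrows(grid$r, grid$f,
       grid$r + scale * dr, grid$f + scale * df,
       length = 0.05)
```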
The vector field corresponding to the dynamics is called a flow, as if it were a pool of swirling water. Figure 39.2 shows the flow of the rabbit/fox system.
Staying with the analogy to a pool of swirling water or the currents in a river, you can place a lightweight marker such as a leaf at some point in the flow and follow its path over time. This path—position in state space as a function of time—is called the trajectory of the flow. There are many possible trajectories, depending on where you place the leaf.
In Chapter 33 we considered the path followed by a robot arm. In that chapter, we separated out the two coordinates of the arm’s position, treating each as a function of time. We can do the same thing with a trajectory: the solution of the differential equations gives each state component as a function of time.
Each component of the solution is called a time series and is often plotted as a function of time, for instance $r(t)$ or $f(t)$ in the rabbit/fox system.
From the flow field, you can approximate the trajectory that will be followed from any initial condition. Starting from the initial condition, just follow the flow. You already have some practice following a flow from your study of the gradient ascent method of optimization described in Chapter 23. At the argmax, the gradient is nil. Thus, the gradient ascent method stops at the argmax. We will see an analogous behavior in dynamical systems: any place where the flow is nil is a potential resting point for the state, called a fixed point.
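“Following the flow” can be automated with the same small-step idea used earlier: evaluate the dynamical functions at the current state, move a little way in that direction, and repeat. The sketch below reuses the hypothetical rabbit/fox functions `dr_dt()` and `df_dt()` defined above and starts from an arbitrarily chosen initial condition.

```r
# Sketch: trace one trajectory by repeatedly stepping along the flow.
dt    <- 0.01                       # small time increment
steps <- 2000
traj  <- matrix(NA, nrow = steps + 1, ncol = 2,
                dimnames = list(NULL, c("r", "f")))
traj[1, ] <- c(2, 0.5)              # assumed initial condition: r = 2, f = 0.5

for (i in 1:steps) {
  r <- traj[i, "r"]
  f <- traj[i, "f"]
  traj[i + 1, ] <- c(r + dr_dt(r, f) * dt,   # step r by its rate of change times dt
                     f + df_dt(r, f) * dt)   # step f likewise
}

plot(traj[, "r"], traj[, "f"], type = "l",
     xlab = "rabbits (r)", ylab = "foxes (f)",
     main = "One trajectory of the rabbit/fox flow (sketch)")
```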
Let’s return to the pendulum and examine its flow field. We will modify the equations just a little bit to include air resistance in the model. Air resistance is a force, so we know it will appear in the equation for $\partial_t v$. A simple model of air resistance is a drag that is proportional to the velocity and directed against it, which adds to the $\partial_t v$ equation a term proportional to $v$ and opposite in sign.
The accompanying figure shows the flow field of the pendulum with air resistance. Also shown is a trajectory and the two time series corresponding to that trajectory.
The pendulum was started by lifting it to an initial angle and then releasing it; with air resistance draining energy from the swing, the oscillations gradually die out.
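A numerical sketch of such a simulation is below. The exact drag term, the parameter values, and the initial angle are all assumptions made for the sake of illustration; they are not the settings used for the figure in the text.

```r
# Sketch: Euler-step simulation of a pendulum with a linear air-resistance term.
#   d(theta)/dt = v
#   d(v)/dt     = -(g / L) * sin(theta) - mu * v     (assumed form of the drag)
g  <- 9.8;  L <- 1;  mu <- 0.3         # gravity, rod length, drag coefficient (hypothetical)
dt <- 0.005                             # time step, seconds
t  <- seq(0, 20, by = dt)               # simulate 20 seconds

theta <- numeric(length(t));  v <- numeric(length(t))
theta[1] <- 1                           # assumed initial angle, radians
v[1]     <- 0                           # released from rest

for (i in 1:(length(t) - 1)) {
  theta[i + 1] <- theta[i] + v[i] * dt
  v[i + 1]     <- v[i] + (-(g / L) * sin(theta[i]) - mu * v[i]) * dt
}

plot(t, theta, type = "l", xlab = "time (s)", ylab = "angle (rad)",
     main = "Pendulum angle vs time (sketch)")   # a decaying oscillation
```

The resulting time series shows the angle swinging back and forth while the amplitude shrinks, the signature of energy being removed by the drag term.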
The flow of a dynamical system tells how different points in state space are connected. Because movement of the state is continuous in time and the state space itself is continuous, the connections cannot be stated in the form “this point goes to that point.” Instead, as has been the case all along in calculus, we describe the movement in terms of a “velocity” vector. Each dynamical function specifies one component of the “velocity” vector; taken together, they tell the direction and speed of movement of the state at each instant in time.
Perhaps it would be better to use the term state velocity instead of “velocity.” In physics and most aspects of everyday life, “velocity” refers to the rate of change of physical position of an object. Similarly, the state velocity tells the rate of change of the position of the state. It is a useful visualization technique to think of the state as an object skating around the state space in a manner directed by the dynamical functions. But the state space almost always includes components other than physical position. For instance, in the rabbit/fox model, the state says nothing about where individual rabbits and foxes are located in their environment; it is all about the density of animals in a region.
In physics, the state space often consists of position in physical space as well as velocity in physical space. For instance, the state might consist of the three coordinates of position together with the three components of velocity, giving a six-dimensional state space.
Returning to the Chutes and Ladders game used as an example near the start of this chapter …
The state in Chutes and Ladders is one of the hundred numbers 1, 2, …, 100. Figure 39.5 shows the flow: each state drawn together with an arrow to the state that follows it.
In the no-dice game, the state follows the arrows. Looking carefully at Figure 39.5, you can see that each state has a forward connection to at most one state. This is the hallmark of determinism.
In the children’s game, the play is not deterministic because a die is used to indicate which state follows from each other state. A die has six faces with the six numbers 1 to 6. So, each state is connected to six other states in the forward direction. Which of the six is to be followed depends on the number that comes up on the die. Multiple forward connections means the dynamics are stochastic (random).
Straightforward examination of the flow often tells you a lot about the big picture of the system. In dice-free Chutes and Ladders, the 100 states are divided into three isolated islands. State 1 is part of the island in the lower right corner of Figure 39.5. Follow the arrows starting from any place on that island and you will eventually reach state 84. And state 84 is part of a cycle, so once the state arrives there it returns again and again.
39.6 Exercises
Exercise 39.01
Refer to Figure 39.5 showing the flow from state to state of the dice-free Chutes and Ladders game.
The text mentions a cycle involving state 84. Write down all the other cycles in the flow.
The goal of the game is to get to state 100. From how many initial states (other than 100) will the flow eventually lead to state 100.
The only terminal endpoint for the flow on the state island shown in the upper left corner of the diagram is state 100. Explain why, in the dice-free game, there cannot possibly be another terminal endpoint for the flow on that state island.
Exercise 39.03
Trace a trajectory from each of the points labeled A, B, C, and D until it reaches the edge of the box. Note the direction in which the trajectory is heading using compass directions.
Part A Which direction for trajectory A?
N NE E SE S SW W NW
Part B Which direction for trajectory B?
N NE E SE S SW W NW
Part C Which direction for trajectory C?
N NE E SE S SW W NW
Part D Which direction for trajectory D?
N NE E SE S SW W NW
Exercise 39.05
Explain briefly: What is the distinction between an instantaneous state and a state space?
Exercise 39.07
Trace trajectories from each of the initial conditions A, B, C.
Give a one-word description for the shape shared by all the trajectories.
Give the coordinates (roughly) where the trajectories will meet up if continued for a long enough time.
Exercise 39.09
Consider a dynamical system with state variables
Part A What type of object is shown by the graph
- a time series
- a trajectory
- a state space
- an instantaneous state
- nonsense
Part B What type of object is shown by the graph
- a time series
- a trajectory
- a state space
- an instantaneous state
- nonsense
Part C What type of object is shown by the graph
- a time series
- a trajectory
- a state space
- an instantaneous state
- nonsense
Part D What type of object is the coordinate
- a time series
- a trajectory
- a state space
- an instantaneous state
- nonsense
Part E What type of object is the
- a time series
- a trajectory
- a state space
- an instantaneous state
- nonsense
Exercise 39.11
Here is a flow field:
All but one of the plots above is a time series from a trajectory starting at one of the initial conditions A, B, or C. The time series might be for either of the state variables $u$ or $v$.
Part A To which of these choices does Plot 1 belong?
Part B To which of these choices does Plot 2 belong?
Part C To which of these choices does Plot 3 belong?
Part D To which of these choices does Plot 4 belong?
Exercise 39.13
Part A Plot A
time series trajectory
Part B Plot B
time series trajectory
Part C Plot C
time series trajectory
Part D Plot D
time series trajectory
Part E Plot E
time series trajectory
Part F Plot F
time series trajectory
Part G Plot G
time series trajectory
Part H Plot H
time series trajectory
Exercise 39.15
The graph shows two trajectories, A, B.
For each trajectory, sketch the time series.
- Trajectory A, variable $u$
- Trajectory A, variable $v$
- Trajectory B, variable $u$
- Trajectory B, variable $v$
Exercise 39.17
We will be using a handful of Greek letters in our mathematical notation. You should learn these by heart:
- $\alpha$ : alpha (lowercase)
- $\beta$ : beta (lowercase)
- $\gamma$ : gamma (lowercase)
- $\delta$ : delta (lowercase)
- $\lambda$ : lambda (lowercase)
- $\Lambda$ : lambda (uppercase)
- $\omega$ : omega (lowercase)
- $\xi$ : xi (lowercase), pronounced “ex-eee”
- $\eta$ : eta (lowercase)
The last two of these, $\xi$ and $\eta$, are probably the least familiar.
On a piece of paper, write out each of the following Greek letters and, alongside it, the name of the letter.