# DDEProblem solve — Delay Differential Equations in Julia



diffeqpy is a package for solving differential equations in Python. It utilizes DifferentialEquations.jl for its core routines to provide high-performance solving of many different types of differential equations, including:

• Discrete equations (function maps, discrete stochastic (Gillespie/Markov) simulations)
• Ordinary differential equations (ODEs)
• Split and Partitioned ODEs (Symplectic integrators, IMEX Methods)
• Stochastic ordinary differential equations (SODEs or SDEs)
• Random differential equations (RODEs or RDEs)
• Differential algebraic equations (DAEs)
• Delay differential equations (DDEs)
• Mixed discrete and continuous equations (Hybrid Equations, Jump Diffusions)

directly in Python.

If you have any questions, or just want to chat about solvers/using the package, please feel free to chat in the Gitter channel. For bug reports, feature requests, etc., please submit an issue.

To install diffeqpy, use pip:
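The install command elided here is the standard one from the package's README:

```shell
pip install diffeqpy
```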

Using diffeqpy requires that Julia is installed and in the path, along with DifferentialEquations.jl and PyCall.jl. To install Julia, download a generic binary from the JuliaLang site and add it to your path. To install the Julia packages required for diffeqpy, open up a Python interpreter and run:
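The Julia-side setup step referred to above is a sketch of diffeqpy's one-time install helper (this downloads Julia packages, so it should be run once after installing Julia):

```python
import diffeqpy
diffeqpy.install()  # installs DifferentialEquations.jl and PyCall.jl via Julia's package manager
```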

and you’re good! In addition, to improve the performance of your code it is recommended that you use Numba to JIT compile your derivative functions. To install Numba, use:
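The Numba install command referred to above:

```shell
pip install numba
```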

Import and setup the solvers via the commands:

The general flow for using the package is to follow exactly as would be done in Julia, except add de. in front. Most of the commands will work without any modification. Thus the DifferentialEquations.jl documentation and the DiffEqTutorials are the main in-depth documentation for this package. Below we will show how to translate these docs to Python code.

Ordinary Differential Equation (ODE) Examples

The solution object is the same as the one described in the DiffEq tutorials and in the solution handling documentation (note: the array interface is missing). Thus for example the solution time points are saved in sol.t and the solution values are saved in sol.u . Additionally, the interpolation sol(t) gives a continuous solution.

We can plot the solution values using matplotlib:

We can utilize the interpolation to get a finer solution:

The common interface arguments can be used to control the solve command. For example, let’s use saveat to save the solution at every t=0.1 , and let’s utilize the Vern9() 9th order Runge-Kutta method along with low tolerances abstol=reltol=1e-10 :

The set of algorithms for ODEs is described at the ODE solvers page.

Compilation with Numba and Julia

When solving a differential equation, it’s pertinent that your derivative function f is fast since it occurs in the inner loop of the solver. We can utilize Numba to JIT compile our derivative functions to improve the efficiency of the solver:

Additionally, you can directly define the functions in Julia. This will allow for more specialization and could be helpful to increase the efficiency over the Numba version for repeat or long calls. This is done via julia.Main.eval :

Systems of ODEs: Lorenz Equations

To solve systems of ODEs, simply use an array as your initial condition and define f as an array function:

or we can draw the phase plot:

In-Place Mutating Form

When dealing with systems of equations, in many cases it’s helpful to reduce memory allocations by using mutating functions. In diffeqpy, the mutating form adds the mutating vector to the front. Let’s make a fast version of the Lorenz derivative, i.e. mutating and JIT compiled:

or using a Julia function:

Stochastic Differential Equation (SDE) Examples

Solving one-dimensional SDEs du = f(u,t)dt + g(u,t)dW_t is like an ODE except with an extra function for the diffusion (randomness or noise) term. The steps follow the SDE tutorial.

Systems of SDEs with Diagonal Noise

An SDE with diagonal noise is where a different Wiener process is applied to every part of the system. This is common for models with phenomenological noise. Let’s add multiplicative noise to the Lorenz equation:

Systems of SDEs with Non-Diagonal Noise

In many cases you may want to share noise terms across the system. This is known as non-diagonal noise. The DifferentialEquations.jl SDE Tutorial explains how the matrix form of the diffusion term corresponds to the summation style of multiple Wiener processes. Essentially, the row corresponds to which system the term is applied to, and the column is which noise term. So du[i,j] is the amount of noise due to the jth Wiener process that's applied to u[i]. We solve the Lorenz system with correlated noise as follows:

Here you can see that the warping effect of the noise correlations is quite visible!

Differential-Algebraic Equation (DAE) Examples

A differential-algebraic equation is defined by an implicit function f(du,u,p,t)=0 . All of the controls are the same as the other examples, except here you define a function which returns the residuals for each part of the equation to define the DAE. The initial value u0 and the initial derivative du0 are required, though they do not necessarily have to satisfy f (known as inconsistent initial conditions). The methods will automatically find consistent initial conditions. In order for this to occur, differential_vars must be set. This vector states which of the variables are differential (have a derivative term), with false meaning that the variable is purely algebraic.

This example shows how to solve the Robertson equation:

and the in-place JIT compiled form:

Delay Differential Equations

A delay differential equation is a differential equation in which the derivative may depend on past values of the solution. In this case, the function needs to be a JIT-compiled Julia function. It looks just like the ODE, except there is an additional function h(p,t) which allows you to interpolate and grab previous values.

We must provide a history function h(p,t) that gives values for u before t0 . Here we assume that the solution was constant before the initial time point. Additionally, we pass constant_lags = [20.0] to tell the solver that only constant-time lags were used and what the lag length was. This helps improve the solver accuracy by accurately stepping at the points of discontinuity. Together this is:

Notice that the solver is able to accurately simulate the kink (discontinuity) at t=20 caused by the discontinuity in the derivative at the initial time point. This is why declaring discontinuities can enhance solver accuracy.

## Solving algebraic looping problem by introducing a unit delay

First, here is my code.

The problem here lies in the (*Flip Flop Control Law*) section. Since the equation in u1 is dependent on u2 and vice versa, these two equations result in an algebraic loop. One method I know of for overcoming this problem is to introduce a unit delay. In SIMULINK, the "Unit Delay" block is available, but in Mathematica, how do I program a unit delay for this system so I can simulate and plot the results successfully for both Theta and Phi?

## A new technique to solve the initial value problems for fractional fuzzy delay differential equations

• Truong Vinh An
• Ho Vu
• Ngo Van Hoa

## Abstract

Using some recent results of fixed point of weakly contractive mappings on the partially ordered space, the existence and uniqueness of solution for interval fractional delay differential equations (IFDDEs) in the setting of the Caputo generalized Hukuhara fractional differentiability are studied. The dependence of the solution on the order and the initial condition of IFDDE is shown. A new technique is proposed to find the exact solutions of IFDDE by using the solutions of interval integer order delay differential equation. Finally, some examples are given to illustrate the applications of our results.

## 1 Introduction

Fractional calculus and fractional differential equations are a field of increasing interest due to their applicability to the analysis of phenomena, and they play an important role in a variety of fields such as rheology, viscoelasticity, electrochemistry, diffusion processes, etc. Usually applications of fractional calculus amount to replacing the time derivative in a given evolution equation by a derivative of fractional order. One can find applications of fractional differential equations in signal processing and in the complex dynamic in biological tissues (see [ 1 , 2 , 3 ]). To observe some basic information and results of various type of fractional differential equations, one can see the papers and monographs of Samko et al. [ 4 ], Podlubny [ 5 ] and Kilbas et al. [ 6 ].

Interval analysis and interval differential equations were proposed as an attempt to handle the interval uncertainty that appears in many mathematical or computer models of deterministic real-world phenomena in which uncertainty or vagueness pervades. In recent times this theory has been developed in theoretical directions, and a wide range of applications has been considered (see, e.g., [ 7 , 8 , 9 , 10 , 11 , 12 ]). Recently, the issue of fuzzy fractional calculus and fuzzy fractional differential equations has emerged as a significant subject, and this new theory has become very attractive to many scientists. The concept of fuzzy-type Riemann-Liouville differentiability based on the Hukuhara differentiability was initiated by Agarwal et al. in [ 13 , 14 ] with some applications to fractional-order initial value problems for fuzzy differential equations. By using the Hausdorff measure of non-compactness and under compactness-type conditions, the authors proved the existence of solutions of fuzzy fractional integral equations. Following this direction, the concepts of fuzzy fractional differentiability have been developed and extended in several papers to investigate results on the existence and uniqueness of solutions to fuzzy differential equations (see, e.g., [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 ]).

The main contributions of this paper are to:

• give the existence and uniqueness theorem of solutions for a general form of the interval fractional integral equation by using some recent results on fixed points of weakly contractive mappings on partially ordered sets, and use these results to investigate the existence and uniqueness of solutions for problem ( 1.1 );

• show that the solutions of the initial value problem ( 1.1 ) depend continuously on the initial condition, the order, and the right-hand side of the equation;

• propose a new technique to find the exact solutions of problem ( 1.1 ) by using the solutions of the interval integer-order delay differential equation.

This paper is organized as follows. In Section 2, some basic concepts and notations about fractional derivatives for interval functions are introduced. In Section 3, we prove the existence and uniqueness of solution for a general form of the interval fractional integral equation and use this result to investigate the existence and uniqueness of solutions for problem ( 1.1 ). Finally, a new technique to find the exact solutions of problem ( 1.1 ) is provided and two examples are given to illustrate this technique.

## 2 Preliminaries

### Lemma 2.1

If $$(X_n)_{n\in\mathbb{N}} \subset K_C(\mathbb{R})$$ is a nondecreasing sequence such that $$X_n \to X$$ in $$K_C(\mathbb{R})$$, then $$X_n \preceq X$$ for all $$n \in \mathbb{N}$$.

Every pair of elements of $$K_C(\mathbb{R})$$ has a lower bound or an upper bound.

### Lemma 2.2

$$(C([a,b],K_C(\mathbb{R})), \preceq )$$ is a partially ordered space.

If $$(X_n)_{n\in\mathbb{N}} \subset C([a,b],K_C(\mathbb{R}))$$ is a nondecreasing sequence such that $$X_n \to X$$ in $$C([a,b],K_C(\mathbb{R}))$$, then $$X_n \preceq X$$ for all $$n \in \mathbb{N}$$.

Every pair of elements of $$C([a,b],K_C(\mathbb{R}))$$ has a lower bound or an upper bound.

Let $$X:[a,b]\rightarrow K_C(\mathbb{R})$$ be an interval function. Then X is called w-increasing (w-decreasing) on $$[a,b]$$ if $$t\mapsto w(X(t))$$ is nondecreasing (nonincreasing) on $$[a,b]$$. We say that X is w-monotone on $$[a,b]$$ if X is w-increasing or w-decreasing on $$[a,b]$$.

## Machine learning meets math: Solve differential equations with new Julia library

Need a good math tutor? Julia’s the name and differential equations is the new game. Julia’s latest library combines machine learning with solving differential equations. This collaborative effort shows off the power that Julialang has as a platform for machine learning.

Julia continues to make waves since its co-creators won the 2019 James H. Wilkinson Prize for Numerical Software. The Wilkinson prize is awarded every four years. This year it celebrates the innovative language “for the creation of Julia, an innovative environment for the creation of high-performance tools that enable the analysis and solution of computational science problems.” There’s no better way to start the fresh new year than already on top with a shiny new prize.

Now, the language unveils a new library upon the scientific computing community: DiffEqFlux.jl. It combines the power of solving differential equations and machine learning.

This library for neural differential equations reminds us why Julia deserves the award. Let’s pay our congrats and spread the word.

## Nobody solves problems like Julia

Julia’s team showed off DiffEqFlux.jl in a blog post on January 18, 2019. The post is a combined effort of Julia library creators and the authors of the Neural Ordinary Differential Equations paper, which won a best paper award at NeurIPS 2018.

First of all, DiffEqFlux.jl is a recipe that combines two great libraries into one elegant interaction: DifferentialEquations.jl and Flux.jl.

DifferentialEquations.jl is a suite for solving, what else, differential equations. (See some example Jupyter notebooks on GitHub and follow the interactive introduction and tutorial.) Are differential equations a long-lost school memory or a concept that you struggle with? That is no problem with the amount of helpful tutorials and introductions. The Jupyter notebook, “An Intro to DifferentialEquations.jl” helps dip you in.

Flux.jl, on the other hand, is an “elegant machine learning stack”. It is a library for machine learning and enables the powerful nature of Julia. Several demos of Flux are available on GitHub in the model zoo. Use the examples as a starting point for your own machine learning models.

With that in mind, the latest library combines differential equations and machine learning into one beautiful package.

## Diffy Q + machine learning = match made in heaven

So, why is machine learning the perfect match for differential equations?

The announcement blog post answers this question (in a very helpful tone – frankly, I wish all mathematical concepts were explained like this). While you should absolutely read the entire explanation, here is just a sample:

There are three common ways to define a nonlinear transform: direct modeling, machine learning, and differential equations. Directly writing down the nonlinear function only works if you know the exact functional form that relates the input to the output. However, in many cases, such exact relations are not known a priori. So how do you do nonlinear modeling if you don’t know the nonlinearity? One way to address this is to use machine learning.

The blog post’s comprehensive nature cannot be overstated. It sets a new precedent for future tutorials and explanations to come. By the end of the post, you will know how to implement the neural ODE layer in Julia and understand its behavior.

With the neural ordinary differential equation (ODE), machine learning meets math!

## High honors for a high-level language

For further reading about differential equation solvers, be sure to read this article by the lead developer of DifferentialEquations.jl. Christopher Rackauckas compares differential equation solver suites in various languages: MATLAB, R, Julia (of course), Python, C, Mathematica, Maple, and even an old-school set of Fortran solvers. The article highlights the good and bad about all methods, their limitations and efficiency. (If you haven’t tried Julia yet, perhaps Christopher Rackauckas’ writing will convince you.)

Congratulations once again to Julia for winning the James H. Wilkinson Prize for Numerical Software. All the praise is well-deserved.

We all look forward to seeing what the future holds. Take a bow, you earned it!


## Abstract

DifferentialEquations.jl is a package for solving differential equations in Julia. It covers discrete equations (function maps, discrete stochastic (Gillespie/Markov) simulations), ordinary differential equations, stochastic differential equations, algebraic differential equations, delay differential equations, hybrid differential equations, jump diffusions, and (stochastic) partial differential equations. Through extensive use of multiple dispatch, metaprogramming, plot recipes, foreign function interfaces (FFI), and call-overloading, DifferentialEquations.jl offers a unified user interface to solve and analyze various forms of differential equations while not sacrificing features or performance. Many modern features are integrated into the solvers, such as allowing arbitrary user-defined number systems for high-precision and arithmetic with physical units, built-in multithreading and parallelism, and symbolic calculation of Jacobians. Integrated into the package is an algorithm testing and benchmarking suite to both ensure accuracy and serve as an easy way for researchers to develop and distribute their own methods. Together, these features build a highly extendable suite which is feature-rich and highly performant.

Funding statement: This work was partially supported by NIH grants P50GM76516 and R01GM107264 and NSF grants DMS1562176 and DMS1161621. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1321846, the National Academies of Science, Engineering, and Medicine via the Ford Foundation, and the National Institutes of Health Award T32 EB009418. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

## (1) Overview

### 1 Introduction

Differential equations are fundamental components of many scientific models; they are used to describe large-scale physical phenomena like planetary systems [ 10 ] and the Earth’s climate [ 12 , 18 ], all the way to smaller scale biological phenomena like biochemical reactions [ 30 ] and developmental processes [ 27 , 7 ]. Because of the ubiquity of these equations, standard sets of solvers have been developed, including Shampine’s ODE suite for MATLAB [ 25 ], Hairer’s Fortran codes [ 8 ], and the Sundials CVODE solvers [ 11 ].

However, these software packages contain many limitations which stem from their implementation and the time when they were developed. Since the time of their inception, many other forms of differential equations have become commonplace tools not only for mathematicians, but throughout the sciences. Stochastic differential equations (SDEs) have become more commonplace not only in mathematical finance [ 23 , 5 ], but also in biochemical [ 4 , 13 ] and ecological models. Delay differential equations have become a ubiquitous tool for modeling phenomena with natural delays, as seen in neuroscience [ 3 , 22 ] and control theory [ 24 ]. However, a user who is familiar with standard ODE tools has to “leave the box” to find a new specialized package to handle these kinds of differential equations, or write their own solver scripts [ 9 ]. Also, when many of these methods were implemented, the standard computer was limited by the speed of the processor. These days, most processors are multi-core and many computers contain GPGPU [ 1 ] or Xeon Phi [ 17 , 6 ] acceleration cards, so taking advantage of the ever-present parallelism is key to achieving good performance.

Other design limitations stem from the programming languages used in the implementation. Many of these algorithms, being developed in early C/Fortran, do not have abstractions for generalized array formats. In order to use these algorithms, one must provide the solver with a vector. In cases where a matrix or a higher-dimensional tensor is the natural representation of the differential equation, the user is required to transform their equation into a vector equation for use in these solvers. Also, these solvers are limited to using 64-bit floating point calculations. The numerical precision limits their use in high-precision applications, requiring specialized codes when precision below 10^-16 is required. Lastly, many times these programs are interfaced via a scripting language where looping is not optimized and where “vectorized” codes provide the most efficient solution. However, vectorized coding in the style of MATLAB or NumPy results in temporary allocations and can lack compiler optimizations which require type inference. This increases the computational burden of the user-defined functions, which degrades the efficiency of the solver.

The goal of DifferentialEquations.jl is to build on the foundation created by these previous differential equation libraries and modernize them using Julia. Julia is a scripting language, used in place of languages like R, Python, and MATLAB, but offers the performance one would associate with low-level compiled languages. This allows users to start prototypes in Julia and also solve their large-scale models within the same language, instead of resorting to two-language solutions when performance is needed. The language achieves this goal by extensive utilization of multiple dispatch and metaprogramming to design a language that is both easy for a compiler to understand and easy for a programmer to use [ 2 ]. DifferentialEquations.jl builds off of these design principles to arrive at a fast, feature-rich, and highly extendable differential equations suite which is easy to use.

We start by describing the innovations in usability. In Section 1.1 we show how multiple dispatch is used to consolidate the user API into two commands: solve and plot. Since these commands are used for all forms of differential equations, the user interface is unified in a manner that makes it easy for a user to explore other types of models. Then in Section 1.2 we show how metaprogramming is used to further simplify the user API, allowing the user to define a function in a “mathematical format” which is automatically converted into the computationally-efficient encoding. After that, we describe how the internals were designed in order to be both feature-filled and highly performant. In Section 1.3 we describe the package structure of DifferentialEquations.jl and how the Base libraries, component solvers, and add-on packages come together to provide the full functionality of DifferentialEquations.jl. In Section 1.4 we describe how multiple dispatch is used to write a single generic method which compiles into specialized functions dependent on the number types given to the solver. We show how this allows for the solvers to both achieve high performance while being compatible with any Julia-defined number system which implements a few basic mathematical operations, including fast high and intermediate precision numbers and arithmetic with physical units. In Section 1.5 we describe the experimental within-method multi-threading which is being used to further enhance the performance of the methods, and the multi-node parallelism which is included for performing Monte Carlo simulations of stochastic models. We then discuss some of the tools which allow DifferentialEquations.jl to be a good test suite for the fast development and deployment of new solver algorithms, and the tools provided for performing benchmarks. Lastly, we describe the current limitations and future development plans.

#### 1.1 A Unified API Through Multiple Dispatch

DifferentialEquations.jl uses multiple dispatch on specialized types to arrive at a unified user-API for the different types of equations. To use the package, one follows the steps:

1. Define a problem.
2. Solve the problem.
3. Plot the solution.

This standardization of the API makes complicated solvers accessible to less programming-inclined individuals, gives a good framework for future development, and allows for the latest research in numerical differential equations to be utilized without complications.

#### 1.1.1 Solving ODEs

To define a problem, a user must call the constructor for the appropriate problem object. Since ordinary differential equations (ODEs) are represented in the general form as
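The general form referred to here (reconstructed to match the f(t,u) notation used later in this section) is:

```latex
\frac{du}{dt} = f(t, u), \qquad u(t_0) = u_0 .
```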

the ODEProblem is defined by specifying a function f and an initial condition u. For example, we can define the linear ODE using the commands:

Many other examples are provided in the documentation and in the Jupyter notebook tutorials in DiffEqTutorials.jl (for use with Julia, see IJulia.jl). To solve the ODE, the user can simply call the solve command on the problem:

By using a dispatch architecture on AbstractArrays and using the array-defined indexing functionality provided by Julia (e.g. eachindex(A) ), DifferentialEquations.jl accepts problems defined on arrays of any size. For example, one can define and solve a system of equations where the dependent variable u is a matrix as follows:

For most other packages, one would normally have to define u as a vector and rewrite the system of equations in the vector form. However, by allowing arbitrary problem sizes, DifferentialEquations.jl allows the user to specify problems in the natural format and solve directly on any array of numbers. This can be helpful for problems like discretizations of partial differential equations (PDEs) where the matrix format matches some underlying structure, and could result in a denser formulation.

The solver returns a solution object which holds all of the information about the solution. Dispatches to array functions are provided on the sol object, allowing the solution object to act like a timeseries array. In addition, high-order efficient interpolations are lazily constructed throughout the solution (by default, a feature which can be turned off) and the sol object’s call is overloaded with the interpolating function. Thus the solution object can both be used as an array of the solution values and as a continuous approximation given by the numerical solution. The syntax is as follows:


The solution can be plotted using the provided plot recipes for Plots.jl. The plot recipes use the solver object to build a default plot which is customizable using any of the commands from the Plots.jl package, and can be plotted to any plotting backend provided by Plots.jl, for example the PyPlot backend (a Julia wrapper for matplotlib) via the command:

These defaults are deliberately made so that a standard user does not need to dig further into the manual and understand the differences between all of the algorithms. However, an extensive set of functionality is available if the user wishes. All of these functions can be modified via additional arguments. For example, to change the solver algorithm to a highly efficient Order 7 method due to Verner [ 29 ], set the line width in the plot to 3 pixels, and add some labels to the plot, one could instead use the commands:

The output of this command is shown in Figure 1 . Note that the output is automatically smoothed using 10*length(sol) equally spaced interpolated values through the timespan.

Example of the ODE plot recipe. This plot was created using the PyPlot backend through Plots.jl. Shown is the solution to the 4 × 2 ODE with f(t,u) = Au where A is given in the code. Each line corresponds to one component of the matrix over time.

Lastly, these solvers tie into Julia integrated development environments (IDEs): solvers run in Juno [ 16 ] are equipped with a progress bar and time estimates to monitor the progress of the solver. Additionally, all of the DifferentialEquations.jl functions are thoroughly tested and documented with the Jupyter notebook system [ 19 ], allowing for reproducible exploration.

#### 1.1.2 Solving SDEs

By using multiple-dispatch, the same user API is offered for other types of equations. For example, if one wishes to solve a stochastic differential equation (SDE):
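The SDE referred to here can be written (matching the f and g notation of the text) as:

```latex
dX_t = f(t, X_t)\,dt + g(t, X_t)\,dW_t , \qquad X_{t_0} = X_0 .
```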

then one builds an SDEProblem object by specifying the initial condition and now the two functions, f and g. However, the rest of the usage is the same: simply use the solve and plot functions. To extend the previous example to have multiplicative noise, the code would be:

While this user interface is simple, the default methods these algorithms can call are efficient high-order solvers with adaptive timestepping [ 21 ]. These methods tie into the plotting functionality and IDEs in the same manner as the ODE solvers, making it easy for users to explore stochastic modeling without having to learn a new interface.

#### 1.1.3 Solving (Stochastic) PDEs

Again, the same user API is offered for the available stochastic PDE solvers. Instead, one builds a HeatProblem object which dispatches to algorithms for solving (Stochastic) PDEs. An example using the previously defined functions is:

Additional keyword arguments can be supplied to HeatProblem to specify boundary data and initial conditions. Notice that the main difference is now we must specify a space-time mesh (and boundary conditions as optional keyword arguments). Again, the same plotting and analysis commands apply to the solution object sol (where now the plot dispatch is to a trisurf plot).

#### 1.2.1 A Macro-Based Interface

Most differential equations packages require that the user understands some details about the implementation of the library. However, the DifferentialEquations.jl ecosystem implements various Domain-Specific Languages (DSLs) via macros in order to give more natural options for defining mathematical constructs. In this section we will demonstrate the DSL for defining ODEs. For demonstrations related to other types of equations, please see the documentation.

The famous Lorenz system is mathematically defined as
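The Lorenz system in its standard form:

```latex
\begin{aligned}
\frac{dx}{dt} &= \sigma (y - x), \\
\frac{dy}{dt} &= x (\rho - z) - y, \\
\frac{dz}{dt} &= x y - \beta z .
\end{aligned}
```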

## Solving Delay Differential Equations with dde23

### L.F. Shampine, Mathematics Department, Southern Methodist University, Dallas, TX 75275, lshampin@mail.smu.edu

S. Thompson, Department of Mathematics & Statistics, thompson@runet.edu

## 1 Introduction

Ordinary differential equations (ODEs) and delay differential equations (DDEs) are used to describe many phenomena of physical interest. While ODEs contain derivatives which depend on the solution at the present value of the independent variable (“time”), DDEs contain in addition derivatives which depend on the solution at previous times. DDEs arise in models throughout the sciences. Despite the obvious similarities between ODEs and DDEs, solutions of DDE problems can differ from solutions for ODE problems in several striking, and significant, ways. This accounts in part for the lack of much general-purpose software for solving DDEs.

We consider here only systems of delay differential equations of the form

y′(t) = f(t, y(t), y(t − τ₁), y(t − τ₂), …, y(t − τₖ))   (1)

that are solved on a ≤ t ≤ b with given history y(t) = S(t) for t ≤ a. The constant delays are such that τ = min(τ₁, …, τₖ) > 0. Although DDEs with delays (lags) of more general form are important, this is a large and useful class of DDEs. Indeed, Baker, Paul, and Willé write that “The lag functions that arise most frequently in the modelling literature are constants.”

Although the effective solution of DDEs has benefited a great deal from the advances made in ODE technology during the past several years, the state of the art for DDE software is not at the level of ODE software. The few FORTRAN codes for solving DDEs are considerably more difficult to use than the popular ODE codes. This motivated the development of the MATLAB program dde23, with the goal of making it as easy as possible to solve the wide range of DDEs with constant delays encountered in practice.

This tutorial shows how to solve DDEs with dde23. It is organized as follows. Important differences between DDEs and ODEs are discussed briefly in § 2. In § 3 there is a brief discussion of how numerical methods for ODEs can be extended to solve DDEs. The most important part of this tutorial is the collection of examples in § 4. As the first few show, anyone familiar with solving ODEs using ode23 will find it easy to solve routine DDEs with dde23. Several examples then illustrate the powerful capabilities of dde23 for solving DDEs that are far from routine. Most of the examples have an exercise that provides practice solving DDEs in MATLAB with dde23.

## 2 Delay Differential Equations

In this section we describe briefly some important differences between DDEs and ODEs. More detailed discussions of the various issues are found in .

The most obvious difference between ODEs and DDEs is the initial data. The solution of an ODE is determined by its value at the initial point t = a. In evaluating the DDEs (1) for a ≤ t ≤ b, a term like y(t − τj) may represent values of the solution at points prior to the initial point. For example, at t = a we must have the solution at a − τj. It is easy to see that if T is the longest delay, the equations generally require us to provide the solution on a − T ≤ t ≤ a. For DDEs we must provide not just the value of the solution at the initial point, but also the "history", the solution at times prior to the initial point.

Because numerical methods for both ODEs and DDEs are intended for problems with solutions that have several continuous derivatives, discontinuities in low-order derivatives require special attention. This is a much more serious matter for DDEs. For one thing, such discontinuities are not unusual for ODEs, but they are almost always present for DDEs: Generally there is a discontinuity in the first derivative of the solution at the initial point because generally S′(a−) ≠ y′(a+) = f(a, S(a), S(a − τ1), …, S(a − τk)). There can also be discontinuities at times both before and after the initial point. Some problems have histories with discontinuities in low-order derivatives. Some models involve equations that change when the solution satisfies a given relation, e.g., when a solution component has a given value. These changes often cause discontinuities in low-order derivatives of the solution.

Another reason why discontinuities are much more serious for DDEs is that they propagate. If the solution has a discontinuity in a derivative somewhere, there are discontinuities in the rest of the interval at a spacing given by the delays. In reasonably general circumstances, the propagated discontinuities are smoothed: If there is a discontinuity at t* of order k, i.e., there is a jump in y^(k) at t*, then the discontinuity at t* + τj is of order at least k+1, the discontinuity at t* + 2τj is of order at least k+2, and so on. This is very important for the numerical solution of the DDE because once the orders are high enough, the discontinuities will not interfere with the numerical method and we can stop tracking them.

To see how discontinuities propagate and smooth out, let us solve

 y′(t) = y(t − 1)
(2)

for 0 ≤ t with history S(t) = 1 for t ≤ 0. With this history, the problem reduces on the interval 0 ≤ t ≤ 1 to the ODE y′(t) = 1 with initial value y(0) = 1. Solving this problem we find that y(t) = t + 1 for 0 ≤ t ≤ 1. Notice that the solution has a discontinuity in its first derivative at t = 0. In the same way we find that y(t) = (t^2 + 3)/2 for 1 ≤ t ≤ 2. The first derivative is continuous at t = 1, but there is a discontinuity in the second derivative. In general the solution on the interval [k, k+1] is a polynomial of degree k+1 and there is a discontinuity of order k+1 at t = k.
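The method-of-steps computation above can be carried out mechanically. The following Python sketch (an illustration only, not part of dde23; the helper names are invented) builds each interval's polynomial in exact rational arithmetic: shift the previous segment by the delay, integrate, and match the value at the left endpoint.

```python
from fractions import Fraction as F
from math import comb

def peval(p, t):
    """Evaluate a polynomial (ascending coefficients) at t by Horner's rule."""
    acc = F(0)
    for c in reversed(p):
        acc = acc * t + c
    return acc

def shift(p):
    """Coefficients of p(t - 1), by the binomial theorem."""
    out = [F(0)] * len(p)
    for k, c in enumerate(p):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * (-1) ** (k - j)
    return out

def integrate(p):
    """Antiderivative with zero constant term."""
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

# Method of steps for y'(t) = y(t - 1) with history y(t) = 1 for t <= 0.
segments = [[F(1)]]                 # history on [-1, 0]
for k in range(3):                  # build the solution on [0,1], [1,2], [2,3]
    rhs = shift(segments[-1])       # y(t - 1) expressed as a polynomial in t
    q = integrate(rhs)
    q[0] += peval(segments[-1], F(k)) - peval(q, F(k))  # continuity at t = k
    segments.append(q)
```

The degrees grow by one per interval, exactly the smoothing described above: segment k is a polynomial of degree k, with the discontinuity order rising at each multiple of the delay.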

## 3 Numerical Methods for DDEs

In this section we discuss a few aspects of the numerical solution of DDEs. A detailed discussion of the methods used by dde23 can be found in .

A popular approach to solving DDEs is to extend one of the methods used to solve ODEs. Most of the codes are based on explicit Runge-Kutta methods. dde23 takes this approach by extending the method of the MATLAB ODE solver ode23. The idea is simple: On the first interval 0 ≤ t ≤ 1, the DDE reduces to an initial value problem for an ODE with y(t − 1) equal to the given history S(t − 1) and initial value y(0) = 1. We can solve this ODE numerically using any of the popular methods for the purpose. Analytical solution of the DDE on the next interval 1 ≤ t ≤ 2 is handled the same way as on the first interval, but the numerical solution is somewhat complicated, and the complications are present for each of the subsequent intervals. The first complication is that we must keep track of how the discontinuity at the initial point propagates because of the delays. Another is that at each discontinuity we start the solution of an initial value problem for an ODE. Runge-Kutta methods are attractive because they are much easier to start than other popular numerical methods for ODEs. Still another issue is the term y(t − 1) that is in principle known because we have already found y(t) for 0 ≤ t ≤ 1. This has been a serious obstacle to applying Runge-Kutta methods to DDEs, so we need to discuss the matter more fully.

Runge-Kutta methods, like all discrete variable methods for ODEs, produce approximations yn to y(xn) on a mesh {xn} in the interval of interest, here [0,1]. They do this by starting with the given initial value, y0 = y(a) at x0 = a, and stepping from yn ≈ y(xn) a distance of hn to yn+1 ≈ y(xn+1) at xn+1 = xn + hn. The step size hn is chosen as small as necessary to get an accurate approximation, and as big as possible so as to reach the end of the interval in as few steps as possible, which is to say, as cheaply as possible. In the case of solving (2) on the interval [1,2], we have values of the solution only on a mesh in [0,1]. So, where do the values y(t − 1) come from? In their original form Runge-Kutta methods produce answers only at mesh points, but it is now known how to obtain "continuous extensions" that yield an approximate solution between mesh points. The trick is to get values between mesh points that are just as accurate and to do this cheaply. In some cases the continuous extensions can be viewed as interpolants. A continuous extension of this kind provides the values y(t − 1) needed when integrating the ODE on [1,2], and similarly for all the subsequent intervals.

The Runge-Kutta methods mentioned are all explicit recipes for computing yn+1 given yn and the ability to evaluate the equation. For reasons of efficiency, a solver tries to use the biggest step size hn that will yield the specified accuracy, but what if it is bigger than the shortest delay τ? In taking a step to xn + hn, we would then need values of the solution at points in the span of the step, but we are trying to compute the solution at the end of the step and do not yet know these values. A good many solvers restrict the step size to avoid this issue. Some solvers, including dde23, use whatever step size appears appropriate and iterate to evaluate the implicit formula that arises in this way.

## 4 Examples

In this section we use problems from the literature to show how to solve DDEs with dde23. Solving a DDE with dde23 is much like solving an ODE with ode23, but there are some notable differences. Examples 1 through 3 show how to solve typical problems. They should be read in order. dde23 has a powerful event location capability that is quite similar to that of ode23. Example 4 illustrates the capability by finding local maxima of the solution. ODE and DDE solvers are intended for problems with solutions that have several continuous derivatives. However, it is not unusual for equations to have different forms in different circumstances, which leads to discontinuities in low-order derivatives of the solution when the circumstances change. This matter is more serious for DDEs because discontinuities propagate and discontinuities can occur in the history. Examples 5 through 8 show how to deal with discontinuities in low-order derivatives, including jumps in the solution itself. They consider situations in order of difficulty and some require familiarity with a previous example. dde23 is limited to problems with constant delays, but the examples/exercises/problems of this section show that for this class of problems, it is both easy to use and powerful.

Complete solutions are provided for all the examples that can be used as templates. Some of the examples have exercises that are solved in a similar way. It is worth trying them for practice. Complete solutions are provided as a check and as further templates. This tutorial ends with some additional problems that serve as exercises for all the examples. Again, complete solutions are provided as a check and as further templates.

A naming convention is used throughout this section. For example, exam1.m is the M-file for solving the problem of Example 1. The equations of this problem are evaluated in the M-file exam1f.m. Some problems involve additional files, specifically a history function and/or an event function. The corresponding M-files have the names exam1h.m and exam1e.m, respectively. The M-files for the exercises follow the same convention with exam replaced by exer. Finally, the M-files for the additional problems are similarly named with exam replaced by prob.

### Example 1

We illustrate the straightforward solution of a DDE by computing and plotting the solution of Example 3 of . The equations

 y1′(t) = y1(t − 1)

 y2′(t) = y1(t − 1) + y2(t − 0.2)

 y3′(t) = y2(t)

are to be solved on [0,5] with history y1(t) = 1, y2(t) = 1, y3(t) = 1 for t ≤ 0.

A typical invocation of dde23 has the form sol = dde23('exam1f', lags, 'exam1h', tspan). The input argument tspan is the interval of integration, here [0, 5]. The history argument is the name of a function that evaluates the solution at the input value of t and returns it as a column vector; here that function is exam1h.m. Quite often the history is a constant vector, and a simpler way to provide it is to supply the vector itself as the history argument. The argument lags is a vector of the constant delays. The first argument is the name of the ddefile, a function with input arguments t, y, and Z that evaluates the DDEs, where column j of Z approximates y(t − τj) for τj given as lags(j). It is not necessary to define local vectors such as ylag1 and ylag2 for the delayed values, but often this makes the coding of the DDEs clearer. The ddefile must return a column vector.
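For readers without MATLAB, the system of Example 1 can also be approximated by a short fixed-step Euler sketch in Python (illustrative only; dde23 itself uses an adaptive Runge-Kutta pair with a continuous extension). The delays 1 and 0.2 are exact multiples of the step, so each lagged value is a grid lookup.

```python
# Fixed-step Euler sketch of Example 1 (not dde23).
h = 0.001
n_hist = int(round(1.0 / h))        # grid points covering the longest delay, 1
steps = int(round(5.0 / h))         # integrate over [0, 5]
d1 = int(round(1.0 / h))            # index offset for the delay 1
d2 = int(round(0.2 / h))            # index offset for the delay 0.2

# y[i] approximates (y1, y2, y3) at t = (i - n_hist) * h; history is (1, 1, 1)
y = [(1.0, 1.0, 1.0)] * (n_hist + 1)
for i in range(n_hist, n_hist + steps):
    y1, y2, y3 = y[i]
    y1lag = y[i - d1][0]            # y1(t - 1)
    y2lag = y[i - d2][1]            # y2(t - 0.2)
    y.append((y1 + h * y1lag,
              y2 + h * (y1lag + y2lag),
              y3 + h * y2))
```

On [0, 1] the right-hand side of the y1 equation is the constant history, so the sketch reproduces y1(1) = 2 up to roundoff; at t = 2 it gives y1 ≈ 3.5 with first-order accuracy.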

This is perhaps a good place to point out that dde23 does not assume that terms like y(t — t j) actually appear in the equations. Because of this, you can use dde23 to solve ODEs. If you do, it is best to input an empty array, [], for lags because any delay specified affects the computation even when it does not appear in the equations.

The input arguments of dde23 are much like those of ode23, but the output differs formally in that it is one structure, here called sol, rather than several arrays. The field sol.x corresponds to the array t of values of the independent variable returned by ode23 and the field sol.y, to the array y of solution values. So, one way to plot the solution is with plot(sol.x, sol.y).

After defining the equations in exam1f.m, the complete program exam1.m computes and plots the solution. Note that we must supply the name of the ddefile to the solver, i.e., the string 'exam1f' rather than exam1f. Also, we have taken advantage of the easy way to specify a constant history.

### Exercise 1

To gain experience with dde23, compute and plot the solution of the following problem from . Solve

 y1′(t) = y5(t − 1) + y3(t − 1)

 y2′(t) = y1(t − 1) + y2(t − 0.5)

 y3′(t) = y3(t − 1) + y1(t − 0.5)

 y4′(t) = y5(t − 1) y4(t − 1)

 y5′(t) = y1(t − 1)

on [0,1] with history y1(t) = exp(t + 1), y2(t) = exp(t + 0.5), y3(t) = sin(t + 1), y4(t) = y1(t), y5(t) = y1(t) for t ≤ 0.

In this you will have to evaluate the history in a function and supply its name, say 'exer1h', as the history argument of dde23. Remember that both the ddefile and the history function must return column vectors. In  this problem is used to show how to prepare a class of DDEs for solution with DMRODE. You might find it interesting to compare this preparation to what you had to do.

### Example 2

We show how to get output at specific points with Example 5 of , a scalar equation that exhibits chaotic behavior. We solve the equation

 y′(t) = 2 y(t − 2) / (1 + y(t − 2)^9.65) − y(t)
(3)

on [0,100] with history y(t) = 0.5 for t ≤ 0.

Output from dde23 is not just formally different from that of ode23. dde23 computes an approximate solution S(t) valid throughout the whole interval of integration, and the auxiliary function ddeval evaluates it at any point, returning approximations to both y(t) and y′(t). With this form of output, you can solve a DDE just once and then obtain inexpensively as many solution values as you like, anywhere you like. The numerical solution itself is continuous and has a continuous derivative, so you can always get a smooth graph by evaluating it at enough points with ddeval.

The example of  plots y(t − 2) against y(t). This is quite a common task in nonlinear dynamics, but we cannot proceed as in Example 1. That is because the entries of sol.x are not equally spaced: If t* appears in sol.x, we have an approximation to y(t*) in sol.y, but generally t* − 2 does not appear in sol.x, so we do not have an approximation to y(t* − 2). ddeval makes such plots easy. In exam2.m we first define an array t of 1000 equally spaced points in [2,100] and obtain solution values at these points with ddeval. We then use ddeval a second time to evaluate the solution at the entries of t-2. In this way we obtain values approximating both y(t) and y(t − 2) for the same t. This might seem like a lot of plot points, but ddeval is just evaluating a piecewise-polynomial function and is coded to take advantage of fast built-in functions and vectorization, so this is not expensive and results in a smooth graph.
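The ddeval idea can be mimicked in plain Python: keep the solver's uneven mesh together with a continuous extension, then evaluate wherever needed. The sketch below is hypothetical and self-contained: it fakes a mesh and solution from a known function and uses piecewise-linear interpolation, whereas dde23's actual extension is a cubic polynomial on each step; interp plays the role of ddeval.

```python
import bisect
import math

# Fake an uneven solver mesh on [0, 100] (like sol.x) and a stand-in
# "solution" sampled on it (like sol.y).
mesh = [0.0]
while mesh[-1] < 100.0:
    step = 0.05 + 0.04 * math.sin(7.0 * mesh[-1]) ** 2   # uneven step sizes
    mesh.append(min(100.0, mesh[-1] + step))
vals = [math.exp(-0.01 * t) * math.cos(t) for t in mesh]

def interp(t):
    """Piecewise-linear 'continuous extension', standing in for ddeval."""
    i = bisect.bisect_right(mesh, t) - 1
    i = min(max(i, 0), len(mesh) - 2)
    w = (t - mesh[i]) / (mesh[i + 1] - mesh[i])
    return (1.0 - w) * vals[i] + w * vals[i + 1]

# 1000 equally spaced points in [2, 100]; pair y(t - 2) with y(t) for plotting
ts = [2.0 + k * 98.0 / 999.0 for k in range(1000)]
pairs = [(interp(t - 2.0), interp(t)) for t in ts]
```

In a real dde23 run, mesh and vals would be sol.x and sol.y and interp would be ddeval; the pairing step is the same.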

Because MATLAB does not distinguish scalars and vectors of one component, the single DDE can be coded just like a system. The complete program exam2.m computes and plots y(t − 2) against y(t).

### Exercise 2

Farmer  gives plots of various Poincaré sections for the Mackey-Glass equation, a scalar DDE that exhibits chaotic behavior. Reproduce Fig. 2a of the paper by solving

 y′(t) = 0.2 y(t − 14) / (1 + y(t − 14)^10) − 0.1 y(t)
(4)

on [0,300] with history y(t) = 0.5 for t ≤ 0 and plotting y(t − 14) against y(t). The figure begins with t = 50 to allow an initial transient to settle down. To reproduce it, form an array of 1000 equally spaced points in [50,300], evaluate y(t) at these points, and then evaluate y(t − 14).
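As a cross-check without MATLAB, equation (4) can be integrated with a crude fixed-step Euler sketch (illustrative only; its accuracy is far below dde23's). The delay 14 is an exact multiple of the step, so the lagged value is a grid lookup.

```python
# Crude fixed-step Euler sketch of the Mackey-Glass equation (4), not dde23.
h = 0.01
lag = int(round(14.0 / h))          # index offset for the delay 14
y = [0.5] * (lag + 1)               # history y(t) = 0.5 for t <= 0
for i in range(lag, lag + int(round(300.0 / h))):
    yd = y[i - lag]                 # y(t - 14)
    y.append(y[i] + h * (0.2 * yd / (1.0 + yd ** 10) - 0.1 * y[i]))
```

Pairing y[i - lag] with y[i] for t ≥ 50 gives the delayed-versus-current points of the figure. The production term 0.2 x/(1 + x^10) is bounded, so the iterates stay positive and below roughly 1.5.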

### Example 3

We show how to set options and deal with parameters by solving Example 4.2 of . The equation

 y′(t) = −λ y(t − 1) (1 + y(t))
(5)

is solved on [0,20] with history y(t) = t for t ≤ 0 for four values of the parameter λ, namely 1.5, 2, 2.5, and 3.

Often default error tolerances are perfectly satisfactory, but here more stringent tolerances are needed for the larger values of λ. Options are set with ddeset exactly as they are set for ode23 with odeset, and the options structure is passed to dde23 as an additional argument. Options like relative and absolute error tolerances are the same in the two solvers. In particular, both have a default relative error tolerance of 10^-3 and a default absolute error tolerance of 10^-6. The tolerances imposed for the larger λ in exam3.m are relatively stringent for this solver, but this is a price that must be paid to compute an accurate solution; it is instructive to see what happens with λ = 3 and default tolerances.

Parameters can always be communicated as global variables, but as is common with MATLAB solvers, they can also be passed through dde23 as arguments following the options argument. For two values of λ we use default tolerances, so must use an empty array, [], as a placeholder for the options argument. When parameters are passed through dde23, they must appear as arguments of the ddefile and, if present, the history function, even if they are not used. Accordingly, λ appears as an input argument of both exam3f.m and exam3h.m.

After defining the equation in exam3f.m and the history in exam3h.m, the complete program exam3.m computes and plots the four solutions as in . It has been coded in a very straightforward manner to make clear that we are solving four problems and using different tolerances.

### Exercise 3

Wheldon’s model of chronic granulocytic leukemia  has the form

 y1′(t) = α / (1 + β y1(t − τ)^γ) − λ y1(t) / (1 + μ y2(t)^δ)

 y2′(t) = λ y1(t) / (1 + μ y2(t)^δ) − ω y2(t)

Code the equations for general values of the parameters to make it easy to experiment with the model. Remember that if you do not set any options, you must use a placeholder of [] for the options argument. Solve the problem on [0,200] with history y1(t) = 100, y2(t) = 100 for t ≤ 0 and parameter values α = 1.1 × 10^10, β = 10^-12, γ = 1.25, δ = 1, λ = 10, μ = 4 × 10^-8, ω = 2.43 that you set in the main program. Compare the solutions you obtain with τ = 7 and τ = 20. You should find that the solution is oscillatory in both cases. In the first, the oscillations are damped quickly and in the second, they are not.

### Example 4

It is often necessary to find when a solution satisfies a certain relation, e.g., when a component has a specific value. An event is said to occur when a given function of the solution, g(t, y(t), y(t − τ1), …, y(t − τk)), vanishes. Some problems involve many of these "event functions". This example shows how to use the powerful event location capability of dde23.

Figure 15.6 of  displays the solution of an infectious disease model. The equations

 y1′(x) = −y1(x) y2(x − 1) + y2(x − 10)

 y2′(x) = y1(x) y2(x − 1) − y2(x)

 y3′(x) = y2(x) − y2(x − 10)

are solved on [0,40] with history y1(x) = 5, y2(x) = 0.1, y3(x) = 1 for x ≤ 0. To illustrate event location, we compute the local maxima of all three solution components.

We compute the maxima by finding where the first derivatives vanish. The three event functions come from the DDEs: y1′(x) = −y1(x) y2(x − 1) + y2(x − 10), and so forth. All event functions are evaluated in a single MATLAB function that returns the values as a column vector. The name of this function is passed to the solver as the value of the 'Events' option. For this example we evaluate the three functions in exam4e by a call to exam4f.

Because event location is used for a variety of purposes, we have to tell dde23 more about what we want to do. Sometimes we just want to know that an event has occurred and other times we want to terminate the integration then. We tell the solver about this by returning a vector isterminal from exam4e. To terminate the integration when event function k vanishes, we set component k of isterminal to 1 (true), and otherwise to 0 (false). For this example none of the events is terminal.

There is an annoying matter of some importance: Sometimes we want to start an integration with an event function that vanishes at the initial point. Imagine, for example, that we fire a model rocket into the air and we want to know when it hits the ground. It is natural to use the height of the rocket as a terminal event function, but it vanishes at the initial time as well as the final time. dde23 treats an event at the initial point in a special way. The solver locates such an event and reports it, but does not treat it as terminal, no matter how isterminal is set.

The example shows that how an event function vanishes may be important: To distinguish maxima from minima, we want the solver to report that a derivative vanished only when it changes from positive to negative values. This is done using direction. If we are interested only in events for which event function k is increasing through 0, we set component k of direction to +1. Correspondingly, we set it to −1 if we are interested only in those events for which the event function is decreasing, and 0 if we are interested in all events. Once we understand what information must be provided, it is easy to code the event functions of this example in exam4e.m.

Now that we have discussed how to tell the solver what we want it to do, we have to discuss how it reports what happened. The locations of events are returned as the field sol.xe and the values of the solution at these points are returned as the field sol.ye. If there are no events, sol.xe = []. The field sol.ie reports which event occurred. A value of k indicates that event function k vanished at the corresponding entry of sol.xe.

It is straightforward to code the equations in exam4f.m. With exam4e.m and exam4f.m, it is also straightforward to code the solution of the problem as the first two lines of the complete program exam4.m. The only complication in this program is separating the various kinds of events. It is not necessary, but perhaps clearer, to introduce local variables for the fields that return the results of the event location. The command n1 = find(ie == 1) finds the indices corresponding to the first event function. These indices allow us to extract the information that y1(x) has its maxima at xe(n1) and its values there are ye(1,n1). The second and third event functions are handled in the same way and then all the results are plotted.
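The direction logic can be mimicked in a few lines of Python: bracket a sign change of the event function between output points, then refine by bisection. This is a hedged sketch, not dde23's actual algorithm (which locates events inside a step using the continuous extension); it finds maxima by looking for a derivative crossing from positive to negative.

```python
from math import cos, pi

def locate_events(g, ts):
    """Return refined times where g crosses from + to - between
    consecutive output points (i.e. direction = -1)."""
    events = []
    for a, b in zip(ts, ts[1:]):
        if g(a) > 0.0 >= g(b):          # bracketed + to - sign change
            lo, hi = a, b
            for _ in range(60):         # bisection refinement
                mid = 0.5 * (lo + hi)
                if g(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            events.append(0.5 * (lo + hi))
    return events

# demo: a maximum of sin occurs where its derivative cos crosses + to -
ts = [0.1 * k for k in range(70)]       # output mesh on [0, 6.9]
maxima = locate_events(cos, ts)         # single maximum, near pi/2
```

Crossings in the other direction (minima) are simply ignored by the sign test, which is exactly what setting direction to −1 requests from dde23.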

### Exercise 4

To gain some experience with event location, try two experiments:

• Terminate the integration when y1(x) has its first maximum.
• Compute local minima instead of maxima.

Each can be done by changing only one line in exam4e.m.

### Example 5

ODE and DDE solvers are intended for problems with solutions that have several continuous derivatives. It is not unusual for equations to have different forms in different circumstances, which leads to discontinuities in low-order derivatives of the solution, or even in the solution itself, when the circumstances change. Although a robust solver may be able to produce an acceptable solution, it is better practice to account for the changes, and sometimes it is necessary. There are two issues: Do we know in advance where the changes occur? Is the solution itself continuous? In this example we show how to solve problems that have a continuous solution with discontinuities in a low-order derivative at points known in advance. The history is the solution prior to the initial point, and its discontinuities must also be taken into account because they propagate into the interval of integration. Discontinuities in the history are handled in the same way, but are a little simpler because discontinuities in the history itself are permitted.

Example 4.4 of  is an infection model due to Hoppensteadt and Waltman. The equation

y′(t) =

 −r y(t) 0.4 (1 − t)   if 0 ≤ t ≤ 1 − c,

 −r y(t) (0.4 (1 − t) + 10 − e^(μ y(t)))   if 1 − c < t ≤ 1,

 −r y(t) (10 − e^(μ y(t)))   if 1 < t ≤ 2 − c,

 −r e^(μ y(t)) (y(t − 1) − y(t))   if 2 − c < t.

Here c = 1/√2 and μ = r/10. Oberle and Pesch solve this problem for several values of the parameter r, but we solve it only for r = 0.5. The different phases of the spread of the disease are described by different equations. In this example the phases change at times known in advance. The model requires the solution to be continuous, but the changes in the equation lead to jumps in low-order derivatives. In addition to y(t), an approximation to I(t) = −y′(t)/(r y(t)) is required.

dde23 deals easily with problems that have a continuous solution and discontinuities in low-order derivatives at known points. All you have to do is tell the solver where the discontinuities are by providing them as the value of the 'Jumps' option. However, you need to keep in mind that the history is the solution prior to the initial point, so you must also account for its discontinuities. For instance, the Marchuk immunology model discussed in  has the history max(0, t + 10^-6) for t ≤ 0. Its solution has a jump in the first derivative at t = −10^-6 which propagates into the interval of integration. Discontinuities in the history are handled like discontinuities at known points during the integration. In one respect they are simpler; a jump in the history itself is treated the same as a jump in one of its low-order derivatives. Low-order discontinuities in the history have an effect in the interval of integration because of the delays. If the initial point is a and the longest delay is T, discontinuities that occur before a − T have no effect on the integration, so there is no need to include them in 'Jumps'.

Having discussed how to deal with the discontinuities, it is straightforward to solve the problem. We compute an approximation to y(10) and compare it to an accurate value reported in . This illustrates the computation of an approximation at a specific point and confirms the accuracy of the computation. We compute and plot I(t) at the points of sol.x using the fields sol.y and sol.yp. If we should want values at other t or should want a smoother graph, we would compute the necessary values with ddeval. Treating r as a parameter, the equation is evaluated in exam5f.m and the complete program is exam5.m.

This program results in the output and the two figures displayed. The accuracy of the computed result is what we might expect for the specified error tolerances.

### Exercise 5

Example 4 of  solves the equation

 y′(t) = y(t − 1)

on [0,1] with history y(t) = (−1)^[−5t] for t ≤ 0. For s > 0, the function [s] is floor(s) in MATLAB. The history has jump discontinuities prior to t = 0 that must be set in 'Jumps'. With a delay of 1, only jumps that occur at t ≥ −1 can have an effect in [0,1].

### Example 6

Discontinuities in the solution itself complicate matters. If nothing else, we must specify the jumps. In this example we show how to deal with the special case of a jump in the solution at the initial point. We also show how to plot the solution in a phase plane.

We solve a model of the infamous four year life cycle of a population of lemmings . The equation

 y′(t) = r y(t) (1 − y(t − 0.74)/m)

is solved on [0,40] with history y(t) = 19 for t < 0. We plot y′(t) against y(t). This is easily done because in addition to sol.y, dde23 also returns a field sol.yp with values of the first derivative. For this example, these values provide an acceptable graph of y′(t).

Most DDE problems have solutions that are continuous at the initial point, so there is no need to supply the solver with an initial value in addition to a history function. However, if you should want to use a different initial value, all you have to do is provide it as the value of the 'InitialY' option. The solver deals automatically with the discontinuity in the first derivative that is ordinarily present at the initial point, so you need act only if the solution itself is discontinuous. Here the solution has a small jump at the initial point, indeed small enough that we must use error tolerances smaller than the default values so that the solver «sees» the jump.

Using the capability of passing parameters through dde23, the equation is evaluated in exam6f.m, and the complete program exam6.m computes and plots the solution.

Clearly the normalized population gets quite small, but how small? A reasonably accurate answer is obtained easily: The smallest value of y(t) is approximately min(sol.y), namely 0.0116. If we wanted a better answer, we could obtain it by introducing event functions as in the last example.

### Exercise 6

The ARCHI manual  provides a sample program for solving

 y1′(t) = y1(t − 1) y2(t − 2)

 y2′(t) = −y1(t) y2(t − 2)

on [0,4] with history y1(t) = cos(t), y2(t) = sin(t) for t ≤ 0. The sample program specifies a pure absolute error tolerance of 10^-9. dde23 does not permit a pure absolute error tolerance, but for practice with options, use the default relative error tolerance and set 'AbsTol' to 1e-9. You might find it interesting to compare your program to the sample in .

### Example 7

For some problems the changes in the equations occur at times that are not known in advance. The event location capability is used to determine when there is a change and the integration is terminated. It is then restarted with the new definition of the equation. The role of the history and the possibility of a jump discontinuity in the solution itself complicate this, but dde23 was designed to make it as painless as possible. This example and the next show how to proceed.

Marriott and DeLisle  solve a DDE that involves a step function of the history term. With Δ = y(t − 12) − xb, the equation is

 y′(t) = (−y(t) + π (α + ε sign(Δ) − u sin^2(Δ))) / τ.

It is solved on [0,120] with history y(t) = 0.6 for t ≤ 0 and parameter values xb = −0.427, α = 0.16, ε = 0.02, u = 0.5, τ = 1.

The term sign(Δ) makes the equation discontinuous whenever Δ changes sign, so we introduce a parameter state that holds the current value of sign(Δ). With the given history, it is initialized to +1. We integrate until the event function y(t − 12) − xb vanishes. The 'Events' option is used to locate this event and terminate the integration then. The sign of state is changed and the integration restarted. This continues until the end of the interval is reached. It is straightforward to code the event location in exam7e.m, and with state defined in exam7.m as described, the DDE is evaluated in exam7f.m.

The event location capability deals with the issue of finding when the equations change, but there is a matter special to DDEs on restarting, the history. On a restart, dde23 accepts the previously computed solution structure as history. dde23 updates the information in the solution structure each time it is called, so the solution returned is always valid from the initial to the last point reached in the integration, namely sol.x(end). It is convenient to compare this point to the end of the interval of integration and perform the restarts in a while loop. In this loop the solution structure of one integration is used as the history for the next until the run is completed.
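The restart pattern (integrate until a terminal event, flip state, resume from the stopping point) can be sketched in Python with a toy equation y′ = state and terminal event |y| = 1. This is hypothetical and for structure only; a dde23 restart would also pass the solution structure back as the history.

```python
def step_until_event(t, y, state, t_end, h=1e-3):
    """Euler-integrate y' = state until the terminal event |y| = 1
    fires or until t_end; returns (t, y, hit)."""
    while t < t_end - 1e-12:
        y += h * state
        t += h
        if abs(y) >= 1.0:               # terminal event fires
            return t, y, True
    return t, y, False

# restart loop in the style of exam7.m: integrate, flip state at each
# event, and resume from the stopping point until the interval is covered
t, y, state = 0.0, 0.0, +1
events = []
while True:
    t, y, hit = step_until_event(t, y, state, 3.5)
    if not hit:
        break
    events.append(t)
    state = -state                      # the equation changes form here
```

With slope ±1 from y(0) = 0, events occur near t = 1 and t = 3, and the solution zigzags between ±1, ending near y = −0.5 at t = 3.5.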

With exam7e.m and exam7f.m, the complete program exam7.m computes and plots the solution, printing a message about each restart and where it occurs.

### Exercise 7

The equations of the Marchuk immunology model discussed in  are

 y1′(t) = (h1 − h2 y3(t)) y1(t)

 y2′(t) = ξ(y4) h3 y3(t − τ) y1(t − τ) − h5 (y2(t) − 1)

 y3′(t) = h4 (y2(t) − y3(t)) − h8 y3(t) y1(t)

 y4′(t) = h6 y1(t) − h7 y4(t)

Here the coefficient

 ξ(y4) = 1   if y4 ≤ 0.1,
 ξ(y4) = (10/9) (1 − y4)   if 0.1 < y4 ≤ 1,

is continuous, but has a jump in its first derivative when y4(t) = 0.1, which leads to a jump in a low-order derivative of y2(t). The value of the delay is τ = 0.5. The problem is solved on [0,60] with history y1(t) = max(0, t + 10^-6), y2(t) = 1, y3(t) = 1, y4(t) = 0 for t ≤ 0. As was noted in Example 5, y1(t) has a jump in its first derivative at t = −10^-6 that propagates into the interval of integration. Figure 15.8 of  presents plots for h1 = 2, h2 = 0.8, h3 = 10^4, h4 = 0.17, h5 = 0.5, h7 = 0.12, h8 = 8 and two values of h6, namely 10 and 300. Treat h6 as a parameter in your program and try to reproduce the figure for h6 = 300. For this you will have to plot the scaled components 10^4 y1, y2/2, y3, 10 y4 with axis([0 60 -1 15.5]). An array yplot of scaled values for plotting can be formed easily, for instance yplot(1,:) = 1e4*sol.y(1,:), and so forth. To solve this problem accurately over the whole interval, you will need to reduce the tolerances to, say, a relative tolerance of 10^-5 and an absolute tolerance of 10^-8.

dde23 is sufficiently robust that it can solve this problem in a straightforward way. However, you can be much more confident of the results if you account for the discontinuities. Use the 'Jumps' option to tell the solver about the discontinuity in the first derivative of the history at t = −10^-6. As in Example 7, terminate the integration when the event function y4(t) − 0.1 vanishes. Use a parameter state with value +1 if y4(t) ≤ 0.1 and −1 otherwise. The problem is to be solved with y4(0) = 0, so initialize state to +1. Thereafter, each time that the solver returns, check whether you have reached the end of the interval. If sol.x(end) < 60, flip the sign of state and restart the integration with the solution structure as the history. In the ddefile, evaluate the coefficient as ξ(y4) = 1 if state is +1 and ξ(y4) = (10/9)(1 − y4) otherwise.

### Example 8

This example is much like Example 7 except that the solution itself is discontinuous. We restart at discontinuities, so the jump in the solution occurs at the initial point of an integration and can be handled as in Example 6.

A two-wheeled suitcase may begin to rock from side to side as it is pulled. When this happens, the person pulling it attempts to return it to the vertical by applying a restoring moment to the handle. There is a delay in this response that can significantly affect the stability of the motion. This is modeled by Suherman et al. with the DDE

 θ″(t) + sign(θ(t)) γ cos(θ(t)) − sin(θ(t)) + β θ(t − τ) = A sin(Ω t + η)

The equation is solved on [0, 12] as a pair of first order equations with y1(t) = θ(t), y2(t) = θ′(t). Figure 3 of the reference shows a plot of y1(t) against t and a plot of y2(t) against y1(t) when γ = 2.48, β = 1, τ = 0.1, A = 0.75, Ω = 1.37, η = arcsin(γ/A), and the initial history is the constant vector zero.

A wheel hits the ground (the suitcase is vertical) when y1(t) = 0, and the suitcase has fallen over when |y1(t)| = π/2. The events are terminal and all are to be reported. The event function is coded in exam8e.m.

As in Example 7, the parameter state seen in the event function is used in exam8.m to evaluate properly the discontinuous coefficient sign(y1(t)) in the DDE. We initialize it to +1 and change its sign when dde23 returns because y1(t) vanished. However, there are two event functions, so we must check the last entry in sol.ie to see whether we should change the sign of state. With this, the DDEs are coded in exam8f.m.

When a wheel hits the ground, the integration is to be restarted with y1(t) = 0 and y2(t) multiplied by the coefficient of restitution 0.913. The 'InitialY' option is used for this purpose. The solution at all the mesh points is available as the field sol.y and, in particular, the solution at the time of the event is the last column of this array, sol.y(:,end). If the suitcase falls over, the run is terminated, so again we must check which event occurred. With exam8e.m and exam8f.m, the complete program exam8.m solves the problem and plots the solution in the phase plane. Note that ddeset can be used to change the value of an option or add an option, just as with odeset.

The program reproduces the phase plane plot of Figure 3 in the reference. It also reports what kind of event occurred and the location of the event. Reference values were computed with the FORTRAN 77 code DKLAG5 used in the reference and verified with its successor DKLAG6. Having written the three solvers, we can fairly say that it is very much easier to solve this problem in MATLAB with dde23. The accuracy of the computed results is what we might expect for the specified error tolerances.

After running this program, examine sol.xe. At first sight it does not seem to agree with the event locations reported by the program. For instance, why is there an event at 0? That is because one of the event functions is y1, and this component of the solution has initial value 0. As explained in Example 4, dde23 locates this event but does not terminate the integration because the terminal event occurs at the initial point. The first integration terminates at the first point after the initial point where y1(t*) = 0, namely t* = 4.5168. The second appearance of 4.5168 in sol.xe is the same event at the initial point of the second integration. The same thing happens at 9.7511, and finally the event at 11.6704 tells us that the suitcase fell over and we are finished.

This section presents a few problems for practice. They are taken from the literature and some are quite recent. A familiarity with the examples of the previous section is assumed. Some hints are given, and some output is provided so that you can check whether you have solved each problem correctly in MATLAB with dde23.

### Problem 1

Hale cites predator-prey models obtained by introducing a resource limitation on the prey and assuming the birth rate of predators responds to changes in the magnitude of the population y1 of prey and the population y2 of predators only after a time delay τ. Starting with the system of ODEs

 y1′(t) = a y1(t) + b y1(t) y2(t)

 y2′(t) = c y2(t) + d y1(t) y2(t)

we arrive in this way at a system of DDEs

 y1′(t) = a y1(t) (1 − y1(t)/m) + b y1(t) y2(t)

 y2′(t) = c y2(t) + d y1(t − τ) y2(t − τ)

It is interesting to explore the effect of the delay, so let us solve both systems on [0,100] with initial value y1(0) = 80, y2(0) = 30 for the ODEs and the same vector as constant history for the DDEs. Suppose that the parameters a = 0.25, b = - 0.01, c = - 1.00, d = 0.01 , and m = 200 .

Recall that you solve ODEs with dde23 by setting lags to []. When this is done, the argument Z that dde23 supplies to the functions it calls is the empty array. You can use this to code the evaluation of both sets of equations in the same function by testing isempty(Z) to find out which set to evaluate. A more straightforward approach is to use two functions for the two sets of equations. Solve the DDE with τ = 1. Plot in one figure y2(t) against y1(t) for the two solutions. The phase plane plot of the solution of the ODEs should be a closed curve corresponding to a limit cycle. To achieve this you will need to tighten the error tolerances with ddeset. The figure makes the point that introducing a delay into an ODE model can have a profound effect on the solution. If you experiment with τ, you will find this to be true even for small delays. It is also interesting to remove the resource term 1 − y1(t)/m and see how the orbits change as τ is changed.

### Problem 2

This problem considers a model of the baroreflex-feedback mechanism due to Ottesen, in which delays τ = 1.0, 1.4, 3.9, 5.0, 7.5, 10 are considered and the behavior of the solution depends strongly on τ. You should find that the solutions obtained for different values of τ differ dramatically. Solve on [0,350] the equations

 y1′(t) = − (1/(ca R)) y1(t) + (1/(ca R)) y2(t) + (1/ca) Vstr y3(t)

 y2′(t) = (1/(cv R)) y1(t) − (1/(cv R) + 1/(cv r)) y2(t)

 y3′(t) = f(Ts, Tp)

where

 Ts = 1 / (1 + (y1(t − τ)/αs)^βs)

 Tp = 1 / (1 + (αp/y1(t))^βp)

 f(Ts, Tp) = αH Ts / (1 + γH Tp) − βH Tp .

For t ≤ 0, the solution has the constant value

 y1(t) = P

 y2(t) = (1/(1 + R/r)) P

 y3(t) = (1/(R Vstr)) (1/(1 + r/R)) P

As in the reference, use ca = 1.55, cv = 519, R = 1.05, r = 0.068, Vstr = 67.9, α = αs = αp = 93, αH = 0.84, β = βs = βp = 7, βH = 1.17, γH = 0, P = 93. The following figures for τ = 1 and τ = 7.5 show qualitatively different solutions.

One of the figures of the reference shows the solution components when the peripheral pressure R is reduced exponentially from its value of 1.05 to 0.84 beginning at t = 600. For this computation the delay was 4 and the interval [0,1000]. You can easily modify the previous program to solve this problem. All you have to do is inform the solver of the low-order discontinuity at a known time by setting the value of the 'Jumps' option to 600, modify the function for evaluating the DDEs so that R is reduced exponentially for t ≥ 600, and use the specified delay and interval. All the solution components are of interest. The figure shows the sharp change in the heart rate due to the change in R at t = 600.

### Problem 3

Plant's Neuron Interaction Model  is given by the equations

 y1′(t) = a y1(t) − y1³(t)/3 + m (y1(t − τ) − y1,0)

 y2′(t) = r (y1(t) + a − b y2(t))

When m = 0, these equations have a steady state solution (y1,0, y2,0), i.e., a solution with y1′(t) = y2′(t) = 0. Solve the equations on [0,100] with history y1(t) = α y1,0, y2(t) = β y2,0 for t ≤ 0 and a = 0.8, b = 0.7, r = 0.08.

The parameters α and β determine how close the solution starts to the steady state solution. Let us take α = 0.4 and β = 1.8. To determine the steady state solution, we find from the equation y2′(t) = 0 that when m = 0, y2,0 = (y1,0 + a)/b. Using this in the equation y1′(t) = 0 for m = 0, we find that y1,0 satisfies the algebraic equation

 − b y³ + (3 b + 1) y + 3 a = 0 .

After computing all the roots of this cubic equation with roots, y1,0 is the unique real root bigger than 1 − r b. For the given a and b this results in y1,0 = 2.417960226013935. Using this value, compute y2,0 and solve the problem for τ = 20 and various m, say m = 1, −1, 10, −10. The figures for two of these values show what you might find.

### Problem 4

An epidemic model due to Cooke  describes the fraction y(t) of a population which is infected at time t by the equation

 y′(t) = b y(t − 7) ( 1 − y(t) ) − c y(t)

Here b and c are positive constants. The equation is solved on [0,100] with history y(t) = a for t ≤ 0. The constant a satisfies 0 < a < 1. The equation has the equilibrium point y(t) = 0 and, if b > c, the solution y(t) = 1 − c/b is a second equilibrium point. Solve this DDE for different values of b, c, and a. Verify that if b > c, the solution approaches the second equilibrium point, and otherwise it approaches the zero equilibrium point. The long-term behavior of the solution is independent of the delay; you might want to verify this computationally. The figure shows the approach of the solution to the second equilibrium point when b = 2, c = 1, and a = 0.8.
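A minimal fixed-step sketch of this experiment in Python (a hypothetical stand-in for the dde23 program; the helper `solve_cooke` and its defaults are illustrative, not from the tutorial). Because the delay is constant, choosing a step size that divides 7 exactly lets delayed values be read directly off the stored solution grid:

```python
# Cooke's model y'(t) = b*y(t-7)*(1 - y(t)) - c*y(t) with constant history
# y(t) = a for t <= 0, integrated by a fixed-step RK4 sketch.  The step size
# divides the delay exactly, so delayed values come from the stored grid
# (with one linear interpolation for the half-step stage values).
def solve_cooke(b=2.0, c=1.0, a=0.8, t_end=100.0, h=0.01):
    lag = int(round(7.0 / h))                 # delay measured in steps
    n = int(round(t_end / h))
    y = [a] * (lag + 1) + [0.0] * n           # y[lag] is y(0); earlier entries are history
    f = lambda yt, yd: b * yd * (1.0 - yt) - c * yt
    for i in range(lag, lag + n):
        yd0 = y[i - lag]                                # y(t - 7)
        yd1 = 0.5 * (y[i - lag] + y[i - lag + 1])       # y(t - 7 + h/2), linear interpolation
        yd2 = y[i - lag + 1]                            # y(t - 7 + h)
        k1 = f(y[i], yd0)
        k2 = f(y[i] + 0.5 * h * k1, yd1)
        k3 = f(y[i] + 0.5 * h * k2, yd1)
        k4 = f(y[i] + h * k3, yd2)
        y[i + 1] = y[i] + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y[-1]

# With b = 2 > c = 1 the solution should settle near 1 - c/b = 0.5 by t = 100.
```

A production solver such as dde23 instead uses dense output and adaptive steps; this sketch is only meant to make the approach to the equilibrium easy to check.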

### Problem 5

Another epidemic model due to Cooke, van den Driessche, and Zou is given by the equations

 y1′(t) = λ (y2(t) − y1(t)) y1(t)/y2(t) − (d + ε + γ) y1(t)

 y2′(t) = b e^(−a y2(t − T)) y2(t − T) e^(−d1 T) − d y2(t) − ε y1(t)

They are solved on [0,25] with history y1(t) = 2, y2(t) = 3.5 for t ≤ 0 and parameter values a = 1, b = 80, d = 1, d1 = 1, γ = 0.5, ε = 10, T = 0.2.

As in Problem 2, it is convenient to pass the parameters to the function for evaluating the DDEs as global variables or to hard code them. In the reference the solution is investigated for a number of values of λ, so pass it as a parameter through dde23. Values λ = 12, 15, 20, 28 are of interest. You might find it interesting to compare your plots to those of Figure 4 in the reference. The following figure shows the case λ = 12.

### Problem 6

A population growth model due to Cooke et al. describes the population y(t) at time t by the equation

 y′(t) = b e^(−a y(t − T)) y(t − T) e^(−d1 T) − d y(t)

Solve the equation on [0,25] with history y(t) = 3.5 for t ≤ 0 for one or more of the data sets

• a = 1, d = 1, d1 = 1, b = 20
• a = 1, d = 1, d1 = 1, b = 80
• a = 1, d = 1, d1 = 0, b = 20
• a = 1, d = 1, d1 = 0, b = 80
For each set of parameter values, solve the problem using three values of the delay, namely T = 0.2, 1.0, 2.4, and plot the solutions on the same figure. Structures can be indexed, so the runs can be accumulated in a loop over the delays. On exit from the loop, the solution for the first delay is sol(1).x, sol(1).y, and so forth. Note that T must be communicated to prob6f as a parameter or global variable because it appears in the equation. In the code fragment it is communicated as a global variable along with the parameters of the data set. You should use tolerances more stringent than the defaults. You might find it interesting to compare your solutions with those of Figure 3 in the reference. The following figures show the solutions for two of the data sets. Obviously the delay has a profound effect on the solution.

 C.T.H. Baker, C.A.H. Paul, and D.R. Willé, A bibliography on the numerical solution of delay differential equations, Numer. Anal. Rept. No. 269, Maths. Dept., Univ. of Manchester, U.K., 1995.

 C.T.H. Baker, C.A.H. Paul, and D.R. Willé, Issues in the numerical solution of evolutionary delay differential equations, Adv. Comp. Math., 3 (1995) 171-196.

 K. Cooke, P. van den Driessche, and X. Zou, Interaction of maturation delay and nonlinear birth in population and epidemic models, J. Math. Biol., 39 (1999) 332-352.

 S.P. Corwin, D. Sarafyan, and S. Thompson, DKLAG6: A code based on continuously imbedded sixth order Runge-Kutta methods for the solution of state dependent functional differential equations, Appl. Num. Math., 24 (1997) 319-333.

 J.D. Farmer, Chaotic attractors of an infinite-dimensional dynamical system, Physica D, 4 (1982) 366-393.

 E. Hairer, S.P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I, Springer-Verlag, Berlin, 1987.

 J. Hale, Functional Differential Equations, Springer-Verlag, Berlin, 1971.

 N. MacDonald, Time Lags in Biological Models, Springer-Verlag, Berlin, 1978.

 N. MacDonald, Biological Delay Systems: Linear Stability Theory, Cambridge University Press, Cambridge, 1989.

 C. Marriott and C. DeLisle, Effects of discontinuities in the behavior of a delay differential equation, Physica D, 36 (1989) 198-206.

 MATLAB 5, The MathWorks, Inc., 3 Apple Hill Dr., Natick, MA 01760, 1998.

 K.W. Neves, Automatic integration of functional differential equations: an approach, ACM TOMS, 1 (1975), 357-368.

 K.W. Neves and S. Thompson, Software for the numerical solution of systems of functional differential equations with state dependent delays, Appl. Num. Math., 9 (1992), 385-401.

 H.J. Oberle and H.J. Pesch, Numerical treatment of delay differential equations by Hermite interpolation, Numer. Math., 37 (1981) 235-255.

 J.M. Ortega and W.G. Poole, An Introduction to Numerical Methods for Differential Equations, Pitman Publishing Inc., Marshfield, Massachusetts, 1981.

 J.T. Ottesen, Modelling of the Baroflex-Feedback Mechanism With Time-Delay, J. Math. Biol., 36 (1997), 41-63.

 C.A.H. Paul, A user-guide to Archi, Numer. Anal. Rept. No. 283, Maths. Dept., Univ. of Manchester, U.K., 1995.

 L.F. Shampine and M.W. Reichelt, The MATLAB ODE suite, SIAM J. Sci. Comput., 18 (1997) 1-22.

 L.F. Shampine and S. Thompson, Event location for ordinary differential equations, Comp. & Maths. with Appls., 39 (2000) 43-54.

 L.F. Shampine and S. Thompson, Solving DDEs in MATLAB, manuscript.

 S. Suherman, R.H. Plaut, L.T. Watson, and S. Thompson, Effect of human response time on rocking instability of a two-wheeled suitcase, J. of Sound and Vibration, 207 (1997) 617-625.

 L. Tavernini, Continuous-Time Modeling and Simulation, Gordon and Breach, Amsterdam, 1996.

 D.R. Willé and C.T.H. Baker, DELSOL - a numerical code for the solution of systems of delay-differential equations, Appl. Numer. Math., 9 (1992) 223-234.


## Delay Differential Equations

A delay differential equation is a differential equation where the time derivative at the current time depends on the solution, and possibly its derivatives, at previous times:

 y′(t) = f(t, y(t), y(t − τ1), …, y(t − τk))

Instead of a simple initial condition, an initial history function needs to be specified. The quantities τ1, …, τk are called the delays or time lags. The delays may be constants, functions of t (time-dependent delays), or functions of t and the solution y (state-dependent delays). Delay equations with delays of the derivatives are referred to as neutral delay differential equations (NDDEs).

The equation processing code in NDSolve has been designed so that you can input a delay differential equation in essentially mathematical notation.

 x[t − τ] — dependent variable x with delay τ
 x[t /; t ≤ t0] == ϕ — specification of the initial history function as an expression ϕ for t less than t0

Inputting delays and initial history.

Currently, the implementation for DDEs in NDSolve only supports constant delays.

For simplicity, this documentation is written assuming that integration always proceeds from smaller to larger t. However, NDSolve supports integration in the other direction if the initial history function is given for values of t above the initial time and the delays are negative.

## Comparison and Contrast with ODEs

While DDEs look a lot like ODEs, the theory for them is quite a bit more complicated, and there are some surprising differences from ODEs. This section shows a few examples of the differences.


As long as the initial history function satisfies the required condition, the solution for later times is always 1. [ Z06 ] With ODEs, in contrast, you could always integrate backward in time from a solution to obtain the initial condition.


For small values of the delay the solutions are monotonic, for larger values the solutions oscillate, and for still larger values the solutions approach a limit cycle. Of course, the solutions of the corresponding scalar ODE are monotonic independent of the parameter values.

This simple scalar delay differential equation has chaotic solutions, and the motion shown above looks very much like Brownian motion. [S07] As the delay is increased beyond a critical value, a limit cycle appears, followed eventually by a period-doubling cascade leading to chaos.

Stability is much more complicated for delay equations as well. It is well known that the linear ODE test equation y′(t) = a y(t) has asymptotically stable solutions if Re(a) < 0 and is unstable if Re(a) > 0.

The closest corresponding DDE is y′(t) = a y(t) + b y(t − τ). Even if you consider only real a and b, the situation is no longer so clear-cut. Shown below are some plots of solutions indicating this.

So for the same a and b the solution can be stable for some values of the delay τ and unstable for others. A Manipulate is set up below so that you can investigate the (a, b) plane.


## Propagation and Smoothing of Discontinuities

The way discontinuities are propagated by the delays is an important feature of DDEs and has a profound effect on numerical methods for solving them.

In the example above, the solution is continuous, but there is a jump discontinuity in its first derivative at the initial time: approaching from the left, the value is given by the derivative of the initial history function, while approaching from the right, the value is given by the DDE, and in general the two differ.

Near the next discontinuity point, one delay later, the delayed argument lies where the solution is continuous, so the first derivative is continuous there.

Differentiating the equation, we can conclude that the second derivative depends on the first derivative of the delayed term, so the second derivative has a jump discontinuity at that point. Using essentially the same argument as above, we can conclude that one interval later the second derivative is continuous.

Similarly, the k-th derivative becomes continuous after k delay intervals or, in other words, after k intervals the solution is k times differentiable. This is referred to as smoothing and holds generally for non-neutral delay equations. In some cases the smoothing can be faster than one order per interval. [Z06]

For neutral delay equations the situation is quite different.

It is easy to see that the solution is piecewise smooth and continuous. However, its derivative has a jump discontinuity at every non-negative integer.

In general, there is no smoothing of discontinuities for neutral DDEs.

The propagation of discontinuities is very important from the standpoint of numerical solvers. If the possible discontinuity points are ignored, then the order of the solver will be reduced. If a discontinuity point is known, a more accurate solution can be found by integrating just up to the discontinuity point and then restarting the method just past the point with the new function values. This way, the integration method is used on smooth parts of the solution, leading to better accuracy and fewer rejected steps. From any given discontinuity points, future discontinuity points can be determined from the delays and detected by treating them as events to be located.

When there are multiple delays, the propagation of discontinuities can become quite complicated.

It is clear from the plot that there is a discontinuity at each non-negative integer, as would be expected from the neutral delay term. However, looking at the second and third derivatives, it is clear that there are also discontinuities at the points propagated from those jump discontinuities by the second delay.

In fact, there is a whole tree of discontinuities that are propagated forward in time. A way of determining and displaying the discontinuity tree for a solution interval is shown in the subsection below.
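The discontinuity tree for constant delays can be sketched as follows (a hypothetical helper; it tracks only the times, not the derivative order at which each discontinuity appears): every known discontinuity at time T spawns new ones at T + τ for each delay τ.

```python
# Enumerate the tree of discontinuity times propagated from an initial
# discontinuity at t = 0 by a set of constant delays, up to t_max.
def discontinuity_tree(delays, t_max):
    times, frontier = set(), {0.0}
    while frontier:
        t = frontier.pop()
        times.add(t)
        for tau in delays:
            s = round(t + tau, 12)       # round to merge nearly equal times
            if s <= t_max and s not in times:
                frontier.add(s)
    return sorted(times)

print(discontinuity_tree([1.0, 0.5], 2.0))
# [0.0, 0.5, 1.0, 1.5, 2.0]
```

A solver that restarts its integration at exactly these points avoids stepping across any propagated discontinuity.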

## Storing History Data

Once the solution has advanced beyond the first discontinuity point, some of the delayed values that need to be computed lie before the current step, so the solver must keep enough history data to evaluate the solution at arbitrary earlier times; in other words, the solution must be stored as dense output (the equivalent of using InterpolationOrder->All in NDSolve). NDSolve has a general algorithm for obtaining dense output from most methods, so you can use just about any method as the integrator. Some methods, including the default for DDEs, have their own way of getting dense output, which is usually more efficient than the general method. Methods of low enough order, such as "ExplicitRungeKutta" with "DifferenceOrder"->3, can just use a cubic Hermite polynomial as the dense output, so there is essentially no extra cost in keeping the history.

Since the history data is accessed frequently, it needs to have a quick lookup mechanism to determine which step to interpolate within. In NDSolve , this is done with a binary search mechanism and the search time is negligible compared with the cost of actual function evaluation.
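The lookup just described can be sketched in Python with the standard library's `bisect` module (the `History` class and its layout are hypothetical; NDSolve's internal data structure is not public):

```python
import bisect

# Sketch of a dense-output history: each accepted step stores its left
# endpoint and a local interpolant.  Lookup does a binary search on the
# left endpoints, then evaluates that step's interpolant.
class History:
    def __init__(self):
        self.lefts = []      # left endpoint of each step, kept sorted
        self.interps = []    # callable interpolant for each step

    def add_step(self, t_left, interp):
        self.lefts.append(t_left)
        self.interps.append(interp)

    def __call__(self, t):
        # index of the last step whose left endpoint is <= t
        # (assumes t lies within the stored range)
        i = bisect.bisect_right(self.lefts, t) - 1
        return self.interps[i](t)

# toy usage: two "steps" storing the linear pieces of x(t) = |t|
h = History()
h.add_step(-1.0, lambda t: -t)   # piece on [-1, 0]
h.add_step(0.0, lambda t: t)     # piece on [0, ...)
print(h(-0.5), h(0.5))           # 0.5 0.5
```

Appending to a Python list is amortized constant time, mirroring the "repeatedly expandable" storage described above; the binary search makes each delayed-value lookup logarithmic in the number of stored steps.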

The data for each successful step is saved before attempting the next step, and is saved in a data structure that can repeatedly be expanded efficiently. When NDSolve produces the solution, it simply takes this data and restructures it into an InterpolatingFunction object, so DDE solutions are always returned with dense output.

## The Method of Steps

For constant delays, it is possible to determine the entire set of discontinuity points in advance as fixed times. The idea of the method of steps is to simply integrate the smooth function over the intervals between these points and restart on the next interval, being sure to reevaluate the function from the right. As long as the intervals do not get too small, the method works quite well in practice.

The method currently implemented for NDSolve is based on the method of steps.

#### Symbolic Method of Steps

This section defines a symbolic method of steps that illustrates how the method works. Note that to keep the code simpler and more to the point, it does not do any real argument checking. Also, the data structure and lookup for the history is not done in an efficient way, but for symbolic solutions this is a minor issue.

## Solve 2nd Order Differential Equations

A differential equation relates some function with the derivatives of the function. Functions typically represent physical quantities and the derivatives represent a rate of change. The differential equation defines a relationship between the quantity and the derivative. Differential equations are very common in fields such as biology, engineering, economics, and physics.

Differential equations may be studied from several different perspectives. Only simple differential equations are solvable by explicit formulas while more complex systems are typically solved with numerical methods. Numerical methods have been developed to determine solutions with a given degree of accuracy.

The term with the highest derivative determines the order of the differential equation. A first-order differential equation contains only first derivatives. A second-order differential equation has at least one term with a second derivative. Higher-order differential equations are also possible.

Below is an example of a second-order differential equation.

To numerically solve a differential equation with higher-order terms, it can be broken into multiple first-order differential equations as shown below.

A numerical solution to this equation can be computed with a variety of different solvers and programming environments. Solution files are available in MATLAB, Python, and Julia below or through a web-interface. Each of these example problems can be easily modified for solutions to other second-order differential equations as well.
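Since the example equations themselves are not reproduced in this text, here is a minimal sketch of the reduction in Python for the hypothetical second-order equation y″ = −y with y(0) = 1, y′(0) = 0 (exact solution y = cos t), solved with a classical RK4 integrator:

```python
import math

# u = (y, y'): the second-order equation y'' = -y becomes the
# first-order system y' = v, v' = -y.
def rhs(t, u):
    y, v = u
    return (v, -y)

def rk4(rhs, u0, t0, t1, n):
    """Classical fixed-step 4th-order Runge-Kutta on [t0, t1] with n steps."""
    h = (t1 - t0) / n
    t, u = t0, list(u0)
    for _ in range(n):
        k1 = rhs(t, u)
        k2 = rhs(t + h / 2, [u[j] + h / 2 * k1[j] for j in range(2)])
        k3 = rhs(t + h / 2, [u[j] + h / 2 * k2[j] for j in range(2)])
        k4 = rhs(t + h, [u[j] + h * k3[j] for j in range(2)])
        u = [u[j] + h / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) for j in range(2)]
        t += h
    return u

y, v = rk4(rhs, (1.0, 0.0), 0.0, math.pi, 2000)
print(y)   # ≈ cos(pi) = -1
```

Any higher-order equation is reduced the same way: introduce one state variable per derivative below the highest, and solve the resulting first-order system.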

Another scenario is when the damping coefficient c = (0.9 + 0.7 t) is not known but must be estimated from data. The value of c is allowed to change every 0.5 seconds. The true and estimated values of c are shown on the plot below. Predicted and actual values of y are in agreement even though the estimate is not continuous but only changes at discrete time points.

## Delay-differential equations

Delay differential equations differ from ordinary differential equations in that the derivative at any time depends on the solution (and in the case of neutral equations on the derivative) at prior times. The simplest constant delay equations have the form $\tag{1} y'(t) = f(t, y(t), y(t-\tau_1), y(t-\tau_2), \ldots, y(t-\tau_k))$

where the time delays (lags) $$\tau_j$$ are positive constants. More generally, state dependent delays may depend on the solution, that is $$\tau_i = \tau_i (t,y(t)) \ .$$

## Introduction

Systems of delay differential equations now occupy a place of central importance in all areas of science and particularly in the biological sciences (e.g., population dynamics and epidemiology). Baker, Paul, & Willé (1995) contains references for several application areas.

Interest in such systems often arises when traditional pointwise modeling assumptions are replaced by more realistic distributed assumptions, for example, when the birth rate of predators is affected by prior levels of predators or prey rather than by only the current levels in a predator-prey model. The manner in which the properties of systems of delay differential equations differ from those of systems of ordinary differential equations has been and remains an active area of research; see Martin & Ruan (2001) and Raghothama & Narayanan (2002) for typical examples of such studies. See also Shampine, Gladwell, and Thompson (2003) for a description of several common models.

## Initial History Function

Additional information is required to specify a system of delay differential equations. Because the derivative in (1) depends on the solution at the previous time $$t - \tau_j \ ,$$ it is necessary to provide an initial history function that specifies the solution at times at and before the initial time.

## Derivative Discontinuities

In most models, the delay differential equation and the initial history are incompatible: for some derivative order, usually the first, the left and right derivatives are not equal. For example, the simple model $$y'(t) = y(t-1)$$ with constant history $$y(t) = 1$$ has the property that $$y'(0^{+}) = 1 \ne y'(0^{-}) = 0 \ .$$

One of the most fascinating properties of delay differential equations is the manner in which such derivative discontinuities are propagated in time. For the equation and history just described, for example, the initial first-derivative discontinuity is propagated as a second degree discontinuity at time $$t = 1 \ ,$$ as a third degree discontinuity at time $$t = 2 \ ,$$ and, more generally, as a discontinuity in the $$(n+1)^{st}$$ derivative at time $$t = n \ .$$ This behavior is typical of that for a wide class of delay differential equations: generalized smoothing occurs as the initial derivative discontinuity is propagated successively to higher order derivatives. Smoothing need not occur for neutral equations or for non-neutral equations with vanishing delays.
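For the equation and history just described ($$y'(t) = y(t-1)$$ with $$y(t) = 1$$ for $$t \le 0$$), the first two pieces of the solution can be written down by the method of steps, which makes the generalized smoothing explicit:

```latex
% piece on [0,1]:  y'(t) = y(t-1) = 1
y(t) = 1 + t, \qquad 0 \le t \le 1,
\qquad y'(0^{-}) = 0 \ne 1 = y'(0^{+});
% piece on [1,2]:  y'(t) = y(t-1) = 1 + (t-1) = t
y(t) = 2 + \tfrac{1}{2}\,(t^{2} - 1), \qquad 1 \le t \le 2,
\qquad y'(1^{-}) = 1 = y'(1^{+}), \quad y''(1^{-}) = 0 \ne 1 = y''(1^{+}).
```

The first-derivative jump at $$t = 0$$ reappears one derivative higher at $$t = 1\ ,$$ consistent with the propagation pattern just described.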

Neves & Feldstein (1976) characterized the tree of derivative discontinuity times for state dependent delay differential equations as the zeroes with odd multiplicity of equations $\tag{2} t - \tau_i (t,y(t)) - T = 0$

where $$T$$ is the initial time or any later discontinuity time.

## Continuous Extensions

Several of the solvers discussed in the next section use explicit Runge-Kutta methods to integrate systems of delay differential equations. An important question in this case is that of interpolation. Unlike ordinary differential equation solvers based on linear multistep methods, which possess natural interpolants, early Runge-Kutta solvers did not incorporate interpolation; rather, they stepped exactly to the next output point instead of stepping beyond it and obtaining interpolated solutions. Interest in obtaining dense output without limiting the step size in this fashion, together with the desire to incorporate root finding, led to the development of Runge-Kutta methods endowed with suitable interpolants. Interpolation is handled in one of two ways in modern Runge-Kutta solvers: Hermite interpolation and continuously embedded methods. For example, the solver dde23, which is based on a third order Runge-Kutta method, uses Hermite interpolation of the old and new solution and derivative to obtain an accurate interpolant. By way of contrast, the solver dde_solver uses a sixth order Runge-Kutta method with a continuously embedded $$C^1$$ interpolant derived from the same derivative approximations used by the basic method. In addition to providing accurate and efficient solutions, either type of interpolant can be used in conjunction with a root finder to locate derivative discontinuity times.

## Available Delay Differential Equation Software

A number of issues must be taken into account by software for delay differential equations. Baker, Paul, & Willé (1995), Shampine & Thompson (2001), and Thompson & Shampine (2006) discuss the various issues. The well known dmrode solver (Neves (1975)) was the first effective software for delay differential equations. Many of the central ideas on which this solver was based were used in the later f77 solvers dklag5 (Neves & Thompson (1992)) and dklag6 (Corwin, Sarafyan, and Thompson (1997)), and the Fortran 90/95 dde_solver (Thompson & Shampine (2006)). Although the state of the art for numerical software for delay differential equations is not as advanced as that for ordinary differential equation software, several high quality solvers have recently been developed. The effectiveness of the software is determined in large part by the manner in which propagated derivative discontinuities are handled. Some delay differential equation solvers, such as those in Paul (1995) and Thompson & Shampine (2006), explicitly track and locate the zeroes of (2) and include them as integration mesh points. Different approaches are used in other software. For example, the ddverk solver (Enright & Hayashi (1997)) uses derivative defect error control to implicitly locate discontinuity times. It then uses special interpolants to step across the discontinuities. The ddesd solver (Shampine (2005)) uses residual error control to avoid the use of embedded local error estimates near discontinuity times.

Effective delay differential equation software must deal with other difficulties peculiar to systems of delay differential equations. Early software, for example, limited the step sizes used to be no larger than the smallest delay. But small delays are encountered in many problems; and this artificial restriction on the step size can have a drastic effect on the efficiency of a solver. Most of the solvers mentioned above are based on pairs of explicit continuously embedded Runge-Kutta methods (Shampine (1994)). When the step size exceeds a delay, the underlying interpolation polynomials are iterated in a manner somewhat akin to a predictor-corrector iteration for linear multistep methods. Refer to Baker & Paul (1996), Baker, Paul, & Willé (1995), Enright & Hayashi (1998), and Shampine & Thompson (2001) for details of various aspects of this issue.

The solvers dde23, ddesd, and dde_solver contain a very useful provision for finding zeroes of event functions (Shampine (1994)) that depend on the solution. In addition to solving a system of delay differential equations, they simultaneously locate zeroes of state dependent functions $$g(t,y(t)) = 0 \ .$$ Such special events may signal problem changes requiring integration restarts. The use of event functions is illustrated in the next section.

Although much recent delay differential equation software utilizes explicit continuously embedded Runge-Kutta methods, software based on other methods has been developed. For example, Jackiewicz & Lo (2006) and Willé & Baker (1992) utilize generalized Adams linear multistep methods; and the radar5 solver (http://www.unige.ch/~hairer/software.html) is based on collocation methods. Another well known and widely used program with the ability to solve delay differential equations is the xppaut program (Ermentrout (2002)). The use of software based on a class of general linear methods (diagonally implicit multistage integration methods) is discussed in Hoppensteadt & Jackiewicz (2006) in conjunction with the problem considered in the next section. Bellen & Zennaro (2003) discuss the commonly used methods for delay differential equations in considerable detail.

## An Example

Hoppensteadt & Jackiewicz (2006) investigated a model which generalizes previously studied models for infectious diseases. Solving this model requires the determination of a threshold time at which the accumulated dosage of infection reaches a prescribed level. Once this time is determined, the relevant equations may be integrated to obtain the desired solution. The minimum threshold time $$t_0$$ is the unique value for which $\int_{0}^{t_0} \rho(t) I_0(t)\, dt = m .$ The defining delay differential equations are $\tau'(t) = \frac{\rho(t) I(t)}{\rho(\tau(t)) I(\tau(t))}, \quad \tau(t_0) = 0$ $S'(t) = -r(t) I(t) S(t), \quad S(0) = S_0$ Here the function $$I(t)$$ is $I(t) = \begin{cases} I_0(t), & -\sigma \le t \le t_0, \\ I_0(t) + S_0 - S(\tau(t)), & t_0 \le t \le t_0 + \sigma, \\ S(\tau(t-\sigma)) - S(\tau(t)), & t_0 + \sigma \le t. \end{cases}$

For Example 1 of the reference, the relevant variables and functions are given by $$m = 0.1, \quad \sigma = 1, \quad S_0 = 10, \quad \rho(t) = 1, \quad r(t) = r_0,$$ and $$I_0(t) = \begin{cases} 0.4(1+t), & t \le 0, \\ 0.4(1-t), & 0 \le t \le 1, \\ 0, & 1 \le t. \end{cases}$$
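As a quick illustration (not the full solution of the model), the threshold time $$t_0$$ for these parameter values can be computed by bracketed root-finding on the dosage integral. This is a sketch with illustrative names of our own, not code from the reference:

```python
from scipy.integrate import quad
from scipy.optimize import brentq

# Find the threshold time t0 with rho(t) = 1, m = 0.1, sigma = 1,
# and the piecewise I0 of Example 1.
m = 0.1

def I0(t):
    """Initial infection dosage rate from Example 1."""
    if t <= 0.0:
        return 0.4 * (1.0 + t)
    if t <= 1.0:
        return 0.4 * (1.0 - t)
    return 0.0

def accumulated_dosage(t0):
    """Integral of rho(t)*I0(t) from 0 to t0, minus m; its root is t0."""
    return quad(I0, 0.0, t0)[0] - m

# accumulated_dosage changes sign on [0, 1], so brentq can bracket the root.
t0 = brentq(accumulated_dosage, 0.0, 1.0)
# Analytically, 0.4*(t0 - t0**2/2) = 0.1 gives t0 = 1 - sqrt(1/2) ≈ 0.2929.
```

Once $$t_0$$ is known, the delay differential equations above can be integrated forward from it.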

## The method of steps

DDEs are mostly solved in a stepwise fashion with a principle called the method of steps. For instance, consider the DDE with a single delay $$\frac{d}{dt} x(t) = f(x(t), x(t-\tau)) \quad \text{for } t \ge 0,$$

with given initial condition $$x(t) = \phi(t)$$ for $$t \in [-\tau, 0]$$. Then the solution on the interval $$[0, \tau]$$ is given by $$\psi(t)$$, which is the solution to the inhomogeneous initial value problem $$\frac{d}{dt} \psi(t) = f(\psi(t), \phi(t-\tau)),$$

with $$\psi(0) = \phi(0)$$. This can be continued for the successive intervals by using the solution on the previous interval as the inhomogeneous term. In practice, the initial value problem is often solved numerically.

### Example

Suppose $$f(x(t), x(t-\tau)) = a x(t-\tau)$$ and $$\phi(t) = 1$$ for $$t \in [-\tau, 0]$$. Then on $$[0, \tau]$$ the initial value problem can be solved with integration, $$x(t) = \int_0^t a \phi(s-\tau)\, ds + C = at + C,$$

i.e., $$x(t) = at + 1$$, where we picked $$C = 1$$ to fit the initial condition $$x(0) = \phi(0) = 1$$. Similarly, for the interval $$t \in [\tau, 2\tau]$$ we integrate $$x'(t) = a x(t-\tau) = a(a(t-\tau) + 1)$$ and fit the initial condition $$x(\tau) = a\tau + 1$$ to find that $$x(t) = \frac{a^2 (t-\tau)^2}{2} + a(t-\tau) + a\tau + 1.$$
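The method of steps translates directly into code: step forward in time, reading the delayed value from the history function when $$t - \tau \le 0$$ and from previously computed values otherwise. A minimal sketch with a forward-Euler integrator (function and variable names are our own):

```python
import numpy as np

def solve_dde(a=1.0, tau=1.0, t_end=2.0, dt=1e-4):
    """Forward-Euler method of steps for x'(t) = a*x(t - tau), phi(t) = 1.

    The solution grid stores past values, so the delayed term is either the
    history phi (for t - tau <= 0) or a previously computed grid point.
    """
    n_delay = round(tau / dt)       # grid points spanning one delay
    n = round(t_end / dt)
    x = np.empty(n + 1)
    x[0] = 1.0                      # x(0) = phi(0) = 1
    for i in range(n):
        x_delayed = 1.0 if i < n_delay else x[i - n_delay]
        x[i + 1] = x[i] + dt * a * x_delayed
    return x

x = solve_dde()
# With a = tau = 1: exact values are x(1) = a*tau + 1 = 2 and
# x(2) = a^2*tau^2/2 + 2*a*tau + 1 = 3.5, up to O(dt) Euler error.
```

Production DDE solvers replace the Euler step with an embedded Runge-Kutta pair and interpolate the stored solution between grid points, but the history-lookup structure is the same.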

## Reduction to ODE

In some cases, delay differential equations are equivalent to a system of ordinary differential equations.

• Example 1 Consider an equation $$\frac{d}{dt} x(t) = f\!\left(t, x(t), \int_{-\infty}^0 x(t+\theta)\, e^{\lambda \theta}\, d\theta\right).$$ Introduce $$y(t) = \int_{-\infty}^0 x(t+\theta)\, e^{\lambda \theta}\, d\theta$$ to get a system of ODEs $$\dot{x} = f(t, x, y), \quad \dot{y} = x - \lambda y.$$

• Example 2 An equation $$\frac{d}{dt} x(t) = f\!\left(t, x(t), \int_{-\infty}^0 x(t+\theta) \cos(\alpha\theta + \beta)\, d\theta\right)$$ is equivalent to $$\dot{x} = f(t, x, y), \quad \dot{y} = x\cos\beta + \alpha z, \quad \dot{z} = x\sin\beta - \alpha y,$$ where $$y(t) = \int_{-\infty}^0 x(t+\theta)\cos(\alpha\theta+\beta)\, d\theta, \quad z(t) = \int_{-\infty}^0 x(t+\theta)\sin(\alpha\theta+\beta)\, d\theta.$$

## The characteristic equation

Similar to ODEs, many properties of linear DDEs can be characterized and analyzed using the characteristic equation [ 1 ] . The characteristic equation associated with the linear DDE with discrete delays $$\frac{d}{dt} x(t) = A_0 x(t) + A_1 x(t-\tau_1) + \cdots + A_m x(t-\tau_m)$$ is $$\det\!\left(-\lambda I + A_0 + A_1 e^{-\tau_1 \lambda} + \cdots + A_m e^{-\tau_m \lambda}\right) = 0.$$

The roots λ of the characteristic equation are called characteristic roots or eigenvalues, and the solution set is often referred to as the spectrum. Because of the exponential in the characteristic equation, the DDE has, unlike the ODE case, an infinite number of eigenvalues, making a spectral analysis more involved. The spectrum does, however, have some properties which can be exploited in the analysis. For instance, even though there are an infinite number of eigenvalues, there are only a finite number of eigenvalues to the right of any vertical line in the complex plane.

This characteristic equation is a nonlinear eigenproblem and there are many methods to compute the spectrum numerically [ 2 ] . In some special situations it is possible to solve the characteristic equation explicitly. Consider, for example, the following DDE: $$\frac{d}{dt} x(t) = -x(t-1).$$

The characteristic equation is $$-\lambda - e^{-\lambda} = 0.$$

There are an infinite number of solutions to this equation for complex λ. They are given by $$\lambda = W_k(-1),$$ where $$W_k$$ is the $$k$$th branch of the Lambert W function.
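Using SciPy's implementation of the Lambert W function, one can compute a few of these roots and verify that they satisfy the characteristic equation. A sketch (the branch range is chosen arbitrarily):

```python
import numpy as np
from scipy.special import lambertw

# Characteristic roots of x'(t) = -x(t-1): lambda = W_k(-1) for each
# branch k of the Lambert W function.  Since W*exp(W) = -1 implies
# exp(-W) = -W, every such root satisfies -lambda - exp(-lambda) = 0.
roots = [lambertw(-1.0, k=k) for k in range(-3, 4)]
residuals = [abs(-lam - np.exp(-lam)) for lam in roots]

# The rightmost root W_0(-1) ≈ -0.318 + 1.337i has negative real part;
# since only finitely many roots lie right of any vertical line, this
# indicates asymptotic stability of the zero solution.
rightmost = roots[3]  # k = 0 branch
```

This explicit Lambert W solution is special to the scalar single-delay case; general systems require numerical spectrum methods.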

1. ^ Michiels & Niculescu (2007), Chapter 1
2. ^ Michiels & Niculescu (2007), Chapter 2

## References

• Bellman, Richard; Cooke, Kenneth L. (1963). Differential-difference equations. New York-London: Academic Press. ISBN 978-0120848508.
• Driver, Rodney D. (1977). Ordinary and Delay Differential Equations. New York: Springer Verlag. ISBN 0387902317.
• Michiels, Wim; Niculescu, Silviu-Iulian (2007). Stability and Stabilization of Time-Delay Systems: An Eigenvalue-Based Approach. ISBN 978-0-898716-32-0.
• Delay-Differential Equations at Scholarpedia, curated by Skip Thompson.
