Solving Delay Differential Equations in Julia
diffeqpy is a package for solving differential equations in Python. It utilizes DifferentialEquations.jl for its core routines to give high performance solving of many different types of differential equations, including:
- Discrete equations (function maps, discrete stochastic (Gillespie/Markov) simulations)
- Ordinary differential equations (ODEs)
- Split and Partitioned ODEs (Symplectic integrators, IMEX Methods)
- Stochastic ordinary differential equations (SODEs or SDEs)
- Random differential equations (RODEs or RDEs)
- Differential algebraic equations (DAEs)
- Delay differential equations (DDEs)
- Mixed discrete and continuous equations (Hybrid Equations, Jump Diffusions)
directly in Python.
If you have any questions, or just want to chat about solvers/using the package, please feel free to chat in the Gitter channel. For bug reports, feature requests, etc., please submit an issue.
To install diffeqpy, run pip install diffeqpy.
Using diffeqpy requires that Julia is installed and in the path, along with DifferentialEquations.jl and PyCall.jl. To install Julia, download a generic binary from the JuliaLang site and add it to your path. To install the Julia packages required for diffeqpy, open a Python interpreter and run import diffeqpy followed by diffeqpy.install():
and you’re good! In addition, to improve the performance of your code it is recommended that you use Numba to JIT compile your derivative functions. To install Numba, run pip install numba.
Import and set up the solvers via the command from diffeqpy import de.
The general flow for using the package is to follow exactly as would be done in Julia, except add de. in front. Most of the commands will work without any modification. Thus the DifferentialEquations.jl documentation and the DiffEqTutorials are the main in-depth documentation for this package. Below we will show how to translate these docs to Python code.
Ordinary Differential Equation (ODE) Examples
The solution object is the same as the one described in the DiffEq tutorials and in the solution handling documentation (note: the array interface is missing). Thus, for example, the solution time points are saved in sol.t and the solution values are saved in sol.u. Additionally, the interpolation sol(t) gives a continuous solution.
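The same access pattern can be sketched in plain Python, with a hand-rolled fixed-step Euler integrator standing in for the solver and a piecewise-linear interpolant standing in for sol(t) (the names euler_solve and interpolate are illustrative, not diffeqpy API):

```python
import math

# Hypothetical stand-in for a solution object: a fixed-step Euler
# integrator that records time points (like sol.t) and values (like sol.u).
def euler_solve(f, u0, tspan, n):
    t0, t1 = tspan
    dt = (t1 - t0) / n
    ts, us = [t0], [u0]
    for i in range(n):
        us.append(us[-1] + dt * f(us[-1], None, ts[-1]))
        ts.append(t0 + (i + 1) * dt)
    return ts, us

# Piecewise-linear analogue of the continuous interpolation sol(t).
def interpolate(ts, us, t):
    for i in range(len(ts) - 1):
        if ts[i] <= t <= ts[i + 1]:
            w = (t - ts[i]) / (ts[i + 1] - ts[i])
            return (1 - w) * us[i] + w * us[i + 1]
    raise ValueError("t outside the solution interval")

# du/dt = 1.01*u, u(0) = 0.5, solved on t in [0, 1] (the tutorial's linear ODE).
ts, us = euler_solve(lambda u, p, t: 1.01 * u, 0.5, (0.0, 1.0), 1000)
```

The real solution object is much richer (dense high-order interpolation, adaptive steps), but the `ts`/`us`/`interpolate` trio mirrors the `sol.t`/`sol.u`/`sol(t)` access pattern described above.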
We can plot the solution values using matplotlib:
We can utilize the interpolation to get a finer solution:
The common interface arguments can be used to control the solve command. For example, let’s use saveat to save the solution every 0.1 time units, and let’s utilize the Vern9() 9th-order Runge-Kutta method along with low tolerances abstol=reltol=1e-10:
The set of algorithms for ODEs is described at the ODE solvers page.
Compilation with Numba and Julia
When solving a differential equation, it’s important that your derivative function f is fast, since it is called in the inner loop of the solver. We can utilize Numba to JIT compile our derivative functions to improve the efficiency of the solver:
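As a sketch of the pattern (the try/except fallback is our addition so the snippet also runs where Numba is not installed; it is not part of the diffeqpy docs):

```python
# Use Numba's nopython JIT when available; otherwise fall back to a
# do-nothing decorator so the derivative still works as plain Python.
try:
    from numba import njit
except ImportError:
    def njit(f):
        return f

@njit
def f(u, p, t):
    # Same (u, p, t) signature used for out-of-place derivative functions.
    return -0.5 * u

print(f(2.0, None, 0.0))
```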
Additionally, you can directly define the functions in Julia. This will allow for more specialization and could be helpful to increase the efficiency over the Numba version for repeat or long calls. This is done via julia.Main.eval :
Systems of ODEs: Lorenz Equations
To solve systems of ODEs, simply use an array as your initial condition and define f as an array function:
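A pure-Python sketch of the same idea, with a fixed-step RK4 loop standing in for the adaptive Julia solvers and the classic Lorenz parameters σ=10, ρ=28, β=8/3 (the function names here are illustrative):

```python
# Lorenz derivative written as an array-valued function of the state list u.
def lorenz(u, p, t):
    sigma, rho, beta = p
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Minimal fixed-step RK4 integrator over list-valued states.
def rk4(f, u0, p, t0, t1, dt):
    u, t = list(u0), t0
    while t < t1 - 1e-12:
        k1 = f(u, p, t)
        k2 = f([u[i] + dt / 2 * k1[i] for i in range(3)], p, t + dt / 2)
        k3 = f([u[i] + dt / 2 * k2[i] for i in range(3)], p, t + dt / 2)
        k4 = f([u[i] + dt * k3[i] for i in range(3)], p, t + dt)
        u = [u[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
        t += dt
    return u

u_end = rk4(lorenz, [1.0, 0.0, 0.0], (10.0, 28.0, 8 / 3), 0.0, 10.0, 0.01)
```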
or we can draw the phase plot:
In-Place Mutating Form
When dealing with systems of equations, in many cases it’s helpful to reduce memory allocations by using mutating functions. In diffeqpy, the mutating form adds the mutating vector to the front. Let’s make a fast version of the Lorenz derivative, i.e. mutating and JIT compiled:
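In plain Python the mutating convention looks like this (illustrative only; the real speedups come from Numba or Julia compilation, which this sketch does not perform):

```python
# Mutating Lorenz derivative: the output vector du comes first,
# mirroring the in-place convention f(du, u, p, t).
def lorenz_inplace(du, u, p, t):
    sigma, rho, beta = p
    du[0] = sigma * (u[1] - u[0])
    du[1] = u[0] * (rho - u[2]) - u[1]
    du[2] = u[0] * u[1] - beta * u[2]

du = [0.0, 0.0, 0.0]          # preallocated once, reused every call
lorenz_inplace(du, [1.0, 0.0, 0.0], (10.0, 28.0, 8 / 3), 0.0)
```

The point of the convention is that the solver can reuse one output buffer on every step instead of allocating a fresh array per derivative call.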
or using a Julia function:
Stochastic Differential Equation (SDE) Examples
Solving one-dimensional SDEs du = f(u,t)dt + g(u,t)dW_t is like solving an ODE, except with an extra function for the diffusion (randomness or noise) term. The steps follow the SDE tutorial.
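The scheme underneath the simplest SDE solvers can be sketched in pure Python as Euler-Maruyama (a stand-in only; the actual default methods are higher-order adaptive algorithms). The coefficients 1.01 and 0.87 give the geometric Brownian motion used in SDE tutorials:

```python
import random

# Euler-Maruyama for du = f(u,p,t) dt + g(u,p,t) dW on a fixed grid.
def euler_maruyama(f, g, u0, tspan, n, rng):
    t0, t1 = tspan
    dt = (t1 - t0) / n
    u, t = u0, t0
    for i in range(n):
        dW = rng.gauss(0.0, dt ** 0.5)   # Wiener increment ~ N(0, dt)
        u = u + f(u, None, t) * dt + g(u, None, t) * dW
        t = t0 + (i + 1) * dt
    return u

rng = random.Random(1234)
u_end = euler_maruyama(lambda u, p, t: 1.01 * u,   # drift
                       lambda u, p, t: 0.87 * u,   # diffusion
                       0.5, (0.0, 1.0), 1000, rng)
```

With the diffusion function set to zero the scheme reduces exactly to the Euler method for the deterministic ODE, which is a convenient sanity check.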
Systems of SDEs with Diagonal Noise
An SDE with diagonal noise is where a different Wiener process is applied to every part of the system. This is common for models with phenomenological noise. Let’s add multiplicative noise to the Lorenz equation:
Systems of SDEs with Non-Diagonal Noise
In many cases you may want to share noise terms across the system. This is known as non-diagonal noise. The DifferentialEquations.jl SDE Tutorial explains how the matrix form of the diffusion term corresponds to the summation style of multiple Wiener processes. Essentially, the row corresponds to which system the term is applied to, and the column is which noise term. So du[i,j] is the amount of noise due to the jth Wiener process that’s applied to u[i]. We solve the Lorenz system with correlated noise as follows:
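The matrix convention can be sketched in pure Python: each row of g maps the vector of Wiener increments into one state component. This is a single toy Euler-Maruyama step (the matrix entries below are made up for illustration, not taken from the tutorial):

```python
import random

# One Euler-Maruyama step with non-diagonal noise. g is a 3x2 matrix:
# two Wiener processes drive three states, and g[i][j] scales the effect
# of noise process j on state u[i].
def em_step_nondiagonal(f, g, u, p, t, dt, rng):
    du = f(u, p, t)
    dW = [rng.gauss(0.0, dt ** 0.5) for _ in range(len(g[0]))]
    return [u[i] + du[i] * dt + sum(g[i][j] * dW[j] for j in range(len(dW)))
            for i in range(len(u))]

def lorenz_drift(u, p, t):
    return [10 * (u[1] - u[0]),
            u[0] * (28 - u[2]) - u[1],
            u[0] * u[1] - (8 / 3) * u[2]]

# Correlated noise: both Wiener processes feed every state component.
g = [[0.3, 0.1],
     [0.1, 0.3],
     [0.2, 0.2]]
u1 = em_step_nondiagonal(lorenz_drift, g, [1.0, 0.0, 0.0], None, 0.0, 0.01,
                         random.Random(7))
```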
Here you can see that the warping effect of the noise correlations is quite visible!
Differential-Algebraic Equation (DAE) Examples
A differential-algebraic equation is defined by an implicit function f(du,u,p,t)=0. All of the controls are the same as in the other examples, except here you define a function which returns the residuals for each part of the equation to define the DAE. The initial value u0 and the initial derivative du0 are required, though they do not necessarily have to satisfy f=0 (this is known as having inconsistent initial conditions). The methods will automatically find consistent initial conditions. In order for this to occur, differential_vars must be set. This vector states which of the variables are differential (have a derivative term), with false meaning that the variable is purely algebraic.
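As a concrete residual function, the Robertson system mentioned next can be written in plain Python (the DAE solver itself lives in Julia; here we only sketch the residual and check it at a consistent initial point, using the standard Robertson rate constants 0.04, 1e4, 3e7):

```python
# Residuals for the Robertson DAE: the first two rows are differential,
# the third is the algebraic conservation constraint u1 + u2 + u3 = 1.
def robertson_resid(du, u, p, t):
    return [-0.04 * u[0] + 1e4 * u[1] * u[2] - du[0],
            0.04 * u[0] - 1e4 * u[1] * u[2] - 3e7 * u[1] ** 2 - du[1],
            u[0] + u[1] + u[2] - 1.0]

# Rows 1 and 2 are differential, row 3 is purely algebraic.
differential_vars = [True, True, False]

# Consistent initial data: u0 = (1, 0, 0) with du0 = (-0.04, 0.04, 0)
# makes every residual vanish.
res = robertson_resid([-0.04, 0.04, 0.0], [1.0, 0.0, 0.0], None, 0.0)
```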
This example shows how to solve the Robertson equation:
and the in-place JIT compiled form:
Delay Differential Equations
A delay differential equation is an ODE which allows the use of previous values. In this case, the function needs to be a JIT compiled Julia function. It looks just like the ODE, except in this case there is a function h(p,t) which allows you to interpolate and grab previous values.
We must provide a history function h(p,t) that gives values for u before t0 . Here we assume that the solution was constant before the initial time point. Additionally, we pass constant_lags = [20.0] to tell the solver that only constant-time lags were used and what the lag length was. This helps improve the solver accuracy by accurately stepping at the points of discontinuity. Together this is:
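To make the mechanics concrete, here is a toy fixed-step DDE integrator in pure Python for the simpler equation u′(t) = −u(t − 1) with constant history u(t) = 1 for t ≤ 0 (a different equation than the tutorial’s, chosen because its exact solution is piecewise polynomial; the step size is picked to divide the lag exactly so delayed values land on grid points):

```python
# Fixed-step Euler for the DDE u'(t) = -u(t - tau), tau = 1, with
# history h(p, t) = 1 for t <= 0. Past values are read from the stored
# solution; before t0 they come from the history function h.
def solve_dde(h, tau, t1, dt):
    n = round(t1 / dt)
    lag = round(tau / dt)             # lag measured in whole steps
    us = [h(None, 0.0)]
    for i in range(n):
        t = i * dt
        delayed = us[i - lag] if i - lag >= 0 else h(None, t - tau)
        us.append(us[i] - dt * delayed)   # u' = -u(t - tau)
    return us

us = solve_dde(lambda p, t: 1.0, 1.0, 2.0, 0.001)
```

By hand: on [0, 1] the delayed value is the constant history, so u(t) = 1 − t and u(1) = 0 (the kink at t = 1 is the analogue of the t = 20 kink discussed below); on [1, 2] one finds u(2) = −1/2.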
Notice that the solver is able to accurately simulate the kink (discontinuity) at t=20 caused by the discontinuity of the derivative at the initial time point. This is why declaring discontinuities can enhance solver accuracy.
Solving an algebraic loop problem by introducing a unit delay
First, here is my code.
The problem here lies in the (*Flip Flop Control Law*) section. Since the equation in u1 depends on u2 and vice versa, these two equations form an algebraic loop. One method I know of for overcoming this problem is to introduce a unit delay. In SIMULINK, the "Unit Delay" block is available, but in Mathematica, how do I program a unit delay for this system so I can simulate and plot the results successfully for both Theta and Phi?
A new technique to solve the initial value problems for fractional fuzzy delay differential equations
- Truong Vinh An
- Ho Vu
- Ngo Van Hoa
Using some recent results of fixed point of weakly contractive mappings on the partially ordered space, the existence and uniqueness of solution for interval fractional delay differential equations (IFDDEs) in the setting of the Caputo generalized Hukuhara fractional differentiability are studied. The dependence of the solution on the order and the initial condition of IFDDE is shown. A new technique is proposed to find the exact solutions of IFDDE by using the solutions of interval integer order delay differential equation. Finally, some examples are given to illustrate the applications of our results.
Fractional calculus and fractional differential equations are a field of increasing interest due to their applicability to the analysis of phenomena, and they play an important role in a variety of fields such as rheology, viscoelasticity, electrochemistry, diffusion processes, etc. Usually, applications of fractional calculus amount to replacing the time derivative in a given evolution equation by a derivative of fractional order. One can find applications of fractional differential equations in signal processing and in the complex dynamics of biological tissues (see [1, 2, 3]). For some basic information and results on various types of fractional differential equations, one can see the papers and monographs of Samko et al. [4], Podlubny [5], and Kilbas et al. [6].
Interval analysis and interval differential equations were proposed as an attempt to handle the interval uncertainty that appears in many mathematical or computer models of deterministic real-world phenomena in which uncertainty or vagueness pervades. In recent times this theory has been developed in theoretical directions and widely applied (see [7, 8, 9, 10, 11, 12]). Recently, the issue of fuzzy fractional calculus and fuzzy fractional differential equations has emerged as a significant subject, and this new theory has become very attractive to many scientists. The concept of fuzzy-type Riemann-Liouville differentiability based on the Hukuhara differentiability was initiated by Agarwal et al. in [13, 14], with some applications to fractional-order initial value problems of fuzzy differential equations. By using the Hausdorff measure of non-compactness and under compactness-type conditions, the authors proved the existence of a solution of the fuzzy fractional integral equation. Following this direction, the concepts of fuzzy fractional differentiability have been developed and extended in a number of papers investigating the existence and uniqueness of solutions to fuzzy differential equations (see [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]).
In this paper, we:
- give the existence and uniqueness theorem of solution for a general form of the interval fractional integral equation by using some recent results on fixed points of weakly contractive mappings on partially ordered sets, and use these results to investigate the existence and uniqueness of the solution for problem (1.1);
- show that the solutions of the initial value problem (1.1) depend continuously on the initial condition, the order, and the right-hand side of the equation;
- propose a new technique to find the exact solutions of problem (1.1) by using the solutions of the interval integer-order delay differential equation.
This paper is organized as follows. In Section 2, some basic concepts and notations about fractional derivatives for interval functions are introduced. In Section 3, we prove the existence and uniqueness of solution for a general form of the interval fractional integral equation and use this result to investigate the existence and uniqueness of solutions for problem (1.1). Finally, a new technique to find the exact solutions of problem (1.1) is provided, and two examples are given to illustrate this technique.
Machine learning meets math: Solve differential equations with new Julia library
Need a good math tutor? Julia’s the name and differential equations is the new game. Julia’s latest library combines machine learning with solving differential equations. This collaborative effort shows off the power that Julialang has as a platform for machine learning.
Julia continues to make waves since its co-creators won the 2019 James H. Wilkinson Prize for Numerical Software. The Wilkinson Prize is awarded every four years; this year it celebrates the innovative language “for the creation of Julia, an innovative environment for the creation of high-performance tools that enable the analysis and solution of computational science problems.” There’s no better way to start the fresh new year than already on top with a shiny new prize.
Now, the language unveils a new library upon the scientific computing community: DiffEqFlux.jl. It combines the power of solving differential equations and machine learning.
This library for neural differential equations reminds us why Julia deserves the award. Let’s pay our congrats and spread the word.
Nobody solves problems like Julia
Julia’s team showed off DiffEqFlux.jl in a blog post on January 18, 2019. The post is a combined effort of Julia library creators and the authors of the Neural Ordinary Differential Equations paper, which won a Best Paper award at NeurIPS 2018.
First of all, DiffEqFlux.jl is a recipe that combines two great libraries into one elegant interaction: DifferentialEquations.jl and Flux.jl.
DifferentialEquations.jl is a suite for solving, what else, differential equations. (See some example Jupyter notebooks on GitHub and follow the interactive introduction and tutorial.) Are differential equations a long-lost school memory or a concept that you struggle with? That is no problem given the wealth of helpful tutorials and introductions. The Jupyter notebook “An Intro to DifferentialEquations.jl” helps you dive in.
Flux.jl, on the other hand, is an “elegant machine learning stack”. It is a library for machine learning that leverages the powerful nature of Julia. Several demos of Flux are available on GitHub in the model zoo. Use the examples as a starting point for your own machine learning models.
With that in mind, the latest library combines differential equations and machine learning into one beautiful package.
Diffy Q + machine learning = match made in heaven
So, why is machine learning the perfect match for differential equations?
The announcement blog post answers this question (in a very helpful tone – frankly, I wish all mathematical concepts were explained like this). While you should absolutely read the entire explanation, here is just a sample:
There are three common ways to define a nonlinear transform: direct modeling, machine learning, and differential equations. Directly writing down the nonlinear function only works if you know the exact functional form that relates the input to the output. However, in many cases, such exact relations are not known a priori. So how do you do nonlinear modeling if you don’t know the nonlinearity? One way to address this is to use machine learning.
The blog post’s comprehensive nature cannot be overstated. It sets a new precedent for future tutorials and explanations. By the end of the post, you will know how to implement the neural ODE layer in Julia and understand its behavior.
With the neural ordinary differential equation (ODE), machine learning meets math!
High honors for a high-level language
For further reading about differential equation solvers, be sure to read this article by the lead developer of DifferentialEquations.jl. Christopher Rackauckas compares differential equation solver suites in various languages: MATLAB, R, Julia (of course), Python, C, Mathematica, Maple, and even an old-school set of Fortran solvers. The article highlights the good and bad about all methods, their limitations and efficiency. (If you haven’t tried Julia yet, perhaps Christopher Rackauckas’ writing will convince you.)
Congratulations once again to Julia for winning the James H. Wilkinson Prize for Numerical Software. All the praise is well-deserved.
We all look forward to seeing what the future holds. Take a bow, you earned it!
DifferentialEquations.jl – A Performant and Feature-Rich Ecosystem for Solving Differential Equations in Julia
Christopher Rackauckas ,
Department of Mathematics, University of California-Irvine, Irvine, CA, 92697, US
DifferentialEquations.jl is a package for solving differential equations in Julia. It covers discrete equations (function maps, discrete stochastic (Gillespie/Markov) simulations), ordinary differential equations, stochastic differential equations, algebraic differential equations, delay differential equations, hybrid differential equations, jump diffusions, and (stochastic) partial differential equations. Through extensive use of multiple dispatch, metaprogramming, plot recipes, foreign function interfaces (FFI), and call-overloading, DifferentialEquations.jl offers a unified user interface to solve and analyze various forms of differential equations while not sacrificing features or performance. Many modern features are integrated into the solvers, such as allowing arbitrary user-defined number systems for high-precision and arithmetic with physical units, built-in multithreading and parallelism, and symbolic calculation of Jacobians. Integrated into the package is an algorithm testing and benchmarking suite to both ensure accuracy and serve as an easy way for researchers to develop and distribute their own methods. Together, these features build a highly extendable suite which is feature-rich and highly performant.
Funding statement: This work was partially supported by NIH grants P50GM76516 and R01GM107264 and NSF grants DMS1562176 and DMS1161621. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1321846, the National Academies of Science, Engineering, and Medicine via the Ford Foundation, and the National Institutes of Health Award T32 EB009418. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.
Differential equations are fundamental components of many scientific models; they are used to describe large-scale physical phenomena like planetary systems [10] and the Earth’s climate [12, 18], all the way to smaller-scale biological phenomena like biochemical reactions [30] and developmental processes [27, 7]. Because of the ubiquity of these equations, standard sets of solvers have been developed, including Shampine’s ODE suite for MATLAB [25], Hairer’s Fortran codes [8], and the Sundials CVODE solvers [11].
However, these software packages contain many limitations which stem from their implementation and the time when they were developed. Since the time of their inception, many other forms of differential equations have become commonplace tools not only for mathematicians, but throughout the sciences. Stochastic differential equations (SDEs) have become more commonplace not only in mathematical finance [23, 5], but also in biochemical [4, 13] and ecological models. Delay differential equations have become a ubiquitous tool for modeling phenomena with natural delays, as seen in neuroscience [3, 22] and control theory [24]. However, a user who is familiar with standard ODE tools has to “leave the box” to find a new specialized package to handle these kinds of differential equations, or write their own solver scripts [9]. Also, when many of these methods were implemented, the standard computer was limited by the speed of the processor. These days, most processors are multi-core and many computers contain GPGPU [1] or Xeon Phi [17, 6] acceleration cards, and thus taking advantage of the ever-present parallelism is key to achieving good performance.
Other design limitations stem from the programming languages used in the implementation. Many of these algorithms, being developed in early C/Fortran, do not have abstractions for generalized array formats. In order to use these algorithms, one must provide the solver with a vector. In cases where a matrix or a higher-dimensional tensor is the natural representation of the differential equation, the user is required to transform their equation into a vector equation for use in these solvers. Also, these solvers are limited to 64-bit floating point calculations. The numerical precision limits their use in high-precision applications, requiring specialized codes when precision lower than 10^-16 is required. Lastly, many times these programs are interfaced via a scripting language where looping is not optimized and where “vectorized” codes provide the most efficient solution. However, vectorized coding in the style of MATLAB or NumPy results in temporary allocations and can lack compiler optimizations which require type inference. This increases the computational burden of the user-defined functions, which degrades the efficiency of the solver.
The goal of DifferentialEquations.jl is to build on the foundation created by these previous differential equation libraries and modernize them using Julia. Julia is a scripting language, used in place of languages like R, Python, and MATLAB, but it offers the performance one would associate with low-level compiled languages. This allows users to start prototypes in Julia and also solve their large-scale models within the same language, instead of resorting to two-language solutions when performance is needed. The language achieves this goal by extensive utilization of multiple dispatch and metaprogramming to design a language that is both easy for a compiler to understand and easy for a programmer to use [2]. DifferentialEquations.jl builds off of these design principles to arrive at a fast, feature-rich, and highly extendable differential equations suite which is easy to use.
We start by describing the innovations in usability. In Section 1.1 we show how multiple dispatch is used to consolidate the user API into two commands: solve and plot. Since these commands are used for all forms of differential equations, the user interface is unified in a manner that makes it easy for a user to explore other types of models. Then in Section 1.2 we show how metaprogramming is used to further simplify the user API, allowing the user to define a function in a “mathematical format” which is automatically converted into a computationally-efficient encoding. After that, we describe how the internals were designed in order to be both feature-filled and highly performant. In Section 1.3 we describe the package structure of DifferentialEquations.jl and how the Base libraries, component solvers, and add-on packages come together to provide the full functionality of DifferentialEquations.jl. In Section 1.4 we describe how multiple dispatch is used to write a single generic method which compiles into specialized functions dependent on the number types given to the solver. We show how this allows the solvers to achieve high performance while being compatible with any Julia-defined number system which implements a few basic mathematical operations, including fast high- and intermediate-precision numbers and arithmetic with physical units. In Section 1.5 we describe the experimental within-method multi-threading which is being used to further enhance the performance of the methods, and the multi-node parallelism which is included for performing Monte Carlo simulations of stochastic models. We then discuss some of the tools which allow DifferentialEquations.jl to be a good test suite for the fast development and deployment of new solver algorithms, and the tools provided for performing benchmarks. Lastly, we describe the current limitations and future development plans.
1.1 A Unified API Through Multiple Dispatch
DifferentialEquations.jl uses multiple dispatch on specialized types to arrive at a unified user-API for the different types of equations. To use the package, one follows the steps:
- Define a problem.
- Solve the problem.
- Plot the solution.
This standardization of the API makes complicated solvers accessible to less programming-inclined individuals, gives a good framework for future development, and allows the latest research in numerical differential equations to be utilized without complications.
1.1.1 Solving ODEs
To define a problem, a user must call the constructor for the appropriate problem object. Since ordinary differential equations (ODEs) are represented in the general form as

du/dt = f(t, u),
the ODEProblem is defined by specifying a function f and an initial condition u. For example, we can define the linear ODE using the commands:
Many other examples are provided in the documentation and in the Jupyter notebook tutorials of DiffEqTutorials.jl (for use with Julia, see IJulia.jl). To solve the ODE, the user can simply call the solve command on the problem:
By using a dispatch architecture on AbstractArrays and using the array-defined indexing functionality provided by Julia (e.g. eachindex(A)), DifferentialEquations.jl accepts problems defined on arrays of any size. For example, one can define and solve a system of equations where the dependent variable u is a matrix as follows:
For most other packages, one would normally have to define u as a vector and rewrite the system of equations in the vector form. However, by allowing arbitrary problem sizes, DifferentialEquations.jl allows the user to specify problems in the natural format and solve directly on any array of numbers. This can be helpful for problems like discretizations of partial differential equations (PDEs) where the matrix format matches some underlying structure, and could result in a denser formulation.
The solver returns a solution object which holds all of the information about the solution. Dispatches to array functions are provided on the sol object, allowing the solution object to act like a timeseries array. In addition, high-order efficient interpolations are lazily constructed throughout the solution (by default, a feature which can be turned off) and the sol object’s call is overloaded with the interpolating function. Thus the solution object can both be used as an array of the solution values and as a continuous approximation given by the numerical solution. The syntax is as follows:
The solution can be plotted using the provided plot recipes through Plots.jl. The plot recipes use the solver object to build a default plot which is customizable using any of the commands from the Plots.jl package, and can be plotted to any plotting backend supported by Plots.jl, for example the PyPlot backend (a Julia wrapper for matplotlib) via the command:
These defaults are deliberately made so that a standard user does not need to dig further into the manual and understand the differences between all of the algorithms. However, an extensive set of functionality is available if the user wishes. All of these functions can be modified via additional arguments. For example, to change the solver algorithm to a highly efficient Order 7 method due to Verner [29], set the line width in the plot to 3 pixels, and add some labels to the plot, one could instead use the commands:
The output of this command is shown in Figure 1 . Note that the output is automatically smoothed using 10*length(sol) equally spaced interpolated values through the timespan.
Figure 1: Example of the ODE plot recipe. This plot was created using the PyPlot backend through Plots.jl. Shown is the solution to the 4 × 2 ODE with f(t,u) = Au where A is given in the code. Each line corresponds to one component of the matrix over time.
Lastly, these solvers tie into Julia integrated development environments [16] and are equipped with a progress bar and time estimates to monitor the progress of the solver. Additionally, all of the DifferentialEquations.jl functions are thoroughly tested and documented with the Jupyter notebook system [19], allowing for reproducible exploration.
1.1.2 Solving SDEs
By using multiple-dispatch, the same user API is offered for other types of equations. For example, if one wishes to solve a stochastic differential equation (SDE):
then one builds an SDEProblem object by specifying the initial condition and now the two functions, f and g. However, the rest of the usage is the same: simply use the solve and plot functions. To extend the previous example to have multiplicative noise, the code would be:
While this user interface is simple, the default methods these algorithms can call are efficient high-order solvers with adaptive timestepping [21]. These methods tie into the plotting functionality and IDEs in the same manner as the ODE solvers, making it easy for users to explore stochastic modeling without having to learn a new interface.
1.1.3 Solving (Stochastic) PDEs
Again, the same user API is offered for the available stochastic PDE solvers. Instead, one builds a HeatProblem object which dispatches to algorithms for solving (Stochastic) PDEs. An example using the previously defined functions is:
Additional keyword arguments can be supplied to HeatProblem to specify the boundary data and initial conditions. Notice that the main difference is that now we must specify a space-time mesh (and boundary conditions as optional keyword arguments). Again, the same plotting and analysis commands apply to the solution object sol (where now the plot dispatch is to a trisurf plot).
1.2 Enhanced Performance and Readability Through Macros
1.2.1 A Macro-Based Interface
Most differential equations packages require that the user understands some details about the implementation of the library. However, the DifferentialEquations.jl ecosystem implements various Domain-Specific Languages (DSLs) via macros in order to give more natural options for defining mathematical constructs. In this section we will demonstrate the DSL for defining ODEs. For demonstrations related to other types of equations, please see the documentation.
The famous Lorenz system is mathematically defined as
Solving Delay Differential Equations with dde23
Southern Methodist University
Dallas, TX 75275
Department of Mathematics & Statistics
Radford, VA 24142
Ordinary differential equations (ODEs) and delay differential equations (DDEs) are used to describe many phenomena of physical interest. While ODEs contain derivatives which depend on the solution at the present value of the independent variable ("time"), DDEs contain in addition derivatives which depend on the solution at previous times. DDEs arise in models throughout the sciences. Despite the obvious similarities between ODEs and DDEs, solutions of DDE problems can differ from solutions of ODE problems in several striking and significant ways. This accounts in part for the lack of much general-purpose software for solving DDEs.
We consider here only systems of delay differential equations of the form

y′(t) = f(t, y(t), y(t − τ1), …, y(t − τk))   (1)

that are solved on a ≤ t ≤ b with given history y(t) = S(t) for t ≤ a. The constant delays τj are such that τ = min(τ1, …, τk) > 0. Although DDEs with delays (lags) of more general form are important, this is a large and useful class of DDEs. Indeed, Baker, Paul, and Willé write that "The lag functions that arise most frequently in the modelling literature are constants."
Although the effective solution of DDEs has benefited a great deal from the advances made in ODE technology during the past several years, the state of the art for DDE software is not at the level of ODE software. The few FORTRAN codes for solving DDEs are considerably harder to use than the popular ODE codes. We have written the MATLAB program dde23 with the goal of making it as easy as possible to solve the wide range of DDEs with constant delays encountered in practice.
This tutorial shows how to solve DDEs with dde23. It is organized as follows. Important differences between DDEs and ODEs are discussed briefly in § 2. In § 3 there is a brief discussion of how numerical methods for ODEs can be extended to solve DDEs. The most important part of this tutorial is the collection of examples in § 4. As the first few show, anyone familiar with solving ODEs using ode23 will find it easy to solve routine DDEs with dde23. Several examples then illustrate the powerful capabilities of dde23 for solving DDEs that are far from routine. Most of the examples have an exercise that provides practice solving DDEs in MATLAB with dde23.
2 Delay Differential Equations
In this section we describe briefly some important differences between DDEs and ODEs. More detailed discussions of the various issues can be found in the references.
The most obvious difference between ODEs and DDEs is the initial data. The solution of an ODE is determined by its value at the initial point t = a. In evaluating the DDEs (1) for a ≤ t ≤ b, a term like y(t − τj) may represent values of the solution at points prior to the initial point. For example, at t = a we must have the solution at a − τj. It is easy to see that if T is the longest delay, the equations generally require us to provide the solution for a − T ≤ t ≤ a. For DDEs we must provide not just the value of the solution at the initial point, but also the «history», the solution at times prior to the initial point.
Because numerical methods for both ODEs and DDEs are intended for problems with solutions that have several continuous derivatives, discontinuities in low-order derivatives require special attention. This is a much more serious matter for DDEs. For one thing, such discontinuities are not unusual for ODEs, but they are almost always present for DDEs: generally there is a discontinuity in the first derivative of the solution at the initial point because generally S′(a−) ≠ y′(a+) = f(a, S(a − τ1), … , S(a − τk)). There can also be discontinuities at times both before and after the initial point. Some problems have histories with discontinuities in low-order derivatives. Some models involve equations that change when the solution satisfies a given relation, e.g., when a solution component has a given value. These changes often cause discontinuities in low-order derivatives of the solution.
Another reason why discontinuities are much more serious for DDEs is that they propagate. If the solution has a discontinuity in a derivative somewhere, there are discontinuities in the rest of the interval at a spacing given by the delays. In reasonably general circumstances, the propagated discontinuities are smoothed: if there is a discontinuity at t* of order k, i.e., there is a jump in y^(k) at t*, then the discontinuity at t* + τj is of order at least k+1, the discontinuity at t* + 2τj is of order at least k+2, and so on. This is very important for numerical solution of the DDE because once the orders are high enough, the discontinuities will not interfere with the numerical method and we can stop tracking them.
To see how discontinuities propagate and smooth out, let us solve y′(t) = y(t − 1)
for 0 ≤ t with history S(t) = 1 for t ≤ 0. With this history, the problem reduces on the interval 0 ≤ t ≤ 1 to the ODE y′(t) = 1 with initial value y(0) = 1. Solving this problem we find that y(t) = t + 1 for 0 ≤ t ≤ 1. Notice that the solution has a discontinuity in its first derivative at t = 0. In the same way we find that y(t) = (t^2 + 3)/2 for 1 ≤ t ≤ 2. The first derivative is continuous at t = 1, but there is a discontinuity in the second derivative. In general the solution on the interval [k, k+1] is a polynomial of degree k+1 and there is a discontinuity of order k+1 at t = k.
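The calculation above can be carried out mechanically. The sketch below is a hedged illustration (it is not part of the original tutorial): it performs the method of steps for y′(t) = y(t − 1) with history S(t) = 1 in exact rational arithmetic, representing each polynomial piece by its ascending coefficients.

```python
from fractions import Fraction as F
from math import comb

def poly_eval(p, x):
    """Evaluate a polynomial with ascending coefficients p at x (Horner)."""
    acc = F(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def poly_shift(p):
    """Coefficients of p(t - 1) given the coefficients of p(t)."""
    out = [F(0)] * len(p)
    for k, c in enumerate(p):
        for j in range(k + 1):
            out[j] += c * comb(k, j) * (-1) ** (k - j)
    return out

def method_of_steps(n):
    """Piecewise solution of y'(t) = y(t-1) with y(t) = 1 for t <= 0.

    Returns a list where pieces[k] holds the coefficients of the
    polynomial valid on the interval [k, k+1]."""
    pieces, prev = [], [F(1)]                  # prev starts as the history S(t) = 1
    for k in range(n):
        rhs = poly_shift(prev)                 # y(t - 1) restricted to [k, k+1]
        P = [F(0)] + [c / (j + 1) for j, c in enumerate(rhs)]  # antiderivative
        piece = P[:]
        piece[0] += poly_eval(prev, F(k)) - poly_eval(P, F(k)) # continuity at t = k
        pieces.append(piece)
        prev = piece
    return pieces

pieces = method_of_steps(3)
# pieces[0] is 1 + t on [0, 1]; pieces[1] is (t^2 + 3)/2 on [1, 2]
```

Each computed piece has degree k + 1 and matches the previous piece at the breakpoint, confirming the smoothing of discontinuities described above.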
3 Numerical Methods for DDEs
In this section we discuss a few aspects of the numerical solution of DDEs. A detailed discussion of the methods used by dde23 can be found in the references.
A popular approach to solving DDEs is to extend one of the methods used to solve ODEs. Most of the codes are based on explicit Runge-Kutta methods. dde23 takes this approach by extending the method of the MATLAB ODE solver ode23. To see how this might be done, consider the example of § 2: on the interval 0 ≤ t ≤ 1, the DDE reduces to an initial value problem for an ODE with y(t − 1) equal to the given history S(t − 1) and initial value y(0) = 1. We can solve this ODE numerically using any of the popular methods for the purpose. Analytical solution of the DDE on the next interval 1 ≤ t ≤ 2 is handled the same way as on the first interval, but the numerical solution is somewhat complicated, and the complications are present for each of the subsequent intervals. The first complication is that we must keep track of how the discontinuity at the initial point propagates because of the delays. Another is that at each discontinuity we start the solution of an initial value problem for an ODE. Runge-Kutta methods are attractive because they are much easier to start than other popular numerical methods for ODEs. Still another issue concerns the term y(t − 1), which is in principle known because we have already found y(t) for 0 ≤ t ≤ 1. This has been a serious obstacle to applying Runge-Kutta methods to DDEs, so we need to discuss the matter more fully.
Runge-Kutta methods, like all discrete variable methods for ODEs, produce approximations yn to y(xn) on a mesh a = x0 < x1 < … < xN = b.
The Runge-Kutta methods mentioned are all explicit recipes for computing yn+1 given yn and the ability to evaluate the equation. For reasons of efficiency, a solver tries to use the biggest step size hn that will yield the specified accuracy, but what if it is bigger than the shortest delay τ? In taking a step to xn + hn, we would then need values of the solution at points in the span of the step, but we are trying to compute the solution at the end of the step and do not yet know these values. A good many solvers restrict the step size to avoid this issue. Some solvers, including dde23, use whatever step size appears appropriate and iterate to evaluate the implicit formula that arises in this way.
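The iteration that arises when a step oversteps a delay can be imitated in a few lines. The sketch below is an illustration under simplifying assumptions, not dde23's actual algorithm: it solves y′(t) = y(t − τ) with constant history 1 by the explicit midpoint rule, storing the solution as a piecewise-linear function. When h/2 > τ the midpoint stage needs a solution value inside the step currently being computed, so the step is evaluated by fixed-point iteration on the provisional endpoint.

```python
from bisect import bisect_left

def lin_interp(ts, ys, t):
    """Piecewise-linear lookup of the stored solution (constant history for t <= ts[0])."""
    if t <= ts[0]:
        return ys[0]
    i = min(bisect_left(ts, t), len(ts) - 1)
    w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
    return (1 - w) * ys[i - 1] + w * ys[i]

def solve_dde(tau, h, t_end):
    """Explicit midpoint rule for y'(t) = y(t - tau) with y(t) = 1 for t <= 0."""
    ts, ys = [0.0], [1.0]
    t, y = 0.0, 1.0
    while t < t_end - 1e-9:
        k1 = lin_interp(ts, ys, t - tau)   # slope at the left endpoint (in the past)
        ts.append(t + h)
        ys.append(y + h * k1)              # Euler predictor as the initial guess
        for _ in range(50):                # fixed-point iteration on the step
            # the delayed argument of the midpoint stage falls inside the
            # current, not-yet-accepted step whenever h/2 > tau
            k2 = lin_interp(ts, ys, t + 0.5 * h - tau)
            y_new = y + h * k2
            if abs(y_new - ys[-1]) < 1e-12:
                break
            ys[-1] = y_new
        t, y = ts[-1], ys[-1]
    return ts, ys

# step size 0.6 oversteps the delay 0.25; compare with a small-step run
_, coarse = solve_dde(0.25, 0.6, 3.0)
_, fine = solve_dde(0.25, 0.06, 3.0)
```

The iteration converges quickly because the delayed value depends only weakly on the provisional endpoint; with the small step size the delayed argument always lies in the past and the loop exits after one correction.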
In this section we use problems from the literature to show how to solve DDEs with dde23. Solving a DDE with dde23 is much like solving an ODE with ode23, but there are some notable differences. Examples 1 through 3 show how to solve typical problems. They should be read in order. dde23 has a powerful event location capability that is quite similar to that of ode23. Example 4 illustrates the capability by finding local maxima of the solution. ODE and DDE solvers are intended for problems with solutions that have several continuous derivatives. However, it is not unusual for equations to have different forms in different circumstances, which leads to discontinuities in low-order derivatives of the solution when the circumstances change. This matter is more serious for DDEs because discontinuities propagate and discontinuities can occur in the history. Examples 5 through 8 show how to deal with discontinuities in low-order derivatives, including jumps in the solution itself. They consider situations in order of difficulty and some require familiarity with a previous example. dde23 is limited to problems with constant delays, but the examples/exercises/problems of this section show that for this class of problems, it is both easy to use and powerful.
Complete solutions are provided for all the examples that can be used as templates. Some of the examples have exercises that are solved in a similar way. It is worth trying them for practice. Complete solutions are provided as a check and as further templates. This tutorial ends with some additional problems that serve as exercises for all the examples. Again, complete solutions are provided as a check and as further templates.
A naming convention is used throughout this section. For example, exam1.m is the M-file for solving the problem of Example 1. The equations of this problem are evaluated in the M-file exam1f.m. Some problems involve additional files, specifically a history function and/or an event function. The corresponding M-files have the names exam1h.m and exam1e.m, respectively. The M-files for the exercises follow the same convention with exam replaced by exer. Finally, the M-files for the additional problems are similarly named with exam replaced by prob.
We illustrate the straightforward solution of a DDE by computing and plotting the solution of Example 3 of the reference. The equations
are to be solved on [0,5] with history y1(t) = 1, y2(t) = 1, y3(t) = 1 for t ≤ 0.
A typical invocation of dde23 has the form sol = dde23('exam1f', lags, 'exam1h', tspan). The input argument tspan is the interval of integration, here [0, 5]. The history argument is the name of a function that evaluates the solution at the input value of t and returns it as a column vector; here exam1h.m returns the constant history. Quite often the history is a constant vector, and a simpler way to provide it is to supply the constant vector itself as the history argument. The first argument is the name of the ddefile, the function that evaluates the equations; its third input argument is an array Z whose column j approximates y(t − τj) for τj given as lags(j). It is not necessary to define local vectors ylag1, ylag2 as we have done here, but often this makes the coding of the DDEs clearer. The ddefile must return a column vector.
This is perhaps a good place to point out that dde23 does not assume that terms like y(t − τj) actually appear in the equations. Because of this, you can use dde23 to solve ODEs. If you do, it is best to input an empty array, [], for lags because any delay specified affects the computation even when it does not appear in the equations.
The input arguments of dde23 are much like those of ode23, but the output differs formally in that it is one structure, here called sol, rather than several arrays. The field sol.x corresponds to the array t of values of the independent variable returned by ode23 and the field sol.y to the array y of solution values. So, one way to plot the solution is plot(sol.x, sol.y).
After defining the equations in exam1f.m, the complete program exam1.m computes and plots the solution. Note that we must supply the name of the ddefile to the solver, i.e., the string 'exam1f' rather than exam1f. Also, we have taken advantage of the easy way to specify a constant history.
To gain experience with dde23, compute and plot the solution of the following problem from the literature. Solve
on [0,1] with history y1(t) = exp(t+1), y2(t) = exp(t+0.5), y3(t) = sin(t+1), y4(t) = y1(t), y5(t) = y1(t) for t ≤ 0.
In this exercise you will have to evaluate the history in a function and supply its name, say 'exer1h', as the history argument of dde23. Remember that both the ddefile and the history function must return column vectors. In the reference this problem is used to show how to prepare a class of DDEs for solution with DMRODE. You might find it interesting to compare this preparation to what you had to do.
We show how to get output at specific points with Example 5 of the reference, a scalar equation that exhibits chaotic behavior. We solve the equation
on [0,100] with history y(t) = 0.5 for t ≤ 0.
Output from dde23 is not just formally different from that of ode23. dde23 computes an approximate solution S(t) valid throughout the interval of integration. The auxiliary function ddeval evaluates this approximation, and optionally its derivative S′(t), at whatever points you supply. With this form of output, you can solve a DDE just once and then obtain inexpensively as many solution values as you like, anywhere you like. The numerical solution itself is continuous and has a continuous derivative, so you can always get a smooth graph by evaluating it at enough points with ddeval.
The example of the reference plots y(t − 2) against y(t). This is quite a common task in nonlinear dynamics, but we cannot proceed as in Example 1. That is because the entries of sol.x are not equally spaced: if t* appears in sol.x, we have an approximation to y(t*) in sol.y, but generally t* − 2 does not appear in sol.x, so we do not have an approximation to y(t* − 2). ddeval makes such plots easy. In exam2.m we first define an array t of 1000 equally spaced points in [2,100] and obtain solution values at these points with ddeval. We then use ddeval a second time to evaluate the solution at the entries of t-2. In this way we obtain values approximating both y(t) and y(t − 2) for the same t. This might seem like a lot of plot points, but ddeval is just evaluating a piecewise-polynomial function and is coded to take advantage of fast builtin functions and vectorization, so this is not expensive and results in a smooth graph.
Because MATLAB does not distinguish scalars and vectors of one component, the single DDE can be coded just as a scalar ODE would be. The complete program exam2.m computes and plots y(t − 2) against y(t).
Farmer gives plots of various Poincaré sections for the Mackey-Glass equation, a scalar DDE that exhibits chaotic behavior. Reproduce Fig. 2a of the paper by solving
on [0,300] with history y(t) = 0.5 for t ≤ 0 and plotting y(t − 14) against y(t). The figure begins with t = 50 to allow an initial transient time to settle down. To reproduce it, form an array of 1000 equally spaced points in [50,300], evaluate y(t) at these points, and then evaluate y(t − 14).
We show how to set options and deal with parameters by solving Example 4.2 of the reference. The equation
is solved on [0,20] with history y(t) = t for t ≤ 0 for four values of the parameter λ, namely 1.5, 2, 2.5, and 3.
Often default error tolerances are perfectly satisfactory, but here more stringent tolerances are needed for the larger values of λ. Options are set with ddeset exactly as they are set for ode23 with odeset. When options are used, a call to dde23 has the form sol = dde23('exam3f', lags, 'exam3h', tspan, options). Options like relative and absolute error tolerances are the same in the two solvers. In particular, both have a default relative error tolerance of 10^-3 and a default absolute error tolerance of 10^-6. The tolerances imposed for the larger λ in exam3.m are relatively stringent for this solver, but this is a price that must be paid: the solution is unsatisfactory for λ = 3 with default tolerances.
Parameters can always be communicated as global variables, but as is common with MATLAB solvers, they can also be passed through dde23 as arguments following the options argument. For two values of λ we use default tolerances, so we must use an empty array, [], as a placeholder for the options argument. When parameters are passed through dde23, they must appear as arguments of the ddefile and, if present, the history function, even if they are not used. Accordingly, λ appears as a trailing argument of both exam3f.m and exam3h.m.
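The same pattern is natural in other environments. As a hedged illustration (the equation and the names rhs and lam are invented for this sketch, not taken from the tutorial), a Python right-hand side can take the parameter as a trailing argument and have it bound once with functools.partial, so the solver only ever sees a function of the standard arguments:

```python
from functools import partial

def rhs(t, y, ylag, lam):
    """RHS of the scalar test DDE y'(t) = lam*y(t - 1) - y(t).

    lam is a model parameter passed explicitly rather than read
    from a global variable, mirroring dde23's trailing arguments."""
    return lam * ylag - y

# bind lam = 2.5 once; f then has the (t, y, ylag) signature a solver expects
f = partial(rhs, lam=2.5)
value = f(0.0, 1.0, 2.0)   # 2.5 * 2.0 - 1.0 = 4.0
```

Binding the parameter once keeps the solver interface uniform while making it easy to solve the same model for several parameter values in a loop.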
After defining the equation in exam3f.m and the history in exam3h.m, the complete program exam3.m computes and plots the four solutions as in the reference. It has been coded in a very straightforward manner to make clear that we are solving four problems and using different tolerances.
Wheldon’s model of chronic granulocytic leukemia has the form
Code the equations for general values of the parameters to make it easy to experiment with the model. Remember that if you do not set any options, you must use a placeholder of [] for the options argument. Solve the problem on [0,200] with history y1(t) = 100, y2(t) = 100 for t ≤ 0 and parameter values α = 1.1 × 10^10, β = 10^-12, γ = 1.25, δ = 1, λ = 10, μ = 4 × 10^-8, ω = 2.43 that you set in the main program. Compare the solutions you obtain with τ = 7 and τ = 20. You should find that the solution is oscillatory in both cases. In the first, the oscillations are damped quickly and in the second, they are not.
It is often necessary to find when a solution satisfies a certain relation, e.g., when a component has a specific value. An event is said to occur at time t* when a given function g(t, y(t), y(t − τ1), … , y(t − τk)) vanishes. Some problems involve many of these «event functions». This example shows how to use the powerful event location capability of dde23.
Figure 15.6 of the reference displays the solution of an infectious disease model. The equations
are solved on [0,40] with history y1(x) = 5, y2(x) = 0.1, y3(x) = 1 for x ≤ 0. To illustrate event location, we compute the local maxima of all three solution components.
We compute the maxima by finding where the first derivatives vanish. The three event functions come from the DDEs: y1′(x) = −y1(x) y2(x − 1) + y2(x − 10), and so forth. All event functions are evaluated in a single MATLAB function that returns the values as a column vector. The name of this function is passed to the solver as the value of the 'Events' option. For this example we evaluate the three functions in exam4e by a call to exam4f.

Because event location is used for a variety of purposes, we have to tell dde23 more about what we want to do. Sometimes we just want to know that an event has occurred and other times we want to terminate the integration then. We tell the solver about this by returning a vector isterminal from exam4e. To terminate the integration when event function k vanishes, we set component k of isterminal to 1 (true), and otherwise to 0 (false). For this example none of the events is terminal.

There is an annoying matter of some importance: sometimes we want to start an integration with an event function that vanishes at the initial point. Imagine, for example, that we fire a model rocket into the air and we want to know when it hits the ground. It is natural to use the height of the rocket as a terminal event function, but it vanishes at the initial time as well as the final time. dde23 treats an event at the initial point in a special way: the solver locates such an event and reports it, but does not treat it as terminal, no matter how isterminal is set.

The example shows that how an event function vanishes may be important: to distinguish maxima from minima, we want the solver to report that a derivative vanished only when it changes from positive to negative values. This is done using direction. If we are interested only in events for which event function k is increasing through 0, we set component k of direction to +1.
Correspondingly, we set it to −1 if we are interested only in those events for which the event function is decreasing, and 0 if we are interested in all events. Once we understand what information must be provided, it is easy to code the event functions of this example in exam4e.m.
Now that we have discussed how to tell the solver what we want it to do, we have to discuss how it reports what happened. The locations of events are returned as the field sol.xe and the values of the solution at these points are returned as the field sol.ye. If there are no events, sol.xe = []. The field sol.ie reports which event occurred. A value of k indicates that event function k vanished at the corresponding entry of sol.xe.
It is straightforward to code the equations in exam4f.m. With exam4e.m and exam4f.m, it is also straightforward to code the solution of the problem: it occupies the first two lines of the complete program exam4.m. The only complication in this program is separating the various kinds of events. It is not necessary, but perhaps clearer, to introduce local variables for the fields that return the results of the event location. The command n1 = find(ie == 1) finds the indices corresponding to the first event function. These indices allow us to extract the information that y1(x) has its maxima at xe(n1) and its values there are ye(1,n1). The second and third event functions are handled in the same way and then all the results are plotted.
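The mechanics of event location can be sketched independently of MATLAB. The following Python sketch is a simplified stand-in for dde23's root finder (with math.cos as a stand-in event function): it scans a mesh for sign changes, filters them by a direction flag exactly as the direction vector does, and refines each zero by bisection.

```python
import math

def locate_events(g, ts, direction=0, tol=1e-10):
    """Locate zeros of an event function g over mesh points ts.

    direction mimics dde23's 'direction': +1 keeps only rising zeros,
    -1 only falling zeros, 0 keeps all of them."""
    events = []
    for a, b in zip(ts, ts[1:]):
        ga, gb = g(a), g(b)
        if ga * gb > 0:
            continue                          # no sign change in this interval
        if direction > 0 and not (ga < 0 < gb):
            continue                          # not a rising zero
        if direction < 0 and not (ga > 0 > gb):
            continue                          # not a falling zero
        while b - a > tol:                    # bisection refines the zero
            m = 0.5 * (a + b)
            if g(a) * g(m) <= 0:
                b = m
            else:
                a = m
        events.append(0.5 * (a + b))
    return events

mesh = [0.1 * i for i in range(101)]          # mesh covering [0, 10]
all_roots = locate_events(math.cos, mesh)     # near pi/2, 3pi/2, 5pi/2
falling = locate_events(math.cos, mesh, direction=-1)
```

In a real DDE solver the mesh values come from the dense output of the integration, so an event can be located to full accuracy without shortening any steps.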
To gain some experience with event location, try two experiments:
Each can be done by changing only one line in exam4e.m.
ODE and DDE solvers are intended for problems with solutions that have several continuous derivatives. It is not unusual for equations to have different forms in different circumstances, which leads to discontinuities in low-order derivatives of the solution, or even in the solution itself, when the circumstances change. Although a robust solver may be able to produce an acceptable solution, it is better practice to account for the changes and it can be necessary. There are two issues: Do we know in advance where the changes occur? Is the solution itself continuous? In this example we show how to solve problems that have a continuous solution with discontinuities in a low-order derivative at points known in advance. The history is the solution prior to the initial point and its discontinuities must also be taken into account because they propagate into the interval of integration. Discontinuities in the history are handled in the same way, but are a little simpler because discontinuities in the history itself are permitted.
Example 4.4 of the reference is an infection model due to Hoppensteadt and Waltman.
Propagation and Smoothing of Discontinuities
The way discontinuities are propagated by the delays is an important feature of DDEs and has a profound effect on numerical methods for solving them.
In the example above, y is continuous, but there is a jump discontinuity in y′ at t = 0: approaching from the left the value is 0, given by the derivative of the initial history function, while approaching from the right the value is given by the DDE, giving y′(0+) = 1.
Near t = 1, we have y′(t) = y(t − 1); by the continuity of y at t = 0, y′ is continuous at t = 1.
Differentiating the equation, we can conclude that y″(t) = y′(t − 1), so y″ has a jump discontinuity at t = 1. Using essentially the same argument as above, we can conclude that at t = 2 the second derivative is continuous.
Similarly, y^(k) is continuous at t = k or, in other words, at t = k the solution is k times differentiable. This is referred to as smoothing and holds generally for non-neutral delay equations. In some cases the smoothing can be faster than one order per interval. [Z06]
For neutral delay equations the situation is quite different.
It is easy to see that the solution is piecewise polynomial with y itself continuous. However, the first derivative has a jump discontinuity at every non-negative integer.
In general, there is no smoothing of discontinuities for neutral DDEs.
The propagation of discontinuities is very important from the standpoint of numerical solvers. If the possible discontinuity points are ignored, then the order of the solver will be reduced. If a discontinuity point is known, a more accurate solution can be found by integrating just up to the discontinuity point and then restarting the method just past the point with the new function values. This way, the integration method is used on smooth parts of the solution, leading to better accuracy and fewer rejected steps. From any given discontinuity points, future discontinuity points can be determined from the delays and detected by treating them as events to be located.
When there are multiple delays, the propagation of discontinuities can become quite complicated.
It is clear from the plot that there is a discontinuity at each non-negative integer, as would be expected from the neutral delay term. However, looking at the second and third derivative, it is clear that there are also discontinuities at points obtained from sums of the delays, propagated from the jump discontinuities in lower-order derivatives.
In fact, there is a whole tree of discontinuities that are propagated forward in time. A way of determining and displaying the discontinuity tree for a solution interval is shown in the subsection below.
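For constant delays the discontinuity tree is easy to generate. The sketch below is an illustration (not the NDSolve implementation): starting from an initial discontinuity at t = 0, every discontinuity at time s propagates to s + τ for each delay τ, and a breadth-first enumeration collects all potential discontinuity points up to a cutoff.

```python
def discontinuity_tree(delays, t0=0.0, t_max=10.0):
    """All potential discontinuity times in (t0, t_max] for constant delays.

    Values are rounded to avoid floating-point duplicates when different
    combinations of delays land on the same point."""
    seen = {round(t0, 12)}
    frontier = [t0]
    while frontier:
        nxt = []
        for s in frontier:
            for tau in delays:
                t = round(s + tau, 12)
                if t <= t_max and t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return sorted(t for t in seen if t > t0)

single = discontinuity_tree([1.0], t_max=5.0)      # [1.0, 2.0, 3.0, 4.0, 5.0]
pair = discontinuity_tree([1.0, 1.5], t_max=4.0)   # all sums i*1.0 + j*1.5
```

With a single delay the tree degenerates to an arithmetic progression; with two incommensurate-looking delays the points interleave, which is why the tree can become quite dense.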
Storing History Data
Once the solution has advanced beyond the first discontinuity point, some of the delayed values that need to be computed lie beyond the initial history function, so the solution computed so far must be stored with dense output (the representation you get with InterpolationOrder->All in NDSolve). NDSolve has a general algorithm for obtaining dense output from most methods, so you can use just about any method as the integrator. Some methods, including the default for DDEs, have their own way of getting dense output, which is usually more efficient than the general method. Methods that are low enough order, such as "ExplicitRungeKutta" with "DifferenceOrder"->3, can just use a cubic Hermite polynomial as the dense output, so there is essentially no extra cost in keeping the history.
Since the history data is accessed frequently, it needs to have a quick lookup mechanism to determine which step to interpolate within. In NDSolve , this is done with a binary search mechanism and the search time is negligible compared with the cost of actual function evaluation.
The data for each successful step is saved before attempting the next step, and is saved in a data structure that can repeatedly be expanded efficiently. When NDSolve produces the solution, it simply takes this data and restructures it into an InterpolatingFunction object, so DDE solutions are always returned with dense output.
The Method of Steps
For constant delays, it is possible to get the entire set of discontinuities as fixed times. The idea of the method of steps is to simply integrate over the smooth intervals between these discontinuities and restart on the next interval, being sure to reevaluate the function from the right. As long as the intervals do not get too small, the method works quite well in practice.
The method currently implemented for NDSolve is based on the method of steps.
Symbolic Method of Steps
This section defines a symbolic method of steps that illustrates how the method works. Note that to keep the code simpler and more to the point, it does not do any real argument checking. Also, the data structure and lookup for the history is not done in an efficient way, but for symbolic solutions this is a minor issue.
Solve 2nd Order Differential Equations
A differential equation relates some function with the derivatives of the function. Functions typically represent physical quantities and the derivatives represent a rate of change. The differential equation defines a relationship between the quantity and the derivative. Differential equations are very common in fields such as biology, engineering, economics, and physics.
Differential equations may be studied from several different perspectives. Only simple differential equations are solvable by explicit formulas while more complex systems are typically solved with numerical methods. Numerical methods have been developed to determine solutions with a given degree of accuracy.
The order of a differential equation is given by its highest derivative. A first-order differential equation contains only first derivatives. A second-order differential equation has at least one term with a second derivative. Higher-order differential equations are also possible.
Below is an example of a second-order differential equation.
To numerically solve a differential equation with higher-order terms, it can be broken into multiple first-order differential equations as shown below.
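To make the reduction concrete, here is a hedged sketch (the equation y″ = −y with y(0) = 1, y′(0) = 0 is a stand-in chosen for its known solution, not the example referred to above). Writing v = y′ turns the single second-order equation into the first-order system y′ = v, v′ = −y, which a standard RK4 loop can integrate:

```python
import math

def f(t, state):
    """First-order system equivalent to y'' = -y: state = (y, v) with v = y'."""
    y, v = state
    return (v, -y)

def rk4(f, state, t, h, n):
    """Classical fourth-order Runge-Kutta applied to a small system."""
    for _ in range(n):
        k1 = f(t, state)
        k2 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k1)))
        k3 = f(t + h/2, tuple(s + h/2 * k for s, k in zip(state, k2)))
        k4 = f(t + h, tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h/6 * (a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        t += h
    return state

y1, v1 = rk4(f, (1.0, 0.0), 0.0, 0.01, 100)   # integrate to t = 1
# exact solution: y = cos(t), y' = -sin(t)
```

The same reduction works for any higher-order equation: introduce one state variable per derivative below the highest, then solve the resulting first-order system with whichever solver is at hand.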
A numerical solution to this equation can be computed with a variety of different solvers and programming environments. Solution files are available in MATLAB, Python, and Julia below or through a web-interface. Each of these example problems can be easily modified for solutions to other second-order differential equations as well.
Another scenario is when the damping coefficient c = (0.9 + 0.7 t) is not known but must be estimated from data. The value of c is allowed to change every 0.5 seconds. The true and estimated values of c are shown on the plot below. Predicted and actual values of y are in agreement even though the estimate is not continuous but only changes at discrete time points.
Delay differential equations differ from ordinary differential equations in that the derivative at any time depends on the solution (and in the case of neutral equations on the derivative) at prior times. The simplest constant delay equations have the form \[\tag{1} y'(t) = f(t, y(t), y(t-\tau_1), y(t-\tau_2),\ldots, y(t-\tau_k)) \]
where the time delays (lags) \( \tau_j \) are positive constants. More generally, state dependent delays may depend on the solution, that is \( \tau_i = \tau_i (t,y(t)) \ .\)
Systems of delay differential equations now occupy a place of central importance in all areas of science and particularly in the biological sciences (e.g., population dynamics and epidemiology). Baker, Paul, & Willé (1995) contains references for several application areas.
Interest in such systems often arises when traditional pointwise modeling assumptions are replaced by more realistic distributed assumptions, for example, when the birth rate of predators is affected by prior levels of predators or prey rather than by only the current levels in a predator-prey model. The manner in which the properties of systems of delay differential equations differ from those of systems of ordinary differential equations has been and remains an active area of research; see Martin & Ruan (2001) and Raghothama & Narayanan (2002) for typical examples of such studies. See also Shampine, Gladwell, and Thompson (2003) for a description of several common models.
Initial History Function
Additional information is required to specify a system of delay differential equations. Because the derivative in (1) depends on the solution at the previous time \( t - \tau_j \ ,\) it is necessary to provide an initial history function that specifies the solution before the initial time.
In most models, the delay differential equation and the initial history are incompatible: for some derivative order, usually the first, the left and right derivatives are not equal. For example, the simple model \( y'(t) = y(t-1) \) with constant history \( y(t) = 1 \) has the property that \( y'(0^{+}) = 1 \ne y'(0^{-}) = 0 \ .\)
One of the most fascinating properties of delay differential equations is the manner in which such derivative discontinuities are propagated in time. For the equation and history just described, for example, the initial first-derivative discontinuity is propagated as a second degree discontinuity at time \( t = 1 \ ,\) as a third degree discontinuity at time \( t = 2 \ ,\) and, more generally, as a discontinuity in the \( (n+1)^{st} \) derivative at time \( t = n \ .\)
Neves & Feldstein (1976) characterized the tree of derivative discontinuity times for state dependent delay differential equations as the zeroes with odd multiplicity of equations \[\tag{2} t - \tau_i (t,y(t)) - T = 0 \]
where \( T \) is the initial time or any later discontinuity time.
Several of the solvers discussed in the next section use explicit Runge-Kutta methods to integrate systems of delay differential equations. An important question in this case is that of interpolation. Unlike ordinary differential equation solvers that are based on linear multistep methods possessing natural extensions, early Runge-Kutta solvers did not incorporate interpolation; rather they stepped exactly to the next output point instead of stepping beyond it and obtaining interpolated solutions. Interest in obtaining dense output without limiting the step size in this fashion, together with the desire to incorporate root finding, led to the development of Runge-Kutta methods endowed with suitable interpolants. Interpolation is handled in one of two ways in modern Runge-Kutta solvers: Hermite interpolation and continuously embedded methods. For example, the solver dde23, which is based on a third order Runge-Kutta method, uses Hermite interpolation of the old and new solution and derivative to obtain an accurate interpolant. By way of contrast, the solver dde_solver uses a sixth order Runge-Kutta method based on a continuously embedded \( C^1 \) interpolant derived from the same derivative approximations used by the basic method. In addition to providing accurate and efficient solutions, either type of interpolant can be used in conjunction with a root finder to locate derivative discontinuity times.
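The Hermite form of dense output is simple to write down. The sketch below illustrates the interpolation idea (it is not dde23's exact code): it evaluates the cubic that matches the solution values and slopes at both ends of a step. Because the interpolant reproduces any cubic exactly, a third order method loses no accuracy when its history is stored this way.

```python
def cubic_hermite(t, t0, t1, y0, y1, f0, f1):
    """Cubic Hermite interpolant on [t0, t1].

    Matches the values y0, y1 and the slopes f0, f1 at the two endpoints,
    using the standard Hermite basis functions in the local variable s."""
    h = t1 - t0
    s = (t - t0) / h
    h00 = (1 + 2*s) * (1 - s)**2
    h10 = s * (1 - s)**2
    h01 = s*s * (3 - 2*s)
    h11 = s*s * (s - 1)
    return h00*y0 + h10*h*f0 + h01*y1 + h11*h*f1

# check against p(t) = t^3 on the step [0, 2]: values 0, 8 and slopes 0, 12
mid = cubic_hermite(1.0, 0.0, 2.0, 0.0, 8.0, 0.0, 12.0)   # exactly p(1) = 1
```

A DDE solver stores (value, slope) pairs for each accepted step and evaluates this polynomial whenever a delayed argument falls inside that step.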
Available Delay Differential Equation Software
A number of issues must be taken into account by software for delay differential equations. Baker, Paul, & Willé (1995), Shampine & Thompson (2001), and Thompson & Shampine (2006) discuss the various issues. The well known dmrode solver (Neves (1975)) was the first effective software for delay differential equations. Many of the central ideas on which this solver was based were used in later f77 solvers dklag5 (Neves & Thompson (1992)) and dklag6 (Corwin, Sarafyan, and Thompson (1997)), and the Fortran 90/95 dde_solver (Thompson & Shampine (2006)). Although the state of the art for numerical software for delay differential equations is not as advanced as that for ordinary differential equation software, several high quality solvers have recently been developed. The effectiveness of the software is determined in large part by the manner in which propagated derivative discontinuities are handled. Some delay differential equation solvers such as those in Paul (1995), and Thompson & Shampine (2006) explicitly track and locate the zeroes of (2) and include them as integration mesh points. Different approaches are used in other software. For example, the ddverk solver (Enright & Hayashi (1997)) uses derivative defect error control to implicitly locate discontinuity times. It then uses special interpolants to step across the discontinuities. The ddesd solver (Shampine (2005)) uses residual error control to avoid the use of embedded local error estimates near discontinuity times.
Effective delay differential equation software must deal with other difficulties peculiar to systems of delay differential equations. Early software, for example, limited the step sizes used to be no larger than the smallest delay. But small delays are encountered in many problems; and this artificial restriction on the step size can have a drastic effect on the efficiency of a solver. Most of the solvers mentioned above are based on pairs of explicit continuously embedded Runge-Kutta methods (Shampine (1994)). When the step size exceeds a delay, the underlying interpolation polynomials are iterated in a manner somewhat akin to a predictor-corrector iteration for linear multistep methods. Refer to Baker & Paul (1996), Baker, Paul, & Willé (1995), Enright & Hayashi (1998), and Shampine & Thompson (2001) for details of various aspects of this issue.
The solvers dde23, ddesd, and dde_solver contain a very useful provision for finding zeroes of event functions (Shampine (1994)) that depend on the solution. In addition to solving a system of delay differential equations, they simultaneously locate zeroes of state dependent functions \( g(t, y(t)) = 0 \). Such special events may signal problem changes requiring integration restarts. The use of event functions is illustrated in the next section.
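The basic mechanics of event location can be sketched in a few lines. The fragment below is an illustrative toy, not the algorithm used by dde23 or dde_solver: it integrates \( x' = -x \), \( x(0) = 1 \), with Euler's method and locates the zero of the event function \( g(x) = x - 1/2 \) by linear interpolation between the two steps that bracket the sign change. Production solvers do the same bracketing but root-solve on the method's continuous extension instead of a chord:

```python
# Toy event location: find the time where g(x) = x - 0.5 crosses zero
# along the Euler solution of x' = -x, x(0) = 1 (exact answer: ln 2).
def locate_event(h=1e-4):
    t, x = 0.0, 1.0
    g = lambda x: x - 0.5
    while True:
        t_new, x_new = t + h, x + h * (-x)      # one Euler step
        if g(x) > 0 >= g(x_new):                # event bracketed by this step
            frac = g(x) / (g(x) - g(x_new))     # linear interpolation in the step
            return t + frac * h
        t, x = t_new, x_new

t_event = locate_event()
```

The located time is accurate to the order of the integration error, which is why accurate event location in real solvers depends on the quality of the method's interpolant.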
Although much recent delay differential equation software utilizes explicit continuously embedded Runge-Kutta methods, software based on other methods has been developed. For example, Jackiewicz & Lo (2006) and Willé & Baker (1992) utilize generalized Adams linear multistep methods, and the radar5 solver (http://www.unige.ch/~hairer/software.html) is based on collocation methods. Another well known and widely used program with the ability to solve delay differential equations is the xppaut program (Ermentrout (2002)). The use of software based on a class of general linear methods (diagonally implicit multistage integration methods) is discussed in Hoppensteadt & Jackiewicz (2006) in conjunction with the problem considered in the next section. Bellen & Zennaro (2003) discuss the commonly used methods for delay differential equations in considerable detail.
Hoppensteadt & Jackiewicz (2006) investigated a model which generalizes previously studied models for infectious diseases. Solving this model requires the determination of a threshold time at which the accumulated dosage of infection reaches a prescribed level. Once this time is determined, the relevant equations may be integrated to obtain the desired solution. The minimum threshold time \( t_0 \) is the unique value for which
\[ \int_0^{t_0} \rho(x)\, I_0(x)\, dx = m . \]
For Example 1 of the reference, the relevant variables and functions are given by \( m = 0.1 \), \( \sigma = 1 \), \( S_0 = 10 \), \( \rho(t) = 1 \), and \( r(t) = r_0 \), with \( I_0(t) \) a given piecewise-defined function.
DDEs are mostly solved in a stepwise fashion with a principle called the method of steps. For instance, consider the DDE with a single delay
\[ \frac{d}{dt} x(t) = f(x(t), x(t-\tau)) \]
with given initial condition \( x(t) = \phi(t) \) for \( t \in [-\tau, 0] \). Then the solution on the interval \( [0, \tau] \) is given by \( \psi(t) \), which is the solution to the inhomogeneous initial value problem
\[ \frac{d}{dt} \psi(t) = f(\psi(t), \phi(t-\tau)) , \]
with \( \psi(0) = \phi(0) \). This can be continued for the successive intervals by using the solution to the previous interval as the inhomogeneous term. In practice, the initial value problem is often solved numerically.
Suppose \( f(x(t), x(t-\tau)) = a\,x(t-\tau) \) and \( \phi(t) = 1 \). On \( [0, \tau] \) the history gives \( x(t-\tau) = \phi(t-\tau) = 1 \), so the initial value problem can be solved by direct integration,
\[ x(t) = a t + C , \]
where we pick \( C = 1 \) to fit the initial condition \( x(0) = \phi(0) = 1 \), i.e., \( x(t) = a t + 1 \) on \( [0, \tau] \). Similarly, on the interval \( [\tau, 2\tau] \) we have \( x(t-\tau) = a(t-\tau) + 1 \); integrating and fitting the continuity condition \( x(\tau) = a\tau + 1 \) yields
\[ x(t) = 1 + a t + \frac{a^2 (t-\tau)^2}{2} . \]
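The method of steps is also easy to implement numerically for this example. The sketch below (hypothetical helper name, forward Euler within each interval) advances one delay interval at a time, using the stored values from the previous interval wherever \( x(t-\tau) \) is needed, and reproduces the closed-form values \( x(\tau) = a\tau + 1 \) and \( x(2\tau) = 1 + 2a\tau + a^2\tau^2/2 \):

```python
def solve_dde_steps(a, tau, n_intervals, steps_per_interval=10_000):
    """Method of steps for x'(t) = a*x(t - tau) with history phi(t) = 1."""
    h = tau / steps_per_interval
    # grid values of x on the previous interval (initially: the history on [-tau, 0])
    prev = [1.0] * (steps_per_interval + 1)
    endpoints = [prev[-1]]               # x at t = 0, tau, 2*tau, ...
    for _ in range(n_intervals):
        cur = [prev[-1]]                 # continuity with the previous interval
        for i in range(steps_per_interval):
            # Euler step: x(t - tau) is the already-known value one interval back
            cur.append(cur[-1] + h * a * prev[i])
        prev = cur
        endpoints.append(cur[-1])
    return endpoints

vals = solve_dde_steps(a=1.0, tau=1.0, n_intervals=2)
# analytic values for a = tau = 1: x(1) = 2, x(2) = 3.5 (Euler is O(h) accurate)
```

Each pass of the outer loop is an ordinary initial value problem, exactly as in the derivation above; production DDE solvers replace Euler with an embedded Runge-Kutta pair and an interpolant.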
Reduction to ODE
In some cases, delay differential equations are equivalent to a system of ordinary differential equations.
For example, consider the equation
\[ \frac{d}{dt} x(t) = f\!\left(t, x(t), \int_{-\infty}^{0} x(t+\tau)\, e^{\lambda \tau}\, d\tau \right). \]
Introduce \( y(t) = \int_{-\infty}^{0} x(t+\tau)\, e^{\lambda \tau}\, d\tau \) to get a system of ODEs
\[ \dot{x} = f(t, x, y), \qquad \dot{y} = x - \lambda y . \]
Similarly, the equation
\[ \frac{d}{dt} x(t) = f\!\left(t, x(t), \int_{-\infty}^{0} x(t+\tau)\, \cos(\alpha \tau + \beta)\, d\tau \right) \]
is equivalent to
\[ \dot{x} = f(t, x, y), \qquad \dot{y} = x \cos\beta + \alpha z, \qquad \dot{z} = x \sin\beta - \alpha y , \]
where
\[ y(t) = \int_{-\infty}^{0} x(t+\tau) \cos(\alpha\tau+\beta)\, d\tau, \qquad z(t) = \int_{-\infty}^{0} x(t+\tau) \sin(\alpha\tau+\beta)\, d\tau . \]
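A quick numerical sanity check of the exponential-kernel reduction (a toy illustration with an assumed trajectory, not a general proof): if \( x(t) = e^t \) for all \( t \), the distributed term evaluates in closed form to \( y(t) = e^t/(1+\lambda) \), and a finite difference confirms that this \( y \) satisfies \( \dot{y} = x - \lambda y \):

```python
import math

lam = 2.0
x = lambda t: math.exp(t)                # assumed trajectory for all t
# integral of x(t+s) * e^{lam*s} over s in (-inf, 0], evaluated in closed form:
y = lambda t: math.exp(t) / (1 + lam)

t0, eps = 0.7, 1e-6
dy = (y(t0 + eps) - y(t0 - eps)) / (2 * eps)   # central difference for y'(t0)
residual = dy - (x(t0) - lam * y(t0))          # should vanish if y' = x - lam*y
```

The same substitution-and-differentiation argument is what produces the three-equation system for the cosine kernel.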
The characteristic equation
Similar to ODEs, many properties of linear DDEs can be characterized and analyzed using the characteristic equation. The characteristic equation associated with the linear DDE with discrete delays
\[ \frac{d}{dt} x(t) = A_0 x(t) + A_1 x(t - \tau_1) + \cdots + A_m x(t - \tau_m) \]
is
\[ \det\!\left( -\lambda I + A_0 + A_1 e^{-\tau_1 \lambda} + \cdots + A_m e^{-\tau_m \lambda} \right) = 0 . \]
The roots λ of the characteristic equation are called characteristic roots or eigenvalues, and the solution set is often referred to as the spectrum. Because of the exponential in the characteristic equation, the DDE has, unlike the ODE case, an infinite number of eigenvalues, making a spectral analysis more involved. The spectrum does however have some properties which can be exploited in the analysis. For instance, even though there are an infinite number of eigenvalues, there are only a finite number of eigenvalues to the right of any vertical line in the complex plane.
This characteristic equation is a nonlinear eigenproblem, and there are many methods to compute the spectrum numerically. In some special situations it is possible to solve the characteristic equation explicitly. Consider, for example, the following DDE:
\[ \frac{d}{dt} x(t) = -x(t-1) . \]
The characteristic equation is
\[ -\lambda - e^{-\lambda} = 0 . \]
There are an infinite number of solutions to this equation for complex λ. They are given by
\[ \lambda = W_k(-1) , \]
where \( W_k \) is the \( k \)-th branch of the Lambert W function.
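Individual characteristic roots are easy to find numerically. The sketch below (an illustrative Newton iteration with a hypothetical starting guess, not one of the dedicated spectral methods) solves \( -\lambda - e^{-\lambda} = 0 \) in the complex plane and lands on the principal root \( W_0(-1) \approx -0.318 + 1.337\,i \); starting points further up the imaginary axis converge to roots on the other branches:

```python
import cmath

# Newton's method on F(lam) = lam + e^{-lam}, which has the same zeros
# as the characteristic equation -lam - e^{-lam} = 0.
def char_root(start, iters=50):
    lam = start
    for _ in range(iters):
        f = lam + cmath.exp(-lam)
        df = 1 - cmath.exp(-lam)
        lam -= f / df
    return lam

root = char_root(-0.3 + 1.3j)                  # guess near the principal root
residual = abs(root + cmath.exp(-root))        # |F(root)|, should be ~ 0
# root satisfies lam*e^{lam} = -1, i.e. lam = W_0(-1)
```

Since the real part of this root (and of every other root) is negative, the zero solution of \( \dot{x}(t) = -x(t-1) \) is asymptotically stable, consistent with the vertical-line property noted above.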