# Introduction to Runge-Kutta-Nyström Methods

Alternative Title: How to turn any Runge-Kutta method into a Runge-Kutta-Nyström algorithm.

Update: If you’re going to use this algorithm, I highly recommend making the modifications in this post to reduce cumulative error by another factor of h.

Differential equations describe everything from how populations grow with time to how waves flow through the ocean. While a few have nice analytic expressions, many do not – think the three-body problem or the Navier-Stokes equations. This isn’t a matter of the algebra being difficult – for many cases it can be shown that no such “nice” solution exists. In these cases we turn to computers and numerical methods.

Runge-Kutta methods are a family of numerical methods for solving first-order differential equations:
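In standard notation (the display equation is missing here; this is my reconstruction of the usual form), that problem is:

```latex
y' = f(t, y), \qquad y(t_0) = y_0
```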

They offer startling performance and quality gains over the Euler Method. A fourth-order Runge-Kutta method can easily produce solutions many orders of magnitude more precise, while requiring a fraction of a percent of the computing power the Euler Method demands for comparable accuracy.

However, Runge-Kutta methods have several limitations for studying complex systems:

1. They are limited to first-order differential equations.
2. They produce solutions for differential equations of a single variable.

I’m going to tackle the first of these two problems with a different family of solvers – Runge-Kutta-Nyström (RKN) methods.

Runge-Kutta-Nyström methods get a single sentence on Wikipedia – a mere 20-word side note in an article of over 3,000 words. Google Scholar turns up several thousand articles on RKN methods, but most are specialized and offer little explanation of the basics, such as how to implement or use them. While specialized RKN methods are very useful in the use cases they were developed for, they are not illustrative.

Fortunately, we don’t need to perform (as much of) the work required to develop RKN methods – we can take any existing Runge-Kutta method and adapt it to be an RKN method.

Let’s begin with a well-known Runge-Kutta method – RK4. However, we’ll write it as if we were solving a differential equation for y” in terms of y’. We want to produce an algorithm which solves this equation:
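That target equation, reconstructed from the description above, is a second-order equation in terms of y' and y, with initial conditions for both:

```latex
y'' = f(t, y', y), \qquad y(t_0) = y_0, \quad y'(t_0) = y'_0
```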

The modification for the expression for k_1 is straightforward – expand it to include the initial value of y:
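Reconstructing the expression described, the first stage becomes:

```latex
k_1 = f(t_n, y'_n, y_n)
```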

Now we define y'_1, our estimate for y' at the half step, then use the trapezoidal rule to extrapolate a corresponding estimate for y; both feed into the computation of k_2.
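One consistent reading of this step (my reconstruction, applying the trapezoidal rule over the half step):

```latex
\begin{aligned}
y'_1 &= y'_n + \tfrac{h}{2}\, k_1 \\
y_1  &= y_n + \tfrac{h}{4}\left(y'_n + y'_1\right) \\
k_2  &= f\!\left(t_n + \tfrac{h}{2},\; y'_1,\; y_1\right)
\end{aligned}
```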

Similarly for k_3:
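Reconstructing the third stage by the same pattern:

```latex
\begin{aligned}
y'_2 &= y'_n + \tfrac{h}{2}\, k_2 \\
y_2  &= y_n + \tfrac{h}{4}\left(y'_n + y'_2\right) \\
k_3  &= f\!\left(t_n + \tfrac{h}{2},\; y'_2,\; y_2\right)
\end{aligned}
```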

And finishing the pattern, we get
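The final stage, now taken over the full step (again my reconstruction):

```latex
\begin{aligned}
y'_3 &= y'_n + h\, k_3 \\
y_3  &= y_n + \tfrac{h}{2}\left(y'_n + y'_3\right) \\
k_4  &= f\!\left(t_n + h,\; y'_3,\; y_3\right)
\end{aligned}
```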

And then plug in as we would for RK4 to produce our results:
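With the RK4 weighting applied to the k-values for y' and to the stage estimates of y' for y, the update reads (reconstructed):

```latex
\begin{aligned}
y'_{n+1} &= y'_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) \\
y_{n+1}  &= y_n + \tfrac{h}{6}\left(y'_n + 2y'_1 + 2y'_2 + y'_3\right)
\end{aligned}
```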

Importantly, note that we use the RK4 weighting when estimating both y' and y – the trapezoidal rule was only used for calculating our k-values.

That’s it. This RKN4 algorithm produces results with cumulative O(h^3) error instead of the normal O(h^4) we expect for RK4, but this is unsurprising as we’re integrating slope, which has step-wise O(h^4) error.

The following table records the decrease in cumulative error while evaluating y”(t) = 0.5 y'(t) + 0.5 y(t) with initial conditions y'(0) = y(0) = 1 and t_f = 1.0.

Performance-wise, for trivial differential equations RKN4 requires about 50% more compute time than a similar first-order equation evaluated with RK4. As RKN4 requires the same number of function evaluations as RK4, this difference is smaller for more involved equations.

I’ve published the code I developed for this blog post on GitHub. My results can be recreated by checking out tag `v0.1.2` and running `go run cmd/validate/validate.go`.