Improve your Runge-Kutta-Nyström algorithms with this one weird trick

In my last post, I discussed Runge-Kutta-Nyström (RKN) methods and a generic strategy for turning any Runge-Kutta method into an RKN method. The focus was on the simplest way to do so rather than the best way.

Today I found a way of improving the cumulative error from O(h^3) to O(h^4), with very little additional compute required.

Figure: A Poincaré map of a Duffing oscillator simulated with the algorithm in this post.

Let’s examine a simplification I glossed over. When calculating k_2, we know y’_n and y’_1, and we get an estimate for y_1 by integrating the polynomial f which meets the conditions f(0) = y’_n and f(t_f) = y’_1, where t_f = h/2 is the length of the half step. The lowest-degree such polynomial is a line, and integrating it is just the trapezoidal rule:

y_1 = y_n + (t_f / 2) (y’_n + y’_1)

This uses two obvious pieces of information: we know the slope of y at the beginning and at the end of each interval over which we’re estimating the change in y. However, it ignores that we also know the acceleration of y at the start of the interval. That is, we have a third condition:

f’(0) = y’’_n = k_1

Thus, we need a quadratic to represent the general function which meets all three criteria:

f(t) = y’_n + k_1 t + ((y’_1 - y’_n - k_1 t_f) / t_f^2) t^2

The constant and linear terms satisfy the two conditions at t = 0, and the t^2 coefficient is fixed by requiring f(t_f) = y’_1.

Integrating over [0, t_f] gives:

y_1 = y_n + (t_f / 6) (4 y’_n + 2 y’_1 + k_1 t_f)
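As a quick sanity check: if the known acceleration happens to equal the secant slope, k_1 = (y’_1 - y’_n) / t_f, this reduces to y_1 = y_n + (t_f / 6) (3 y’_n + 3 y’_1) = y_n + (t_f / 2) (y’_n + y’_1), recovering the trapezoidal rule.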

So all we need to do is replace our use of the trapezoidal rule with this quadratic rule. Recall that, by definition, k_1 = y’’_n, so it appears in every y_i estimate. Writing y’’(t, y, y’) for the right-hand side of the differential equation, the k_2 stage, with t_f = h/2, becomes:

y’_1 = y’_n + (h/2) k_1
y_1 = y_n + (h/12) (4 y’_n + 2 y’_1 + (h/2) k_1)
k_2 = y’’(t_n + h/2, y_1, y’_1)

We repeat this pattern to calculate k_3:

y’_2 = y’_n + (h/2) k_2
y_2 = y_n + (h/12) (4 y’_n + 2 y’_2 + (h/2) k_1)
k_3 = y’’(t_n + h/2, y_2, y’_2)

and k_4, now with a full step t_f = h:

y’_3 = y’_n + h k_3
y_3 = y_n + (h/6) (4 y’_n + 2 y’_3 + h k_1)
k_4 = y’’(t_n + h, y_3, y’_3)

Then, we finish our estimates of y_n+1 and y’_n+1 as before:

y_n+1 = y_n + (h/6) (y’_n + 2 y’_1 + 2 y’_2 + y’_3)
y’_n+1 = y’_n + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4)
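To make the full step concrete, here is a minimal sketch in Go of one step of the modified method, for equations of the form y’’ = f(t, y, y’). The names here (Func2nd, quadEstimate, Step) are illustrative only, not the actual API of my diffeq-go repository:

```go
package rkn

// Func2nd is the right-hand side of a second-order ODE y'' = f(t, y, y').
type Func2nd func(t, y, yp float64) float64

// quadEstimate returns the estimated change in y over a substep of length tf,
// by integrating the quadratic with initial slope yp0, final slope yp1, and
// initial acceleration k1: (tf/6) * (4*yp0 + 2*yp1 + k1*tf).
func quadEstimate(yp0, yp1, k1, tf float64) float64 {
	return tf / 6 * (4*yp0 + 2*yp1 + k1*tf)
}

// Step advances (y, y') from t to t+h with one step of the RK4-based
// Runge-Kutta-Nyström method, using the quadratic rule for the y estimates.
func Step(f Func2nd, t, y, yp, h float64) (yNext, ypNext float64) {
	k1 := f(t, y, yp)

	yp1 := yp + h/2*k1
	y1 := y + quadEstimate(yp, yp1, k1, h/2)
	k2 := f(t+h/2, y1, yp1)

	yp2 := yp + h/2*k2
	y2 := y + quadEstimate(yp, yp2, k1, h/2)
	k3 := f(t+h/2, y2, yp2)

	yp3 := yp + h*k3
	y3 := y + quadEstimate(yp, yp3, k1, h)
	k4 := f(t+h, y3, yp3)

	yNext = y + h/6*(yp+2*yp1+2*yp2+yp3)
	ypNext = yp + h/6*(k1+2*k2+2*k3+k4)
	return yNext, ypNext
}
```

Note that k_1 is computed once and reused by every quadEstimate call, which is why the extra cost over the trapezoidal version is so small.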

The impact on the accuracy of RKN algorithms is staggering. Using the same differential equation as in the last post, I’ve re-run the calculation of cumulative error. In the table below, “Before” denotes the algorithm from the previous post, with the trapezoidal rule, and “After” is the same algorithm with the quadratic rule.

Steps | Before Error(y) | After Error(y) | Before Error(y') | After Error(y')
------+-----------------+----------------+------------------+----------------
    1 | 1.484 E-3       | 3.438 E-3      | 1.593 E-2        | 4.681 E-4
   10 | 7.797 E-6       | 3.529 E-7      | 2.282 E-5        | 7.484 E-8
  100 | 8.967 E-9       | 3.486 E-11     | 2.366 E-8        | 7.914 E-12
 1000 | 9.086 E-12      | 2.664 E-15     | 2.375 E-11       | 5.773 E-15
Note that with errors on the order of 10^-15, we’re approaching the limit of 64-bit floating-point numbers’ ability to distinguish nearby values.
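To see that limit concretely, here’s a tiny self-contained Go check of the spacing between adjacent float64 values near 1:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// The gap between 1.0 and the next representable float64 is about
	// 2.22e-16, so absolute errors of a few 1e-15 are only a handful of
	// representable values away from the exact answer.
	fmt.Println(math.Nextafter(1, 2) - 1) // 2.220446049250313e-16

	x := 1.0
	fmt.Println(x+1e-16 == x) // true: 1e-16 is lost entirely at this scale
}
```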

We’ve improved the cumulative error of the algorithm from O(h^3) to O(h^4) – an entire factor of h! The additional performance cost is tiny – at most a 10% increase in runtime for trivial second-order differential equations – so this modification to the original algorithm should essentially always be worth it.
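As a rough sketch of how to check the convergence order yourself, the test below drives the Step function from the sketch above (assumed to sit in the same hypothetical package; run with `go test -v`). The problem y’’ = -y with y(0) = 0, y’(0) = 1, integrated to t = 1, is purely a stand-in, since the post’s actual test equation is the one from the previous post; its exact solution is y = sin(t), y’ = cos(t). For the real numbers, use the validation command below.

```go
package rkn

import (
	"fmt"
	"math"
	"testing"
)

// TestConvergence runs Step on the stand-in problem y'' = -y, y(0) = 0,
// y'(0) = 1, integrating to t = 1, where the exact solution is
// y = sin(1), y' = cos(1).
func TestConvergence(t *testing.T) {
	f := func(_, y, _ float64) float64 { return -y }

	for _, n := range []int{1, 10, 100, 1000} {
		h := 1.0 / float64(n)
		x, y, yp := 0.0, 0.0, 1.0 // x is the current time
		for i := 0; i < n; i++ {
			y, yp = Step(f, x, y, yp, h)
			x += h
		}
		fmt.Printf("steps=%4d  err(y)=%.3e  err(y')=%.3e\n",
			n, math.Abs(y-math.Sin(1)), math.Abs(yp-math.Cos(1)))
	}
}
```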

You can recreate the results of this blog post by checking out v0.1.3 of my diffeq-go repository and running:

diffeq-go$ go run cmd/validate/validate.go

3 Comments

  1. John M says:

    I am not sure where y_4 in equation (30) comes from; does equation (30) have a typo, and should it be `y(t+h) = y_0 + \frac h 6 (y’_0 + 2y’_1 + 2y’_2 + y’_3)` instead (as seems to be the case in your Go implementation)?

    1. Will Beason says:

      Great catch, and you’re correct! That’s what I get for copy+pasting the equation above it, and not properly modifying its contents.
