In my last post, I discussed Runge-Kutta-Nyström (RKN) methods and a generic strategy for turning any Runge-Kutta method into an RKN method. The focus was on the simplest way to do so rather than the best way.
Today I found that there’s a way of improving the cumulative error from O(h^3) to O(h^4) with very little additional compute required.

Let’s examine a simplification I glossed over: when calculating k_2, we know y’_n and y’_1. We can estimate y_1 by integrating the unique linear function which meets the conditions f(0) = y’_n and f(t_f) = y’_1, where t_f is the length of the sub-interval. This is just the trapezoidal rule:

$$y_1 \approx y_n + \frac{t_f}{2}\left(y'_n + y'_1\right)$$
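For comparison with what follows, here’s a minimal sketch of the trapezoidal version as a Go helper (the name trapezoidalRule is my own illustration, not something from diffeq-go):

```go
package rkn

// trapezoidalRule estimates the change in y over [0, tf] as the area
// under the straight line joining the slope at the start (dy0) and the
// estimated slope at the end (dy1).
func trapezoidalRule(tf, dy0, dy1 float64) float64 {
	return tf / 2 * (dy0 + dy1)
}
```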

The trapezoidal rule takes two obvious pieces of information: we know the slope of y at the beginning and end of each interval over which we’re estimating the change in y. However, it ignores the fact that we also know the acceleration of y at the start of the interval. That is, we have a third initial condition:

$$f'(0) = y''_n$$

Thus, we need a quadratic to represent the general function which meets all three of these criteria:

$$f(t) = y'_n + y''_n t + \frac{y'_1 - y'_n - y''_n t_f}{t_f^2} t^2$$

Integrating gives:

$$\int_0^{t_f} f(t)\,dt = \frac{t_f}{3}\left(2 y'_n + y'_1\right) + \frac{t_f^2}{6} y''_n$$

so that our improved estimate is:

$$y_1 \approx y_n + \frac{t_f}{3}\left(2 y'_n + y'_1\right) + \frac{t_f^2}{6} y''_n$$

So all we need to do is replace our use of the trapezoidal rule with this quadratic rule. Recall that by definition, k_1 = y’’_n, the acceleration at the start of the step, so it appears in every y_i estimate.
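As a concrete sketch, the quadratic rule is also a one-liner in Go; as with trapezoidalRule above, the name quadraticRule and its signature are my own, not taken from diffeq-go:

```go
package rkn

// quadraticRule estimates the change in y over [0, tf] by integrating
// the unique quadratic f with f(0) = dy0 (slope at the start),
// f(tf) = dy1 (estimated slope at the end), and f'(0) = ddy0 (the
// acceleration at the start of the step, i.e. k_1).
func quadraticRule(tf, dy0, dy1, ddy0 float64) float64 {
	return tf/3*(2*dy0+dy1) + tf*tf/6*ddy0
}
```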

We repeat this pattern to calculate k_3, with y’_2 now at the far end of the interval:

$$y'_2 = y'_n + \frac{h}{2} k_2$$

$$y_2 \approx y_n + \frac{h}{6}\left(2 y'_n + y'_2\right) + \frac{h^2}{24} k_1$$

$$k_3 = f\left(t_n + \frac{h}{2},\ y_2,\ y'_2\right)$$

and k_4, where the interval is now the full step, t_f = h:

$$y'_3 = y'_n + h k_3$$

$$y_3 \approx y_n + \frac{h}{3}\left(2 y'_n + y'_3\right) + \frac{h^2}{6} k_1$$

$$k_4 = f\left(t_n + h,\ y_3,\ y'_3\right)$$

Then, we finish our estimation of y_n+1 and y’_n+1 as before:

$$y_{n+1} = y_n + \frac{h}{6}\left(y'_n + 2 y'_1 + 2 y'_2 + y'_3\right)$$

$$y'_{n+1} = y'_n + \frac{h}{6}\left(k_1 + 2 k_2 + 2 k_3 + k_4\right)$$
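Putting the stages together, one full step might look like the following Go sketch. This is my own illustrative reconstruction of the scheme described above, not the actual API of diffeq-go; it assumes an equation of the form y'' = f(t, y, y') and reuses the hypothetical quadraticRule helper from earlier:

```go
package main

import (
	"fmt"
	"math"
)

// quadraticRule estimates the change in y over [0, tf]; see above.
func quadraticRule(tf, dy0, dy1, ddy0 float64) float64 {
	return tf/3*(2*dy0+dy1) + tf*tf/6*ddy0
}

// step advances y'' = f(t, y, dy) by one step of size h, using the
// quadratic rule for each intermediate y estimate, and returns the new
// (y, y'). Note that k1, the acceleration at the start of the step,
// feeds into every y estimate.
func step(f func(t, y, dy float64) float64, t, y, dy, h float64) (float64, float64) {
	k1 := f(t, y, dy)

	dy1 := dy + h/2*k1
	y1 := y + quadraticRule(h/2, dy, dy1, k1)
	k2 := f(t+h/2, y1, dy1)

	dy2 := dy + h/2*k2
	y2 := y + quadraticRule(h/2, dy, dy2, k1)
	k3 := f(t+h/2, y2, dy2)

	dy3 := dy + h*k3
	y3 := y + quadraticRule(h, dy, dy3, k1)
	k4 := f(t+h, y3, dy3)

	yNext := y + h/6*(dy+2*dy1+2*dy2+dy3)
	dyNext := dy + h/6*(k1+2*k2+2*k3+k4)
	return yNext, dyNext
}

func main() {
	// Demo on y'' = -y with y(0) = 0, y'(0) = 1, whose exact
	// solution is y(t) = sin(t).
	f := func(t, y, dy float64) float64 { return -y }
	const n = 100
	t, y, dy := 0.0, 0.0, 1.0
	h := 1.0 / n
	for i := 0; i < n; i++ {
		y, dy = step(f, t, y, dy, h)
		t += h
	}
	fmt.Printf("y(1) = %v (exact %v)\n", y, math.Sin(1))
}
```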

The impact on the accuracy of RKN algorithms is staggering. Using the same differential equation as in the last post, I’ve re-run the calculation of cumulative error. In the table below, “Before” denotes the algorithm from the previous post, with the trapezoidal rule, and “After” is the same algorithm with the quadratic rule.
| Steps | Before Error(y) | After Error(y) | Before Error(y’) | After Error(y’) |
|-------|-----------------|----------------|------------------|-----------------|
| 1     | 1.484E-3        | 3.438E-3       | 1.593E-2         | 4.681E-4        |
| 10    | 7.797E-6        | 3.529E-7       | 2.282E-5         | 7.484E-8        |
| 100   | 8.967E-9        | 3.486E-11      | 2.366E-8         | 7.914E-12       |
| 1000  | 9.086E-12       | 2.664E-15      | 2.375E-11        | 5.773E-15       |
We’ve improved the cumulative error of the algorithm from O(h^3) to O(h^4) – an entire factor of h! The additional performance cost is tiny – at most an increase of 10% for trivial second-order differential equations – so this modification to the original algorithm should always be worth it.
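As a quick sanity check on the claimed orders: for a method of order p, each tenfold refinement of the step count should shrink the error by roughly 10^p. A small sketch with the y-errors hard-coded from the table above:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Cumulative y-errors from the table, for 10, 100, and 1000 steps.
	before := []float64{7.797e-6, 8.967e-9, 9.086e-12}
	after := []float64{3.529e-7, 3.486e-11, 2.664e-15}
	for i := 0; i+1 < len(before); i++ {
		// Observed order p: the error ratio across a 10x
		// refinement is approximately 10^p.
		pBefore := math.Log10(before[i] / before[i+1])
		pAfter := math.Log10(after[i] / after[i+1])
		fmt.Printf("before: O(h^%.1f), after: O(h^%.1f)\n", pBefore, pAfter)
	}
}
```

This prints orders close to 3 and 4 respectively; the final “after” ratio overshoots slightly as the error approaches the limits of float64 precision.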
You can recreate the results of this blog post by checking out v0.1.3 of my diffeq-go repository and running:
```
diffeq-go$ go run cmd/validate/validate.go
```
I am not sure where y_4 in equation (30) comes from; does equation (30) have a typo and should be `y(t+h) = y_0 + \frac h 6 (y’_0 + 2y’_1 + 2y’_2 + y’_3)` instead (seems to be the case in your Go implementation)?
Great catch, and you’re correct! That’s what I get for copy+pasting the equation above it, and not properly modifying its contents.