Hi,
I have been running into issues where the accumulation of float round-off error causes the Dop solvers to return the solution only up to the second-to-last point (i.e. at approximately x_end - dx) instead of the last. Here is a minimal breaking example, where I try to integrate from 0 to 1 in steps of 0.1:
```rust
use ode_solvers::*;

struct Test {}

impl System<f32, Vector1<f32>> for Test {
    fn system(&self, _x: f32, y: &Vector1<f32>, dy: &mut Vector1<f32>) {
        dy[0] = -y[0];
    }
}

fn main() {
    let system = Test {};
    let x = 0.0; let x_end = 1.0; let dx = 0.1;
    let y = Vector1::<f32>::from_vec(vec![1.]);
    let mut stepper = Dopri5::new(system, x, x_end, dx, y, 1.0e-3, 1.0e-6);
    stepper.integrate().unwrap();
    let x_last = stepper.x_out().last().unwrap();
    assert!((x_end - x_last).abs() < 1.0e-3, "x_last={x_last}"); // fails, x_last = 0.9000001
}
```
Looking at the sparse x-vector returned by the solver,

```rust
println!("{:?}", stepper.x_out());
// [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.70000005, 0.8000001, 0.9000001]
```

we see that there is an accumulation of round-off error on the order of 1e-7.
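This drift is exactly what naive repeated addition of `0.1_f32` produces; a minimal sketch independent of ode_solvers, showing that summing ten steps of 0.1 does not land on 1.0:

```rust
fn main() {
    // 0.1 has no exact binary representation, so repeatedly adding
    // 0.1_f32 drifts away from the mathematically exact grid.
    let mut x = 0.0_f32;
    for _ in 0..10 {
        x += 0.1;
    }
    // The accumulated error (~1e-7) exceeds a 1e-9 endpoint
    // tolerance, while still being far smaller than dx.
    assert_ne!(x, 1.0);
    assert!((x - 1.0).abs() > 1.0e-9);
    assert!((x - 1.0).abs() < 1.0e-6);
    println!("x = {x}");
}
```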
In the following loop, the solver stops producing output, since 0.9000001 + 0.1 > 1.0. It also fails the x_end check, because the 1e-7 error is greater than dense_output::endpoint_tolerance() = 1e-9 (a hard-coded value).
Lines 454 to 478 in 921857e
```rust
while self.xd.abs() <= self.x.abs() {
    if self.x_old.abs() <= self.xd.abs() && self.x.abs() >= self.xd.abs() {
        let theta = (self.xd - self.x_old) / self.h_old;
        let theta1 = T::one() - theta;
        let y_out = self.compute_y_out(theta, theta1);
        self.results.push(self.xd, y_out);
        self.xd += self.dx;
    }
    if self
        .f
        .solout(self.xd, self.results.get().1.last().unwrap(), dy)
    {
        break;
    }
}
// Ensure the last point is added if it's within floating point error of x_end.
if (self.xd - self.x_end).abs() < dense_output::endpoint_tolerance() {
    let theta = (self.x_end - self.x_old) / self.h_old;
    let theta1 = T::one() - theta;
    let y_out = self.compute_y_out(theta, theta1);
    self.results.push(self.x_end, y_out);
    self.xd += self.dx;
}
```
An inelegant workaround I have is to ask the solver to integrate one dx further, and then select the output point closest to the intended x_end as the true final step.
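A sketch of that workaround: the solver is assumed to have been run to x_end + dx, and the output abscissas are then scanned for the point closest to the intended endpoint (`closest_index` is a hypothetical helper, not part of the crate; the values past 0.9000001 are illustrative):

```rust
// Hypothetical helper: index of the output point closest to x_end.
fn closest_index(x_out: &[f32], x_end: f32) -> usize {
    let mut best = 0;
    for (i, &x) in x_out.iter().enumerate() {
        if (x - x_end).abs() < (x_out[best] - x_end).abs() {
            best = i;
        }
    }
    best
}

fn main() {
    // Stand-in for stepper.x_out() after integrating to x_end + dx = 1.1;
    // the first ten values are the ones printed in the issue.
    let x_out = [
        0.0_f32, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.70000005, 0.8000001, 0.9000001, 1.0000001,
    ];
    let i = closest_index(&x_out, 1.0);
    assert_eq!(i, 10); // 1.0000001 becomes the true final step
    println!("x_last = {}", x_out[i]);
}
```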
Another option would be to let the user control the value of endpoint_tolerance.
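For instance (a sketch, not the crate's API), the endpoint check could take a caller-supplied tolerance scaled by the magnitude of x_end, so it stays meaningful for long integration intervals:

```rust
// Hypothetical endpoint check with a user-controlled relative
// tolerance instead of the hard-coded absolute 1e-9.
fn near_endpoint(xd: f32, x_end: f32, rel_tol: f32) -> bool {
    (xd - x_end).abs() <= rel_tol * x_end.abs().max(1.0)
}

fn main() {
    // The failing step from the issue: 0.9000001 + 0.1 overshoots 1.0.
    let xd = 0.9000001_f32 + 0.1;
    assert!(!near_endpoint(xd, 1.0, 1.0e-9)); // a 1e-9-sized tolerance misses it
    assert!(near_endpoint(xd, 1.0, 1.0e-6)); // a user-chosen tolerance catches it
}
```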
Note that this error is not consistent, as the round-off errors sometimes cancel out nicely: the same example as above integrated to 5.0 instead of 1.0 works as expected.