#include <cmath>

// "f" is the curve being measured; the original post defines it as y = x^2
double f(double x) { return x * x; }

template <typename T>
T ComputeLen(T x1, T x2, long long iters)
{
    T len{0};
    T incr = (x2 - x1) / iters;  // width of each subinterval
    // integer loop counter avoids accumulating floating-point error in dx
    for (long long i = 0; i < iters; ++i)
    {
        // Pythagoras: approximate the curve on each subinterval by a straight chord
        T a = incr;
        T b = f(x1 + (i + 1) * incr) - f(x1 + i * incr);
        // c = sqrt(a^2 + b^2)
        T c = std::sqrt(a * a + b * b);
        len += c;
    }
    return len;
}
// Calculates the length of the curve y = x^2 from 0 to 10
ComputeLen<double>(0.0, 10.0, 100000000);
("f" is just a function that takes a double and returns its value squared, so: y = x^2)
Right now I have to pass it the number of iterations I want between x1 and x2, and it will give me an estimate of the length. The problem is that I can't be sure how accurate that estimate is, in terms of how many digits after the decimal point are correct.
Does anyone know of a way to calculate the number of iterations needed for a result with n-digit precision?
This is essentially the trapezium method: a similar algorithm, summing the areas of trapezoids instead of the lengths of chords, is used to numerically approximate integrals.
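For reference, here is a minimal sketch of that trapezium rule for integrals; the function name Trapezium and the integrand parameter g are illustrative, not from the original post:

// Trapezium rule: approximate the integral of g over [a, b] with n trapezoids.
template <typename T, typename F>
T Trapezium(F g, T a, T b, long long n)
{
    T h = (b - a) / n;              // width of each trapezoid
    T sum = (g(a) + g(b)) / 2;      // the two endpoints carry half weight
    for (long long i = 1; i < n; ++i)
        sum += g(a + i * h);        // interior points carry full weight
    return sum * h;
}

Note the structural similarity to ComputeLen: both walk the interval in n equal steps and sum one small contribution per step.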
Does anyone know of a way to calculate the number of iterations needed for a result with n-digit precision?
That's not knowable in the general case. For example, no finite number of samples will let you get the arc length of sin(1/x) between -1 and 1: the curve oscillates infinitely often as x approaches 0, so its arc length is infinite.
You need to understand how that particular function behaves in that particular range to know how many samples to take.
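To make that concrete, here is a small demonstration (my own illustration, not from the post): the chord-sum estimate for sin(1/x) on [-1, 1] keeps growing as the sample count doubles, instead of settling toward a limit. The choice g(0) = 0 is an arbitrary fill-in for the point where 1/x is undefined.

#include <cmath>
#include <cstdio>

int main()
{
    auto g = [](double x) { return x == 0 ? 0.0 : std::sin(1 / x); };
    for (long long n = 1000; n <= 64000; n *= 2)
    {
        double a = -1, b = 1, h = (b - a) / n, len = 0;
        for (long long i = 0; i < n; ++i)
        {
            double dy = g(a + (i + 1) * h) - g(a + i * h);
            len += std::sqrt(h * h + dy * dy);  // chord length over one step
        }
        std::printf("n = %6lld   len = %f\n", n, len);  // len keeps growing
    }
}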
For a general function you couldn't know in advance how many subintervals you would need. You would have to compute the arc length with N subintervals, then with 2N subintervals, and so on, seeing by how much your approximated arc lengths differed, and stopping when the difference got less than a certain tolerance.
If you take too many intervals then the truncation (i.e. theoretical approximation) error would be small, but you would start getting hit by rounding error instead.
There is a method known as Richardson extrapolation which allows you to both estimate the error and (less reliably) improve the solution when you have numerical approximations on N and 2N intervals.
However, if you want to estimate your error, then I don't think for a general function you could avoid doing the calculation at least twice.
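A minimal sketch of that doubling strategy, building on the ComputeLen template above; the helper name ComputeLenToTolerance, the starting count, and the maxIters guard are my own choices:

#include <cmath>

// Double the subinterval count until two successive estimates agree to `tol`.
double ComputeLenToTolerance(double x1, double x2, double tol,
                             long long maxIters = 1LL << 30)
{
    long long n = 1000;
    double prev = ComputeLen<double>(x1, x2, n);
    for (n *= 2; n <= maxIters; n *= 2)
    {
        double curr = ComputeLen<double>(x1, x2, n);
        // |curr - prev| is a practical estimate of the remaining truncation error
        if (std::abs(curr - prev) < tol)
            return curr;
        prev = curr;
    }
    return prev;  // tolerance not reached before maxIters; best effort
}

If the method converges at order p (error roughly proportional to h^p), Richardson extrapolation estimates the error of the 2N-interval result as (L_2N - L_N) / (2^p - 1), and adding that correction back gives the improved value mentioned above.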
Does anyone know of a way to calculate the number of iterations needed for a result with n-digit precision?
Yes, it's simple (if you know how). lastchance describes it much as I would have: double the intervals until arithmetical errors prevail, the same as when you differentiate numerically.
The method reminds me of the title of a paper: "How Long Is the Coast of Britain?", Benoit Mandelbrot's first publication on fractional dimensions.