Give a digital computer a problem in arithmetic, and it will grind away methodically, tirelessly, at gigahertz speed, until ultimately it produces the wrong answer. … An interval computation yields a pair of numbers, an upper and a lower bound, which are guaranteed to enclose the exact answer. Maybe you still don’t know the truth, but at least you know how much you don’t know. (Brian Hayes, “A Lucid Interval”)
Interval arithmetic adds two unique features to ordinary computer arithmetic: (1) Functions can be evaluated over (connected) subsets of their domain, and (2) any computational errors are automatically considered and are accumulated in the final outcome. In conjunction they yield a verified result enclosure over a range of input values.
These features enable great new possibilities, but what is wrong with the well-known computer arithmetic in the first place?
Floating-point arithmetic, as specified by IEEE Std 754, is available in almost every computer system today. It is widespread, implemented in common hardware, and an integral part of programming languages. For example, the binary64 format (a.k.a. double-precision) is the default numeric data type in GNU Octave. The benefits are obvious: the results of arithmetic operations are (mostly) well-defined and comparable between different systems, and computation is highly efficient.
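You can verify this default directly in Octave (a minimal demonstration):

class (1)
  ⇒ ans = double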
However, there are some downsides of floating-point arithmetic in practice, which will eventually produce errors in computations. Generally speaking, most of these problems occur in any arithmetic with finite precision.
For example, when a computation outputs “ans = 0.1”, this is not absolutely correct. In fact, the stored value is only close to the decimal value 0.1, which cannot be represented exactly in binary floating-point. Additionally, many functions produce limit values (∞ × −∞ = −∞, ∞ ÷ 0 = ∞, ∞ ÷ −0 = −∞, log (0) = −∞), which is sometimes (but not always!) useful when overflow and underflow occur.
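A short Octave session makes the imprecision visible (the digits shown are those of a typical binary64 system):

output_precision (17)
0.1
  ⇒ ans = 0.10000000000000001
0.1 + 0.2 == 0.3
  ⇒ ans = 0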
Interval arithmetic addresses the above problems in its own way. It accepts the fact that numbers cannot be stored or computed with infinite precision and introduces enclosures of exact values, which can be computed on any machine with finite precision.
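For example, the tightest binary64 enclosure of the decimal number 0.1 can be constructed with the interval package (the displayed digits are approximate):

infsup ("0.1")
  ⇒ ans ⊂ [0.099999, 0.10001]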
This opens up new possibilities for algorithms. Any errors are covered by the width of the intervals during the course of the computation. All members of an interval are by definition finite real numbers, which results in an exception-free and mathematically well-defined arithmetic. The ability to evaluate a function on a connected range of values and compute a guaranteed enclosure of all possible values is a unique selling point.
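As a small illustration of range evaluation (the displayed digits are approximate), a function can be evaluated over a whole interval at once:

sin (infsup (2, 3))
  ⇒ ans ⊂ [0.14112, 0.9093]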
For example, the interval Newton method (see @infsup/fzero) is able to find all zeros of a particular function. More precisely, the algorithm can reliably eliminate ranges of values where the function cannot have a root by a simple interval evaluation of either the function itself or its derivative. Global convergence can be achieved by bisecting the intermediate ranges.
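As a sketch of how this might look (the exact calling convention and output format of @infsup/fzero may differ):

## Enclose all roots of x^2 - 2 within the search range [-3, 3]
f = @(x) x .^ 2 - 2;
z = fzero (f, infsup (-3, 3))   # encloses both roots, -sqrt (2) and +sqrt (2)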
Intervals can be used instead of simple numbers to automatically take into account the tolerance (or uncertainty) of the values used in a calculation. Every day we need to compute the results of many simple mathematical equations. For example, the cost of the apples bought at the farmer’s market is given by:
apple price = apple cost per kilo · kilos of apples bought
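With intervals, a tolerance on the measured weight propagates automatically into the price. The price and tolerance below are made-up values for illustration (output shown approximately):

kiloprice = infsupdec ("2.50");   # assumed exact price in EUR per kilogram
kilos = midrad (1.5, "0.05");     # scale tolerance of +/- 50 grams (assumed)
kiloprice * kilos
  ⇒ ans ⊂ [3.6249, 3.8751]_com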
When we need the result of such an expression, we put the values on the right-hand side of the equation and compute the result for the left-hand side. We usually put wrong (erroneous) numbers into the equation, and therefore there is no doubt that we get wrong results. There are many reasons why we use incorrect values.
For example, if a pipe breaks and you want to buy a new one, you need its diameter. If you do not have a caliper, you may measure its circumference and divide it by π.
diameter = circumference / π
This computation contains two errors: the circumference is known only within the tolerance of your measuring tape, and π has an infinite number of digits, while only a few of them can be used in the operation. You may think the error is negligible and the result accurate enough to buy a new pipe in a hardware shop. However, the finite precision of these operations prevents computers from being used as automatic theorem-proving tools and in other applications that demand guaranteed results.
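Intervals can capture both error sources at once: the measurement tolerance and the enclosure of π (here constructed via infsup ("pi")). The numbers are assumed for illustration:

c = midrad (31.4, "0.1");   # measured circumference in cm, tape tolerance +/- 1 mm
d = c / infsup ("pi")       # pi is enclosed, not truncated
## d encloses the true diameter, roughly between 9.96 and 10.03 cm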
This kind of issue is quite common in engineering design. What engineers do is make sure their design will work in the worst case or in most cases (usually more than 99.9 percent). A simple example follows.
Let us say you want to repaint the walls of your living room, completely messed up by your children. You need to compute how many paint cans to buy. The equation is quite simple:
paint cans = 2 · (room width + room length) · room height / (paint per can · paint efficiency),
where “paint efficiency” is how many square meters of surface can be painted with one liter of paint. The problem here is that usually we do not have a tape measure long enough to measure the room width and length. It is much simpler to count the number of steps needed to cross the room (one step is about a meter, let us say from 0.9 to 1.1 meters). Moreover, the paint provider usually declares a range for the paint efficiency.
Here is the data:

  room width:        6 steps, that is, between 5.4 and 6.6 meters
  room length:       4 steps, that is, between 3.6 and 4.4 meters
  room height:       3 meters (measured exactly)
  paint efficiency:  between 0.7 and 1.3 square meters per liter
  paint per can:     40 liters
To compute the average result, just put the average values in. We get: paint cans = 2 · (6 + 4) · 3 / (40 · 1) = 1.5, which means two paint cans, unless you are able to buy just half of the second can.
What happens in the worst case? Just put the pessimistic values into the equation. We get: paint cans = 2 · (6.6 + 4.4) · 3 / (40 · 0.7) = 2.36. That is, with only two cans we would be 0.36 cans short in the worst case, so it makes sense to buy 3 cans.
Last, consider the best case: is it enough to buy a single can of paint? Just put the optimistic values into the equation. We get: paint cans = 2 · (5.4 + 3.6) · 3 / (40 · 1.3) = 1.04, which means even in the best case one can of paint would not be enough.
You have to buy at least two cans, and probably one more. For this result we had to go through the equation multiple times (at least twice) and carefully consider for each variable which value assignment would be the most optimistic or pessimistic, which is not trivial. For example, consider the room size versus the paint efficiency: whether the highest or the lowest value takes the optimistic or the pessimistic role differs between the two. And this was a simple example with basic arithmetic operations.
Using interval arithmetic, it is possible to compute the result in a single run with ranges as inputs. The following example demonstrates this; how it works is explained further below.
step = midrad (1, "0.1");
w = 6 * step;
l = 4 * step;
h = 3;
eff = infsupdec ("[0.7, 1.3]");
cansize = 40;
cans = 2 * (w + l) * h / (eff * cansize)
  ⇒ cans ⊂ [1.0384, 2.3572]_com
## Since we can only buy whole cans
ceil (cans)
  ⇒ ans = [2, 3]_def
Interval arithmetic, introduced in the 1960s, is a young and powerful technique. Its first application was to control errors in computations and to simplify error analysis for engineers (rounding errors, truncation errors, and conversion errors). The range evaluation of functions was soon exploited for reliably checking particular function values and for self-verifying algorithms. More recent usage scenarios include root finding, function approximation, and robust pattern recognition. More useful applications certainly remain to be discovered.
The major problem in interval arithmetic is that errors can easily build up, so that the final result is too wide to be useful. This is especially true when the dependency problem applies, that is, when a single variable occurs several times within a computation and is represented by an interval in each occurrence. Then the variable may virtually take different values in each occurrence independently, which introduces a systematic overestimation. For example, computing x .^ 2 will always yield a subset of times (x, x), because the latter considers the two interval operands to be independent of each other.
x = infsupdec ("[-1, 3]");
x .^ 2
  ⇒ ans = [0, 9]_com
times (x, x)
  ⇒ ans = [-3, +9]_com
Still, it is possible to reduce overestimation errors by subdividing the function’s domain into smaller intervals, e.g., with bisection. This technique is called “mincing”. The overestimation is proportional to the interval width, so linear convergence can be achieved.
x1 = infsupdec ("[-1, 1]");
x2 = infsupdec ("[1, 3]");
hull (x1 .^ 2, x2 .^ 2)
  ⇒ ans = [0, 9]_com
hull (times (x1, x1), times (x2, x2))
  ⇒ ans = [-1, +9]_com
However, this does not help when ranges of input values are too big. For certain applications it is better to use statistical models, where infinite domains are supported.
Good introductions to interval arithmetic are available and should be consulted for a deeper understanding of the topic. The following recommendations make a good starting point.