Decode an interval from its interchange format.
The input must be either a matrix of n × 128 bits for n bare intervals, or a matrix of n × 136 bits for n decorated intervals. Bits are in increasing order. Byte order depends on the system’s endianness. The first 8 bytes encode the lower interval boundary, the next 8 bytes encode the upper interval boundary, and the (optional) last byte encodes the decoration.
The result is a row vector of intervals.
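For illustration, a minimal sketch of building such an encoding by hand from the bit patterns of the two binary64 boundaries (assuming the interval package is loaded; the chosen boundaries 2 and 3 and the variable names are only an example, and Octave's built-in bitunpack is used to obtain the bits of each double in increasing order with the system's byte order):

     B = [bitunpack(2), bitunpack(3)];  # 1 × 128 bits: lower boundary, then upper boundary
     x = interval_bitpack (B)           # decodes to the bare interval [2, 3]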
Accuracy: For all valid interchange encodings the following equation holds:
     X == bitunpack (interval_bitpack (X)).
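The following sketch checks this round trip for a decorated interval (again assuming the interval package is loaded; infsupdec and the chosen bounds are illustrative only):

     x = infsupdec (2, 3);                          # a decorated interval
     X = bitunpack (x);                             # its 1 × 136 bit interchange encoding
     isequal (X, bitunpack (interval_bitpack (X)))  # yields true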
See also: @infsup/bitunpack, @infsupdec/bitunpack.
Package: interval