cdar-mBound: Exact real arithmetic using Centred Dyadic Approximations

[ bsd3, data, library, program ]
Versions [RSS] 0.1.0.0, 0.1.0.1, 0.1.0.2, 0.1.0.3, 0.1.0.4
Dependencies base (>=4.7 && <4.15), cdar-mBound, containers, deepseq, integer-gmp [details]
License BSD-3-Clause
Author Jens Blanck, Michal Konecny
Maintainer mikkonecny@gmail.com
Category Data
Source repo head: git clone https://github.com/michalkonecny/cdar -b mBound
Uploaded by MichalKonecny at 2021-05-08T15:11:34Z
Distributions LTSHaskell:0.1.0.4, NixOS:0.1.0.4, Stackage:0.1.0.4
Reverse Dependencies 1 direct, 5 indirect [details]
Executables logistic, examples
Downloads 1181 total (22 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2021-05-08 [all 1 reports]

Readme for cdar-mBound-0.1.0.1


CDAR (cdar-mBound)

An implementation of computable real numbers, an approach also often referred to as Exact Real Arithmetic.

Please see the section at the bottom for how this branch (mBound) differs from the master branch.

Computable Real Numbers

Computable real numbers form a countable subset of the real numbers. This subset contains all real numbers that can be described by a finite program, which includes all numbers commonly used in mathematics, such as π and e. The computable reals are also closed under all field operations and all common transcendental functions, such as the exponential and trigonometric functions.

Interval Arithmetic and Computable Real numbers

This implementation uses interval arithmetic internally, but that does not make it Interval Arithmetic. Computable real numbers are exact, so we work under the assumption that arbitrarily good approximations of the input values can be found. This is quite different from interval arithmetic, where one of the main starting points is that input values may not be exact.

This implementation

It is based on Centred Dyadic Approximations as described in Blanck 2006.
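
The representation can be sketched as follows. This is a minimal, illustrative Haskell sketch, not the package's actual definition; the constructor and function names are assumptions made here for exposition. An approximation is a triple (m, e, s) denoting the interval [(m - e) * 2^s, (m + e) * 2^s].

```haskell
-- Sketch of a centred dyadic approximation (Blanck 2006):
-- the value lies somewhere in [(m - e) * 2^s, (m + e) * 2^s].
data Approx = Approx Integer Integer Int  -- centre m, radius e >= 0, exponent s
  deriving Show

-- Endpoints of the interval denoted by an Approx, as exact rationals.
lowerBound, upperBound :: Approx -> Rational
lowerBound (Approx m e s) = fromInteger (m - e) * 2 ^^ s
upperBound (Approx m e s) = fromInteger (m + e) * 2 ^^ s

-- Exact addition: align both operands to the smaller exponent, then add
-- centres and radii.  (The real library additionally limits precision.)
addA :: Approx -> Approx -> Approx
addA (Approx m1 e1 s1) (Approx m2 e2 s2) =
  Approx (align m1 s1 + align m2 s2) (align e1 s1 + align e2 s2) s
  where
    s = min s1 s2
    align x sx = x * 2 ^ (sx - s)
```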

It is also heavily inspired by the iRRAM implementation by Norbert Müller. In particular, it uses nested intervals rather than Cauchy sequences with a computable modulus function.

This implementation should have efficiency comparable to implementations using Cauchy sequences for shallow expressions. However, for deeply nested expressions, such as iterated functions, it should be significantly faster.

Each computable real can be viewed as an infinite list of rapidly shrinking intervals, and Haskell's laziness is used to keep the computations finite.
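
As a toy model of this view (reusing the Approx sketch above; again, the names are illustrative rather than the package's actual API), a computable real can be modelled as a lazy list of shrinking enclosures, and a consumer forces only as many of them as its accuracy demand requires:

```haskell
-- A computable real as a lazy, conceptually infinite list of shrinking
-- enclosures of the same value (toy model, not the package's API).
type CR = [Approx]

-- Radius of the interval denoted by an Approx, as an exact rational.
radiusOf :: Approx -> Rational
radiusOf (Approx _ e s) = fromInteger e * 2 ^^ s

-- Walk down the list until the enclosure is narrower than 2^(-n),
-- i.e. until roughly n binary digits are known.  Laziness ensures that
-- only the approximations actually inspected are ever computed.
approxTo :: Int -> CR -> Approx
approxTo n = head . dropWhile (\a -> radiusOf a >= 2 ^^ negate n)

-- Example: 1/3 as ever better dyadic enclosures, one extra bit per step.
oneThird :: CR
oneThird = [ Approx (2 ^ k `div` 3) 1 (negate k) | k <- [0 ..] ]
```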

Other related software

  • MPFR Arbitrary-precision floating-point numbers with specified rounding modes. While the precision is arbitrary, it is fixed for a given computation, so this is still floating-point arithmetic, only with higher precision.

  • AERN2 Computable real numbers and continuous functions using Cauchy sequences.

    • AERN2 uses the CDAR type Approx as safely-rounded multi-precision floating-point numbers and builds interval arithmetic on top of it.
  • ireal Computable real numbers using Cauchy sequences with a fixed modulus.

  • constructible From the description this appears to be the real closure of the field of rational numbers. This allows for decidable equality, but excludes transcendental functions.

  • exact-real Computable real numbers using Cauchy sequences with a fixed modulus.

  • ERA (can't find a link at the moment) Computable real numbers using Cauchy sequences with a computable modulus.

Comparison with ireal and AERN2

(https://github.com/michalkonecny/haskell-reals-comparison)

Installation

The package should build with stack (for example, stack build).

Motivation

Although the term Exact Real Arithmetic promises the ability to compute results to arbitrarily high precision, that is not the real strength of this implementation. It is certainly possible to compute 10000 digits of π, and it doesn't take very long. But even more useful is the fact that if you ask for 10 digits of the resulting computable real number, you will get 10 correct digits, even if the computation has to carry intermediate values at much higher precision.

Main usage cases:

  • To provide exact results for important computations.
  • To provide numerical computations without having to understand the problems of floating point computations. See Goldberg 1991.
  • To provide numerical analysis without having to do error analysis.

A word of warning though: Some operations are, by necessity, partial. In particular, comparisons are partial, and so is 1/x near 0.
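
To illustrate (using the toy CR and Approx sketches from above, not the package's actual API): an apartness test can answer positively as soon as two enclosures separate, but when the two numbers are equal the enclosures overlap at every precision, so the test never returns.

```haskell
-- Semi-decidable apartness on the toy model above: it can only ever
-- return True (the enclosures eventually separate when x /= y) or run
-- forever (they overlap at every step when x == y).
apart :: CR -> CR -> Bool
apart xs ys = go (zip xs ys)
  where
    go ((a, b) : rest)
      | upperBound a < lowerBound b = True
      | upperBound b < lowerBound a = True
      | otherwise                   = go rest
    go [] = False  -- unreachable for genuinely infinite approximation lists
```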

Variation mBound vs master

In the mBound branch, each Approx has an additional integer component, called mBound, which bounds the number of bits in the integer component m. Thus there is a limit on the bit size of the results of arithmetic operations. In the original variation, the size of m could grow arbitrarily large.
mBound plays a similar role for Approx numbers as mantissa size does for floating-point numbers.
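
The idea can be sketched roughly as follows (again using the toy Approx from above; the actual code in the mBound branch differs in detail): when the centre m exceeds its allowed bit count, low-order bits are shifted away and the lost amount is absorbed into the radius, much as rounding a mantissa would.

```haskell
import Data.Bits (bit, shiftR)

-- Number of bits in the magnitude of an Integer.
bitLength :: Integer -> Int
bitLength = length . takeWhile (/= 0) . iterate (`shiftR` 1) . abs

-- Truncate the centre to at most mb bits, widening the radius enough to
-- keep the new interval a superset of the old one (illustrative only).
limitBits :: Int -> Approx -> Approx
limitBits mb a@(Approx m e s)
  | excess <= 0 = a
  | otherwise   = Approx (m `shiftR` excess)
                         (((e + bit excess - 1) `shiftR` excess) + 1)
                         (s + excess)
  where
    excess = bitLength m - mb
```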