lagrangian-0.6.0.1: Solve Lagrange multiplier problems

Safe Haskell: None
Language: Haskell98

Numeric.AD.Lagrangian

Contents

Description

Numerically solve convex Lagrange-multiplier problems with conjugate gradient descent.

Consider an example from the Wikipedia page on Lagrange multipliers in which we want to maximize the function f(x, y) = x + y, subject to the constraint x^2 + y^2 = 1:

>>> maximize (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] 0.00001 2
Right ([0.707,0.707], [-0.707])

The Right indicates success; the first element of the pair is the argument of the objective function at the maximum, and the second element is a list of Lagrange multipliers.
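The result can be sanity-checked by hand: at a constrained maximum of f subject to g = c, the gradient of f is proportional to the gradient of g. The following stdlib-only sketch (it does not use this library) verifies that for f(x, y) = x + y and g(x, y) = x^2 + y^2, the point (√2/2, √2/2) is stationary with multiplier magnitude 1/√2, matching the 0.707 values above up to sign convention:

```haskell
-- Stationarity check for the Wikipedia example, without the library.
-- grad f = (1, 1) and grad g = (2x, 2y), so at (s, s) with s = sqrt 2 / 2
-- the multiplier satisfies 1 = lambda * 2 * s, i.e. lambda = 1 / (2 * s).
main :: IO ()
main = do
  let s      = sqrt 2 / 2 :: Double
      lambda = 1 / (2 * s)
  -- each component of grad f - lambda * grad g should vanish
  print (1 - lambda * 2 * s)
  -- the constraint x^2 + y^2 = 1 holds at (s, s)
  print (s * s + s * s)
```

The sign of the multiplier reported by maximize (-0.707 above) depends on the sign convention used when forming the Lagrangian; its magnitude is 1/√2 either way.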

Synopsis

Constraint type

data Constraint Source

An equality constraint of the form g(x, y, ...) = c. Use <=> to construct a Constraint.

(<=>) :: (forall a. Floating a => [a] -> a) -> (forall b. Floating b => b) -> Constraint infixr 1 Source

Build a Constraint from a function and a constant.

Optimizers

maximize Source

Arguments

:: (forall a. Floating a => [a] -> a)

The objective function to maximize

-> [Constraint]

A list of constraints g <=> c corresponding to equations of the form g(x, y, ...) = c

-> Double

Stop iterating when the largest component of the gradient is smaller than this value

-> Int

The arity of the objective function, which must equal the arity of the constraints

-> Either (Result, Statistics) (Vector Double, Vector Double)

Either a Right containing the argmax and the Lagrange multipliers, or a Left containing an explanation of why the gradient ascent failed

Numerically maximize the Lagrangian. The objective function and each of the constraints must take the same number of arguments.
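The Lagrangian being maximized combines the objective with one term per constraint. The stdlib-only sketch below (illustrative names, not this library's API; sign conventions vary) builds L(x, λ) = f(x) − Σ λᵢ(gᵢ(x) − cᵢ) and confirms via finite differences that its gradient vanishes at the known solution of the example above:

```haskell
type Fn = [Double] -> Double

-- L(x, l) = f(x) - sum_i l_i * (g_i(x) - c_i); one common sign convention.
lagrangian :: Fn -> [(Fn, Double)] -> [Double] -> [Double] -> Double
lagrangian f gs xs ls =
  f xs - sum [ l * (g xs - c) | (l, (g, c)) <- zip ls gs ]

-- Central-difference gradient, good enough for a sanity check.
grad :: ([Double] -> Double) -> [Double] -> [Double]
grad h xs =
  [ (h (bump i eps) - h (bump i (-eps))) / (2 * eps)
  | i <- [0 .. length xs - 1] ]
  where
    eps = 1e-6
    bump i d = [ if j == i then x + d else x | (j, x) <- zip [0 ..] xs ]

main :: IO ()
main = do
  let f = \[x, y] -> x + y
      g = \[x, y] -> x * x + y * y
      s = sqrt 2 / 2
      l = 1 / (2 * s)  -- multiplier at the maximum
  -- gradient of the Lagrangian at the stationary point; close to [0, 0]
  print (grad (\xs -> lagrangian f [(g, 1)] xs [l]) [s, s])
```

In the library itself the gradient is obtained by automatic differentiation (via the ad package) rather than finite differences, which is why the objective and constraints are rank-2 polymorphic over Floating.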

minimize Source

Arguments

:: (forall a. Floating a => [a] -> a)

The objective function to minimize

-> [Constraint]

A list of constraints g <=> c corresponding to equations of the form g(x, y, ...) = c

-> Double

Stop iterating when the largest component of the gradient is smaller than this value

-> Int

The arity of the objective function, which must equal the arity of the constraints

-> Either (Result, Statistics) (Vector Double, Vector Double)

Either a Right containing the argmin and the Lagrange multipliers, or a Left containing an explanation of why the gradient descent failed

Numerically minimize the Lagrangian. The objective function and each of the constraints must take the same number of arguments.
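A typical call mirrors the maximize example, e.g. minimize (\[x, y] -> x^2 + y^2) [(\[x, y] -> x + y) <=> 1] 0.00001 2, whose analytic solution is (0.5, 0.5). The stdlib-only sketch below (not using the library) verifies that answer by comparing it against other points on the constraint line:

```haskell
main :: IO ()
main = do
  let f [x, y] = x * x + y * y :: Double
      -- points on the constraint line x + y = 1, parameterised by t
      onLine t = [0.5 + t, 0.5 - t]
      best     = f (onLine 0)
  print best                                         -- 0.5
  -- (0.5, 0.5) beats every sampled feasible point
  print (all (\t -> f (onLine t) >= best) [-1, -0.9 .. 1])
```

Algebraically, f along the line is (0.5 + t)^2 + (0.5 - t)^2 = 0.5 + 2t^2, which is minimized at t = 0.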

Experimental features

feasible :: (Floating a, Field a, Element a) => (forall b. Floating b => [b] -> b) -> [Constraint] -> [a] -> Bool Source

WARNING: Experimental. This is not a true feasibility test for the problem as a whole — I am not sure exactly how to implement one. It only checks feasibility at a single point. If it ever returns False, solve can fail.
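The simplest notion of point-wise feasibility is that every constraint holds (approximately) at the given point. The stdlib-only sketch below illustrates that idea with hypothetical names; the library's actual test is more involved (note the hmatrix Field and Element constraints in its type, which suggest a linear-algebra check at the point):

```haskell
-- Point-wise constraint satisfaction: each g_i is within tol of its
-- constant c_i at xs. Illustrative only; not the library's feasible.
feasibleAt :: Double -> [([Double] -> Double, Double)] -> [Double] -> Bool
feasibleAt tol gs xs = all (\(g, c) -> abs (g xs - c) < tol) gs

main :: IO ()
main = do
  let circle = (\[x, y] -> x * x + y * y, 1)
  print (feasibleAt 1e-6 [circle] [sqrt 2 / 2, sqrt 2 / 2])  -- True
  print (feasibleAt 1e-6 [circle] [1, 1])                    -- False
```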