numeric-optimization: Unified interface to various numerical optimization algorithms

[ algorithms, bsd3, library, math, numeric, numerical, optimisation, optimization ]


Manual Flags

  • Build example programs
  • Enable the CGDescent optimization algorithm provided by the nonlinear-optimization package and the CG_DESCENT-C library. Since these are licensed under the GPL, setting this flag to True implies that the resulting binary is also under the GPL.
  • Enable L-BFGS (since
  • Enable L-BFGS-B (since

Use -f <flag> to enable a flag, or -f -<flag> to disable it.


Dependencies base (>=4.12 && <5), constraints, data-default-class (>= && <0.2), hmatrix (>=, lbfgs (>=0.1 && <0.2), numeric-limits (>=0.1 && <0.2), numeric-optimization, primitive (>=, vector (>= && <0.14)
License BSD-3-Clause
Copyright Copyright (c) 2023 Masahiro Sakai
Author Masahiro Sakai
Category Math, Algorithms, Optimisation, Optimization, Numeric, Numerical
Source repo head: git clone
Uploaded by MasahiroSakai at 2023-06-21T14:39:20Z
Distributions NixOS:
Reverse Dependencies 2 direct, 0 indirect
Executables rosenbrock
Downloads 82 total (5 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs uploaded by user
Build status unknown [no reports yet]

Readme for numeric-optimization-




Unified interface to various numerical optimization algorithms.

The aim of the package is to provide a convenient interface like Python's scipy.optimize.

Note that the package name is numeric-optimization and not numerical-optimization. The name numeric-optimization comes from the module name Numeric.Optimization.

Example Usage

{-# LANGUAGE OverloadedLists #-}
import Data.Vector.Storable (Vector)
import Numeric.Optimization

main :: IO ()
main = do
  result <- minimize LBFGS def (WithGrad rosenbrock rosenbrock') [-3,-4]
  print (resultSuccess result)  -- True
  print (resultSolution result)  -- [0.999999999009131,0.9999999981094296]
  print (resultValue result)  -- 1.8129771632403013e-18

rosenbrock :: Vector Double -> Double
rosenbrock [x,y] = sq (1 - x) + 100 * sq (y - sq x)

rosenbrock' :: Vector Double -> Vector Double
rosenbrock' [x,y] =
  [ 2 * (1 - x) * (-1) + 100 * 2 * (y - sq x) * (-2) * x
  , 100 * 2 * (y - sq x)
  ]

sq :: Floating a => a -> a
sq x = x ** 2
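
A hand-written gradient like rosenbrock' is easy to get wrong by a sign or a factor, so it is worth sanity-checking against finite differences before handing it to an optimizer. The following standalone sketch does that with central differences; it is not part of the package (centralDiff is a name introduced here), and it restates the functions over plain lists so it needs no extra dependencies:

```haskell
-- Plain-list restatement of the README's functions, dependency-free.
rosenbrock :: [Double] -> Double
rosenbrock [x, y] = sq (1 - x) + 100 * sq (y - sq x)
rosenbrock _      = error "expects exactly two coordinates"

rosenbrock' :: [Double] -> [Double]
rosenbrock' [x, y] =
  [ 2 * (1 - x) * (-1) + 100 * 2 * (y - sq x) * (-2) * x
  , 100 * 2 * (y - sq x)
  ]
rosenbrock' _ = error "expects exactly two coordinates"

sq :: Floating a => a -> a
sq x = x ** 2

-- Central differences: d_i f(x) ~ (f(x + h e_i) - f(x - h e_i)) / (2h).
centralDiff :: ([Double] -> Double) -> [Double] -> [Double]
centralDiff f xs =
  [ (f (bump i h) - f (bump i (-h))) / (2 * h) | i <- [0 .. length xs - 1] ]
  where
    h = 1e-6
    bump i d = [ if j == i then x + d else x | (j, x) <- zip [0 ..] xs ]

main :: IO ()
main = do
  let p        = [-3, -4]
      analytic = rosenbrock' p
      numeric  = centralDiff rosenbrock p
      relErr   = maximum (zipWith (\a n -> abs (a - n) / max 1 (abs a))
                                  analytic numeric)
  print analytic            -- [-15608.0, -2600.0]
  -- A tiny relative error indicates the gradient formula is consistent.
  print (relErr < 1e-4)     -- True
```

The same check works at any other point; disagreement beyond the finite-difference error (roughly h squared, plus rounding) usually means a sign slip in the analytic gradient.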

Supported Algorithms

| Algorithm | Solver implementation | Haskell binding | Note |
|-----------|-----------------------|-----------------|------|
| CG_DESCENT | CG_DESCENT-C | nonlinear-optimization | Requires the with-cg-descent flag |
| Limited-memory BFGS (L-BFGS) | liblbfgs | lbfgs | |
| Limited-memory BFGS with bounds constraints (L-BFGS-B) | L-BFGS-B | l-bfgs-b | Requires the with-lbfgsb flag |
| Newton's method | Pure Haskell implementation using HMatrix | - | |
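
The last row refers to a pure-Haskell Newton implementation built on HMatrix. For intuition, here is a minimal standalone sketch of the same iteration on the two-dimensional Rosenbrock function, hand-solving the 2x2 Newton system by Cramer's rule instead of calling HMatrix (all names below are introduced for the example and are not taken from the package's API):

```haskell
-- Gradient of f(x, y) = (1 - x)^2 + 100 (y - x^2)^2.
grad :: (Double, Double) -> (Double, Double)
grad (x, y) = ( -2 * (1 - x) - 400 * x * (y - x * x)
              , 200 * (y - x * x) )

-- Hessian entries (fxx, fxy, fyy); fyx = fxy by symmetry.
hessian :: (Double, Double) -> (Double, Double, Double)
hessian (x, y) = ( 2 - 400 * (y - x * x) + 800 * x * x
                 , -400 * x
                 , 200 )

-- One Newton step: solve H d = -g by Cramer's rule, then move to p + d.
newtonStep :: (Double, Double) -> (Double, Double)
newtonStep p@(x, y) = (x + dx, y + dy)
  where
    (gx, gy)  = grad p
    (a, b, c) = hessian p                 -- H = [[a, b], [b, c]]
    det = a * c - b * b
    dx  = (c * (-gx) - b * (-gy)) / det
    dy  = (a * (-gy) - b * (-gx)) / det

-- Iterate until the gradient is numerically zero (or we give up).
newton :: (Double, Double) -> (Double, Double)
newton = go (100 :: Int)
  where
    go 0 p = p
    go n p
      | gx * gx + gy * gy < 1e-20 = p
      | otherwise                 = go (n - 1) (newtonStep p)
      where (gx, gy) = grad p

main :: IO ()
main = print (newton (-3, -4))  -- converges to (1.0, 1.0)
```

From the starting point (-3, -4) used in the example above, this pure Newton iteration reaches the minimum at (1, 1) in a handful of steps; a production implementation would additionally guard against an indefinite or singular Hessian.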


Installing Prerequisites


You may need to install BLAS and LAPACK for hmatrix.

Windows (MSYS2):
$ pacman -S mingw-w64-x86_64-lapack

or, if you use the MSYS2 installed by stack:

$ stack exec -- pacman -S mingw-w64-x86_64-lapack
Debian and Ubuntu Linux:
$ apt-get install libblas-dev liblapack-dev

libblas-dev and liblapack-dev are reference implementations; for better performance, install optimized alternatives instead. (See DebianScience/LinearAlgebraLibraries.)

macOS:

By default, hmatrix uses the BLAS and LAPACK implementations provided by macOS's Accelerate framework.


If you want to use L-BFGS-B, you have to install the development package of liblbfgsb.

Ubuntu Linux:
$ apt-get install liblbfgsb-dev
Homebrew (macOS and Linux):
$ brew install msakai/tap/liblbfgsb
Windows (MSYS2):
$ wget
$ pacman -U mingw-w64-x86_64-liblbfgsb-3.0-1-any.pkg.tar.zst

or, if you use the MSYS2 installed by stack:

$ wget
$ stack exec -- pacman -Sy
$ stack exec -- pacman -U mingw-w64-x86_64-liblbfgsb-3.0-1-any.pkg.tar.zst


The code in this package is licensed under BSD-3-Clause.

If you enable the with-cg-descent flag, GPL-licensed packages are used and the resulting binary must be distributed under the GPL.