haskell-mpi: Distributed parallel programming in Haskell using MPI.

[ bsd3, distributed-computing, ffi, library ]

MPI is defined by the Message-Passing Interface Standard, as specified by the Message Passing Interface Forum. The latest release of the standard is known as MPI-2. These Haskell bindings are designed to work with any standards-compliant implementation of MPI-2, such as MPICH2: http://www.mcs.anl.gov/research/projects/mpich2 and OpenMPI: http://www.open-mpi.org.

In addition to reading this documentation, users may also find it beneficial to consult the MPI-2 standard published by the MPI Forum: http://www.mpi-forum.org, as well as the documentation for the MPI implementation linked to this library (that is, the MPI implementation that was chosen when this Haskell library was compiled).

Control.Parallel.MPI.Fast contains a high-performance interface for working with (possibly mutable) arrays of storable Haskell data types.

Control.Parallel.MPI.Simple contains a convenient (but slower) interface for sending arbitrary serializable Haskell data values as messages.

Control.Parallel.MPI.Internal contains a direct binding to the C interface.

Control.Parallel.MPI.Base contains essential MPI functionality which is independent of the message passing API. It is re-exported by the Fast and Simple modules, and usually does not need to be explicitly imported itself.

Notable differences between Haskell-MPI and the standard C interface to MPI:

  1. Some collective message passing operations are split into send and receive parts to facilitate a more idiomatic Haskell style of programming. For example, C provides the MPI_Gather function which is called by all processes participating in the communication, whereas Haskell-MPI provides gatherSend and gatherRecv which are called by the sending and receiving processes respectively.

  2. The order of arguments for some functions is changed to allow for the most common patterns of partial function application.

  3. Errors are raised as exceptions rather than returned as error codes (assuming the error handler is set to errorsThrowExceptions; otherwise errors terminate the computation, just as in the C interface).
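As an illustration of point 1, a collective gather in the split send/receive style might be sketched as follows. This is only a sketch: gatherSend and gatherRecv are the functions named above, but their exact signatures are assumed here, and the program must be launched under mpirun like any other MPI program.

```haskell
module Main where

-- Sketch of the split-collective style described above: the root calls
-- gatherRecv and obtains one message per process (including its own),
-- while every other process only calls gatherSend.
import Control.Parallel.MPI.Simple (mpiWorld, commWorld, gatherSend, gatherRecv)

main :: IO ()
main = mpiWorld $ \_size rank ->
   let msg = "greetings from rank " ++ show rank
   in if rank == 0
         then do msgs <- gatherRecv commWorld 0 msg
                 mapM_ putStrLn msgs
         else gatherSend commWorld 0 msg
```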

Below is a small but complete MPI program. Process 1 sends the message "Hello World" to process 0, which in turn receives the message and prints it to standard output. All other processes, if there are any, do nothing.

module Main where

import Control.Parallel.MPI.Simple (mpiWorld, commWorld, unitTag, send, recv)

main :: IO ()
main = mpiWorld $ \size rank ->
   if size < 2
      then putStrLn "At least two processes are needed"
      else case rank of
         0 -> do (msg, _status) <- recv commWorld 1 unitTag
                 putStrLn msg
         1 -> send commWorld 0 unitTag "Hello World"
         _ -> return ()



Modules

  • Control
    • Parallel
      • MPI
        • Control.Parallel.MPI.Base
        • Control.Parallel.MPI.Fast
        • Control.Parallel.MPI.Internal
        • Control.Parallel.MPI.Simple


Automatic Flags

  • test: Build testsuite and code coverage tests
  • mpich14: Link with extra libraries for MPICH 1.4

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.


Versions [RSS] 0.5.0, 1.0.0, 1.1.0, 1.2.0, 1.2.1, 1.4.0
Dependencies array, base (>3 && <=5), bytestring, cereal, extensible-exceptions, hpc, HUnit, process, testrunner, unix [details]
License BSD-3-Clause
Copyright (c) 2010-2015 Bernard James Pope, Dmitry Astapov, Abhishek Kulkarni, Andres Löh, Ian Ross
Author Bernard James Pope (Bernie Pope)
Maintainer florbitous@gmail.com
Category FFI, Distributed Computing
Home page http://github.com/bjpop/haskell-mpi
Source repo head: git clone git://github.com/bjpop/haskell-mpi.git
Uploaded by BerniePope at 2015-02-08T23:33:15Z
Distributions NixOS:1.4.0
Reverse Dependencies 2 direct, 0 indirect [details]
Executables haskell-mpi-comprunclean, haskell-mpi-testsuite
Downloads 5215 total (22 in the last 30 days)
Rating 2.0 (votes: 1) [estimated by Bayesian average]
Status Docs not available [build log]
All reported builds failed as of 2016-12-11 [all 8 reports]

Readme for haskell-mpi-1.4.0

Haskell-mpi, Haskell bindings to the MPI library

How to build

Use "cabal install --extra-include-dirs=/path/to/mpi/headers --extra-lib-dirs=/path/to/mpi/libs"
or something similar.  Make sure that you have libmpi.a and libmpi.so available.

When building against MPICH 1.4, pass the extra flag "-fmpich14".
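Putting the pieces together, a build against MPICH 1.4 with headers and libraries in non-standard locations might look like the following. The paths are placeholders; substitute the locations of your own MPICH installation.

```shell
# Hypothetical paths; adjust to where your MPICH 1.4 headers and libs live.
cabal install haskell-mpi -fmpich14 \
  --extra-include-dirs=/opt/mpich/include \
  --extra-lib-dirs=/opt/mpich/lib
```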


Testing

Two types of tests are provided:

   1. Unit tests.
   2. Standalone tests.

The unit tests are designed to test the functions exported by the library on
an individual basis. The standalone tests are comprised of complete programs -
they act as simple integration tests, and may also include regression tests.

How to enable testing

Add "-ftest" to cabal install:

   cabal install -ftest

How to run the unit tests

(Assuming you have built haskell-mpi with -ftest, as described above):

Run the program "haskell-mpi-testsuite" using "mpirun" like so:

  mpirun -np 2 haskell-mpi-testsuite 1>sender.log 2>receiver.log

The process with rank 0 emits its output to stdout, and every other rank reports
to stderr.

If you are using the PBS batch system to launch jobs, there is a sample
job script in test/pbs/ for submitting the test case to the jobs queue.

How to run standalone tests

Standalone test programs can be found in the test/examples directory.
You can test the execution of these programs using the shelltestrunner package.


Make sure you install shelltestrunner first, for example:

   cabal install shelltestrunner

To run the tests, issue this command:

   shelltest --execdir test/examples/

License and Copyright

Haskell-mpi is distributed as open source software under the terms of the BSD
License (see the file LICENSE in the top directory).

Author(s): Bernie Pope, Dmitry Astapov. Copyright 2010.

Contact information

Email Bernie Pope:

   florbitous <at> gmail <dot> com


Around the year 2000 Michael Weber released hMPI, a Haskell binding to MPI.


Development on that code appears to have stopped in about the year 2001.
Hal Daumé III picked up the code and got it working with (at the time)
a more recent version of GHC.


In February 2010 both Michael and Hal reported that they had not worked on
the code for a long time, so it was open for new maintainers.

In early 2010 Bernie Pope downloaded the above mentioned versions of
hMPI and tried to get them working with a modern GHC.

A few things had changed in Haskell since hMPI was written, which suggested
that it might be worth starting the binding from scratch. In particular
the FFI had changed in a few ways, the C2HS tool had matured substantially,
and good quality serialization libraries had emerged. So while haskell-mpi
is highly inspired by hMPI (which was very good code),
it is almost entirely a rewrite.

Haskell-mpi got its first main injection of effort during the inaugural
AusHac Australian Haskell Hackathon, hosted at UNSW from the 16th to the
18th of July 2010. The end result was a proof of concept.

The next major injection of effort happened when Dmitry Astapov started
contributing to the project in August 2010.

Contributions have also been made by:

   - Abhishek Kulkarni: support for MPI-2 intercommunicator client/server
   - Andres Löh: bug fixes
   - Ian Ross: updates to work with newer C2HS