Copyright | [2014..2020] The Accelerate Team |
---|---|
License | BSD3 |
Maintainer | Trevor L. McDonell <trevor.mcdonell@gmail.com> |
Stability | experimental |
Portability | non-portable (GHC extensions) |
Safe Haskell | None |
Language | Haskell2010 |
This module implements a backend for the Accelerate language targeting
multicore CPUs. Expressions are translated into LLVM code on the fly and
just-in-time compiled, then executed in parallel across the available CPUs.
Computations are automatically parallelised over all available cores, unless
you set the environment variable 'ACCELERATE_LLVM_NATIVE_THREADS=N', in which
case N threads will be used.

Programs must be compiled with '-threaded', otherwise you will get a "blocked
indefinitely on an MVar operation" error.
Synopsis
- data Acc a
- class Arrays a
- class Afunction f
- type family AfunctionR f
- run :: Arrays a => Acc a -> a
- runWith :: Arrays a => Native -> Acc a -> a
- run1 :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> a -> b
- run1With :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> a -> b
- runN :: Afunction f => f -> AfunctionR f
- runNWith :: forall f. Afunction f => Native -> f -> AfunctionR f
- stream :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> [a] -> [b]
- streamWith :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> [a] -> [b]
- data Async a
- wait :: Async a -> IO a
- poll :: Async a -> IO (Maybe a)
- cancel :: Async a -> IO ()
- runAsync :: Arrays a => Acc a -> IO (Async a)
- runAsyncWith :: Arrays a => Native -> Acc a -> IO (Async a)
- run1Async :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> a -> IO (Async b)
- run1AsyncWith :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> a -> IO (Async b)
- runNAsync :: (Afunction f, RunAsync r, ArraysFunctionR f ~ RunAsyncR r) => f -> r
- runNAsyncWith :: (Afunction f, RunAsync r, ArraysFunctionR f ~ RunAsyncR r) => Native -> f -> r
- runQ :: Afunction f => f -> ExpQ
- runQWith :: Afunction f => f -> ExpQ
- runQAsync :: Afunction f => f -> ExpQ
- runQAsyncWith :: Afunction f => f -> ExpQ
- data Native
- createTarget :: [Int] -> IO Native
Documentation
Accelerate is an embedded language that distinguishes between vanilla arrays (e.g. in Haskell memory on the CPU) and embedded arrays (e.g. in device memory on a GPU), as well as the computations on both of these. Since Accelerate is an embedded language, programs written in Accelerate are not compiled by the Haskell compiler (GHC). Rather, each Accelerate backend is a runtime compiler which generates and executes parallel SIMD code of the target language at application runtime.
The type constructor Acc represents embedded collective array operations. A
term of type Acc a is an Accelerate program which, once executed, will produce
a value of type a (an Array or a tuple of Arrays). Collective operations of
type Acc a comprise many scalar expressions, wrapped in the type constructor
Exp, which will be executed in parallel. Although collective operations
comprise many scalar operations executed in parallel, scalar operations cannot
initiate new collective operations: this stratification between scalar
operations in Exp and array operations in Acc helps statically exclude nested
data parallelism, which is difficult to execute efficiently on constrained
hardware such as GPUs.
- A simple example
As a simple example, to compute a vector dot product we can write:
```haskell
dotp :: Num a => Vector a -> Vector a -> Acc (Scalar a)
dotp xs ys =
  let xs' = use xs
      ys' = use ys
  in  fold (+) 0 ( zipWith (*) xs' ys' )
```
The function dotp consumes two one-dimensional arrays (Vectors) of values, and
produces a single (Scalar) result as output. As the return type is wrapped in
the type Acc, we see that it is an embedded Accelerate computation - it will
be evaluated in the object language of dynamically generated parallel code,
rather than the meta language of vanilla Haskell. As the arguments to dotp are
plain Haskell arrays, to make these available to Accelerate computations they
must be embedded with the use function.
An Accelerate backend is used to evaluate the embedded computation and return
the result to vanilla Haskell. Calling the run function of a backend will
generate code for the target architecture, compile it, and execute it. For
example, the following backends are available:

- accelerate-llvm-native: for execution on multicore CPUs
- accelerate-llvm-ptx: for execution on NVIDIA CUDA-capable GPUs

See also Exp, which encapsulates embedded scalar computations.
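Putting these pieces together, the dot-product example can be run with this backend as in the following sketch. It assumes the accelerate and accelerate-llvm-native packages are installed; the CPU module alias and the concrete input values are illustrative choices, not requirements:

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

-- Monomorphic variant of the dot product from the text
dotp :: Vector Float -> Vector Float -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 ( A.zipWith (*) (use xs) (use ys) )

main :: IO ()
main = do
  let xs = A.fromList (Z :. 4) [1,2,3,4] :: Vector Float
      ys = A.fromList (Z :. 4) [5,6,7,8] :: Vector Float
  -- run generates LLVM code for this program, compiles, and executes it
  print (CPU.run (dotp xs ys))   -- 1*5 + 2*6 + 3*7 + 4*8 = 70
```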
- Avoiding nested parallelism
As mentioned above, embedded scalar computations of type Exp cannot initiate
further collective operations.

Suppose we wanted to extend our above dotp function to matrix-vector
multiplication. First, let's rewrite our dotp function to take Acc arrays as
input (which is typically what we want):
```haskell
dotp :: Num a => Acc (Vector a) -> Acc (Vector a) -> Acc (Scalar a)
dotp xs ys = fold (+) 0 ( zipWith (*) xs ys )
```
We might then be inclined to lift our dot-product program to the following
(incorrect) matrix-vector product, by applying dotp to each row of the input
matrix:

```haskell
mvm_ndp :: Num a => Acc (Matrix a) -> Acc (Vector a) -> Acc (Vector a)
mvm_ndp mat vec =
  let Z :. rows :. cols = unlift (shape mat) :: Z :. Exp Int :. Exp Int
  in  generate (index1 rows)
               (\row -> the $ dotp vec (slice mat (lift (row :. All))))
```
Here, we use generate to create a one-dimensional vector by applying at each
index a function to slice out the corresponding row of the matrix to pass to
the dotp function. However, since both generate and slice are data-parallel
operations, and moreover slice depends on the argument row given to it by the
generate function, this definition requires nested data-parallelism, and is
thus not permitted. The clue that this definition is invalid is that in order
to create a program which will be accepted by the type checker, we must use
the function the to retrieve the result of the dotp operation, effectively
concealing that dotp is a collective array computation in order to match the
type expected by generate, which is that of scalar expressions. Additionally,
since we have fooled the type checker, this problem will only be discovered at
program runtime.
In order to avoid this problem, we can make use of the fact that operations in
Accelerate are rank polymorphic. The fold operation reduces along the
innermost dimension of an array of arbitrary rank, reducing the rank
(dimensionality) of the array by one. Thus, we can replicate the input vector
across as many rows as there are in the input matrix, and perform the
dot-product of the vector with every row simultaneously:
```haskell
mvm :: A.Num a => Acc (Matrix a) -> Acc (Vector a) -> Acc (Vector a)
mvm mat vec =
  let Z :. rows :. cols = unlift (shape mat) :: Z :. Exp Int :. Exp Int
      vec'              = A.replicate (lift (Z :. rows :. All)) vec
  in  A.fold (+) 0 ( A.zipWith (*) mat vec' )
```
Note that the intermediate, replicated array vec' is never actually created in
memory; it will be fused directly into the operation which consumes it. We
discuss fusion next.
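For reference, the mvm function above can be executed with this backend as in the following sketch. It assumes the accelerate and accelerate-llvm-native packages; the concrete matrix and vector are illustrative:

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

-- Monomorphic variant of the mvm function from the text
mvm :: Acc (Matrix Float) -> Acc (Vector Float) -> Acc (Vector Float)
mvm mat vec =
  let Z :. rows :. _cols = unlift (shape mat) :: Z :. Exp Int :. Exp Int
      vec'               = A.replicate (lift (Z :. rows :. All)) vec
  in  A.fold (+) 0 ( A.zipWith (*) mat vec' )

main :: IO ()
main = do
  let mat = A.fromList (Z :. 2 :. 3) [1..6]  :: Matrix Float  -- rows [1,2,3] and [4,5,6]
      vec = A.fromList (Z :. 3)      [1,1,1] :: Vector Float
  -- multiplying by the all-ones vector yields the row sums: 6 and 15
  print (CPU.run (mvm (use mat) (use vec)))
```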
- Fusion
Array computations of type Acc will be subject to array fusion; Accelerate
will combine individual Acc computations into a single computation, which
reduces the number of traversals over the input data and thus improves
performance. As such, it is often useful to have some intuition on when fusion
should occur.
The main idea is to first partition array operations into two categories:

- Element-wise operations, such as map, generate, and backpermute. Each
  element of these operations can be computed independently of all others.
- Collective operations, such as fold, scanl, and stencil. Computing each
  output element of these operations requires reading multiple elements from
  the input array(s).
Element-wise operations fuse together whenever the consumer operation uses a single element of the input array. Element-wise operations can both fuse their inputs into themselves, as well as be fused into later operations. Both these examples should fuse into a single loop:
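The two examples referenced here were lost in extraction; the following sketch shows the kind of element-wise pipelines meant (the function names are illustrative; the accelerate and accelerate-llvm-native packages are assumed):

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

-- One producer fused into another: a single traversal of the input,
-- with no intermediate array for the inner map
incThenDouble :: Acc (Vector Float) -> Acc (Vector Float)
incThenDouble xs = A.map (*2) (A.map (+1) xs)

-- A generated array fused into zipWith: the index vector is never built
addIndex :: Acc (Vector Int) -> Acc (Vector Int)
addIndex xs = A.zipWith (+) xs (A.generate (shape xs) unindex1)

main :: IO ()
main = do
  print (CPU.run (incThenDouble (use (A.fromList (Z :. 3) [0,1,2]))))    -- [2,4,6]
  print (CPU.run (addIndex      (use (A.fromList (Z :. 3) [10,10,10])))) -- [10,11,12]
```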
If the consumer operation uses more than one element of the input array
(typically, via generate indexing an array multiple times), then the input
array will be completely evaluated first; no fusion occurs in this case,
because fusing the first operation into the second implies duplicating work.
On the other hand, collective operations can fuse their input arrays into themselves, but on output always evaluate to an array; collective operations will not be fused into a later step. For example:
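The example referenced here was lost in extraction; the following sketch reconstructs a program with the shape the subsequent text describes (use + generate + zipWith fusing into a fold, followed by a separate map). The names and input values are illustrative, and the accelerate and accelerate-llvm-native packages are assumed:

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

twoLoops :: Vector Float -> Acc (Scalar Float)
twoLoops xs =
  let ones  = A.generate (shape (use xs)) (const 1)       -- fuses into the zipWith
      total = A.fold (+) 0 (A.zipWith (*) (use xs) ones)  -- loop 1: use + generate + zipWith + fold
  in  A.map (/ 2) total                                   -- loop 2: reads the materialised fold result

main :: IO ()
main = print (CPU.run (twoLoops (A.fromList (Z :. 4) [1,2,3,4])))  -- (1+2+3+4)/2 = 5.0
```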
Here the element-wise sequence (use + generate + zipWith) will fuse into a
single operation, which then fuses into the collective fold operation. At this
point in the program the fold must now be evaluated. In the final step the map
reads in the array produced by fold. As there is no fusion between the fold
and map steps, this program consists of two "loops"; one for the use +
generate + zipWith + fold step, and one for the final map step.
You can see how many operations will be executed in the fused program by
Show-ing the Acc program, or by using the debugging option -ddump-dot to save
the program as a graphviz DOT file.
As a special note, the operations unzip and reshape, when applied to a real
array, are executed in constant time, so in this situation these operations
will not be fused.
- Tips
- Since Acc represents embedded computations that will only be executed when
  evaluated by a backend, we can programmatically generate these computations
  using the meta language Haskell; for example, unrolling loops or embedding
  input values into the generated code.
- It is usually best to keep all intermediate computations in Acc, and only
  run the computation at the very end to produce the final result. This
  enables optimisations between intermediate results (e.g. array fusion) and,
  if the target architecture has a separate memory space (as is the case for
  GPUs), prevents excessive data transfers.
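The second tip can be illustrated with a sketch (names are illustrative; the accelerate and accelerate-llvm-native packages are assumed):

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

step :: Acc (Vector Float) -> Acc (Scalar Float)
step xs = A.fold (+) 0 (A.map (*2) xs)

-- Preferred: one embedded program; the map fuses into the fold
sumDoubled :: Vector Float -> Scalar Float
sumDoubled = CPU.runN step

-- Avoid: two separate run calls; the intermediate ys is materialised in
-- memory, and no fusion happens between the map and the fold
sumDoubled' :: Vector Float -> Scalar Float
sumDoubled' xs =
  let ys = CPU.run (A.map (*2) (use xs))
  in  CPU.run (A.fold (+) 0 (use ys))

main :: IO ()
main = print (sumDoubled (A.fromList (Z :. 3) [1,2,3]))  -- 2+4+6 = 12.0
```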
Instances
Arrays b => Afunction (Acc b) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type AfunctionR (Acc b) # type ArraysFunctionR (Acc b) afunctionRepr :: AfunctionRepr (Acc b) (AfunctionR (Acc b)) (ArraysFunctionR (Acc b)) convertOpenAfun :: HasCallStack => Config -> ArrayLayout aenv aenv -> Acc b -> OpenAfun aenv (ArraysFunctionR (Acc b)) | |
(Arrays a, Arrays b, ApplyAcc t) => ApplyAcc ((Acc a -> Acc b) -> t) | |
(Arrays a, ApplyAcc t) => ApplyAcc (Acc a -> t) | |
Defined in Data.Array.Accelerate.Smart type FromApplyAcc (Acc a -> t) | |
(Arrays a, Afunction r) => Afunction (Acc a -> r) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type AfunctionR (Acc a -> r) # type ArraysFunctionR (Acc a -> r) afunctionRepr :: AfunctionRepr (Acc a -> r) (AfunctionR (Acc a -> r)) (ArraysFunctionR (Acc a -> r)) convertOpenAfun :: HasCallStack => Config -> ArrayLayout aenv aenv -> (Acc a -> r) -> OpenAfun aenv (ArraysFunctionR (Acc a -> r)) | |
type ArraysFunctionR (Acc b) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type ArraysFunctionR (Acc b) = ArraysR b | |
type AfunctionR (Acc b) | |
Defined in Data.Array.Accelerate.Trafo.Sharing | |
type ArraysFunctionR (Acc a -> r) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type ArraysFunctionR (Acc a -> r) = ArraysR a -> ArraysFunctionR r | |
type FromApplyAcc ((Acc a -> Acc b) -> t) | |
Defined in Data.Array.Accelerate.Smart | |
type FromApplyAcc (Acc a -> t) | |
Defined in Data.Array.Accelerate.Smart type FromApplyAcc (Acc a -> t) = SmartAcc (ArraysR a) -> FromApplyAcc t | |
type AfunctionR (Acc a -> r) | |
Defined in Data.Array.Accelerate.Trafo.Sharing |
The Arrays class characterises the types which can appear in collective
Accelerate computations of type Acc.

Arrays consists of nested tuples of individual Arrays, currently up to
16 elements wide. Accelerate computations can thereby return multiple results.
Instances
Arrays () | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR () | |
(Arrays x0, Arrays x1) => Arrays (x0, x1) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1) | |
(Shape sh, Elt e) => Arrays (Array sh e) | |
(Arrays x0, Arrays x1, Arrays x2) => Arrays (x0, x1, x2) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3) => Arrays (x0, x1, x2, x3) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4) => Arrays (x0, x1, x2, x3, x4) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5) => Arrays (x0, x1, x2, x3, x4, x5) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6) => Arrays (x0, x1, x2, x3, x4, x5, x6) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10, Arrays x11) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) arraysR :: ArraysR (ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11)) toArr :: ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) -> (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) fromArr :: (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) -> ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10, Arrays x11, Arrays x12) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) arraysR :: ArraysR (ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12)) toArr :: ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) -> (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) fromArr :: (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) -> ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10, Arrays x11, Arrays x12, Arrays x13) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) arraysR :: ArraysR (ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13)) toArr :: ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) -> (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) fromArr :: (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) -> ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10, Arrays x11, Arrays x12, Arrays x13, Arrays x14) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) arraysR :: ArraysR (ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14)) toArr :: ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) -> (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) fromArr :: (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) -> ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14) | |
(Arrays x0, Arrays x1, Arrays x2, Arrays x3, Arrays x4, Arrays x5, Arrays x6, Arrays x7, Arrays x8, Arrays x9, Arrays x10, Arrays x11, Arrays x12, Arrays x13, Arrays x14, Arrays x15) => Arrays (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) | |
Defined in Data.Array.Accelerate.Sugar.Array type ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) arraysR :: ArraysR (ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15)) toArr :: ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) -> (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) fromArr :: (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) -> ArraysR (x0, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15) |
afunctionRepr, convertOpenAfun
Instances
Arrays b => Afunction (Acc b) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type AfunctionR (Acc b) # type ArraysFunctionR (Acc b) afunctionRepr :: AfunctionRepr (Acc b) (AfunctionR (Acc b)) (ArraysFunctionR (Acc b)) convertOpenAfun :: HasCallStack => Config -> ArrayLayout aenv aenv -> Acc b -> OpenAfun aenv (ArraysFunctionR (Acc b)) | |
(Arrays a, Afunction r) => Afunction (Acc a -> r) | |
Defined in Data.Array.Accelerate.Trafo.Sharing type AfunctionR (Acc a -> r) # type ArraysFunctionR (Acc a -> r) afunctionRepr :: AfunctionRepr (Acc a -> r) (AfunctionR (Acc a -> r)) (ArraysFunctionR (Acc a -> r)) convertOpenAfun :: HasCallStack => Config -> ArrayLayout aenv aenv -> (Acc a -> r) -> OpenAfun aenv (ArraysFunctionR (Acc a -> r)) |
type family AfunctionR f #
Instances
type AfunctionR (Acc b) | |
Defined in Data.Array.Accelerate.Trafo.Sharing | |
type AfunctionR (Acc a -> r) | |
Defined in Data.Array.Accelerate.Trafo.Sharing |
Synchronous execution
runWith :: Arrays a => Native -> Acc a -> a Source #
As run, but execute using the specified target (thread gang).

run1 :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> a -> b Source #

This is runN, specialised to an array program of one argument.

run1With :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> a -> b Source #

As run1, but execute using the specified target (thread gang).
runN :: Afunction f => f -> AfunctionR f Source #
Prepare and execute an embedded array program.
This function can be used to improve performance in cases where the array
program is constant between invocations, because it enables us to bypass
front-end conversion stages and move directly to the execution phase. If you
have a computation applied repeatedly to different input data, use this,
specifying any changing aspects of the computation via the input parameters.
If the function is only evaluated once, this is equivalent to run.

In order to use runN you must express your Accelerate program as a function of
array terms:

f :: (Arrays a, Arrays b, ... Arrays c) => Acc a -> Acc b -> ... -> Acc c

This function then returns the compiled version of f:

runN f :: (Arrays a, Arrays b, ... Arrays c) => a -> b -> ... -> c
As an example, rather than:

```haskell
step :: Acc (Vector a) -> Acc (Vector b)
step = ...

simulate :: Vector a -> Vector b
simulate xs = run $ step (use xs)
```
Instead write:
simulate = runN step
You can use the debugging options to check whether this is working
successfully. For example, running with the -ddump-phases flag should show
that the compilation steps only happen once, not on the second and subsequent
invocations of simulate. Note that this typically relies on GHC knowing that
it can lift out the function returned by runN and reuse it.

See the programs in the 'accelerate-examples' package for examples.

See also runQ, which compiles the Accelerate program at _Haskell_ compile
time, thus eliminating the runtime overhead altogether.
runNWith :: forall f. Afunction f => Native -> f -> AfunctionR f Source #
As runN, but execute using the specified target (thread gang).
stream :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> [a] -> [b] Source #
Stream a lazily read list of input arrays through the given program, collecting results as we go.
streamWith :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> [a] -> [b] Source #
As stream, but execute using the specified target (thread gang).
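A minimal sketch of stream, processing a lazily supplied list of input vectors (illustrative names and data; the accelerate and accelerate-llvm-native packages are assumed):

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

-- The program is compiled once, then applied to each array in the list
squares :: [Vector Int] -> [Vector Int]
squares = CPU.stream (A.map (\x -> x * x))

main :: IO ()
main = mapM_ print (squares [ A.fromList (Z :. 3) [1,2,3]
                            , A.fromList (Z :. 3) [4,5,6] ])
```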
Asynchronous execution
Block the calling thread until the computation completes, then return the result.
poll :: Async a -> IO (Maybe a) #
Test whether the asynchronous computation has already completed. If so, return
the result, else Nothing.
runAsyncWith :: Arrays a => Native -> Acc a -> IO (Async a) Source #
As runAsync, but execute using the specified target (thread gang).

run1Async :: (Arrays a, Arrays b) => (Acc a -> Acc b) -> a -> IO (Async b) Source #

As run1, but execute asynchronously.

run1AsyncWith :: (Arrays a, Arrays b) => Native -> (Acc a -> Acc b) -> a -> IO (Async b) Source #

As run1Async, but execute using the specified target (thread gang).

runNAsync :: (Afunction f, RunAsync r, ArraysFunctionR f ~ RunAsyncR r) => f -> r Source #

As runN, but execute asynchronously.

runNAsyncWith :: (Afunction f, RunAsync r, ArraysFunctionR f ~ RunAsyncR r) => Native -> f -> r Source #

As runNWith, but execute asynchronously.
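A sketch of the asynchronous interface, starting a computation in the background and waiting on its result (illustrative names and data; the accelerate and accelerate-llvm-native packages are assumed, and the program must be compiled with -threaded):

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

bigSum :: Acc (Scalar Int)
bigSum = A.fold (+) 0 (use (A.fromList (Z :. 1000) [1 .. 1000]))

main :: IO ()
main = do
  a <- CPU.runAsync bigSum   -- start compiling and executing in the background
  -- ... the calling thread is free to do other work here ...
  r <- CPU.wait a            -- block until the result is available
  print r                    -- sum of 1..1000 = 500500
```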
Ahead-of-time compilation
runQ :: Afunction f => f -> ExpQ Source #
Ahead-of-time compilation for an embedded array program.
This function will generate and compile code to execute the given Accelerate
computation at Haskell compile time, and link it into the final executable.
This eliminates any runtime overhead associated with the other run*
operations. The generated code will be optimised for the compiling
architecture.

Since the Accelerate program will be generated at Haskell compile time,
construction of the Accelerate program, in particular via meta-programming,
will be limited to operations available to that phase. Also note that any
arrays which are embedded into the program via use will be stored as part of
the final executable.
Usage of this function in your program is similar to that of runN. First,
express your Accelerate program as a function of array terms:

f :: (Arrays a, Arrays b, ... Arrays c) => Acc a -> Acc b -> ... -> Acc c

This function then returns a compiled version of f as a Template Haskell
splice, to be added into your program at Haskell compile time:

```haskell
{-# LANGUAGE TemplateHaskell #-}

f' :: a -> b -> ... -> c
f' = $( runQ f )
```
Note that at the splice point the usage of f must be monomorphic; i.e. the
types a, b and c must be at some known concrete type.
See the lulesh-accelerate project for an example.
- Note:
It is recommended to use GHC 8.6 or later. Earlier GHC versions can
successfully build executables utilising runQ, but fail to correctly link
libraries containing this function.
- Note:
Due to GHC#13587, this must currently be used as an untyped splice.

The correct type of this function is similar to that of runN:

runQ :: Afunction f => f -> Q (TExp (AfunctionR f))
Since: 1.1.0.0
runQAsyncWith :: Afunction f => f -> ExpQ Source #
Ahead-of-time analogue of runNAsyncWith. See runQ for more information.

The correct type of this function is:

runQAsyncWith :: (Afunction f, RunAsync r, AfunctionR f ~ RunAsyncR r) => f -> Q (TExp (Native -> r))
Since: 1.1.0.0
Execution targets
Native machine code JIT execution target
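A sketch of constructing an explicit execution target with createTarget and using it via runWith. The interpretation of the argument list as the capabilities (core ids) on which to spawn worker threads is an assumption based on the signature; the program must be compiled with -threaded and given enough capabilities at runtime (e.g. +RTS -N4):

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU

main :: IO ()
main = do
  -- Spawn a gang of worker threads on the first two capabilities
  -- (assumed meaning of the [Int] argument)
  target <- CPU.createTarget [0, 1]
  let xs = A.fromList (Z :. 10) [0 .. 9] :: Vector Int
  print (CPU.runWith target (A.fold (+) 0 (use xs)))  -- sum of 0..9 = 45
```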
Instances
Skeleton Native | |
Defined in Data.Array.Accelerate.LLVM.Native.CodeGen generate :: UID -> Gamma aenv -> ArrayR (Array sh e) -> IRFun1 Native aenv (sh -> e) -> CodeGen Native (IROpenAcc Native aenv (Array sh e)) transform :: UID -> Gamma aenv -> ArrayR (Array sh a) -> ArrayR (Array sh' b) -> IRFun1 Native aenv (sh' -> sh) -> IRFun1 Native aenv (a -> b) -> CodeGen Native (IROpenAcc Native aenv (Array sh' b)) map :: UID -> Gamma aenv -> ArrayR (Array sh a) -> TypeR b -> IRFun1 Native aenv (a -> b) -> CodeGen Native (IROpenAcc Native aenv (Array sh b)) fold :: UID -> Gamma aenv -> ArrayR (Array sh e) -> IRFun2 Native aenv (e -> e -> e) -> Maybe (IRExp Native aenv e) -> MIRDelayed Native aenv (Array (sh, Int) e) -> CodeGen Native (IROpenAcc Native aenv (Array sh e)) foldSeg :: UID -> Gamma aenv -> ArrayR (Array (sh, Int) e) -> IntegralType i -> IRFun2 Native aenv (e -> e -> e) -> Maybe (IRExp Native aenv e) -> MIRDelayed Native aenv (Array (sh, Int) e) -> MIRDelayed Native aenv (Segments i) -> CodeGen Native (IROpenAcc Native aenv (Array (sh, Int) e)) scan :: UID -> Gamma aenv -> ArrayR (Array (sh, Int) e) -> Direction -> IRFun2 Native aenv (e -> e -> e) -> Maybe (IRExp Native aenv e) -> MIRDelayed Native aenv (Array (sh, Int) e) -> CodeGen Native (IROpenAcc Native aenv (Array (sh, Int) e)) scan' :: UID -> Gamma aenv -> ArrayR (Array (sh, Int) e) -> Direction -> IRFun2 Native aenv (e -> e -> e) -> IRExp Native aenv e -> MIRDelayed Native aenv (Array (sh, Int) e) -> CodeGen Native (IROpenAcc Native aenv (Array (sh, Int) e, Array sh e)) permute :: UID -> Gamma aenv -> ArrayR (Array sh e) -> ShapeR sh' -> IRPermuteFun Native aenv (e -> e -> e) -> IRFun1 Native aenv (sh -> PrimMaybe sh') -> MIRDelayed Native aenv (Array sh e) -> CodeGen Native (IROpenAcc Native aenv (Array sh' e)) backpermute :: UID -> Gamma aenv -> ArrayR (Array sh e) -> ShapeR sh' -> IRFun1 Native aenv (sh' -> sh) -> CodeGen Native (IROpenAcc Native aenv (Array sh' e)) stencil1 :: UID -> Gamma aenv -> StencilR sh a stencil 
-> TypeR b -> IRFun1 Native aenv (stencil -> b) -> IRBoundary Native aenv (Array sh a) -> MIRDelayed Native aenv (Array sh a) -> CodeGen Native (IROpenAcc Native aenv (Array sh b)) stencil2 :: UID -> Gamma aenv -> StencilR sh a stencil1 -> StencilR sh b stencil2 -> TypeR c -> IRFun2 Native aenv (stencil1 -> stencil2 -> c) -> IRBoundary Native aenv (Array sh a) -> MIRDelayed Native aenv (Array sh a) -> IRBoundary Native aenv (Array sh b) -> MIRDelayed Native aenv (Array sh b) -> CodeGen Native (IROpenAcc Native aenv (Array sh c)) | |
Persistent Native | |
Embed Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Embed | |
Execute Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Execute map :: Maybe (a :~: b) -> ArrayR (Array sh a) -> TypeR b -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Array sh a -> Par Native (FutureR Native (Array sh b)) generate :: ArrayR (Array sh e) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> sh -> Par Native (FutureR Native (Array sh e)) transform :: ArrayR (Array sh a) -> ArrayR (Array sh' b) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> sh' -> Array sh a -> Par Native (FutureR Native (Array sh' b)) backpermute :: ArrayR (Array sh e) -> ShapeR sh' -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> sh' -> Array sh e -> Par Native (FutureR Native (Array sh' e)) fold :: HasInitialValue -> ArrayR (Array sh e) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Delayed (Array (sh, Int) e) -> Par Native (FutureR Native (Array sh e)) foldSeg :: IntegralType i -> HasInitialValue -> ArrayR (Array (sh, Int) e) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Delayed (Array (sh, Int) e) -> Delayed (Segments i) -> Par Native (FutureR Native (Array (sh, Int) e)) scan :: Direction -> HasInitialValue -> ArrayR (Array (sh, Int) e) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Delayed (Array (sh, Int) e) -> Par Native (FutureR Native (Array (sh, Int) e)) scan' :: Direction -> ArrayR (Array (sh, Int) e) -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Delayed (Array (sh, Int) e) -> Par Native (FutureR Native (Array (sh, Int) e, Array sh e)) permute :: Bool -> ArrayR (Array sh e) -> ShapeR sh' -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Array sh' e -> Delayed (Array sh e) -> Par Native (FutureR Native (Array sh' e)) stencil1 :: TypeR a -> ArrayR (Array sh b) -> sh -> ExecutableR Native -> Gamma aenv -> ValR Native aenv -> Delayed (Array sh a) -> Par Native (FutureR Native (Array sh b)) stencil2 :: TypeR a -> TypeR b -> ArrayR (Array sh c) -> sh -> ExecutableR Native -> Gamma aenv -> 
ValR Native aenv -> Delayed (Array sh a) -> Delayed (Array sh b) -> Par Native (FutureR Native (Array sh c)) aforeign :: String -> ArraysR as -> ArraysR bs -> (as -> Par Native (FutureR Native bs)) -> as -> Par Native (FutureR Native bs) | |
Link Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Link data ExecutableR Native linkForTarget :: ObjectR Native -> LLVM Native (ExecutableR Native) | |
Compile Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Compile data ObjectR Native compileForTarget :: PreOpenAcc DelayedOpenAcc aenv a -> Gamma aenv -> LLVM Native (ObjectR Native) | |
Marshal Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Execute.Marshal type ArgR Native marshalInt :: Int -> ArgR Native marshalScalarData' :: SingleType e -> ScalarArrayData e -> Par Native (DList (ArgR Native)) | |
Foreign Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Foreign foreignAcc :: Foreign asm => asm (a -> b) -> Maybe (a -> Par Native (FutureR Native b)) foreignExp :: Foreign asm => asm (x -> y) -> Maybe (IRFun1 Native () (x -> y)) | |
Intrinsic Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Compile intrinsicForTarget :: HashMap ShortByteString Label | |
Target Native Source # | |
Remote Native | Data instance for arrays in the native backend. We assume a shared-memory machine, and just manipulate the underlying Haskell array directly. |
Defined in Data.Array.Accelerate.LLVM.Native.Array.Data allocateRemote :: ArrayR (Array sh e) -> sh -> Par Native (Array sh e) useRemoteR :: SingleType e -> Int -> ArrayData e -> Par Native (FutureR Native (ArrayData e)) copyToRemoteR :: SingleType e -> Int -> ArrayData e -> Par Native (FutureR Native (ArrayData e)) copyToHostR :: SingleType e -> Int -> ArrayData e -> Par Native (FutureR Native (ArrayData e)) copyToPeerR :: Native -> SingleType e -> Int -> ArrayData e -> Par Native (FutureR Native (ArrayData e)) useRemoteAsync :: ArraysR arrs -> arrs -> Par Native (FutureArraysR Native arrs) copyToRemoteAsync :: ArraysR arrs -> arrs -> Par Native (FutureArraysR Native arrs) copyToHostAsync :: ArraysR arrs -> arrs -> Par Native (FutureArraysR Native arrs) copyToPeerAsync :: Native -> ArraysR arrs -> arrs -> Par Native (FutureArraysR Native arrs) indexRemoteAsync :: TypeR e -> Array sh e -> Int -> Par Native (FutureR Native e) | |
Async Native Source # | |
Defined in Data.Array.Accelerate.LLVM.Native.Execute.Async new :: HasCallStack => Par Native (FutureR Native a) # put :: HasCallStack => FutureR Native a -> a -> Par Native () # get :: HasCallStack => FutureR Native a -> Par Native a # fork :: Par Native () -> Par Native () # liftPar :: HasCallStack => LLVM Native a -> Par Native a # block :: HasCallStack => FutureR Native a -> Par Native a # spawn :: HasCallStack => Par Native a -> Par Native a # newFull :: HasCallStack => a -> Par Native (FutureR Native a) # | |
MonadState Native (Par Native) Source # | |
Monad (Par Native) Source # | |
Functor (Par Native) Source # | |
Applicative (Par Native) Source # | |
MonadIO (Par Native) Source # | |
MonadCont (Par Native) Source # | |
data ExecutableR Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Link | |
data ObjectR Native | |
Defined in Data.Array.Accelerate.LLVM.Native.Compile | |
type ArgR Native | |
type FutureR Native Source # | |
newtype Par Native a Source # | |
data KernelMetadata Native | |