cuda-0.10.0.0: FFI binding to the CUDA interface for programming NVIDIA GPUs

Copyright: [2009..2018] Trevor L. McDonell
License: BSD
Safe Haskell: None
Language: Haskell98

Foreign.CUDA.Driver.Stream

Description

Stream management for low-level driver interface

Stream Management

newtype Stream

A processing stream. All operations in a stream are executed in sequence (synchronously with respect to one another), but operations in different non-default streams may happen out-of-order or concurrently with one another.

Use Events to synchronise operations between streams; a short sketch of this pattern follows the instance list below.

Constructors

Stream

Instances

Eq Stream (defined in Foreign.CUDA.Driver.Stream)

Show Stream (defined in Foreign.CUDA.Driver.Stream)
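
As an illustration of the event-based pattern mentioned above, the following sketch orders work between two streams. It assumes the Event API exported from Foreign.CUDA.Driver.Event (create, record, wait, destroy); the empty flag lists are placeholders for the defaults.

import Foreign.CUDA.Driver.Stream
import qualified Foreign.CUDA.Driver.Event as Event

-- Make work queued on 'consumer' (after this call) wait until 'producer'
-- has reached the recorded point. Both streams belong to the same context.
syncStreams :: Stream -> Stream -> IO ()
syncStreams producer consumer = do
  done <- Event.create []                 -- default event flags
  -- ... enqueue producer work here ...
  Event.record done (Just producer)       -- mark the point to synchronise on
  Event.wait   done (Just consumer) []    -- consumer stalls until 'done' fires
  -- ... enqueue dependent consumer work here ...
  Event.destroy done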

type StreamPriority = Int

Priority of an execution stream. Work submitted to a higher priority stream may preempt execution of work already executing in a lower priority stream. Lower numbers represent higher priorities.

type StreamCallback = FunPtr (Ptr () -> CInt -> Ptr () -> IO ())

The type of the function executed by callback. Its arguments are the handle of the stream the callback was added to, the error status of the work that preceded it, and the user-supplied data pointer.

createWithPriority :: StreamPriority -> [StreamFlag] -> IO Stream

Create a stream with the given priority. Work submitted to a higher-priority stream may preempt work already executing in a lower priority stream.

The convention is that lower numbers represent higher priorities. The default priority is zero. The range of meaningful numeric priorities can be queried using getStreamPriorityRange. If the specified priority is outside the supported numerical range, it will automatically be clamped to the highest or lowest number in the range.

Requires CUDA-5.5.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__STREAM.html#group__CUDA__STREAM_1g95c1a8c7c3dacb13091692dd9c7f7471
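
For example, a minimal sketch of creating a pair of streams at different priorities. The flag lists are left empty, and the example relies on clamping rather than querying the exact range first (getStreamPriorityRange is not part of this module).

import Foreign.CUDA.Driver.Stream

-- Lower numbers mean higher priority, and out-of-range values are clamped,
-- so (-1) requests a higher-than-default priority where one is available.
mkStreams :: IO (Stream, Stream)
mkStreams = do
  urgent     <- createWithPriority (-1) []   -- clamped to a supported priority
  background <- createWithPriority 0    []   -- the default priority
  return (urgent, background)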

destroy :: Stream -> IO ()

Destroy a stream. If the device is still doing work in the stream when destroy is called, the function returns immediately and the resources associated with the stream will be released automatically once the device has completed all work.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__STREAM.html#group__CUDA__STREAM_1g244c8833de4596bcd31a06cdf21ee758
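
A common pattern (not provided by this module) is to pair creation and destruction with bracket, so the stream is released even if the enclosed action throws:

import Control.Exception (bracket)
import Foreign.CUDA.Driver.Stream

-- Run an action with a freshly created stream of the given priority,
-- destroying the stream afterwards in all cases.
withStream :: StreamPriority -> (Stream -> IO a) -> IO a
withStream p = bracket (createWithPriority p []) destroy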

callback :: Stream -> StreamCallback -> Ptr () -> [StreamCallbackFlag] -> IO ()

Add a callback to a compute stream. The callback will be executed on the host after all items currently queued in the stream have completed.

https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__STREAM.html#group__CUDA__STREAM_1g613d97a277d7640f4cb1c03bd51c2483

Since: 0.10.0.0
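
Since StreamCallback is a FunPtr, a Haskell function must be wrapped via a "wrapper" foreign import before it can be registered. A minimal sketch follows; the empty flag list and nullPtr user data are placeholders, and the reading of the three arguments as stream handle, status and user data follows the driver's callback signature.

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CInt)
import Foreign.Ptr     (Ptr, nullPtr)
import Foreign.CUDA.Driver.Stream

-- Wrap a Haskell function as a C-callable callback of the expected shape.
foreign import ccall "wrapper"
  mkStreamCallback :: (Ptr () -> CInt -> Ptr () -> IO ()) -> IO StreamCallback

-- Print a message once everything currently queued on the stream has run.
-- Run with the threaded RTS; free 'fp' with freeHaskellFunPtr once it has
-- fired. Note that CUDA forbids calling back into the CUDA API here.
notifyWhenDrained :: Stream -> IO ()
notifyWhenDrained st = do
  fp <- mkStreamCallback $ \_stream status _userData ->
          putStrLn ("stream drained, status " ++ show status)
  callback st fp nullPtr []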

write :: Storable a => DevicePtr a -> a -> Stream -> [StreamWriteFlag] -> IO ()

Write a value to memory (presumably after all preceding work in the stream has completed). Unless the option WriteValueNoMemoryBarrier is supplied, the write is preceded by a system-wide memory fence.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__EVENT.html#group__CUDA__EVENT_1g091455366d56dc2f1f69726aafa369b0

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__EVENT.html#group__CUDA__EVENT_1gc8af1e8b96d7561840affd5217dd6830

Requires CUDA-8.0 for 32-bit values.

Requires CUDA-9.0 for 64-bit values.
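
For instance, a sketch of publishing a completion flag from a stream; mallocArray is assumed from Foreign.CUDA.Driver.Marshal, and the empty flag list keeps the default memory fence.

import Data.Word (Word32)
import Foreign.CUDA.Ptr (DevicePtr)
import Foreign.CUDA.Driver.Stream
import Foreign.CUDA.Driver.Marshal (mallocArray)

-- Allocate a one-word flag on the device and enqueue a write of 1 to it,
-- to take effect once the work already queued on 'st' has finished.
signalDone :: Stream -> IO (DevicePtr Word32)
signalDone st = do
  flag <- mallocArray 1
  write flag 1 st []
  return flag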

wait :: Storable a => DevicePtr a -> a -> Stream -> [StreamWaitFlag] -> IO ()

Wait on a memory location. Work ordered after the operation will block until the given condition on the memory is satisfied.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__EVENT.html#group__CUDA__EVENT_1g629856339de7bc6606047385addbb398

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__EVENT.html#group__CUDA__EVENT_1g6910c1258c5f15aa5d699f0fd60d6933

Requires CUDA-8.0 for 32-bit values.

Requires CUDA-9.0 for 64-bit values.
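
Continuing the flag sketch above, the consuming side might look as follows; with an empty flag list the driver's default comparison applies (the location must reach a value greater than or equal to the one given).

import Data.Word (Word32)
import Foreign.CUDA.Ptr (DevicePtr)
import Foreign.CUDA.Driver.Stream

-- Stall work queued on 'consumer' after this point until the device-side
-- flag (written elsewhere) reaches 1.
awaitFlag :: DevicePtr Word32 -> Stream -> IO ()
awaitFlag flag consumer = wait flag 1 consumer []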

defaultStreamLegacy :: Stream

The legacy default stream is an implicit stream which synchronises with all other streams in the same Context, except for non-blocking streams.

https://docs.nvidia.com/cuda/cuda-driver-api/stream-sync-behavior.html#stream-sync-behavior__default-stream

Since: 0.10.0.0

defaultStreamPerThread :: Stream

The per-thread default stream is an implicit stream local to both the thread and the calling Context, and which does not synchronise with other streams (just like explicitly created streams). The per-thread default stream is not a non-blocking stream and will synchronise with the legacy default stream if both are used in the same program.

https://docs.nvidia.com/cuda/cuda-driver-api/stream-sync-behavior.html#stream-sync-behavior__default-stream

Since: 0.10.0.0
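
Both default streams can be passed anywhere an explicit Stream is expected. For example, a sketch that reuses the callback wrapper from the callback example above to observe when the work queued so far by the current thread has drained (mkStreamCallback is the assumed helper from that sketch).

import Foreign.Ptr (nullPtr)
import Foreign.CUDA.Driver.Stream

-- 'fp' is a StreamCallback built with a "wrapper" foreign import, as in the
-- callback example above; it fires once this thread's queued work completes.
notifyThisThreadDrained :: StreamCallback -> IO ()
notifyThisThreadDrained fp = callback defaultStreamPerThread fp nullPtr []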