massiv-scheduler-0.1.1.0: Work-stealing scheduler for Massiv (Массив, Russian for "array") and other parallel applications.

Copyright: (c) Alexey Kuleshevich 2018-2019
License: BSD3
Maintainer: Alexey Kuleshevich <lehins@yandex.ru>
Stability: experimental
Portability: non-portable
Safe Haskell: None
Language: Haskell2010

Control.Massiv.Scheduler

Scheduler and strategies

data Comp Source #

Computation strategy to use when scheduling work.

Constructors

Seq

Sequential computation

ParOn ![Int]

Schedule workers to run on the specified capabilities. Specifying an empty list (ParOn []) or using Par will result in all available capabilities being utilized.

ParN !Word16

Specify the number of workers that will be handling all the jobs. The difference from ParOn is that workers are not pinned and can migrate between cores. Using ParN 0 will result in using all available capabilities.

Bundled Patterns

pattern Par :: Comp

Parallel computation using all available cores. Same as ParOn [].
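
A minimal sketch of how these strategies are typically constructed and passed to withScheduler (documented below); the trivial pure jobs exist only to keep the output deterministic, and the parallel results are summed so that the order in which they are collected does not matter:

>>> withScheduler Seq $ \ s -> scheduleWork s (pure (1 :: Int))
[1]
>>> sum <$> withScheduler (ParN 4) (\ s -> mapM_ (scheduleWork s . pure) [1 :: Int .. 10])
55
>>> sum <$> withScheduler (ParOn [0,1]) (\ s -> mapM_ (scheduleWork s . pure) [1 :: Int .. 10])
55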

Instances
Eq Comp Source # 
Instance details

Defined in Control.Massiv.Scheduler.Computation

Methods

(==) :: Comp -> Comp -> Bool #

(/=) :: Comp -> Comp -> Bool #

Show Comp Source # 
Instance details

Defined in Control.Massiv.Scheduler.Computation

Methods

showsPrec :: Int -> Comp -> ShowS #

show :: Comp -> String #

showList :: [Comp] -> ShowS #

Semigroup Comp Source # 
Instance details

Defined in Control.Massiv.Scheduler.Computation

Methods

(<>) :: Comp -> Comp -> Comp #

sconcat :: NonEmpty Comp -> Comp #

stimes :: Integral b => b -> Comp -> Comp #

Monoid Comp Source # 
Instance details

Defined in Control.Massiv.Scheduler.Computation

Methods

mempty :: Comp #

mappend :: Comp -> Comp -> Comp #

mconcat :: [Comp] -> Comp #

NFData Comp Source # 
Instance details

Defined in Control.Massiv.Scheduler.Computation

Methods

rnf :: Comp -> () #

data Scheduler m a Source #

Main type for scheduling work. See withScheduler or withScheduler_ for the only ways to get access to a value of this type.

Since: 0.1.0

Constructors

Scheduler 

Fields

  • numWorkers :: !Int

    Get the number of workers.

  • scheduleWork :: m a -> m ()

    Schedule an action to be picked up and computed by a worker from a pool.
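
A quick sketch of how both fields are normally used inside the callback given to withScheduler, assuming (per the ParN documentation above) that ParN 3 yields exactly three workers:

>>> withScheduler (ParN 3) $ \ s -> scheduleWork s (pure (numWorkers s))
[3]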

Initialize Scheduler

withScheduler Source #

Arguments

:: MonadUnliftIO m 
=> Comp

Computation strategy

-> (Scheduler m a -> m b)

Action that schedules all the work.

-> m [a] 

Initialize a scheduler and submit jobs that will be computed sequentially or in parallel, as determined by the computation strategy.

Here are some important properties of withScheduler:

  • This function will block until all of the submitted jobs have finished, or until at least one of them has resulted in an exception, which will be re-thrown at the call site.
  • It is perfectly fine for nested jobs to submit more work to the same or to another scheduler.
  • It is fine to initialize multiple schedulers at the same time, although that will likely result in suboptimal performance, unless workers are pinned to different capabilities.
  • Warning: It is very dangerous to schedule jobs that perform blocking IO, since it can quickly lead to a deadlock if you are not careful. Consider the example below: the first execution works fine, since there are two scheduled workers and one can unblock the other, but the second scenario immediately results in a deadlock.
>>> withScheduler (ParOn [1,2]) $ \s -> newEmptyMVar >>= (\ mv -> scheduleWork s (readMVar mv) >> scheduleWork s (putMVar mv ()))
[(),()]
>>> import System.Timeout
>>> timeout 1000000 $ withScheduler (ParOn [1]) $ \s -> newEmptyMVar >>= (\ mv -> scheduleWork s (readMVar mv) >> scheduleWork s (putMVar mv ()))
Nothing

Important: In order to get work done truly in parallel, the program needs to be compiled with the -threaded GHC flag and executed with +RTS -N -RTS so that all available cores can be used.
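
A basic usage sketch; the results are sorted so that the expected output does not depend on the order in which they happen to be collected:

>>> import Data.List (sort)
>>> sort <$> withScheduler Par (\ s -> mapM_ (\ i -> scheduleWork s (pure (i * i))) [1 :: Int .. 5])
[1,4,9,16,25]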

Since: 0.1.0

withScheduler_ Source #

Arguments

:: MonadUnliftIO m 
=> Comp

Computation strategy

-> (Scheduler m a -> m b)

Action that schedules all the work.

-> m () 

Same as withScheduler, but discards results of submitted jobs.
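
A minimal sketch of the side-effect-only use case, relying only on Data.IORef from base; atomicModifyIORef' keeps the concurrent updates consistent:

>>> import Data.IORef
>>> ref <- newIORef (0 :: Int)
>>> withScheduler_ (ParN 2) $ \ s -> mapM_ (\ i -> scheduleWork s (atomicModifyIORef' ref (\ acc -> (acc + i, ())))) [1 .. 100]
>>> readIORef ref
5050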

Since: 0.1.0

Helper functions

fromWorkerAsyncException :: Exception e => SomeException -> Maybe e Source #

Deprecated: In favor of asyncExceptionFromException

If any one of the workers dies with an exception, even an asynchronous one, it will be re-thrown in the main thread.

>>> let didAWorkerDie = handleJust asyncExceptionFromException (return . (== ThreadKilled)) . fmap or
>>> :t didAWorkerDie
didAWorkerDie :: Foldable t => IO (t Bool) -> IO Bool
>>> didAWorkerDie $ withScheduler Par $ \ s -> scheduleWork s $ pure False
False
>>> didAWorkerDie $ withScheduler Par $ \ s -> scheduleWork s $ myThreadId >>= killThread >> pure False
True
>>> withScheduler Par $ \ s -> scheduleWork s $ myThreadId >>= killThread >> pure False
*** Exception: thread killed

Since: 0.1.0

traverseConcurrently :: (MonadUnliftIO m, Traversable t) => Comp -> (a -> m b) -> t a -> m (t b) Source #

Map an action over each element of the Traversable t according to the supplied computation strategy.
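
For example (a sketch with an arbitrary pure doubling function; the shape of the Traversable, and with it the position of each result, is preserved):

>>> traverseConcurrently Par (\ i -> pure (i * 2)) [1 :: Int .. 5]
[2,4,6,8,10]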

Since: 0.1.0

traverseConcurrently_ :: (MonadUnliftIO m, Foldable t) => Comp -> (a -> m b) -> t a -> m () Source #

Just like traverseConcurrently, but discards the results of the computation.
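
A small sketch, again using an IORef counter with atomic updates so that the concurrent effects stay consistent:

>>> import Data.IORef
>>> counter <- newIORef (0 :: Int)
>>> traverseConcurrently_ (ParN 4) (\ _ -> atomicModifyIORef' counter (\ n -> (n + 1, ()))) [1 :: Int .. 20]
>>> readIORef counter
20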

Since: 0.1.0

traverse_ :: (Applicative f, Foldable t) => (a -> f ()) -> t a -> f () Source #

This is generally a faster way to traverse a structure while ignoring the results than using mapM_.
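
A quick usage sketch:

>>> traverse_ print [1 :: Int, 2, 3]
1
2
3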

Since: 0.1.0