streamly-core internal API notes. (c) Composewell Technologies, BSD-3-Clause, streamly@composewell.com, experimental.

Assertions: like assert, but not removed by the compiler; it is always present in production code. Pre-release.

Thread forking: a version of forkIO, adapted from the async package, that does not install the outer exception handler; this saves a bit of time when we install our own exception handler. The performance improvement is modest, about 2% on a thread-heavy benchmark (parallel composition using no-op computations). Also provided are combinators to fork a thread that is automatically killed as soon as the reference to the returned ThreadId is garbage collected.

Effects: discard any exceptions or value returned by an effectful action. Pre-release.

Stateful function composition: a simple stateful, composable monad that chains state-passing functions. It can be considered a simplified version of the State monad, or even of a Fold; unlike a fold, the step function is one-shot and is not called in a loop. Operations are provided to chain the actions and zip the outputs, and to map a function on the output (the type b).

Strict Either (Either', with Left' and Right' constructors):
- Return True if the given value is a Left', False otherwise.
- Return True if the given value is a Right', False otherwise.
- Return the contents of a Left' value, or error out.
- Return the contents of a Right' value, or error out.

Fold step result (Step): represents the result of one step of a Fold. Partial returns an intermediate state of the fold; the fold step can be called again with that state, or the driver can use extract on the state to get the result out. Done returns the final result, and the fold cannot be driven further. Pre-release.
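To make the Partial/Done protocol concrete, here is a minimal, self-contained sketch of a terminating fold step in the spirit described above. The types and names (sumN, drive) are simplified assumptions for illustration, not the exact internal definitions:

-- Simplified step result: either continue with a state or finish with a result.
data Step s b = Partial !s | Done !b

-- A pure "take n and sum" fold step expressed with this step type.
sumN :: Int -> (Int, Int) -> Int -> Step (Int, Int) Int
sumN n (seen, acc) x
    | seen + 1 >= n = Done (acc + x)
    | otherwise     = Partial (seen + 1, acc + x)

-- A tiny driver over a list; when the input runs out it "extracts" the state.
drive :: Int -> [Int] -> Int
drive n = go (0, 0)
  where
    go (_, acc) [] = acc                  -- input exhausted: extract the state
    go s (x:xs) =
        case sumN n s x of
            Done b     -> b               -- fold terminated early
            Partial s' -> go s' xs

-- >>> drive 3 [1..10]
-- 6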
Step helpers:
- Map a monadic function over the result b in a Step s b. Internal.
- If the step result is Partial, map the function over the state; if it is Done, call the next step.
- One mapping operation maps over the result only (fmap is defined in terms of it); a bimap-like operation maps its first function over the fold state and its second function over the fold result.

Strict Maybe (Maybe', with Just' and Nothing' constructors):
- A strict Maybe.
- Convert a strict Maybe' to a lazy Maybe.
- Extract the element out of a Just', throwing an error if the argument is Nothing'.
- Return True iff the argument is of the form Just' _.

Finalizers: a finalizer has an associated IO action that is called automatically when the finalizer is garbage collected. The action can also be run and cleared prematurely. You can hold a reference to the finalizer in your data structure; if the data structure gets garbage collected, the finalizer will be called. It is implemented using a weak reference to an IORef. Pre-release.
- A GC hook runs an IO action stored in a finalized IORef.
- newFinalizer creates a finalizer that calls the supplied function automatically when the finalizer is garbage collected. The finalizer is always run using the state of the monad captured at the time of calling newFinalizer. Note: to run it on garbage collection we have no option but to use the monad state captured at some earlier point in time; when the finalizer is run manually before GC we could run it with the current state of the monad, but we keep both cases consistent. Pre-release.
- Running the finalizer executes the associated action and deactivates it so that it never runs again. The finalizing action runs with asynchronous exceptions masked. If this function is called multiple times, the action is guaranteed to run once and only once. Pre-release.
- Another operation runs an action while clearing the finalizer atomically with respect to async exceptions; the action runs with async exceptions masked. It can be called at most once after setting the finalizer; calling it when the finalizer is not set is considered a bug. Pre-release.
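A minimal, self-contained sketch of the finalizer idea described above, using only base. The names IOFinalizer, newIOFinalizer and runIOFinalizer are illustrative here, not necessarily the exact internal API:

import Data.IORef (IORef, newIORef, atomicModifyIORef, mkWeakIORef)
import Control.Exception (mask_)

-- The IORef holds the pending action, or Nothing once it has run.
newtype IOFinalizer = IOFinalizer (IORef (Maybe (IO ())))

-- Create a finalizer; the action also runs when the IORef is GC'ed.
newIOFinalizer :: IO () -> IO IOFinalizer
newIOFinalizer action = do
    ref <- newIORef (Just action)
    _ <- mkWeakIORef ref (runRef ref)   -- GC hook
    return (IOFinalizer ref)

-- Run the pending action exactly once, with async exceptions masked.
runRef :: IORef (Maybe (IO ())) -> IO ()
runRef ref = mask_ $ do
    pending <- atomicModifyIORef ref (\m -> (Nothing, m))
    case pending of
        Just act -> act
        Nothing  -> return ()

-- Run the finalizer prematurely and deactivate it.
runIOFinalizer :: IOFinalizer -> IO ()
runIOFinalizer (IOFinalizer ref) = runRef ref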
MutByteArray: a lifted mutable byte array type wrapping MutableByteArray# RealWorld. This is a low-level array used to back high-level unboxed arrays and serialized data.
- Return the size of the array in bytes.
- Use a MutByteArray as a Ptr a. This is useful when we want to pass an array as a pointer to an operating system call or to a "safe" FFI call. If the array is not pinned, it is copied to pinned memory before passing it to the monadic action. Performance note: this forces a copy if the array is not pinned; it is advisable to create a pinned array up front when such calls are anticipated, to avoid the cost of a copy. Unsafe because of direct pointer operations: the user must ensure that they write only within the legal bounds of the array. Pre-release.
- A variant for use with unsafe FFI functions; it does not force-pin the array memory.
- Put a subrange of a source array into a subrange of a destination array. This is unsafe: it checks the bounds of neither the source nor the destination array.
- Unsafe: does not check whether the supplied start offset and length are valid inside the array.
- cloneSliceUnsafe offset len arr clones a slice of the supplied array starting at the given offset and of the given length.
- pinnedCloneSliceUnsafe offset len arr does the same, allocating the clone in pinned memory.
- Return True if the array is allocated in pinned memory.
- Return a copy of the array in pinned memory if it is unpinned, else return the original array.
- Return a copy of the array in unpinned memory if it is pinned, else return the original array.

Refold: like Fold, except that the initial state of the accumulator can be generated from a dynamically supplied input. This affords better stream-fusion optimization in nested fold operations where the initial fold state is determined by a dynamic value. Internal.
- The constructor takes step, inject and extract functions.
- Make a consumer from a left-fold-style pure step function. If your fold returns only Partial (i.e. never returns a Done) then you can use the foldl' constructors. See also Streamly.Data.Fold.foldl'. Internal.
- lmapM f fold maps the monadic function f on the input of the fold. Internal.
- Map a monadic function on the output of a fold. Internal.
- sconcat appends the elements of an input stream to a provided starting value:

  >>> stream = fmap Data.Monoid.Sum $ Stream.enumerateFromTo 1 10
  >>> Stream.fold (Fold.fromRefold Refold.sconcat 10) stream
  Sum {getSum = 65}

  sconcat = Refold.foldl' (<>). Internal.
- Supply the output of the first consumer as input to the second consumer. Internal.
- Keep running the same consumer over and over again on the input, feeding the output of the previous run to the next. Internal.
- Take at most n input elements and fold them using the supplied fold; a negative count is treated as 0. Internal.

Stream steps: a stream is a succession of Steps. A Yield produces a single value and the next state of the stream; Stop indicates there are no more values in the stream.
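A minimal, self-contained sketch of the Yield/Stop step protocol just described, using a simplified step type (illustrative only; the real type carries additional constructors and fusion-friendly internals):

{-# LANGUAGE ExistentialQuantification #-}

-- One step of a direct-style stream: emit a value and a new state, or stop.
data Step s a = Yield a s | Stop

-- A stream is a step function plus a current state.
data Stream a = forall s. Stream (s -> Step s a) s

-- Unfold a countdown stream from an Int seed.
countdown :: Int -> Stream Int
countdown = Stream step
  where
    step n
        | n <= 0    = Stop
        | otherwise = Yield n (n - 1)

-- Drain a stream to a list by repeatedly applying the step function.
toList :: Stream a -> [a]
toList (Stream step s0) = go s0
  where
    go s = case step s of
        Yield x s' -> x : go s'
        Stop       -> []

-- >>> toList (countdown 3)
-- [3,2,1]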
Producers: a state type represents a nested loop. A Producer m a b is a generator of a stream of values of type b from a seed of type a in m. Pre-release.
- Constructor: Producer step inject extract.
- Convert a list of pure values to a stream. Pre-release.
- Interconvert the producer between two interconvertible input (seed) types. Pre-release.
- Map the producer input to another value of the same type. Pre-release.
- Map a function on the output of the producer (the type b). Pre-release.
- Apply the second producer to each output element of the first producer and flatten the output into a single stream. Pre-release.

Time representation: a data type to represent practically large quantities of time efficiently; it can represent time up to about 292 billion years at nanosecond resolution. Fields: seconds and nanoseconds.

Strict tuples: a strict (,,,); strict (,,) variants; a strict (,).

Pipes: a Pipe represents a stateful transformation over an input stream of values of type a to outputs of type b in m.
- The composed pipe distributes the input to both constituent pipes and zips their outputs using a supplied zipping function.
- The composed pipe distributes the input to both constituent pipes and merges their outputs.
- Lift a pure function to a Pipe.
- Lift a monadic function to a Pipe.
- Compose two pipes such that the output of the second pipe is attached to the input of the first pipe.
Serialization of unboxed types (Unbox): helpers are provided to implement sizeOf on the generic representation of an ADT and to chain peek functions that pass the current position to the next function. A bounded-pointer type describes a location inside a mutable byte array along with the bound of the array (it is an open question whether it is cheaper to recompute the bound from the array size whenever needed).

The Unbox type class provides operations for serialization (unboxing) and deserialization (boxing) of fixed-length, non-recursive Haskell data types to and from their byte stream representation.

Unbox uses fixed-size encoding: the size is independent of the value and must be determined solely by the type. This restriction makes types with Unbox instances suitable for storing in arrays. Note that a sum type may have multiple constructors of different sizes; the size of a sum type is computed as the maximum required by any constructor.

The peek operation reads as many bytes from the mutable byte array as the size of the data type and builds a Haskell value from those bytes. The poke operation converts a Haskell value to its binary representation, consisting of size bytes, and stores those bytes into the mutable byte array. These operations do not check the bounds of the array; the user of the type class is expected to check the bounds before peeking or poking.

IMPORTANT: the serialized data's byte ordering remains the host machine's byte order; therefore it cannot be deserialized on host machines with a different byte ordering.

Instances can be derived via Generics, Template Haskell, or written manually. The data type must be non-recursive. WARNING: both Generic and Template Haskell deriving hang for recursive data types. Deriving via Generics is more convenient, but Template Haskell should be preferred for the following reasons: instances derived via Template Haskell provide better and more reliable performance, and Generic deriving allows only 256 fields or constructor tags whereas Template Haskell has no such limit.

Deriving an instance of this type class using generics:

  >>> import GHC.Generics (Generic)
  >>> :{
  data Object = Object
      { _int0 :: Int
      , _int1 :: Int
      } deriving Generic
  :}
  >>> import Streamly.Data.MutByteArray (Unbox(..))
  >>> instance Unbox Object

To derive the instance via Template Haskell:

  import Streamly.Data.MutByteArray (deriveUnbox)
  $(deriveUnbox [d|instance Unbox Object|])

See the Template Haskell deriving section below for more information. If you want to write the instance manually:

  >>> :{
  instance Unbox Object where
      sizeOf _ = 16
      peekAt i arr = do
          -- Check the array bounds
          x0 <- peekAt i arr
          x1 <- peekAt (i + 8) arr
          return $ Object x0 x1
      pokeAt i arr (Object x0 x1) = do
          -- Check the array bounds
          pokeAt i arr x0
          pokeAt (i + 8) arr x1
  :}

Class methods:
- sizeOf: get the size; it cannot be zero and should be at least 1 byte.
- peekAt byte-offset array reads an element of type a from the given byte offset in the array. IMPORTANT: the implementation of this interface may not check the bounds of the array; the caller must not assume that it does.
- pokeAt byte-offset array writes an element of type a to the given byte offset in the array. IMPORTANT: the implementation of this interface may not check the bounds of the array; the caller must not assume that it does.

Time units: relative times are relative to some arbitrary point of time; unlike absolute times they are not relative to a predefined epoch. Absolute times are relative to a predefined epoch; they are represented using the seconds/nanoseconds time type above, which can represent times up to ~292 billion years at a nanosecond resolution.

Three type classes are provided for converting between units of time; in all of them, converting to and from units may truncate the value depending on the original value and the size and resolution of the destination unit:
- One uses a 64-bit nanosecond value as the intermediate representation: it can represent up to ~292 years at nanosecond resolution with fast arithmetic operations.
- One uses Integer as the intermediate and widest representation with a nanosecond resolution: it can represent arbitrarily large times but provides the least efficient arithmetic operations, due to Integer arithmetic.
- One uses the seconds/nanoseconds time type as the intermediate representation: it can represent up to ~292 billion years at nanosecond resolution with reasonably efficient arithmetic operations.

Fixed-size time units:
- A 64-bit time representation with a millisecond resolution; it can represent time up to ~292 million years.
- A 64-bit time representation with a microsecond resolution; it can represent time up to ~292,000 years.
- A 64-bit time representation with a nanosecond resolution; it can represent time up to ~292 years.
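To illustrate the fixed-size unit scheme above, here is a small, self-contained sketch of a nanosecond/millisecond pair of units with a conversion class. The names and class shape are illustrative assumptions, not necessarily the exact streamly-core API:

import Data.Int (Int64)

-- 64-bit counts of nanoseconds and milliseconds.
newtype NanoSecond64  = NanoSecond64  Int64 deriving (Show, Eq, Ord)
newtype MilliSecond64 = MilliSecond64 Int64 deriving (Show, Eq, Ord)

-- Convert to and from a common nanosecond representation.
-- Conversions may truncate, as noted above.
class TimeUnit64 a where
    toNanoSecond64   :: a -> NanoSecond64
    fromNanoSecond64 :: NanoSecond64 -> a

instance TimeUnit64 NanoSecond64 where
    toNanoSecond64   = id
    fromNanoSecond64 = id

instance TimeUnit64 MilliSecond64 where
    toNanoSecond64 (MilliSecond64 ms) = NanoSecond64 (ms * 1000000)
    fromNanoSecond64 (NanoSecond64 ns) = MilliSecond64 (ns `div` 1000000)

-- >>> toNanoSecond64 (MilliSecond64 5)
-- NanoSecond64 5000000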
Conversions are provided to convert a time unit to an absolute time and back, to convert a time unit to a relative time and back, to take the difference between two absolute points of time, and to render a nanosecond value as a string showing the time in an appropriate unit.

Clocks: a clock may be system-wide (visible to all processes) or per-process (measuring time that is meaningful only within a process). All implementations shall support CLOCK_REALTIME. (The only suspend-aware monotonic clock is CLOCK_BOOTTIME on Linux.)
- Monotonic: the identifier for the system-wide monotonic clock, defined as a clock measuring real time whose value cannot be set via clock_settime and which cannot have negative clock jumps; the maximum possible clock jump is implementation defined. The value returned represents the amount of time (in seconds and nanoseconds) since an unspecified point in the past (for example, system start-up time, or the Epoch); this point does not change after system start-up. Note that the absolute value of the monotonic clock is meaningless (its origin is arbitrary), so there is no need to set it; realtime applications can rely on the fact that its value is never set.
- Realtime: the identifier of the system-wide clock measuring real time. The value returned represents the amount of time (in seconds and nanoseconds) since the Epoch.
- ProcessCPUTime: the identifier of the CPU-time clock associated with the calling process. The value returned represents the amount of execution time of the current process.
- ThreadCPUTime: the identifier of the CPU-time clock associated with the calling OS thread. The value returned represents the amount of execution time of the current OS thread.
- MonotonicRaw (since Linux 2.6.28; Linux and macOS): similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments or the incremental adjustments performed by adjtime(3).
- MonotonicCoarse (since Linux 2.6.32; Linux and macOS): a faster but less precise version of CLOCK_MONOTONIC. Use when you need very fast, but not fine-grained timestamps.
- Uptime (since Linux 2.6.39; Linux and macOS): identical to CLOCK_MONOTONIC, except that it also includes any time that the system is suspended. This allows applications to get a suspend-aware monotonic clock without dealing with the complications of CLOCK_REALTIME, which may have discontinuities if the time is changed using settimeofday(2).
- RealtimeCoarse (since Linux 2.6.32; Linux-specific): a faster but less precise version of CLOCK_REALTIME. Use when you need very fast, but not fine-grained timestamps.
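As a small usage illustration of a monotonic clock (using only base's GHC.Clock here rather than the streamly-core clock API, to keep the example self-contained):

import GHC.Clock (getMonotonicTimeNSec)
import Control.Concurrent (threadDelay)

-- Measure elapsed wall-clock time of an action using the monotonic clock.
-- The monotonic clock never jumps backwards, so the difference is reliable.
timeIt :: IO a -> IO (a, Double)
timeIt action = do
    t0 <- getMonotonicTimeNSec
    r  <- action
    t1 <- getMonotonicTimeNSec
    let elapsedSec = fromIntegral (t1 - t0) / 1e9
    return (r, elapsedSec)

main :: IO ()
main = do
    (_, secs) <- timeIt (threadDelay 100000)   -- sleep ~100 ms
    putStrLn ("elapsed: " ++ show secs ++ "s")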
Rate control: specifies the stream yield rate in yields per second (Hertz). Yield credits accumulate at the target rate; at any point in time only as many yields are allowed as have accumulated since the start. If the consumer or the producer is slower or faster, the actual rate may fall behind or exceed the target rate; we try to recover the gap between the two by increasing or decreasing the pull rate from the producer. However, if the gap becomes larger than the specified buffer, we try to recover only as much as the buffer. The lower rate limit bounds how low the instantaneous rate can go when recovering the gap; in other words, it determines the maximum yield latency. Similarly, the upper rate limit bounds how high the instantaneous rate can go when recovering the gap, i.e. it determines the minimum yield latency; we reduce latency by increasing concurrency, so this effectively puts an upper bound on concurrency. If the target rate is 0 or negative, the stream never yields a value; if the buffer is 0 or negative, we do not attempt to recover. Since: 0.5.0 (Streamly).

Fields: the lower rate limit, the target rate we want to achieve, the upper rate limit, and the maximum slack (buffer) from the goal.

SVar: buffering policy for persistent push workers (in ParallelT). In a pull-style SVar (AsyncT, AheadT, etc.) the consumer side dispatches workers on demand; workers terminate if the buffer is full or if the consumer is not consuming fast enough. In a push-style SVar a worker is dispatched only once; workers are persistent and keep pushing work to the consumer via a bounded buffer. If the buffer becomes full, the worker either blocks, or it can drop an item from the buffer to make space. Pull-style SVars are useful in lazy stream evaluation, whereas push-style SVars are useful in strict left folds. (XXX: maybe the two implementations could be separated into two different types instead of using a common SVar type.)

An SVar, or Stream Var, is a conduit to the output from multiple streams running concurrently and asynchronously. An SVar can be thought of as an asynchronous IO handle: we can write any number of streams to an SVar in a non-blocking manner and then read them back at any time, at any pace. The SVar runs the streams asynchronously and accumulates results. An SVar may not really execute a stream completely and accumulate all the results; however, it ensures that the reader can read the results at whatever pace it wants, and it monitors and adapts to the consumer's pace.

An SVar is a mini scheduler: it has an associated workLoop holding the stream tasks to be picked up and run by a pool of worker threads, and an associated output queue where the worker threads place output stream elements. An outputDoorBell is used by the worker threads to inform the consumer thread that new results are available in the output queue. More workers are added to the SVar on demand if the output produced is not keeping pace with the consumer. On bounded SVars, workers block on the output queue to throttle the producer when the consumer is not pulling fast enough; the number of workers may even get reduced depending on the consuming pace. New work is enqueued either at the time of creation of the SVar or as a result of executing the parallel combinators, i.e. <| and <|>, when the already enqueued computations get evaluated (see joinStreamVarAsync).

Related definitions identify the type (style) of an SVar (two computations using the same style can be scheduled on the same SVar), sort out-of-turn outputs in a heap for Ahead-style streams, and describe the events that a child thread may send to a parent thread.
An operation is provided to adapt the stream state from one type to another.

StreamK (CPS streams): a newtype wrapper for the StreamK type adds a cross-product-style monad instance. A bind behaves like a for loop:

  >>> :{
  Stream.fold Fold.toList $ StreamK.toStream $ StreamK.unCross $ do
      x <- StreamK.mkCross $ StreamK.fromStream $ Stream.fromList [1,2]
      -- Perform the following actions for each x in the stream
      return x
  :}
  [1,2]

Nested monad binds behave like nested for loops:

  >>> :{
  Stream.fold Fold.toList $ StreamK.toStream $ StreamK.unCross $ do
      x <- StreamK.mkCross $ StreamK.fromStream $ Stream.fromList [1,2]
      y <- StreamK.mkCross $ StreamK.fromStream $ Stream.fromList [3,4]
      -- Perform the following actions for each x, for each y
      return (x, y)
  :}
  [(1,3),(1,4),(2,3),(2,4)]

A monadic continuation is a function that yields a value of type a and calls its argument (a -> m r) as a continuation with that value. We can also think of it as a callback with a handler (a -> m r); category theorists call it a codensity type, a special type of right Kan extension. A terminal function has no continuation to follow.

StreamK is the continuation-passing-style (CPS) version of Streamly.Data.Stream.Stream. Unlike Streamly.Data.Stream.Stream, StreamK can be composed recursively without affecting performance. The Semigroup instance appends two streams: (<>) = Stream.append.

Construction:
- Make an empty stream from a stop function.
- Make a singleton stream from a callback function; the callback calls the one-shot yield continuation to yield an element.
- Add a yield function at the head of the stream.
- cons is a right-associative prepend operation that adds a pure value at the head of an existing stream:

  >>> s = 1 `StreamK.cons` 2 `StreamK.cons` 3 `StreamK.cons` StreamK.nil
  >>> Stream.fold Fold.toList (StreamK.toStream s)
  [1,2,3]

  It can be used efficiently with Prelude.foldr: fromFoldable = Prelude.foldr StreamK.cons StreamK.nil. It is the same as the following, but more efficient: cons x xs = return x `StreamK.consM` xs.
- The .: operator is equivalent to cons: toList $ 1 .: 2 .: 3 .: nil gives [1,2,3].
- nil is a stream that terminates without producing any output or side effect: Stream.fold Fold.toList (StreamK.toStream StreamK.nil) gives [].
- nilM is a stream that terminates without producing any output, but produces a side effect: Stream.fold Fold.toList (StreamK.toStream (StreamK.nilM (print "nil"))) prints "nil" and yields []. Pre-release.
- consM is a right-associative prepend operation that adds an effectful value at the head of an existing stream:

  >>> s = putStrLn "hello" `StreamK.consM` putStrLn "world" `StreamK.consM` StreamK.nil
  >>> Stream.fold Fold.drain (StreamK.toStream s)
  hello
  world

  It can be used efficiently with Prelude.foldr: fromFoldableM = Prelude.foldr StreamK.consM StreamK.nil. It is the same as the following, but more efficient: consM x xs = StreamK.fromEffect x `StreamK.append` xs.
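A small usage sketch combining the constructors above, assuming the public Streamly.Data.StreamK, Streamly.Data.Stream and Streamly.Data.Fold modules from streamly-core:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Data.StreamK as StreamK

main :: IO ()
main = do
    -- Build a CPS stream from pure and effectful heads, then fold it.
    let s = 1 `StreamK.cons` 2 `StreamK.cons` StreamK.nil
        t = StreamK.consM (pure 3) s
    xs <- Stream.fold Fold.toList (StreamK.toStream t)
    print xs   -- [3,1,2]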
Elimination and folding:
- Fold a stream by providing an SVar, a stop continuation, a singleton continuation and a yield continuation; the stream shares the current SVar passed via the State.
- Fold a stream by providing a State, a stop continuation, a singleton continuation and a yield continuation; the stream does not use the SVar passed via the State.
- A build-style function whose argument f decides how to reconstruct the stream: it can reconstruct using a shared state (SVar) or without sharing the state. A variant folds sharing the SVar state within the reconstructed stream.
- foldrS is a right fold to a streaming monad: foldrS StreamK.cons StreamK.nil === id. foldrS can be used to perform stateless stream-to-stream transformations like map and filter in general; it can be coupled with a scan to perform stateful transformations. Note, however, that custom map and filter routines can be much more efficient than this due to better stream fusion.

  >>> input = StreamK.fromStream $ Stream.fromList [1..5]
  >>> Stream.fold Fold.toList $ StreamK.toStream $ StreamK.foldrS StreamK.cons StreamK.nil input
  [1,2,3,4,5]

  Find if any element in the stream is odd:

  >>> step x xs = if odd x then StreamK.fromPure True else xs
  >>> input = StreamK.fromStream (Stream.fromList (2:4:5:undefined)) :: StreamK IO Int
  >>> Stream.fold Fold.toList $ StreamK.toStream $ StreamK.foldrS step (StreamK.fromPure False) input
  [True]

  Map (+2) on odd elements and filter out the even elements:

  >>> step x xs = if odd x then (x + 2) `StreamK.cons` xs else xs
  >>> input = StreamK.fromStream (Stream.fromList [1..5]) :: StreamK IO Int
  >>> Stream.fold Fold.toList $ StreamK.toStream $ StreamK.foldrS step StreamK.nil input
  [3,5,7]

  Pre-release.
- A variant of foldrS shares the SVar state across computations.
- Lazy right fold with a monadic step function.
- Strict left fold with an extraction function: like the standard strict left fold, but applies a user-supplied extraction function (the third argument) to the folded value at the end. This is designed to work with the foldl library; the suffix x is a mnemonic for extraction. Note that the accumulator is always evaluated, including the initial value.
- Strict left associative fold.
- Like foldx, but with a monadic step function.
- Like foldl' but with a monadic step function.
- Lazy right associative fold.
- Detach a stream from an SVar.

Combining streams:
- crossApply applies a stream of functions to a stream of values and flattens the results. Note that the second stream is evaluated multiple times. Definitions: crossApply = StreamK.crossApplyWith StreamK.append; crossApply = Stream.crossWith id.
- crossWith: crossWith f m1 m2 = fmap f m1 `StreamK.crossApply` m2. Note that the second stream is evaluated multiple times.
- cross: given a StreamK m a and a StreamK m b, generate a stream with all possible combinations of the tuple (a, b). Definition: cross = StreamK.crossWith (,). The second stream is evaluated multiple times; if that is not desired, it can be cached in an Array and then generated from the array before calling this function. Caching may also improve performance if the stream is expensive to evaluate. See the fused Stream variant for a much faster alternative. Time: O(m x n). Pre-release.
- concatMapWith performs a concatMap using a specified concat strategy; the first argument specifies a merge or concat function that is used to merge the streams generated by the map function.
- mergeMapWith combines streams in pairs using a binary combinator; the resulting streams are then combined again in pairs recursively until a single combined stream remains. The composition thus forms a binary tree (a self-contained sketch of this pairwise scheme follows this group). For example, you can sort a stream using merge sort like this:

  >>> s = StreamK.fromStream $ Stream.fromList [5,1,7,9,2]
  >>> generate = StreamK.fromPure
  >>> combine = StreamK.mergeBy compare
  >>> Stream.fold Fold.toList $ StreamK.toStream $ StreamK.mergeMapWith combine generate s
  [1,2,5,7,9]

  Note that if the stream length is not a power of 2, the binary tree composed by mergeMapWith is not balanced, which may or may not matter depending on what you are trying to achieve. Caution: the stream of streams must be finite. Pre-release.
- concatIterateWith yields an input element in the output stream, maps a stream generator on it and repeats the process on the resulting stream; the resulting streams are flattened using the supplied combinator. This can be used for a depth-first (DFS) traversal of a tree-like structure. Example, listing a directory tree using DFS:

  >>> f = StreamK.fromStream . either Dir.readEitherPaths (const Stream.nil)
  >>> input = StreamK.fromPure (Left ".")
  >>> ls = StreamK.concatIterateWith StreamK.append f input

  Note that iterateM is a special case: iterateM f = StreamK.concatIterateWith StreamK.append (StreamK.fromEffect . f) . StreamK.fromEffect. Pre-release.
- mergeIterateWith is like concatIterateWith but uses the pairwise flattening combinator mergeMapWith for flattening the resulting streams; this can be used for a balanced traversal of a tree-like structure. Example, listing a directory tree using balanced traversal:

  >>> f = StreamK.fromStream . either Dir.readEitherPaths (const Stream.nil)
  >>> input = StreamK.fromPure (Left ".")
  >>> ls = StreamK.mergeIterateWith StreamK.interleave f input

  Pre-release.
- A variant of iterateMap carries a state in the stream generation function. This can be used to traverse graph-like structures: we can remember the visited nodes in the state to avoid cycles. A combination of iterateMap and usingState can also be used to traverse graphs, but this function provides a more localized state instead of a global one. Pre-release.
- concatIterateLeftsWith iterates on the Left values of an Either stream. It is a special case: concatIterateLeftsWith combine f = StreamK.concatIterateWith combine (either f (const StreamK.nil)). To traverse a directory tree:

  >>> input = StreamK.fromPure (Left ".")
  >>> ls = StreamK.concatIterateLeftsWith StreamK.append (StreamK.fromStream . Dir.readEither) input

  Pre-release.
- Note: when joining many streams in a left-associative manner, earlier streams get exponentially higher priority than the ones joining later. Because of this exponentially high weighting of the left streams, the operation can be used with interleaving even on a large number of streams.
- One interleave variant stops interleaving as soon as the first stream stops; another stops as soon as either of the two streams stops.
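The pairwise, binary-tree combining scheme used by mergeMapWith can be illustrated with a self-contained, list-based sketch (plain lists instead of streams, purely for illustration):

-- Merge two sorted lists, preserving order.
mergeBy :: (a -> a -> Ordering) -> [a] -> [a] -> [a]
mergeBy _ xs [] = xs
mergeBy _ [] ys = ys
mergeBy cmp (x:xs) (y:ys)
    | cmp x y == GT = y : mergeBy cmp (x:xs) ys
    | otherwise     = x : mergeBy cmp xs (y:ys)

-- Combine a list of lists pairwise until a single list remains,
-- forming a balanced combination tree when the count is a power of 2.
mergePairsAll :: (a -> a -> Ordering) -> [[a]] -> [a]
mergePairsAll _ []  = []
mergePairsAll _ [x] = x
mergePairsAll cmp xs = mergePairsAll cmp (mergePairs xs)
  where
    mergePairs (a:b:rest) = mergeBy cmp a b : mergePairs rest
    mergePairs rest = rest

-- Merge sort in this style: each element becomes a singleton "stream".
mergeSort :: Ord a => [a] -> [a]
mergeSort = mergePairsAll compare . map (: [])

-- >>> mergeSort [5,1,7,9,2]
-- [1,2,5,7,9]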
Generation:
- unfoldrM builds a stream by unfolding a monadic step function starting from a seed. The step function returns the next element in the stream and the next seed value; when it is done it returns Nothing and the stream ends. For example:

  >>> :{
  let f b = if b > 2
            then return Nothing
            else return (Just (b, b + 1))
  in StreamK.toList $ StreamK.unfoldrM f 0
  :}
  [0,1,2]
- unfoldr is the pure variant:

  >>> :{
  unfoldr step s =
      case step s of
          Nothing -> StreamK.nil
          Just (a, b) -> a `StreamK.cons` unfoldr step b
  :}
  >>> :{
  let f b = if b > 2 then Nothing else Just (b, b + 1)
  in StreamK.toList $ StreamK.unfoldr f 0
  :}
  [0,1,2]
- repeat generates an infinite stream by repeating a pure value. Pre-release.
- A variant of repeatM takes a stream cons operation to combine the actions in a stream-specific manner: a serial cons would repeat the values serially, while an async cons would repeat concurrently. Pre-release.
- mfix: we can define cyclic structures using let:

  >>> let (a, b) = ([1, b], head a) in (a, b)
  ([1,1],1)

  The function fix, defined as fix f = let x = f x in x, ensures that the argument of a function and its output refer to the same lazy value x, i.e. the same location in memory; thus x can be defined in terms of itself, creating structures with cyclic references. mfix is essentially the same as fix, but for monadic values. Using mfix for streams we can construct a stream in which each element is defined in a cyclic fashion: the argument of the function being fixed represents the current element of the stream being returned by the stream monad, so we can use the argument to construct itself. In the following example, the argument action of the function f represents the tuple (x,y) returned by it in a given iteration; we define the first element of the tuple in terms of the second:

  >>> import System.IO.Unsafe (unsafeInterleaveIO)
  >>> :{
  main = Stream.fold (Fold.drainMapM print) $ StreamK.toStream $ StreamK.mfix f
      where
      f action = StreamK.unCross $ do
          let incr n act = fmap ((+n) . snd) $ unsafeInterleaveIO act
          x <- StreamK.mkCross $ StreamK.fromStream $ Stream.sequence $ Stream.fromList [incr 1 action, incr 2 action]
          y <- StreamK.mkCross $ StreamK.fromStream $ Stream.fromList [4,5]
          return (x, y)
  :}

  Note: you cannot achieve this by just changing the order of the monad statements, because that would change the order in which the stream elements are generated. Note also that the function f must be lazy in its argument, which is why unsafeInterleaveIO is used on action: the IO monad is strict. Pre-release.
- fromFoldable constructs a stream from a Foldable containing pure values: fromFoldable = Prelude.foldr StreamK.cons StreamK.nil.
- init extracts all but the last element of the stream, if any. Note: this ends up buffering the entire stream. Pre-release.
- A lazy left fold to a stream.
- Run an action before evaluating the stream.
- mkCross wraps the StreamK type in a newtype to enable cross-product-style Applicative and Monad instances; unCross unwraps it. Both are type-level operations with no runtime overhead.

Transformer folds: a lazy left fold to an arbitrary transformer monad, and a right-associative fold to an arbitrary transformer monad.
Folds (Fold): the type Fold m a b represents a consumer of an input stream of values of type a, returning a final value of type b in m. The constructor of a fold is Fold step initial extract final.

The fold uses an internal state of type s. The initial value of the state is created by initial; this function is called once and only once before the fold starts consuming input. Any resource allocation can be done in this function.

The step function is called on each input; it consumes an input and returns the next intermediate state (see Step) or the final result b if the fold terminates.

If the fold is used as a scan, the extract function is used by the scan driver to map the current state s of the fold to the fold result; thus extract can be called multiple times. In some folds, where scanning does not make sense, this function is left unimplemented; such folds cannot be used as scans.

Before a fold terminates, final is called once and only once (unless the fold terminated in initial itself). Any resources allocated by initial can be released in final. In folds that do not require any cleanup, extract and final are typically the same.

When implementing fold combinators, care should be taken to clean up any state of the argument folds held by the fold, by calling the respective final at all exit points of the fold; also, final should not be called more than once. Note that if a fold terminates via the Done constructor, there is no state to clean up.

NOTE: the constructor is not yet released; smart constructors are provided to create folds.

Constructors:
- Fold step initial extract final.
- rmapM maps a monadic function on the output of a fold.
- foldl' makes a fold from a left-fold-style pure step function and an initial value of the accumulator. If your fold returns only Partial (i.e. never returns a Done) then you can use the foldl' constructors. A left fold with an extract function can be expressed as: mkfoldlx :: (s -> a -> s) -> s -> (s -> b) -> Fold m a b; mkfoldlx step initial extract = fmap extract (foldl' step initial).
- foldlM' makes a fold from a left-fold-style monadic step function and an initial value of the accumulator. A fold with an extract function can be expressed using rmapM: mkFoldlxM :: Functor m => (s -> a -> m s) -> m s -> (s -> m b) -> Fold m a b; mkFoldlxM step initial extract = rmapM extract (foldlM' step initial).
- foldl1' makes a strict left fold, for non-empty streams, using the first element as the starting value; it returns Nothing if the stream is empty. Pre-release.
- Like foldl1' but with a monadic step function. Pre-release.
- foldr' makes a fold using a right-fold-style step function and a terminal value. It performs a strict right fold via a left fold using function composition. Note that a strict right fold can only be useful for constructing strict structures in memory; for reductions it will be very inefficient. Example:

  >>> Stream.fold (Fold.foldr' (:) []) $ Stream.enumerateFromTo 1 5
  [1,2,3,4,5]
- A foldrM variant is like foldr' but with a monadic step function, e.g. toList = foldrM (\a xs -> return $ a : xs) (return []). Pre-release.
- foldt' makes a terminating fold from a pure step function, a pure initial state and a pure state extraction function. Pre-release.
- foldtM' makes a terminating fold with an effectful step function and initial state, and a state extraction function: foldtM' = Fold.Fold. We can just use the constructor, but it is provided for completeness. Pre-release.
- fromRefold makes a fold from a consumer (Refold). Internal.
- drain is a fold that drains all its input, running the effects and discarding the results: drain = Fold.drainMapM (const (return ())); drain = Fold.foldl' (\_ _ -> ()) ().
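A brief usage example of the smart constructors above, using the public Streamly.Data.Fold and Streamly.Data.Stream APIs (countIf is just an illustrative name for this example):

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

-- Count the elements that satisfy a predicate, as a strict left fold.
countIf :: Monad m => (a -> Bool) -> Fold.Fold m a Int
countIf p = Fold.foldl' (\n x -> if p x then n + 1 else n) 0

main :: IO ()
main = do
    n <- Stream.fold (countIf even) (Stream.fromList [1 .. 10 :: Int])
    print n   -- 5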
Basic folds:
- toList folds the input stream to a list. Warning: working on large lists accumulated as buffers in memory can be very inefficient; consider using Streamly.Data.Array instead. Definition: toList = Fold.foldr' (:) [].
- toStreamKRev buffers the input stream to a pure stream in the reverse order of the input: toStreamKRev = Foldable.foldl' (flip StreamK.cons) StreamK.nil. This is more efficient than toStreamK; toStreamK has exactly the same performance as reversing the stream after toStreamKRev. Pre-release.
- toStreamK is a fold that buffers its input to a pure stream: toStreamK = foldr StreamK.cons StreamK.nil; toStreamK = fmap StreamK.reverse Fold.toStreamKRev. Internal.
- lengthGeneric is like length but with a more general Integral return value. Definitions: lengthGeneric = fmap getSum $ Fold.foldMap (Sum . const 1); lengthGeneric = Fold.foldl' (\n _ -> n + 1) 0. Pre-release.
- length determines the length of the input stream. Definitions: length = Fold.lengthGeneric; length = fmap getSum $ Fold.foldMap (Sum . const 1).
- fromPure makes a fold that yields the supplied value without consuming any further input. Pre-release.
- fromEffect makes a fold that yields the result of the supplied effectful action without consuming any further input. Pre-release.

Composition:
- splitWith is sequential fold application: apply two folds sequentially to an input stream. The input is provided to the first fold; when it is done, the remaining input is provided to the second fold. When the second fold is done, or if the input stream is over, the outputs of the two folds are combined using the supplied function. Example:

  >>> header = Fold.take 8 Fold.toList
  >>> line = Fold.takeEndBy (== '\n') Fold.toList
  >>> f = Fold.splitWith (,) header line
  >>> Stream.fold f $ Stream.fromList "header: hello\n"
  ("header: ","hello\n")

  Note: this is dual to appending streams. Note: this implementation allows stream fusion but has quadratic time complexity, because each composition adds a new branch that every subsequent input element has to traverse; therefore it cannot scale to a large number of compositions. After around 100 compositions the performance starts dipping rapidly compared to a CPS-style implementation. For a larger number of compositions, convert the fold to a parser and use ParserK. Time: O(n^2) where n is the number of compositions.
- A combinator equivalent to the applicative *> runs two folds serially one after the other, discarding the result of the first. It was written in the hope that it might be faster than implementing it using splitWith, but current benchmarks show the same performance, so it is not exposed unless some benchmark shows a benefit.
- teeWith k f1 f2 distributes its input to both f1 and f2 until both of them terminate, and combines their outputs using k. Definition: teeWith k f1 f2 = fmap (uncurry k) (Fold.tee f1 f2). Example:

  >>> avg = Fold.teeWith (/) Fold.sum (fmap fromIntegral Fold.length)
  >>> Stream.fold avg $ Stream.fromList [1.0..100.0]
  50.5

  For applicative composition using this combinator see Streamly.Data.Fold.Tee. Note that nested applications of teeWith do not fuse.
- One variant of teeWith terminates as soon as the first fold terminates; another terminates as soon as any one of the two folds terminates. Pre-release.
- shortest is the shortest alternative: apply both folds in parallel, but choose the result from the one that consumed the least input, i.e. take the shortest succeeding fold. If both folds finish at the same time, or if the result is extracted before either fold could finish, the left one is taken. Pre-release.
- longest is the longest alternative: apply both folds in parallel, but choose the result from the one that consumed more input, i.e. take the longest succeeding fold. If both folds finish at the same time, or if the result is extracted before either fold could finish, the left one is taken. Pre-release.
- concatMap maps a Fold-returning function on the result of a Fold and runs the returned fold. This can be used to express data dependencies between fold operations. For example, if the first element in the stream is a count of the following elements that we have to add:

  >>> import Data.Maybe (fromJust)
  >>> count = fmap fromJust Fold.one
  >>> total n = Fold.take n Fold.sum
  >>> Stream.fold (Fold.concatMap total count) $ Stream.fromList [10,9..1]
  45

  This does not fuse completely; see the Refold-based variant for a fusible alternative. Time: O(n^2) where n is the number of compositions.

Mapping and filtering:
- lmap f fold maps the function f on the input of the fold. Definition: lmap = Fold.lmapM return. Example:

  >>> sumSquared = Fold.lmap (\x -> x * x) Fold.sum
  >>> Stream.fold sumSquared (Stream.enumerateFromTo 1 100)
  338350
- lmapM f fold maps the monadic function f on the input of the fold.
- postscan postscans the input of a Fold to change it in a stateful manner using another Fold: postscan scanner collector. Pre-release.
- catMaybes modifies a fold to receive a Maybe input; Just values are unwrapped and sent to the original fold, Nothing values are discarded: catMaybes = Fold.mapMaybe id; catMaybes = Fold.filter isJust . Fold.lmap fromJust.
- scanMaybe uses a Maybe-returning fold as a filtering scan: scanMaybe p f = Fold.postscan p (Fold.catMaybes f). Pre-release.
- filter includes only those elements that pass a predicate:

  >>> Stream.fold (Fold.filter (> 5) Fold.sum) $ Stream.fromList [1..10]
  40

  Definitions: filter p = Fold.scanMaybe (Fold.filtering p); filter p = Fold.filterM (return . p); filter p = Fold.mapMaybe (\x -> if p x then Just x else Nothing).
- filterM is like filter but with a monadic predicate; it can be expressed with Fold.mapMaybeM using a helper that returns Just x when the predicate holds and Nothing otherwise.
- One combinator discards Rights and unwraps Lefts in an Either stream; another discards Lefts and unwraps Rights. Pre-release.
- catEithers removes the Either wrapper and flattens both lefts and rights in the output stream. Definition: catEithers = Fold.lmap (either id id). Pre-release.

Trimming:
- take takes at most n input elements and folds them using the supplied fold; a negative count is treated as 0.

  >>> Stream.fold (Fold.take 2 Fold.toList) $ Stream.fromList [1..10]
  [1,2]
- takeEndBy_ is like takeEndBy but drops the element on which the predicate succeeds:

  >>> input = Stream.fromList "hello\nthere\n"
  >>> line = Fold.takeEndBy_ (== '\n') Fold.toList
  >>> Stream.fold line input
  "hello"
  >>> Stream.fold Fold.toList $ Stream.foldMany line input
  ["hello","there"]
- takeEndBy takes the input, stopping when the predicate succeeds, and takes the succeeding element as well:

  >>> input = Stream.fromList "hello\nthere\n"
  >>> line = Fold.takeEndBy (== '\n') Fold.toList
  >>> Stream.fold line input
  "hello\n"
  >>> Stream.fold Fold.toList $ Stream.foldMany line input
  ["hello\n","there\n"]
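Combining the trimming folds above with Stream.foldMany gives a simple way to split a stream into fixed-size chunks; a short example using the public API (chunksOfList is just an illustrative name):

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

-- Group a stream into lists of at most n elements each.
chunksOfList :: Monad m => Int -> Stream.Stream m a -> Stream.Stream m [a]
chunksOfList n = Stream.foldMany (Fold.take n Fold.toList)

main :: IO ()
main = do
    xs <- Stream.fold Fold.toList (chunksOfList 3 (Stream.fromList [1 .. 10 :: Int]))
    print xs   -- [[1,2,3],[4,5,6],[7,8,9],[10]]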
Running a fold incrementally:
- duplicate provides the ability to run a fold in parts: the duplicated fold consumes the input and returns the same fold as output instead of returning the final result; the returned fold can be run later to consume more input. It essentially appends a stream to the fold without finishing the fold. Compare with snoc, which appends a singleton value to the fold. Pre-release.
- reduce evaluates the initialization effect of a fold. If we are building the fold by chaining lazy actions in the fold's initial, this reduces those actions to a strict accumulator value. Pre-release.
- snoclM appends an effect to the fold lazily; in other words, it runs a single step of the fold. Pre-release.
- snocl appends a singleton value to the fold lazily, in other words running a single step of the fold. Definition: snocl f = Fold.snoclM f . return. For example, values can be appended one at a time with Foldable.foldlM and the result extracted with Fold.drive Stream.nil, yielding [1,2,3] for the inputs 1, 2 and 3. Pre-release.
- snoc appends a singleton value to the fold; see the examples under addStream. Pre-release.
- extractM extracts the accumulated result of the fold. Definition: extractM = Fold.drive Stream.nil. Example:

  >>> Fold.extractM Fold.toList
  []

  Pre-release.
- close closes a fold so that it does not accept any more input.

Parsers (Parser):
- splitWith is sequential parser application: apply two parsers sequentially to an input stream. The input is provided to the first parser and, when it is done, the remaining input is provided to the second parser; the outputs of the two parsers are combined with the supplied function. The operation fails if either parser fails. CAVEAT 1: this combinator is strict in its second argument, therefore a parser using undefined there fails with Prelude.undefined even on valid input, for example when parsing [1] with a satisfy (> 0) parser followed by undefined. CAVEAT 2: QUADRATIC TIME COMPLEXITY. Static composition is fast due to stream fusion, but it works well only for a limited number (e.g. up to 8) of compositions; use Streamly.Data.ParserK for larger compositions. Some common idioms that can be expressed using splitWith:

  span p f1 f2 = Parser.splitWith (,) (Parser.takeWhile p f1) (Parser.fromFold f2)
  spanBy eq f1 f2 = Parser.splitWith (,) (Parser.groupBy eq f1) (Parser.fromFold f2)

  Pre-release.
- A better-performing variant of splitWith exists for non-failing parsers; it does not work correctly for parsers that can fail. All the caveats of splitWith apply here as well.
- split_ is sequential parser application ignoring the output of the first parser: the input is provided to the first parser and, when it is done, the remaining input is provided to the second parser; the output is the output of the second parser. The operation fails if either parser fails. All the caveats of splitWith apply here as well. This implementation is strict in the second argument, therefore the following fails:

  >>> Stream.parse (Parser.split_ (Parser.satisfy (> 0)) undefined) $ Stream.fromList [1]
  *** Exception: Prelude.undefined
  ...

  Pre-release.
- A better-performing variant of split_ exists for non-failing parsers; it does not work correctly for parsers that can fail. All the caveats of splitWith apply here as well.
- alt is the sequential alternative: the input is first passed to the first parser and, if it succeeds, the result is returned. If the first parser fails, the parser driver backtracks and tries the same input on the second (alternative) parser, returning the result if it succeeds. This combinator delivers high performance via stream fusion but comes with some limitations; for those cases use the Alternative instance of ParserK. CAVEAT 1: NO RECURSION. This function is strict in both arguments; a parser defined recursively using it may cause an infinite loop. The following example checks the strictness:

  >>> p = Parser.satisfy (> 0) `Parser.alt` undefined
  >>> Stream.parse p $ Stream.fromList [1..10]
  *** Exception: Prelude.undefined

  CAVEAT 2: QUADRATIC TIME COMPLEXITY. Static composition is fast due to stream fusion, but it works well only for a limited number (e.g. up to 8) of compositions; use Streamly.Data.ParserK for larger compositions. Time complexity: O(n^2) where n is the number of compositions. Pre-release.
- A splitMany-style combinator: see the documentation of many. Pre-release.
- Like splitMany, but the inner fold emits an output at the end even if no input is received. Internal.
- A splitSome-style combinator: see the documentation of some. Pre-release.
- die is a parser that always fails with an error message, without consuming any input.
- dieM is a parser that always fails with an effectful error message, without consuming any input. Pre-release.
- concatMap maps a Parser-returning function on the result of a Parser. All the caveats of splitWith apply here as well. Pre-release.
- A better-performing variant of concatMap exists for non-failing parsers; it does not work correctly for parsers that can fail. All the caveats of splitWith apply here as well.
- lmap f parser maps the function f on the input of the parser:

  >>> Stream.parse (Parser.lmap (\x -> x * x) (Parser.fromFold Fold.sum)) (Stream.enumerateFromTo 1 100)
  Right 338350

  Definition: lmap = Parser.lmapM return.
- lmapM f parser maps the monadic function f on the input of the parser.
- filter includes only those elements that pass a predicate:

  >>> Stream.parse (Parser.filter (> 5) (Parser.fromFold Fold.sum)) $ Stream.fromList [1..10]
  Right 40

Instances and result mapping:
- A Functor-like operation maps a function over the result held by the step type (fmap = second); a bimap-like operation maps its first function over the state and its second over the result. Internal.
- liftIO = Parser.fromEffect . liftIO.
- fail = Parser.die.
- Monad instance: READ THE CAVEATS in concatMap before using this instance. (>>=) = flip Parser.concatMap.
- Alternative instance: READ THE CAVEATS in alt before using this instance. empty = Parser.die "empty"; (<|>) = Parser.alt; many = flip Parser.many Fold.toList; some = flip Parser.some Fold.toList.
- Applicative instance: READ THE CAVEATS in splitWith before using this instance. pure = Parser.fromPure; (<*>) = Parser.splitWith id; (*>) = Parser.split_.
- The Functor instance maps a function on the result, i.e. on b in Parser a m b.
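A short usage example of the public parser API (Streamly.Data.Parser together with Streamly.Data.Stream), parsing a leading run of digits from a character stream:

import Data.Char (isDigit)
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Parser as Parser
import qualified Streamly.Data.Stream as Stream

main :: IO ()
main = do
    -- takeWhile collects input while the predicate holds, feeding it to a fold.
    let digits = Parser.takeWhile isDigit Fold.toList
    r <- Stream.parse digits (Stream.fromList "123abc")
    print r   -- Right "123"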
Tee: a newtype wrapper over the Fold type providing distributing Applicative, Semigroup, Monoid, Num, Floating and Fractional instances. The input received by the composed Tee is replicated and distributed to the constituent folds of the Tee.

For example, to compute the average of numbers in a stream without going through the stream twice:

  >>> avg = (/) <$> (Tee Fold.sum) <*> (Tee $ fmap fromIntegral Fold.length)
  >>> Stream.fold (unTee avg) $ Stream.fromList [1.0..100.0]
  50.5

Similarly, the Semigroup and Monoid instances of Tee distribute the input to both folds and combine the outputs using the Monoid or Semigroup instances of the output types:

  >>> import Data.Monoid (Sum(..))
  >>> t = Tee Fold.one <> Tee Fold.latest
  >>> Stream.fold (unTee t) (fmap Sum $ Stream.enumerateFromTo 1.0 100.0)
  Just (Sum {getSum = 101.0})

The Num, Floating and Fractional instances work in the same way: their binary operations distribute the input to both argument Tees and combine the outputs using the corresponding instance of the output type. The Applicative instance distributes the input to both argument Tees and combines their outputs using function application.

Template Haskell helpers for deriving:
- Simplified information about a data constructor, omitting deriving, strictness and kind info. This is much nicer to consume than the raw constructor info, because it unifies all the constructor forms into one.
- Simplified information about a data declaration, omitting deriving, strictness, kind info, and whether it is data or newtype.
- Case analysis for a type variable binder: if the binder is a plain variable, apply the first function to its name; if it carries a kind, apply the second function to the name and the kind.
- Extract the type variable name from a type variable binder, ignoring the kind signature if one exists.
- Get the name of a data constructor.
- Convert a data declaration to a list of constructors; the result is a list because data and newtype declarations can define multiple constructors.
- Reify the given data or newtype declaration and yield its simplified representation.
- deriveUnbox: given an Unbox instance declaration splice without the methods (e.g. [d|instance Unbox a => Unbox (Maybe a)|]), generate an instance declaration including all the type class method implementations. Usage: $(deriveUnbox [d|instance Unbox a => Unbox (Maybe a)|]).
streamly-core5Map a function on the output of the unfold (the type b).map f = Unfold.map2 (const f) Pre-release streamly-coreThe unfold discards its input and generates a function stream using the supplied monadic action. Pre-release streamly-core=Discards the unfold input and always returns the argument of . fromPure = fromEffect . pure Pre-release streamly-core#Convert a list of pure values to a Stream streamly-core+Outer product discarding the first element. Unimplemented streamly-core,Outer product discarding the second element. Unimplemented streamly-coreCreate a cross product (vector product or cartesian product) of the output streams of two unfolds using a monadic combining function.>f1 f u = Unfold.mapM2 (\(_, c) b -> f b c) (Unfold.lmap fst u)&crossWithM f u = Unfold.many2 (f1 f u) Pre-release streamly-coreLike $ but uses a pure combining function. 1crossWith f = crossWithM (\b c -> return $ f b c)$u1 = Unfold.lmap fst Unfold.fromList$u2 = Unfold.lmap snd Unfold.fromListu = Unfold.crossWith (,) u1 u2,Unfold.fold Fold.toList u ([1,2,3], [4,5,6])7[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)] streamly-coreSee . Definition:cross = Unfold.crossWith (,)To create a cross product of the streams generated from a tuple we can write::{cross :: Monad m => Unfold m a b -> Unfold m c d -> Unfold m (a, c) (b, d)cross u1 u2 = Unfold.cross (Unfold.lmap fst u1) (Unfold.lmap snd u2):} Pre-release streamly-coreMap an unfold generating action to each element of an unfold and flatten the results into a single stream. streamly-coreLift a monadic function into an unfold. The unfold generates a singleton stream. streamly-coreLift a pure function into an unfold. The unfold generates a singleton stream. #function f = functionM $ return . f streamly-coreIdentity unfold. The unfold generates a singleton stream having the input as the only element. identity = function Prelude.id Pre-release streamly-coreApply the first unfold to each output element of the second unfold and flatten the output in a single stream.)many u = Unfold.many2 (Unfold.lmap snd u) streamly-coreDistribute the input to two unfolds and then zip the outputs to a single stream using a monadic zip function.*Stops as soon as any of the unfolds stops. Pre-release streamly-coreLike  but with a pure zip function.+square = fmap (\x -> x * x) Unfold.fromList-cube = fmap (\x -> x * x * x) Unfold.fromList"u = Unfold.zipWith (,) square cube Unfold.fold Fold.toList u [1..5]%[(1,1),(4,8),(9,27),(16,64),(25,125)] -zipWith f = zipWithM (\a b -> return $ f a b) streamly-coreQ for documentation and notes.This is almost identical to unfoldManyInterleave in StreamD module.The  combinator is in fact  manyAppend to be more explicit in naming.Internal streamly-core6Maps a function on the output of the unfold (the type b).-UXVW1R'(c) 2019, 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:q2 streamly-coreTypes that can be enumerated as a stream. The operations in this type class are equivalent to those in the  type class, except that these generate a stream instead of a list. Use the functions in )Streamly.Internal.Data.Unfold.Enumeration module to define new instances. Pre-release streamly-coreUnfolds from0 generating a stream starting with the element from, enumerating up to  when the type is 8 or generating an infinite stream when the type is not .Stream.toList $ Stream.take 4 $ Stream.unfold Unfold.enumerateFrom (0 :: Int) [0,1,2,3]For  types, enumeration is numerically stable. 
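A runnable sketch of the cross product and zip combinators described above, driven through Stream.unfold so that only the public Streamly.Data.Unfold and Streamly.Data.Stream APIs are assumed; the expected outputs are the ones quoted in this section.

```haskell
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Data.Unfold as Unfold

main :: IO ()
main = do
    -- Cartesian product of the streams generated from the two components
    -- of the seed tuple.
    let u1 = Unfold.lmap fst Unfold.fromList
        u2 = Unfold.lmap snd Unfold.fromList
        uCross = Unfold.crossWith (,) u1 u2
    print =<< Stream.toList (Stream.unfold uCross ([1, 2, 3 :: Int], [4, 5, 6 :: Int]))
    -- [(1,4),(1,5),(1,6),(2,4),(2,5),(2,6),(3,4),(3,5),(3,6)]

    -- Zip the outputs of two unfolds driven by the same seed.
    let square = fmap (\x -> x * x) Unfold.fromList
        cube = fmap (\x -> x * x * x) Unfold.fromList
        uZip = Unfold.zipWith (,) square cube
    print =<< Stream.toList (Stream.unfold uZip [1 .. 5 :: Int])
    -- [(1,1),(4,8),(9,27),(16,64),(25,125)]
```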
However, no overflow or underflow checks are performed.Stream.toList $ Stream.take 4 $ Stream.unfold Unfold.enumerateFrom 1.1[1.1,2.1,3.1,4.1] Pre-release streamly-coreUnfolds  (from, to)7 generating a finite stream starting with the element from', enumerating the type up to the value to. If to is smaller than from" then an empty stream is returned.;Stream.toList $ Stream.unfold Unfold.enumerateFromTo (0, 4) [0,1,2,3,4]For 3 types, the last element is equal to the specified to5 value after rounding to the nearest integral value.=Stream.toList $ Stream.unfold Unfold.enumerateFromTo (1.1, 4)[1.1,2.1,3.1,4.1]?Stream.toList $ Stream.unfold Unfold.enumerateFromTo (1.1, 4.6)[1.1,2.1,3.1,4.1,5.1] Pre-release streamly-coreUnfolds  (from, then)- generating a stream whose first element is from2 and the successive elements are in increments of then. Enumeration can occur downwards or upwards depending on whether then comes before or after from. For  types the stream ends when  is reached, for unbounded types it keeps enumerating infinitely.Stream.toList $ Stream.take 4 $ Stream.unfold Unfold.enumerateFromThen (0, 2) [0,2,4,6]Stream.toList $ Stream.take 4 $ Stream.unfold Unfold.enumerateFromThen (0,(-2)) [0,-2,-4,-6] Pre-release streamly-coreUnfolds (from, then, to)4 generating a finite stream whose first element is from2 and the successive elements are in increments of then up to to. Enumeration can occur downwards or upwards depending on whether then comes before or after from.Stream.toList $ Stream.unfold Unfold.enumerateFromThenTo (0, 2, 6) [0,2,4,6]Stream.toList $ Stream.unfold Unfold.enumerateFromThenTo (0, (-2), (-6)) [0,-2,-4,-6] Pre-release streamly-coreUnfolds (from, stride). generating an infinite stream starting from from and incrementing every time by stride. For  types, after the value overflows it keeps enumerating in a cycle: >>> Stream.toList $ Stream.take 10 $ Stream.unfold Unfold.enumerateFromStepNum (255::Word8,1) [255,0,1,2,3,4,5,6,7,8] The implementation is numerically stable for floating point values.Note  is faster for integrals.Internal streamly-core>Same as 'enumerateFromStepNum (from, next)' using a stride of  next - from: >>> enumerateFromThenNum = lmap ((from, next) -> (from, next - from)) Unfold.enumerateFromStepNum Example: @ >>> Stream.toList $ Stream.take 10 $ Stream.unfold enumerateFromThenNum (255::Word8,0) [255,0,1,2,3,4,5,6,7,8]  The implementation is numerically stable for floating point values. Note that  is faster for integrals. Note that in the strange world of floating point numbers, using %enumerateFromThenNum (from, from + 1) is almost exactly the same as enumerateFromStepNum (from, 1) but not precisely the same. Because (from + 1) - from is not exactly 1, it may lose some precision, the loss may also be aggregated in each step, if you want that precision then use  instead.Internal streamly-coreSame as  using a stride of 1: >>> enumerateFromNum = lmap (from -> (from, 1)) Unfold.enumerateFromStepNum >>> Stream.toList $ Stream.take 6 $ Stream.unfold enumerateFromNum (0.9) [0.9,1.9,2.9,3.9,4.9,5.9] Also, same as * using a stride of 1 but see the note in  about the loss of precision: >>> enumerateFromNum = lmap (from -> (from, from + 1)) Unfold.enumerateFromThenNum >>> Stream.toList $ Stream.take 6 $ Stream.unfold enumerateFromNum (0.9) [0.9,1.9,2.9,3.8999999999999995,4.8999999999999995,5.8999999999999995] Internal streamly-coreCan be used to enumerate unbounded integrals. 
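The enumeration unfolds above take their parameters as a tuple seed, which makes them convenient to drive with Stream.unfold. A minimal sketch, assuming the enumeration unfolds are re-exported from Streamly.Internal.Data.Unfold (they are defined in Streamly.Internal.Data.Unfold.Enumeration as noted above):

```haskell
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Internal.Data.Unfold as Unfold

main :: IO ()
main = do
    -- Bounded enumeration from a (from, to) seed.
    print =<< Stream.toList (Stream.unfold Unfold.enumerateFromTo (0 :: Int, 4))
    -- [0,1,2,3,4]

    -- Enumeration with an explicit step, from a (from, then, to) seed.
    print =<< Stream.toList (Stream.unfold Unfold.enumerateFromThenTo (0 :: Int, 2, 6))
    -- [0,2,4,6]
```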
This does not check for overflow or underflow for bounded integrals.Internal streamly-coreSame as  with a step of 1 and enumerating up to the specified upper limit rounded to the nearest integral value: >>> Stream.toList $ Stream.unfold Unfold.enumerateFromToFractional (0.1, 6.3) [0.1,1.1,2.1,3.1,4.1,5.1,6.1] Internal streamly-core)Enumerate from given starting Enum value from and to Enum value to! with stride of 1 till to value.Internal streamly-core)Enumerate from given starting Enum value from and then Enum value next and to Enum value to? with stride of (fromEnum next - fromEnum from) till to value.Internal streamly-core)Enumerate from given starting Enum value from with stride of 1 till maxBoundInternal streamly-core)Enumerate from given starting Enum value from and next Enum value next? with stride of (fromEnum next - fromEnum from) till maxBound.InternalS(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred# $%'.145789:4 streamly-coreA newtype wrapper for the 1 type with a cross product style monad instance.A  bind behaves like a for loop::{-Stream.fold Fold.toList $ Stream.unCross $ do/ x <- Stream.mkCross $ Stream.fromList [1,2]= -- Perform the following actions for each x in the stream return x:}[1,2]&Nested monad binds behave like nested for loops::{-Stream.fold Fold.toList $ Stream.unCross $ do/ x <- Stream.mkCross $ Stream.fromList [1,2]/ y <- Stream.mkCross $ Stream.fromList [3,4]; -- Perform the following actions for each x, for each y return (x, y):}[(1,3),(1,4),(2,3),(2,4)] streamly-coreA stream consists of a step function that generates the next step given a current state, and the current state. streamly-coreA stream that terminates without producing any output, but produces a side effect.3Stream.fold Fold.toList (Stream.nilM (print "nil"))"nil"[] Pre-release streamly-coreLike cons- but fuses an effect instead of a pure value. streamly-coreDecompose a stream into its head and tail. If the stream is empty, returns &. If the stream is non-empty, returns  Just (a, ma), where a is the head of the stream and ma its tail. Properties:#Nothing <- Stream.uncons Stream.nil;Just ("a", t) <- Stream.uncons (Stream.cons "a" Stream.nil)This can be used to consume the stream in an imperative manner one element at a time, as it just breaks down the stream into individual elements and we can loop over them as we deem fit. For example, this can be used to convert a streamly stream into other stream types.:All the folds in this module can be expressed in terms of , however, this is generally less efficient than specific folds because it takes apart the stream one element at a time, therefore, does not take adavantage of stream fusion.7 is a more general way of consuming a stream piecemeal.:{uncons xs = do% r <- Stream.foldBreak Fold.one xs return $ case r of (Nothing, _) -> Nothing" (Just h, t) -> Just (h, t):} streamly-core Convert an - into a stream by supplying it an input seed.9s = Stream.unfold Unfold.replicateM (3, putStrLn "hello")Stream.fold Fold.drain shellohellohello streamly-core,Create a singleton stream from a pure value.'fromPure a = a `Stream.cons` Stream.nilfromPure = pure#fromPure = Stream.fromEffect . pure streamly-core0Create a singleton stream from a monadic action.*fromEffect m = m `Stream.consM` Stream.nil.fromEffect = Stream.sequence . Stream.fromPure=Stream.fold Fold.drain $ Stream.fromEffect (putStrLn "hello")hello streamly-core.Construct a stream from a list of pure values. 
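The stream construction and deconstruction primitives documented above can be exercised as follows; this is a small sketch using only the public Streamly.Data.Stream module.

```haskell
import qualified Streamly.Data.Stream as Stream

main :: IO ()
main = do
    -- uncons splits a stream into its head and the remaining tail.
    r <- Stream.uncons (Stream.fromList "abc" :: Stream.Stream IO Char)
    case r of
        Nothing -> putStrLn "empty stream"
        Just (h, t) -> do
            print h                    -- 'a'
            print =<< Stream.toList t  -- "bc"

    -- fromEffect builds a singleton stream from a monadic action.
    print =<< Stream.toList (Stream.fromEffect (pure (42 :: Int)))  -- [42]
```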
streamly-coreConvert a CPS encoded StreamK to direct style step encoded StreamD streamly-coreConvert a direct style step encoded StreamD to a CPS encoded StreamK streamly-coreFold resulting in either breaking the stream or continuation of the fold. Instead of supplying the input stream in one go we can run the fold multiple times, each time supplying the next segment of the input stream. If the fold has not yet finished it returns a fold that can be run again otherwise it returns the fold result and the residual stream.Internal streamly-coreLike  but also returns the remaining stream. The resulting stream would be TU( if the stream finished before the fold. streamly-core&Fold a stream using the supplied left  and reducing the resulting expression strictly at each step. The behavior is similar to . A  can terminate early without consuming the full stream. See the documentation of individual s for termination behavior. Definitions:&fold f = fmap fst . Stream.foldBreak f)fold f = Stream.parse (Parser.fromFold f)Example:3Stream.fold Fold.sum (Stream.enumerateFromTo 1 100)5050 streamly-coreAppend a stream to a fold lazily to build an accumulator incrementally.Example, to continue folding a list of streams on the same sum fold:;streams = [Stream.fromList [1..5], Stream.fromList [6..10]]5f = Prelude.foldl Stream.foldAddLazy Fold.sum streamsStream.fold f Stream.nil55 streamly-corefoldAdd = flip Fold.addStream streamly-core"Right associative/lazy pull fold. foldrM build final stream9 constructs an output structure using the step function build. build is invoked with the next input element and the remaining (lazy) tail of the output structure. It builds a lazy output expression using the two. When the "tail structure" in the output expression is evaluated it calls build( again thus lazily consuming the input stream. until either the output expression built by build is free of the "tail" or the input is exhausted in which case final is used as the terminating case for the output structure. For more details see the description in the previous section.%Example, determine if any element is  in a stream:%s = Stream.fromList (2:4:5:undefined)-step x xs = if odd x then return True else xs#Stream.foldrM step (return False) sTrue streamly-coreRight fold, lazy for lazy monads and pure streams, and strict for strict monads.Please avoid using this routine in strict monads like IO unless you need a strict right fold. This is provided only for use in lazy monads (e.g. Identity) or pure streams. Note that with this signature it is not possible to implement a lazy foldr when the monad m is strict. In that case it would be strict in its accumulator and therefore would necessarily consume all its input.8foldr f z = Stream.foldrM (\a b -> f a <$> b) (return z)Note: This is similar to Fold.foldr' (the right fold via left fold), but could be more efficient. streamly-core Definitions:drain = Stream.fold Fold.drain/drain = Stream.foldrM (\_ xs -> xs) (return ())%Run a stream, discarding the results. streamly-core Definitions:toList = Stream.foldr (:) [] toList = Stream.fold Fold.toListConvert a stream into a list in the underlying monad. The list can be consumed lazily in a lazy monad (e.g. ). In a strict monad (e.g. IO) the whole list is generated and buffered before it can be consumed.Warning! 
working on large lists accumulated as buffers in memory could be very inefficient, consider using Streamly.Data.Array instead.6Note that this could a bit more efficient compared to Stream.fold Fold.toList+, and it can fuse with pure list consumers. streamly-core Compare two streams for equality streamly-core&Compare two streams lexicographically. streamly-core!mapM f = Stream.sequence . fmap fApply a monadic function to each element of the stream and replace it with the output of the resulting action.#s = Stream.fromList ["a", "b", "c"]-Stream.fold Fold.drain $ Stream.mapM putStr sabc streamly-core Take first n/ elements from the stream and discard the rest. streamly-coreSame as  but with a monadic predicate. streamly-core and then generated from the array before calling this function. Caching may also improve performance if the stream is expensive to evaluate.See  ?& for a much faster fused alternative.Time: O(m x n) Pre-release streamly-coreunfoldMany unfold stream uses unfold to map the input stream elements to streams and then flattens the generated streams into a single output stream.Like  but uses an  for stream generation. Unlike  this can fuse the  code with the inner loop and therefore provide many times better performance. streamly-coreMap a stream producing monadic function on each element of the stream and then flatten the results into a single stream. Since the stream generation function is monadic, unlike , it can produce an effect at the beginning of each iteration of the inner loop.See  for a fusible alternative. streamly-coreMap a stream producing function on each element of the stream and then flatten the results into a single stream.,concatMap f = Stream.concatMapM (return . f)$concatMap f = Stream.concat . fmap fconcatMap f = Stream.unfoldMany (Unfold.lmap f Unfold.fromStream)See  for a fusible alternative. streamly-core/Flatten a stream of streams to a single stream.concat = Stream.concatMap id Pre-release streamly-coreGiven a stream value in the underlying monad, lift and join the underlying monad with the stream monad.0concatEffect = Stream.concat . Stream.fromEffectconcatEffect eff = Stream.concatMapM (\() -> eff) (Stream.fromPure ()) See also: ,  streamly-coreGenerate a stream from an initial state, scan and concat the stream, generate a stream again from the final state of the previous scan and repeat the process. streamly-coreSame as  except that the traversal of the last element on a level is emitted first and then going backwards up to the first element (reversed ordering). This may be slightly faster than . streamly-core Similar to  except that it traverses the stream in breadth first style (BFS). First, all the elements in the input stream are emitted, and then their traversals are emitted.)Example, list a directory tree using BFS:7f = either (Just . Dir.readEitherPaths) (const Nothing)"input = Stream.fromPure (Left ".")$ls = Stream.concatIterateBfs f input Pre-release streamly-coreTraverse the stream in depth first style (DFS). Map each element in the input stream to a stream and flatten, recursively map the resulting elements as well to a stream and flatten until no more streams are generated.)Example, list a directory tree using DFS:7f = either (Just . Dir.readEitherPaths) (const Nothing)"input = Stream.fromPure (Left ".")$ls = Stream.concatIterateDfs f inputThis is equivalent to using  concatIterateWith StreamK.append. 
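The folding and flattening operations described above compose as in the following sketch; the fold result and the concatMap versus unfoldMany contrast are the ones documented in this section.

```haskell
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Data.Unfold as Unfold

main :: IO ()
main = do
    -- A terminating left fold driven over a stream.
    print =<< Stream.fold Fold.sum (Stream.enumerateFromTo 1 (100 :: Int))
    -- 5050

    -- concatMap: map every element to a stream and flatten the results.
    print =<< Stream.toList
        (Stream.concatMap (\x -> Stream.fromList [x, x]) (Stream.fromList [1, 2, 3 :: Int]))
    -- [1,1,2,2,3,3]

    -- unfoldMany does the same job with an Unfold, which fuses with the
    -- outer loop and is usually much faster than concatMap.
    print =<< Stream.toList
        (Stream.unfoldMany Unfold.fromList (Stream.fromList [[1, 2], [3, 4 :: Int]]))
    -- [1,2,3,4]
```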
Pre-release streamly-coreSame as concatIterateDfs) but more efficient due to stream fusion.)Example, list a directory tree using DFS:2f = Unfold.either Dir.eitherReaderPaths Unfold.nil"input = Stream.fromPure (Left ".")$ls = Stream.unfoldIterateDfs f input Pre-release streamly-coreLike  but processes the children in reverse order, therefore, may be slightly faster. Pre-release streamly-coreLike ( but uses breadth first style traversal. Pre-release streamly-coreBinary BFS style reduce, folds a level entirely using the supplied fold function, collecting the outputs as next level of the tree, then repeats the same process on the next level. The last elements of a previously folded level are folded first. streamly-coreN-Ary BFS style iterative fold, if the input stream finished before the fold then it returns Left otherwise Right. If the fold returns Left we terminate. Unimplemented streamly-coreLike  but evaluates the fold even if the fold did not receive any input, therefore, always results in a non-empty output even on an empty stream (default result of the fold).Example, empty stream:f = Fold.take 2 Fold.sum7fmany = Stream.fold Fold.toList . Stream.foldManyPost ffmany $ Stream.fromList [][0]Example, last fold empty:fmany $ Stream.fromList [1..4][3,7,0]Example, last fold non-empty:fmany $ Stream.fromList [1..5][3,7,5]#Note that using a closed fold e.g.  Fold.take 0, would result in an infinite stream without consuming the input. Pre-release streamly-coreApply a  repeatedly on a stream and emit the results in the output stream. Definition:1foldMany f = Stream.parseMany (Parser.fromFold f)Example, empty stream:f = Fold.take 2 Fold.sum3fmany = Stream.fold Fold.toList . Stream.foldMany ffmany $ Stream.fromList [][]Example, last fold empty:fmany $ Stream.fromList [1..4][3,7]Example, last fold non-empty:fmany $ Stream.fromList [1..5][3,7,5]#Note that using a closed fold e.g.  Fold.take 0, would result in an infinite stream on a non-empty input stream. streamly-core&Group the input stream into groups of n elements each and then fold each group using the provided fold function. %groupsOf n f = foldMany (FL.take n f)Stream.toList $ Stream.groupsOf 2 Fold.sum (Stream.enumerateFromTo 1 10)[3,7,11,15,19]/This can be considered as an n-fold version of  where we apply = repeatedly on the leftover stream until the stream exhausts. streamly-coreLike  but for the J type. The supplied action is used as the initial value for each refold.Internal streamly-coreLike  foldIterateM but using the J type instead. This could be much more efficient due to stream fusion.Internal streamly-core The refold  indexerBy f n< takes an (index, len) tuple as initial input, and returns (index + len + n, b) as output where b is the output of the fold. streamly-coreLike  splitOnSuffix but generates a stream of (index, len) tuples marking the places where the predicate matches in the stream. Pre-releaseUXVW5 !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:# streamly-core Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the second element of tuple and discarding the first element of the tuple. discardFirst = Unfold.lmap snd  Pre-release streamly-core Convert an  into an unfold accepting a tuple as an argument, using the argument of the original fold as the first element of tuple and discarding the second element of the tuple.  
discardSecond = Unfold.lmap fst  Pre-release streamly-core Convert an  that accepts a tuple as an argument into an unfold that accepts a tuple with elements swapped. swap = Unfold.lmap Tuple.swap  Pre-release streamly-core Compose an  and a  . Given an  Unfold m a b and a  Fold m b c, returns a monadic action a -> m c representing the application of the fold on the unfolded stream.-Unfold.fold Fold.sum Unfold.fromList [1..100]5050*fold f u = Stream.fold f . Stream.unfold u Pre-release streamly-core7Apply a fold multiple times on the output of an unfold. Pre-release streamly-core5Choose left or right unfold based on an either input. Pre-release streamly-coreScan the output of an # to change it in a stateful manner. Pre-release streamly-coreScan the output of an  to change it in a stateful manner. Once fold is done it will restart from its initial state.:u = Unfold.scanMany (Fold.take 2 Fold.sum) Unfold.fromList%Unfold.fold Fold.toList u [1,2,3,4,5][0,1,3,0,3,7,0,5] Pre-release streamly-coreScan the output of an  to change it in a stateful manner. Once fold is done it will stop.6u = Unfold.scan (Fold.take 2 Fold.sum) Unfold.fromList%Unfold.fold Fold.toList u [1,2,3,4,5][0,1,3] Pre-release streamly-coreScan the output of an # to change it in a stateful manner. Pre-release streamly-coreLift a monadic function into an unfold generating a nil stream with a side effect. streamly-coreAn empty stream. streamly-core:Prepend a monadic single element generator function to an =. The same seed is used in the action as well as the unfold. Pre-release streamly-core&Convert a list of monadic values to a  streamly-core Given a seed  (n, action)%, generates a stream replicating the action n times. streamly-core0Generates an infinite stream repeating the seed. streamly-coreGenerates an infinite stream starting with the given seed and applying the given function repeatedly. streamly-corefromIndicesM gen. generates an infinite stream of values using gen starting from the seed. 8fromIndicesM f = Unfold.mapM f $ Unfold.enumerateFrom 0  Pre-release streamly-core!u = Unfold.take 2 Unfold.fromList"Unfold.fold Fold.toList u [1..100][1,2] streamly-coreSame as  but with a monadic predicate. streamly-core2Include only those elements that pass a predicate. streamly-core drop n unf drops n' elements from the stream generated by unf. streamly-coredropWhileM f unf- drops elements from the stream generated by unf9 while the condition holds true. The condition function f is monadic in nature. streamly-core Similar to $ but with a pure condition function. streamly-coreLike  but with following differences: alloc action a -> m c# runs with async exceptions enabledcleanup action c -> m d won't run if the stream is garbage collected after partial evaluation.Inhibits stream fusion Pre-release streamly-coreRun the alloc action a -> m c with async exceptions disabled but keeping blocking operations interruptible (see XY). Use the output c as input to  Unfold m c b to generate an output stream. When unfolding use the supplied try operation forall s. m s -> m (Either e s) to catch synchronous exceptions. If an exception occurs run the exception handling unfold Unfold m (c, e) b.The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation. 
See bracket) for the semantics of the cleanup action.gbracket6 can express all other exception handling combinators.Inhibits stream fusion Pre-release streamly-coreRun a side effect a -> m c on the input a before unfolding it using  Unfold m a b. (before f = lmapM (\a -> f a >> return a) Pre-release streamly-coreLike after with following differences:action a -> m c won't run if the stream is garbage collected after partial evaluation.Monad m( does not require any other constraints. Pre-release streamly-coreUnfold the input a using  Unfold m a b, run an action on a whenever the unfold stops normally, or if it is garbage collected after a partial lazy evaluation.The semantics of the action a -> m c1 are similar to the cleanup action semantics in bracket. See also  Pre-release streamly-coreUnfold the input a using  Unfold m a b, run the action a -> m c on a* if the unfold aborts due to an exception.Inhibits stream fusion Pre-release streamly-coreLike  with following differences:action a -> m c won't run if the stream is garbage collected after partial evaluation.Inhibits stream fusion Pre-release streamly-coreUnfold the input a using  Unfold m a b, run an action on a whenever the unfold stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.The semantics of the action a -> m c1 are similar to the cleanup action semantics in bracket. )finally release = bracket return release  See also Inhibits stream fusion Pre-release streamly-coreLike  but with following differences: alloc action a -> m c# runs with async exceptions enabledcleanup action c -> m d won't run if the stream is garbage collected after partial evaluation.Inhibits stream fusion Pre-release streamly-coreRun the alloc action a -> m c with async exceptions disabled but keeping blocking operations interruptible (see XY). Use the output c as input to  Unfold m c b to generate an output stream.c0 is usually a resource under the state of monad m, e.g. a file handle, that requires a cleanup after use. The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation.bracket only guarantees that the cleanup action runs, and it runs with async exceptions enabled. The action must ensure that it can successfully cleanup the resource in the face of sync or async exceptions.When the stream ends normally or on a sync exception, cleanup action runs immediately in the current thread context, whereas in other cases it runs in the GC context, therefore, cleanup may be delayed until the GC gets to run. See also: , gbracketInhibits stream fusion Pre-release streamly-coreWhen unfolding  Unfold m a b if an exception e occurs, unfold e using  Unfold m e b.Inhibits stream fusion Pre-release streamly-corebefore streamly-coretry (exception handling) streamly-coreafter, on normal stop streamly-core on exception streamly-core unfold to run streamly-corebefore streamly-coreafter, on normal stop, or GC streamly-coreaction on exception streamly-corestream on exception streamly-coretry (exception handling) streamly-core unfold to runUWXVUXVWZ!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789:=  [!(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:; streamly-core&Lazy left fold to a transformer monad. streamly-coreRight fold to a transformer monad. This is the most general right fold function.  
is a special case of  , however ' implementation can be more efficient:foldrS = Stream.foldrT*step f x xs = lift $ f x (runIdentityT xs)?foldrM f z s = runIdentityT $ Stream.foldrT (step f) (lift z) s can be used to translate streamly streams to other transformer monads e.g. to a different streaming type. Pre-release streamly-coreLift the inner monad m of  Stream m a to t m where t is a monad transformer. streamly-core(Evaluate the inner monad of a stream as . streamly-core6Run a stream transformation using a given environment. streamly-core(Evaluate the inner monad of a stream as .,evalStateT s = fmap snd . Stream.runStateT s streamly-core(Evaluate the inner monad of a stream as > and emit the resulting state and value pair after each step. streamly-coreRun a stateful (StateT) stream transformation using a given state. m d won't run if the stream is garbage collected after partial evaluation.Inhibits stream fusion Pre-release streamly-coreRun the alloc action m c with async exceptions disabled but keeping blocking operations interruptible (see XY). Use the output c as input to c -> Stream m b to generate an output stream. When generating the stream use the supplied try operation  forall s. m s -> m (Either e s) to catch synchronous exceptions. If an exception occurs run the exception handler &c -> e -> Stream m b -> m (Stream m b) . Note that  does not rethrow the exception, it has to be done by the exception handler if desired.The cleanup action c -> m d, runs whenever the stream ends normally, due to a sync or async exception or if it gets garbage collected after a partial lazy evaluation. See bracket) for the semantics of the cleanup action.6 can express all other exception handling combinators.Inhibits stream fusion Pre-release streamly-coreRun the action m b, before the stream yields its first element.7Same as the following but more efficient due to fusion:+before action xs = Stream.nilM action <> xsbefore action xs = Stream.concatMap (const xs) (Stream.fromEffect action) streamly-coreLike after, with following differences:action m b won't run if the stream is garbage collected after partial evaluation.Monad m( does not require any other constraints.%has slightly better performance than after..Same as the following, but with stream fusion:0afterUnsafe action xs = xs <> Stream.nilM action Pre-release streamly-coreRun the action IO b whenever the stream is evaluated to completion, or if it is garbage collected after a partial lazy evaluation.The semantics of the action IO b4 are similar to the semantics of cleanup action in . See also  streamly-coreRun the action m b if the stream evaluation is aborted due to an exception. The exception is not caught, simply rethrown.Observes exceptions only in the stream generation, and not in stream consumers.Inhibits stream fusion streamly-coreLike bracket but with following differences: alloc action m b# runs with async exceptions enabledcleanup action b -> m c won't run if the stream is garbage collected after partial evaluation.%has slightly better performance than .Inhibits stream fusion Pre-release streamly-coreLike  but can use 3 separate cleanup actions depending on the mode of termination: When the stream stops normally$When the stream is garbage collected'When the stream encounters an exception0bracketIO3 before onStop onGC onException action runs action using the result of before. 
If the stream stops, onStop1 action is executed, if the stream is abandoned onGC5 is executed, if the stream encounters an exception  onException is executed.,The exception is not caught, it is rethrown.Inhibits stream fusion Pre-release streamly-coreRun the alloc action IO b with async exceptions disabled but keeping blocking operations interruptible (see XY). Use the output b+ of the IO action as input to the function b -> Stream m a to generate an output stream.b is usually a resource under the IO monad, e.g. a file handle, that requires a cleanup after use. The cleanup action  b -> IO c, runs whenever (1) the stream ends normally, (2) due to a sync or async exception or, (3) if it gets garbage collected after a partial lazy evaluation. The exception is not caught, it is rethrown. only guarantees that the cleanup action runs, and it runs with async exceptions enabled. The action must ensure that it can successfully cleanup the resource in the face of sync or async exceptions.When the stream ends normally or on a sync exception, cleanup action runs immediately in the current thread context, whereas in other cases it runs in the GC context, therefore, cleanup may be delayed until the GC gets to run. An example where GC based cleanup happens is when a stream is being folded but the fold terminates without draining the entire stream or if the consumer of the stream encounters an exception.Observes exceptions only in the stream generation, and not in stream consumers. See also: Inhibits stream fusion streamly-core%Alternate (custom) implementation of bracket. streamly-coreLike finally with following differences:action m b won't run if the stream is garbage collected after partial evaluation.%has slightly better performance than .Inhibits stream fusion Pre-release streamly-coreRun the action IO b whenever the stream stream stops normally, aborts due to an exception or if it is garbage collected after a partial lazy evaluation.$The semantics of running the action IO b; are similar to the cleanup action semantics described in .finallyIO release = Stream.bracketIO (return ()) (const release) See also Inhibits stream fusion streamly-coreLike  but the exception handler is also provided with the stream that generated the exception as input. The exception handler can thus re-evaluate the stream to retry the action that failed. The exception handler can again call * on it to retry the action multiple times.This is highly experimental. In a stream of actions we can map the stream with a retry combinator to retry each action on failure.Inhibits stream fusion Pre-release streamly-coreWhen evaluating a stream if an exception occurs, stream evaluation aborts and the specified exception handler is run with the exception as argument. The exception is caught and handled unless the handler decides to rethrow it. Note that exception handling is not applied to the stream returned by the exception handler.Observes exceptions only in the stream generation, and not in stream consumers.Inhibits stream fusion streamly-core%Alternate (custom) implementation of . 
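A sketch of resource management with the bracket family described above, assuming bracketIO is imported from the internal Streamly.Internal.Data.Stream module of the version documented here; the file path in main is only an example.

```haskell
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Internal.Data.Stream as Stream
import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

-- Read the first line of a file. The handle is closed when the stream
-- stops normally, on an exception, or via the GC hook if the stream is
-- abandoned after a partial evaluation.
firstLine :: FilePath -> IO (Maybe String)
firstLine path =
    Stream.fold Fold.one
        $ Stream.bracketIO (openFile path ReadMode) hClose
        $ \h -> Stream.fromEffect (hGetLine h)

main :: IO ()
main = print =<< firstLine "/etc/hostname"  -- example path
```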
streamly-corebefore streamly-coreafter, on normal stop streamly-core on exception streamly-coretry (exception handling) streamly-corestream generator streamly-corebefore streamly-coreon normal stop streamly-core on exception streamly-core&on GC without normal stop or exception streamly-coretry (exception handling) streamly-corestream generator ^!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:3 streamly-coreAn  holds a single  -able value. streamly-core Create a new . Pre-release streamly-coreWrite a value to an . Pre-release streamly-coreRead a value from an . Pre-release streamly-coreModify the value of an * using a function with strict application. Pre-release streamly-core4Generate a stream by continuously reading the IORef.This operation reads the IORef without any synchronization. It can be assumed to be atomic because the IORef (MutableByteArray) is always aligned to Int boundaries, we are assuming that compiler uses single instructions to access the memory. It may read stale values though until caches are synchronised in a multiprocessor architecture. Pre-release!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com pre-releaseGHC Safe-Inferred" $%'.145789: streamly-coreAdjustable periodic timer. streamly-core asyncClock g starts a clock thread that updates an IORef with current time as a 64-bit value in microseconds, every g seconds. The IORef can be read asynchronously. The thread exits automatically when the reference to the returned  is lost.Minimum granularity of clock update is 1 ms. Higher is better for performance.CAUTION! This is safe only on a 64-bit machine. On a 32-bit machine a 64-bit Var cannot be read consistently without a lock while another thread is writing to it. streamly-core"timer clockType granularity period creates a timer. The timer produces timer ticks at specified time intervals that can be waited upon using . If the previous tick is not yet processed, the new tick is lost. streamly-coreBlocking wait for a timer tick. streamly-coreResets the current period. streamly-core1Elongates the current period by specified amount. Unimplemented streamly-core0Shortens the current period by specified amount. Unimplemented streamly-core3Show the remaining time in the current time period. Unimplemented!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred# $%'.145789: streamly-coreReturns the heap allocation overhead for allocating a byte array. Each heap object contains a one word header. Byte arrays contain the size of the array after the header.See https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/storage/heap-objects#arrays streamly-core&When we allocate a byte array of size k2 the allocator actually allocates memory of size k + byteArrayOverhead. arrayPayloadSize n returns the size of the array in bytes that would result in an allocation of n bytes. streamly-coreDefault maximum buffer size in bytes, for reading from and writing to IO devices, the value is 32KB minus GHC allocation overhead, which is a few bytes, so that the actual allocation is 32KB._(c) 2020 Composewell Technologies and Contributors (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:E- streamly-coreTypes that can be enumerated as a stream. The operations in this type class are equivalent to those in the  type class, except that these generate a stream instead of a list. 
Use the functions in )Streamly.Internal.Data.Stream.Enumeration module to define new instances. streamly-coreenumerateFrom from/ generates a stream starting with the element from, enumerating up to  when the type is 8 or generating an infinite stream when the type is not .?Stream.toList $ Stream.take 4 $ Stream.enumerateFrom (0 :: Int) [0,1,2,3]For  types, enumeration is numerically stable. However, no overflow or underflow checks are performed.8Stream.toList $ Stream.take 4 $ Stream.enumerateFrom 1.1[1.1,2.1,3.1,4.1] streamly-core3Generate a finite stream starting with the element from(, enumerating the type up to the value to. If to is smaller than from# then an empty stream is returned.*Stream.toList $ Stream.enumerateFromTo 0 4 [0,1,2,3,4]For 3 types, the last element is equal to the specified to5 value after rounding to the nearest integral value.,Stream.toList $ Stream.enumerateFromTo 1.1 4[1.1,2.1,3.1,4.1].Stream.toList $ Stream.enumerateFromTo 1.1 4.6[1.1,2.1,3.1,4.1,5.1] streamly-coreenumerateFromThen from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from. Enumeration can occur downwards or upwards depending on whether then comes before or after from. For  types the stream ends when  is reached, for unbounded types it keeps enumerating infinitely. 2 then return Nothing% else return (Just (b, b + 1))&in Stream.toList $ Stream.unfoldrM f 0:}[0,1,2] streamly-coreBuild a stream by unfolding a pure step function step starting from a seed s. The step function returns the next element in the stream and the next seed value. When it is done it returns # and the stream ends. For example,:{ let f b = if b > 2 then Nothing else Just (b, b + 1)%in Stream.toList $ Stream.unfoldr f 0:}[0,1,2] streamly-core)repeatM = Stream.sequence . Stream.repeatGenerate a stream by repeatedly executing a monadic action forever.:{repeatAction =6 Stream.repeatM (threadDelay 1000000 >> print 1) & Stream.take 10 & Stream.fold Fold.drain:} streamly-core6Generate an infinite stream by repeating a pure value."repeat x = Stream.repeatM (pure x) streamly-core3replicateM n = Stream.sequence . Stream.replicate n1Generate a stream by performing a monadic action n times. streamly-core+replicate n = Stream.take n . Stream.repeat,replicate n x = Stream.replicateM n (pure x)Generate a stream of length n by repeating a value n times. streamly-coreFor floating point numbers if the increment is less than the precision then it just gets lost. Therefore we cannot always increment it correctly by just repeated addition. 9007199254740992 + 1 + 1 :: Double => 9.007199254740992e15 9007199254740992 + 2 :: Double => 9.007199254740994e15Instead we accumulate the increment counter and compute the increment every time before adding it to the starting number.This works for Integrals as well as floating point numbers, but enumerateFromStepIntegral is faster for integrals. streamly-core Enumerate an % type in steps up to a given limit. (enumerateFromThenToIntegral from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to.8Stream.toList $ Stream.enumerateFromThenToIntegral 0 2 6 [0,2,4,6]>Stream.toList $ Stream.enumerateFromThenToIntegral 0 (-2) (-6) [0,-2,-4,-6] streamly-core Enumerate an  type in steps. 
$enumerateFromThenIntegral from then+ generates a stream whose first element is from, the second element is then2 and the successive elements are in increments of  then - from,. The stream is bounded by the size of the  type.Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenIntegral (0 :: Int) 2 [0,2,4,6]Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenIntegral (0 :: Int) (-2) [0,-2,-4,-6] streamly-core#enumerateFromStepIntegral from step6 generates an infinite stream whose first element is from3 and the successive elements are in increments of step.CAUTION: This function is not safe for finite integral types. It does not check for overflow, underflow or bounds.Stream.toList $ Stream.take 4 $ Stream.enumerateFromStepIntegral 0 2 [0,2,4,6]Stream.toList $ Stream.take 3 $ Stream.enumerateFromStepIntegral 0 (-2) [0,-2,-4] streamly-core Enumerate an  type up to a given limit. enumerateFromToIntegral from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to.2Stream.toList $ Stream.enumerateFromToIntegral 0 4 [0,1,2,3,4] streamly-core Enumerate an  type. enumerateFromIntegral from, generates a stream whose first element is from3 and the successive elements are in increments of 1+. The stream is bounded by the size of the  type.Stream.toList $ Stream.take 4 $ Stream.enumerateFromIntegral (0 :: Int) [0,1,2,3] streamly-core&Numerically stable enumeration from a  number in steps of size 1. enumerateFromFractional from, generates a stream whose first element is from2 and the successive elements are in increments of 12. No overflow or underflow checks are performed.This is the equivalent to  for  types. For example:Stream.toList $ Stream.take 4 $ Stream.enumerateFromFractional 1.1[1.1,2.1,3.1,4.1] streamly-core&Numerically stable enumeration from a  number in steps. %enumerateFromThenFractional from then, generates a stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from2. No overflow or underflow checks are performed.This is the equivalent of  for  types. For example:Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenFractional 1.1 2.1[1.1,2.1,3.1,4.1]Stream.toList $ Stream.take 4 $ Stream.enumerateFromThenFractional 1.1 (-2.1)0[1.1,-2.1,-5.300000000000001,-8.500000000000002] streamly-core&Numerically stable enumeration from a  number to a given limit. !enumerateFromToFractional from to3 generates a finite stream whose first element is from. and successive elements are in increments of 1 up to to.This is the equivalent of  for  types. For example:6Stream.toList $ Stream.enumerateFromToFractional 1.1 4[1.1,2.1,3.1,4.1]8Stream.toList $ Stream.enumerateFromToFractional 1.1 4.6[1.1,2.1,3.1,4.1,5.1]7Notice that the last element is equal to the specified to. value after rounding to the nearest integer. streamly-core&Numerically stable enumeration from a ( number in steps up to a given limit. *enumerateFromThenToFractional from then to3 generates a finite stream whose first element is from, the second element is then3 and the successive elements are in increments of  then - from up to to.This is the equivalent of  for  types. 
For example: print x >> threadDelay 1000000)5Stream.fold f $ Stream.take 3 $ Stream.timesWith 0.01(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...)).Note: This API is not safe on 32-bit machines. Pre-release streamly-coreabsTimesWith g returns a stream of absolute timestamps using a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. Any granularity lower than 1 ms is treated as 1 ms.f = Fold.drainMapM printStream.fold f $ Stream.delayPre 1 $ Stream.take 3 $ Stream.absTimesWith 0.01*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...}).Note: This API is not safe on 32-bit machines. Pre-release streamly-corerelTimesWith g returns a stream of relative time values starting from 0, using a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. Any granularity lower than 1 ms is treated as 1 ms.f = Fold.drainMapM printStream.fold f $ Stream.delayPre 1 $ Stream.take 3 $ Stream.relTimesWith 0.01RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...).Note: This API is not safe on 32-bit machines. Pre-release streamly-coretimes returns a stream of time value tuples with clock of 10 ms granularity. The first component of the tuple is an absolute time reference (epoch) denoting the start of the stream and the second component is a time relative to the reference.9f = Fold.drainMapM (\x -> print x >> threadDelay 1000000),Stream.fold f $ Stream.take 3 $ Stream.times(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...))(AbsTime (TimeSpec {sec = ..., nsec = ...}),RelTime64 (NanoSecond64 ...)).Note: This API is not safe on 32-bit machines. Pre-release streamly-coreabsTimes returns a stream of absolute timestamps using a clock of 10 ms granularity.f = Fold.drainMapM printStream.fold f $ Stream.delayPre 1 $ Stream.take 3 $ Stream.absTimes*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...})*AbsTime (TimeSpec {sec = ..., nsec = ...}).Note: This API is not safe on 32-bit machines. Pre-release streamly-corerelTimes returns a stream of relative time values starting from 0, using a clock of granularity 10 ms.f = Fold.drainMapM printStream.fold f $ Stream.delayPre 1 $ Stream.take 3 $ Stream.relTimesRelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...)RelTime64 (NanoSecond64 ...).Note: This API is not safe on 32-bit machines. Pre-release streamly-core durations g returns a stream of relative time values measuring the time elapsed since the immediate predecessor element of the stream was generated. The first element of the stream is always 0.  durations uses a clock of granularity g specified in seconds. A low granularity clock is more expensive in terms of CPU usage. The minimum granularity is 1 millisecond. Durations lower than 1 ms will be 0..Note: This API is not safe on 32-bit machines. Unimplemented streamly-coreGenerate a singleton event at or after the specified absolute time. Note that this is different from a threadDelay, a threadDelay starts from the time when the action is evaluated, whereas if we use AbsTime based timeout it will immediately expire if the action is evaluated too late. 
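A minimal sketch of the time-stamp streams above, printing three relative timestamps from the 10 ms granularity clock. These are pre-release internal APIs; the import location below is an assumption and may differ between releases.

```haskell
import Data.Function ((&))
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Internal.Data.Stream as Stream  -- assumed location of relTimes

main :: IO ()
main =
    -- Three relative time values, measured from the start of the stream.
    Stream.relTimes
        & Stream.take 3
        & Stream.fold (Fold.drainMapM print)
```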
Unimplemented streamly-coreGenerate an infinite stream with the first element generated by the action m and each successive element derived by applying the monadic function f on the previous element.:{ print x >> return (x + 1)) (return 0) & Stream.take 3 & Stream.toList:}01[0,1,2] streamly-core!Generate an infinite stream with x as the first element and each successive element derived by applying the function f on the previous element.5Stream.toList $ Stream.take 5 $ Stream.iterate (+1) 1 [1,2,3,4,5] streamly-core'Convert a list of monadic actions to a  streamly-core3fromFoldable = Prelude.foldr Stream.cons Stream.nilConstruct a stream from a  containing pure values:/WARNING: O(n^2), suitable only for a small number of elements in the stream/ streamly-core5fromFoldableM = Prelude.foldr Stream.consM Stream.nilConstruct a stream from a  containing pure values:/WARNING: O(n^2), suitable only for a small number of elements in the stream/ streamly-core Keep reading  elements from an immutable  onwards.Unsafe:/ The caller is responsible for safe addressing. Pre-release streamly-coreTake n % elements starting from an immutable  onwards.+fromPtrN n = Stream.take n . Stream.fromPtrUnsafe:/ The caller is responsible for safe addressing. Pre-release streamly-coreRead bytes from an immutable  until a 0 byte is encountered, the 0 byte is not included in the stream.:set -XMagicHashfromByteStr# addr = Stream.takeWhile (/= 0) $ Stream.fromPtr $ Ptr addrUnsafe:/ The caller is responsible for safe addressing.Note that this is completely safe when reading from Haskell string literals because they are guaranteed to be NULL terminated:/Stream.toList $ Stream.fromByteStr# "\1\2\3\0"#[1,2,3];5!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789: streamly-coreMake a  from a . The fold just throws an exception if the parser fails or tries to backtrack.This can be useful in combinators that accept a Fold and we know that a Parser cannot fail or failure exception is acceptable as there is no way to recover. Pre-release streamly-coreMake a  from a 2. This parser sends all of its input to the fold. streamly-coreConvert a Maybe returning fold to an error returning parser. The first argument is the error message that the parser would return when the fold returns Nothing. Pre-release streamly-corePeek the head element of a stream, without consuming it. Fails if it encounters end of input.Stream.parse ((,) <$> Parser.peek <*> Parser.satisfy (> 0)) $ Stream.fromList [1] Right (1,1)  peek = lookAhead (satisfy True)  streamly-core8Succeeds if we are at the end of input, fails otherwise.Stream.parse ((,) <$> Parser.satisfy (> 0) <*> Parser.eof) $ Stream.fromList [1] Right (1,()) streamly-core.Return the next element of the input. Returns ! on end of input. Also known as . Pre-release streamly-coreMap an  returning function on the next element in the stream. If the function returns 'Left err', the parser fails with the error message err otherwise returns the  value. Pre-release streamly-coreMap a  returning function on the next element in the stream. The parser fails if the function returns  otherwise returns the  value.=toEither = Maybe.maybe (Left "maybe: predicate failed") Right&maybe f = Parser.either (toEither . 
f)maybe f = Parser.fromFoldMaybe "maybe: predicate failed" (Fold.maybe f) Pre-release streamly-coreReturns the next element if it passes the predicate, fails otherwise.>Stream.parse (Parser.satisfy (== 1)) $ Stream.fromList [1,0,1]Right 1-toMaybe f x = if f x then Just x else Nothing$satisfy f = Parser.maybe (toMaybe f) streamly-coreConsume one element from the head of the stream. Fails if it encounters end of input.!one = Parser.satisfy $ const True streamly-coreMatch a specific element.oneEq x = Parser.satisfy (== x) streamly-core/Match anything other than the supplied element."oneNotEq x = Parser.satisfy (/= x) streamly-core3Match any one of the elements in the supplied list..oneOf xs = Parser.satisfy (`Foldable.elem` xs)When performance matters a pattern matching predicate could be more efficient than a  datatype: let p x = case x of a -> True e) -> True _ -> False in satisfy p GHC may use a binary search instead of linear search in the list. Alternatively, you can also use an array instead of list for storage and search. streamly-coreSee performance notes in .2noneOf xs = Parser.satisfy (`Foldable.notElem` xs) streamly-coretakeBetween m n takes a minimum of m and a maximum of n8 input elements and folds them using the supplied fold. Stops after n, elements. Fails if the stream ends before m elements could be taken. Examples: - >>> :{ takeBetween' low high ls = Stream.parse prsr (Stream.fromList ls) where prsr = Parser.takeBetween low high Fold.toList :}  takeBetween' 2 4 [1, 2, 3, 4, 5]Right [1,2,3,4]takeBetween' 2 4 [1, 2] Right [1,2]takeBetween' 2 4 [1]Left (ParseError "takeBetween: Expecting alteast 2 elements, got 1")takeBetween' 0 0 [1, 2]Right []takeBetween' 0 1 []Right [] takeBetween is the most general take operation, other take operations can be defined in terms of takeBetween. For example:take n = Parser.takeBetween 0 n!takeEQ n = Parser.takeBetween n n(takeGE n = Parser.takeBetween n maxBound Pre-release streamly-coreStops after taking exactly n input elements.Stops - after consuming n elements.Fails - if the stream or the collecting fold ends before it can collect exactly n elements.Stream.parse (Parser.takeEQ 2 Fold.toList) $ Stream.fromList [1,0,1] Right [1,0]Stream.parse (Parser.takeEQ 4 Fold.toList) $ Stream.fromList [1,0,1]Left (ParseError "takeEQ: Expecting exactly 4 elements, input terminated on 3") streamly-coreTake at least n& input elements, but can collect more.'Stops - when the collecting fold stops.Fails - if the stream or the collecting fold ends before producing n elements.Stream.parse (Parser.takeGE 4 Fold.toList) $ Stream.fromList [1,0,1]Left (ParseError "takeGE: Expecting at least 4 elements, input terminated on 3")Stream.parse (Parser.takeGE 4 Fold.toList) $ Stream.fromList [1,0,1,0,1]Right [1,0,1,0,1] Pre-release streamly-coreLike  but uses a  instead of a  to collect the input. The combinator stops when the condition fails or if the collecting parser stops.Other interesting parsers can be implemented in terms of this parser:takeWhile1 cond p = Parser.takeWhileP cond (Parser.takeBetween 1 maxBound p)takeWhileBetween cond m n p = Parser.takeWhileP cond (Parser.takeBetween m n p)Stops: when the condition fails or the collecting parser stops. Fails: when the collecting parser fails. Pre-release streamly-coreCollect stream elements until an element fails the predicate. 
The element on which the predicate fails is returned back to the input stream.>Stops - when the predicate fails or the collecting fold stops.Fails - never.Stream.parse (Parser.takeWhile (== 0) Fold.toList) $ Stream.fromList [0,0,1,0,1] Right [0,0]=takeWhile cond f = Parser.takeWhileP cond (Parser.fromFold f)We can implement a breakOn using : breakOn p = takeWhile (not p)  streamly-coreLike 0 but takes at least one element otherwise fails.takeWhile1 cond p = Parser.takeWhileP cond (Parser.takeBetween 1 maxBound p) streamly-coreDrain the input as long as the predicate succeeds, running the effects and discarding the results.This is also called  skipWhile in some parsing libraries.+dropWhile p = Parser.takeWhile p Fold.drain streamly-coreParse a block enclosed within open, close brackets. Block contents may be quoted, brackets inside quotes are ignored. Quoting characters can be used within quotes if escaped. A block can have a nested block inside it.Quote begin and end chars are the same. Block brackets and quote chars must not overlap. Block start and end brackets must be different for nesting blocks within blocks.p = Parser.blockWithQuotes (== '\\') (== '"') '{' '}' Fold.toList9Stream.parse p $ Stream.fromList "{msg: \"hello world\"}"Right "msg: \"hello world\"" streamly-coretakeEndBy cond parser parses a token that ends by a separator chosen by the supplied predicate. The separator is also taken with the token.This can be combined with other parsers to implement other interesting parsers as follows:takeEndByLE cond n p = Parser.takeEndBy cond (Parser.fromFold $ Fold.take n p)takeEndByBetween cond m n p = Parser.takeEndBy cond (Parser.takeBetween m n p)-takeEndBy = Parser.takeEndByEsc (const False)See also "Streamly.Data.Fold.takeEndBy". Unlike the fold, the collecting parser in the takeEndBy parser can decide whether to fail or not if the stream does not end with separator. Pre-release streamly-coreLike  but the separator elements can be escaped using an escape char determined by the first predicate. The escape characters are removed. pre-release streamly-coreLike  but the separator is dropped.)See also "Streamly.Data.Fold.takeEndBy_". Pre-release streamly-coreTake either the separator or the token. Separator is a Left value and token is Right value. Unimplemented streamly-coreParse a token that starts with an element chosen by the predicate. The parser fails if the input does not start with the selected element. Nothing) streamly-core with quote processing applied and escape function supplied to escape the quote char within a quote. Can be ysed to parse words and processing the quoting and escaping at the same time.wordProcessQuotes = Parser.wordWithQuotes False (\_ _ -> Nothing) streamly-coreGiven an input stream  [a,b,c,...] and a comparison function cmp", the parser assigns the element a to the first group, then if  a `cmp` b is  b) is also assigned to the same group. If  a `cmp` c is  then c is also assigned to the same group and so on. When the comparison fails the parser is terminated. Each group is folded using the  f9 and the result of the fold is the result of the parser."Stops - when the comparison fails.Fails - never.:{ runGroupsBy eq = Stream.fold Fold.toList; . Stream.parseMany (Parser.groupBy eq Fold.toList) . Stream.fromList:}runGroupsBy (<) [][]runGroupsBy (<) [1] [Right [1]]"runGroupsBy (<) [3, 5, 4, 1, 2, 0]%[Right [3,5,4],Right [1,2],Right [0]] streamly-coreUnlike  this combinator performs a rolling comparison of two successive elements in the input stream. 
groupByRolling: Unlike groupBy, this combinator performs a rolling comparison of two successive elements in the input stream. Assuming the input stream is [a,b,c,...] and the comparison function is cmp, the parser first assigns the element a to the first group; then if a `cmp` b is True, b is also assigned to the same group. If b `cmp` c is True then c is also assigned to the same group, and so on. When the comparison fails the parser is terminated. Each group is folded using the fold f and the result of the fold is the result of the parser. Stops when the comparison fails. Never fails.
    >>> :{
    runGroupsByRolling eq =
        Stream.fold Fold.toList
            . Stream.parseMany (Parser.groupByRolling eq Fold.toList)
            . Stream.fromList
    :}
    >>> runGroupsByRolling (<) []
    []
    >>> runGroupsByRolling (<) [1]
    [Right [1]]
    >>> runGroupsByRolling (<) [3, 5, 4, 1, 2, 0]
    [Right [3,5],Right [4],Right [1,2],Right [0]]
    Pre-release

groupByRollingEither: Like groupByRolling, but if the predicate is True it collects using the first fold as long as the predicate holds True, and if the predicate is False it collects using the second fold as long as it remains False. Returns a Left result for the first case and a Right result for the second case. For example, to detect sorted sequences in a stream, both ascending and descending, we can use 'groupByRollingEither (<=) Fold.toList Fold.toList'. Pre-release.

listEqBy: Match the given sequence of elements using the given comparison function. Returns the original sequence if successful. Definition:
    listEqBy cmp xs = Parser.streamEqBy cmp (Stream.fromList xs) *> Parser.fromPure xs
    >>> Stream.parse (Parser.listEqBy (==) "string") $ Stream.fromList "string"
    Right "string"
    >>> Stream.parse (Parser.listEqBy (==) "mismatch") $ Stream.fromList "match"
    Left (ParseError "streamEqBy: mismtach occurred")

streamEqBy: Like listEqBy but uses a stream instead of a list and does not return the stream.

listEq: Match the input sequence with the supplied list and return it if successful.
    listEq = Parser.listEqBy (==)

(subsequence match): Match if the input stream is a subsequence of the argument stream, i.e. all the elements of the input stream occur, in order, in the argument stream. The elements do not have to occur consecutively. A sequence is considered a subsequence of itself.

(scan): Stateful scan on the input of a parser using a Fold. Unimplemented.

(zip): Zip the input of a fold with a stream. Pre-release.

(indexed): Pair each element of a fold input with its index, starting from index 0. Pre-release.

makeIndexFilter: makeIndexFilter indexer filter predicate generates a fold filtering function using a fold indexing function that attaches an index to each input element, and a filtering function that filters using an ((index, element) -> Bool) predicate. For example:
    filterWithIndex = makeIndexFilter indexed filter
    filterWithAbsTime = makeIndexFilter timestamped filter
    filterWithRelTime = makeIndexFilter timeIndexed filter
    Pre-release

sampleFromthen: sampleFromthen offset stride samples the element at the offset index and then every element at strides of stride. Pre-release.

span: span p f1 f2 composes folds f1 and f2 such that f1 consumes the input as long as the predicate p is True, and f2 consumes the rest of the input.
    >>> let span_ p xs = Stream.parse (Parser.span p Fold.toList Fold.toList) $ Stream.fromList xs
    >>> span_ (< 1) [1,2,3]
    ([],[1,2,3])
    >>> span_ (< 2) [1,2,3]
    ([1],[2,3])
    >>> span_ (< 4) [1,2,3]
    ([1,2,3],[])
    Pre-release

spanBy: Break the input stream into two groups: the first group takes the input as long as the predicate applied to the first element of the stream and the next input element holds True; the second group takes the rest of the input. Pre-release.
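As a concrete sketch of the groupByRollingEither use case mentioned above (detecting sorted runs), the session below assumes the combinator is importable from the internal Parser module, since it is marked Pre-release; the output is what the documented semantics suggest.

    >>> import qualified Streamly.Internal.Data.Parser as Parser  -- assumed location of the Pre-release API
    >>> sorted = Parser.groupByRollingEither (<=) Fold.toList Fold.toList
    >>> Stream.parse sorted (Stream.fromList [1,2,3,2,1::Int])
    Right (Left [1,2,3])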
spanByRolling: Like spanBy but applies the predicate in a rolling fashion, i.e. the predicate is applied to the previous and the next input elements. Pre-release.

takeP: Takes at most n input elements. Stops when the collecting parser stops. Fails when the collecting parser fails.
    >>> Stream.parse (Parser.takeP 4 (Parser.takeEQ 2 Fold.toList)) $ Stream.fromList [1, 2, 3, 4, 5]
    Right [1,2]
    >>> Stream.parse (Parser.takeP 4 (Parser.takeEQ 5 Fold.toList)) $ Stream.fromList [1, 2, 3, 4, 5]
    Left (ParseError "takeEQ: Expecting exactly 5 elements, input terminated on 4")
    Internal

lookAhead: Run a parser without consuming the input.

deintercalateAll: Like deintercalate but the entire input must satisfy the pattern, otherwise the parser fails. This is many times faster than deintercalate.
    >>> p1 = Parser.takeWhile1 (not . (== '+')) Fold.toList
    >>> p2 = Parser.satisfy (== '+')
    >>> p = Parser.deintercalateAll p1 p2 Fold.toList
    >>> Stream.parse p $ Stream.fromList ""
    Right []
    >>> Stream.parse p $ Stream.fromList "1"
    Right [Left "1"]
    >>> Stream.parse p $ Stream.fromList "1+"
    Left (ParseError "takeWhile1: end of input")
    >>> Stream.parse p $ Stream.fromList "1+2+3"
    Right [Left "1",Right '+',Left "2",Right '+',Left "3"]

deintercalate: Apply two parsers alternately to an input stream. The input stream is considered an interleaving of two patterns; the two parsers represent the two patterns. Parsing starts at the first parser and stops at the first parser. It can be used to parse an infix style pattern, e.g. p1 p2 p1. Empty input or a single parse of the first parser is accepted. With p1 and p2 as above:
    >>> p = Parser.deintercalate p1 p2 Fold.toList
    >>> Stream.parse p $ Stream.fromList ""
    Right []
    >>> Stream.parse p $ Stream.fromList "1"
    Right [Left "1"]
    >>> Stream.parse p $ Stream.fromList "1+"
    Right [Left "1"]
    >>> Stream.parse p $ Stream.fromList "1+2+3"
    Right [Left "1",Right '+',Left "2",Right '+',Left "3"]

deintercalate1: Like deintercalate but requires at least one successful parse of the first parser, so empty input fails. With p1 and p2 as above:
    >>> p = Parser.deintercalate1 p1 p2 Fold.toList
    >>> Stream.parse p $ Stream.fromList ""
    Left (ParseError "takeWhile1: end of input")
    >>> Stream.parse p $ Stream.fromList "1"
    Right [Left "1"]
    >>> Stream.parse p $ Stream.fromList "1+"
    Right [Left "1"]
    >>> Stream.parse p $ Stream.fromList "1+2+3"
    Right [Left "1",Right '+',Left "2",Right '+',Left "3"]

sepBy: Apply two parsers alternately to an input stream. The output of the first parser is emitted and the output of the second parser is discarded. It can be used to parse an infix style pattern, e.g. p1 p2 p1. Empty input or a single parse of the first parser is accepted. With p1 and p2 as above:
    >>> p = Parser.sepBy p1 p2 Fold.toList
    >>> Stream.parse p $ Stream.fromList ""
    Right []
    >>> Stream.parse p $ Stream.fromList "1"
    Right ["1"]
    >>> Stream.parse p $ Stream.fromList "1+"
    Right ["1"]
    >>> Stream.parse p $ Stream.fromList "1+2+3"
    Right ["1","2","3"]

(non-backtracking sepBy): A non-backtracking version of sepBy; several times faster.
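To make the sepBy pattern concrete, here is a small sketch that splits comma separated fields, reusing only combinators documented above; the output shown is what the documented semantics suggest.

    >>> field = Parser.takeWhile1 (/= ',') Fold.toList
    >>> comma = Parser.satisfy (== ',')
    >>> csv = Parser.sepBy field comma Fold.toList
    >>> Stream.parse csv (Stream.fromList "red,green,blue")
    Right ["red","green","blue"]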
sepBy1: Like sepBy, but requires at least one successful parse. Definition:
    sepBy1 p1 p2 f = Parser.deintercalate1 p1 p2 (Fold.catLefts f)
    >>> p1 = Parser.takeWhile1 (not . (== '+')) Fold.toList
    >>> p2 = Parser.satisfy (== '+')
    >>> p = Parser.sepBy1 p1 p2 Fold.toList
    >>> Stream.parse p $ Stream.fromList ""
    Left (ParseError "takeWhile1: end of input")
    >>> Stream.parse p $ Stream.fromList "1"
    Right ["1"]
    >>> Stream.parse p $ Stream.fromList "1+"
    Right ["1"]
    >>> Stream.parse p $ Stream.fromList "1+2+3"
    Right ["1","2","3"]

roundRobin: Apply a collection of parsers to an input stream in a round robin fashion. Each parser is applied until it stops and then we repeat, starting with the first parser again. Unimplemented.

sequence: sequence f p collects sequential parses of the parsers in the serial stream p using the fold f. Fails if the input ends or any of the parsers fail. Pre-release.

manyP: Like many but uses a Parser instead of a Fold to collect the results. Parsing stops or fails if the collecting parser stops or fails. Unimplemented.

many: Collect zero or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold. Stops when the downstream fold stops or the parser fails. Never fails; produces zero or more results.
    many = Parser.countBetween 0 maxBound
    Compare with Control.Applicative.many.

some: Collect one or more parses. Apply the supplied parser repeatedly on the input stream and push the parse results to a downstream fold. Stops when the downstream fold stops or the parser fails. Fails if it stops without producing a single result.
    some p f = Parser.manyP p (Parser.takeGE 1 f)
    some = Parser.countBetween 1 maxBound
    Compare with Control.Applicative.some.

countBetween: Collects between m and n sequential parses of parser p using the fold f. Stops after collecting n results. Fails if the input ends or the parser fails before m results are collected.
    countBetween m n p f = Parser.manyP p (Parser.takeBetween m n f)
    Unimplemented

count: Collects exactly n sequential parses of parser p using the fold f. Fails if the input ends or the parser fails before n results are collected.
    count n = Parser.countBetween n n
    count n p f = Parser.manyP p (Parser.takeEQ n f)
    Unimplemented

manyTillP: Like manyTill but uses a Parser to collect the results instead of a Fold. Parsing stops or fails if the collecting parser stops or fails. We can implement parsers like the following using manyTillP:
    countBetweenTill m n f p = manyTillP (takeBetween m n f) p
    Unimplemented

manyTill: manyTill chunking test f tries the parser test on the input; if test fails it backtracks and tries chunking; after chunking succeeds, test is tried again, and so on. The parser stops when test succeeds. The output of test is discarded and the output of chunking is accumulated by the supplied fold. The parser fails if chunking fails. Stops when the fold f stops.

manyThen: manyThen f collect recover repeats the parser collect on the input and collects the output in the supplied fold. If the parser collect fails, the parser recover is run until it stops and then we start repeating the parser collect again. The parser fails if the recovery parser fails. For example, this can be used to find a key frame in a video stream after an error. Unimplemented.

(retry with a failure budget): Keep trying a parser up to a maximum of n failures. When the parser fails, the input consumed till now is dropped and a new instance is tried on the fresh input. Unimplemented. A related variant aborts after n successive failures. Unimplemented.

retry: Keep trying a parser until it succeeds. When the parser fails, the input consumed till now is dropped and a new instance is tried on the fresh input. Unimplemented.
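A small session sketch of many and some from above, using only parsers documented in this section; the outputs are what the documented semantics imply.

    >>> evens = Parser.satisfy even
    >>> Stream.parse (Parser.many evens Fold.toList) (Stream.fromList [2,4,6,1,3::Int])
    Right [2,4,6]
    >>> Stream.parse (Parser.many evens Fold.toList) (Stream.fromList ([] :: [Int]))
    Right []

On the same empty input, some would fail instead of returning Right [], because it must produce at least one result.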
The quoted-word and block parsers above take a number of configuration arguments: an escape character, a quote character (to quote inside brackets), block opening and closing brackets, a predicate matching the escape element, a function that maps a left quote to its right quote (returning Nothing otherwise), a predicate matching the word separator, a flag to retain the quotes and escape chars in the output, and a translation function of the form quote char -> escaped char -> translated char.

Buffered sources (Source/Producer):

Source: A seed with a buffer. It allows us to unread, i.e. return some data after reading it. Useful in backtracking parsers.

source: Make a source from a seed value. The buffer starts out empty. Pre-release.

unread: Return some unused data back to the source. The data is prepended (consed) to the source. Pre-release.

isEmpty: Determine if the source is empty.

producer: Convert a producer to a producer from a buffered source. Any buffered data is read first and then the seed is unfolded. Pre-release.

parseMany: Apply a parser repeatedly on a buffered source producer to generate a producer of parsed values. Pre-release.

simplify: Simplify a producer to an unfold. Pre-release.

fromStreamD: Convert a StreamD stream into a producer. Pre-release.

Mutable arrays (sliced representation): the array type is described by the internal contents (representing the entire allocated array), the starting index of this slice, the length of this slice, and the true length of the array. The true length coincidentally also represents the first index beyond the maximum acceptable index of the array; it is specific to the array contents itself, not to the slice, should not change, and is shared across all the slices.

emptyOf: emptyOf count allocates a zero length array that can be extended to hold up to count items without reallocating. Pre-release.

nil: Definition:
    nil = MutArray.new 0

(unsafe putIndex): Write the given element to the given index of the array. Does not check if the index is out of bounds of the array. Pre-release.

putIndex: O(1) Write the given element at the given index in the array. Performs in-place mutation of the array.
    putIndex ix arr val = MutArray.modifyIndex ix arr (const (val, ()))
    Pre-release

putIndices: Write an input stream of (index, value) pairs to an array. Throws an error if any index is out of bounds. Pre-release.
(unsafe modifyIndex): Modify a given index of an array using a modifier function without checking the bounds. Unsafe because it does not check the bounds of the array. Pre-release.

modifyIndex: Modify a given index of an array using a modifier function. Pre-release.

realloc: Reallocates the array according to the new size. This is a safe function that always creates a new array and copies the old array into the new one. If the reallocated size is less than the original array it results in a truncated version of the original array.

(unsafe snoc): Really really unsafe: appends the element into the first array and may cause silent data corruption, or, if you are lucky, a segfault, if the index is out of bounds. Internal.

snocWith: snocWith sizer arr elem mutates arr to append elem. The length of the array increases by 1. If there is no reserved space available in arr, it is reallocated to a size in bytes determined by the sizer oldSize function, where oldSize is the original size of the array. Note that the returned array may be a mutated version of the original array. Pre-release.

snoc: The array is mutated to append an additional element to it. If there is no reserved space available in the array then it is reallocated to double the original size. This is useful to reduce allocations when appending an unknown number of elements. Note that the returned array may be a mutated version of the original array.
    snoc = MutArray.snocWith (* 2)
    Performs O(n * log n) copies to grow, but is liberal with memory allocation.
    Pre-release

(uninitialized growth): Make the uninitialized memory in the array available for use, extending it by the supplied length beyond the current length of the array. The array may be reallocated.

(unsafe getIndex): Return the element at the specified index without checking the bounds. Unsafe because it does not check the bounds of the array.

getIndex: O(1) Look up the element at the given index. Index starts from 0.

(unsafe getSlice): O(1) Slice an array in constant time. Unsafe: the bounds of the slice are not checked. Pre-release.

getSlice: O(1) Slice an array in constant time. Throws an error if the slice extends out of the array bounds. Pre-release. The slicing operations take a starting index and the length of the slice as arguments.

toList: Convert an array into a list. Pre-release.

reader: Generate a stream from the elements of a MutArray.
    read = Stream.unfold MutArray.reader

(default chunk size): The default chunk size by which the array creation routines increase the size of the array when the array is grown linearly.

unsafeCreateOf: Like createOf but does not check the array bounds when writing. The fold driver must not call the step function more than n times, otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion, causing a 10x performance slowdown. Pre-release.

createOf: createOf n folds a maximum of n elements from the input stream to an array.
    createOf n = Fold.take n (MutArray.unsafeCreateOf n)
    Pre-release

createWith: createWith minCount folds the whole input to a single array. The array starts at a size big enough to hold minCount elements and the size is doubled every time the array needs to be grown. Caution! Do not use this on infinite streams. Pre-release.

create: Fold the whole input to a single array. Same as createWith using an initial array size of the default chunk size in bytes rounded up to the element size. Caution! Do not use this on infinite streams.
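A short sketch tying the creation folds and snoc together. The module name Streamly.Data.MutArray and the IO-specialised signatures are assumptions about where these documented operations are re-exported.

    import qualified Streamly.Data.MutArray as MutArray
    import qualified Streamly.Data.Stream as Stream

    demo :: IO ()
    demo = do
        -- Fold the first 3 elements of a stream into a mutable array.
        arr  <- Stream.fold (MutArray.createOf 3) (Stream.fromList [10, 20, 30, 40 :: Int])
        -- Append one more element; the returned array may be a mutated
        -- version of the original, so always use the returned value.
        arr' <- MutArray.snoc arr 50
        MutArray.toList arr' >>= print  -- expected: [10,20,30,50]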
fromStreamN: Create a MutArray from the first n elements of a stream. The array is allocated to size n; if the stream terminates before n elements then the array may hold less than n elements.

chunksOf: chunksOf n stream groups the input stream into a stream of arrays of size n.
    chunksOf n = foldMany (MutArray.writeN n)
    Pre-release

producer: Resumable unfold of an array.

reader: Unfold an array into a stream.

(unsafe slice copy): Put a subrange of a source array into a subrange of a destination array. This is not safe as it does not check the bounds.

(compare): Compare the length of the arrays. If the lengths are equal, compare the lexicographical ordering of the two underlying byte arrays, otherwise return the result of the length comparison. Pre-release.

Ring buffers (boxed):

Note that it is not safe to return a reference to the mutable ring using a scan, as the ring is continuously being mutated. You could, however, copy out the ring.

(moveBy): Move the ring head clockwise (positive adjustment) or counter clockwise (negative adjustment) by the given amount.

toMutArray: toMutArray ringHeadAdjustment lengthToRead ring converts the ring into a boxed mutable array. Note that the returned MutArray shares the same underlying memory as the ring; the user of this API needs to ensure that the ring is not mutated during and after the conversion.

(copy out): Copy out the mutable ring to a mutable Array.

(seek and read): Seek by n and then read the entire ring. Use a take on the stream to restrict the reads.

Unboxed mutable arrays (MutArray):

An unboxed mutable array is created with a given length and capacity. Length is the number of valid elements in the array; capacity is the maximum number of elements that the array can be expanded to without having to reallocate the memory. The elements in the array can be mutated in place without changing the reference (constructor). However, the length of the array cannot be mutated in place: a new array reference is generated when the length changes. When the length is increased (up to the maximum reserved capacity of the array), the array is not reallocated and the new reference uses the same underlying memory as the old one. Several routines in this module allow the programmer to control the capacity of the array, trading off memory usage against the performance impact of reallocations when growing or shrinking the array. The representation consists of an index into arrContents marking the start, an index representing the first invalid index of the array (the end), and the first invalid index of arrContents (the reserved bound).

(bytesToElemCount): Given an Unboxed type (unused first argument) and a number of bytes, return how many elements of that type will completely fit in those bytes.

pin: Return a copy of the array in pinned memory if unpinned, else return the original array.

unpin: Return a copy of the array in unpinned memory if pinned, else return the original array.

isPinned: Return True if the array is allocated in pinned memory.
newArrayWith: newArrayWith allocator alignment count allocates a new array of zero length with a capacity to hold count elements, using allocator size alignment as the memory allocator function. Alignment must be greater than or equal to the machine word size and a power of 2. Alignment is ignored if the allocator allocates unpinned memory. Pre-release.

(empty): Create an empty array.

(pinned empty): Allocates a pinned empty array with a reserved capacity of the given number of bytes. The memory of the array is uninitialized and the allocation is aligned as per the Unboxed instance of the type. Pre-release.

(pinned aligned allocation): Like newArrayWith but the allocator is a pinned memory allocator and the alignment is dictated by the Unboxed instance of the type. Internal.

(pinned new): Allocates a pinned array of zero length but growable to the specified capacity without reallocation.

(new): Allocates an unpinned array of zero length but growable to the specified capacity without reallocation.

(unsafe putIndex): Write the given element to the given index of the array. Does not check if the index is out of bounds of the array. Pre-release.

putIndex: O(1) Write the given element at the given index in the array. Performs in-place mutation of the array.
    putIndex ix arr val = MutArray.modifyIndex ix arr (const (val, ()))
    f = MutArray.putIndices
    putIndex ix arr val = Stream.fold (f arr) (Stream.fromPure (ix, val))

putIndices: Write an input stream of (index, value) pairs to an array. Throws an error if any index is out of bounds. Pre-release.

(unsafe modifyIndex): Modify a given index of an array using a modifier function. Unsafe because it does not check the bounds of the array. Pre-release.

modifyIndex: Modify a given index of an array using a modifier function. Pre-release.

modifyIndices: Modify the array indices generated by the supplied stream. Pre-release.

modify: Modify each element of an array using the supplied modifier function. This is an in-place equivalent of an immutable map operation. Pre-release.

(unsafe swapIndices): Swap the elements at two indices without validating the indices. Unsafe: this could result in memory corruption if the indices are not valid. Pre-release.

swapIndices: Swap the elements at two indices. Pre-release.

blockSize: The page or block size used by the GHC allocator. The allocator allocates at least a block and then allocates smaller allocations from within a block.

largeObjectThreshold: Allocations larger than this are in multiples of the block size and are always pinned. The space beyond the end of a large object, up to the end of the block, is unused.

(roundUpLargeArray): Round up an array larger than the large object threshold to use the whole block.

allocBytesToBytes: allocBytesToBytes elem allocatedBytes returns the array size in bytes such that the real allocation is less than or equal to allocatedBytes, unless allocatedBytes is less than the size of one array element, in which case it returns one element's size.

allocBytesToElemCount: Given an Unboxed type (unused first argument) and a real allocation size (including overhead), return how many elements of that type will completely fit in it; returns at least 1.

(default chunk size): The default chunk size by which the array creation routines increase the size of the array when the array is grown linearly.

(roundDownTo): Round the second argument down to multiples of the first argument.
realloc: realloc newCapacity array reallocates the array to the specified capacity in bytes. If the new size is less than the original array, the array gets truncated. If the new size is not a multiple of the array element size then it is rounded down to a multiple of the element size. If the new size is more than the large object threshold then it is rounded up to the block size (4K). If the original array is pinned, the newly allocated array is also pinned.

reallocWith: reallocWith label capSizer minIncrBytes array. The label is used in error messages and the capSizer is used to determine the capacity of the new array in bytes, given the current byte length of the array.

grow: grow newCapacity array changes the total capacity of the array so that it is enough to hold the specified number of elements. Nothing is done if the specified capacity is less than the length of the array. If the capacity is more than the large object threshold then it is rounded up to the block size (4K). Pre-release.

(exponential grow): Like grow but if the requested byte capacity is more than the large object threshold then it is rounded up to the closest power of 2. Pre-release.

rightSize: Resize the allocated memory to drop any reserved free space at the end of the array and reallocate it to reduce wastage. Up to 25% wastage is allowed to avoid reallocations. If the capacity is more than the large object threshold then free space up to the block size is retained. Pre-release.

(snoc at new end): Snoc using a pointer to the new end. Low level reusable function. Internal.

(unsafe snoc): Really really unsafe: appends the element into the first array and may cause silent data corruption, or, if you are lucky, a segfault, if the first array does not have enough space to append the element. Internal.

(snocMay): Like snoc but does not reallocate when the pre-allocated array capacity becomes full. Internal.

snocWith: snocWith sizer arr elem mutates arr to append elem. The length of the array increases by 1. If there is no reserved space available in arr, it is reallocated to a size in bytes determined by the sizer oldSizeBytes function, where oldSizeBytes is the original size of the array in bytes. If the new array size is more than the large object threshold we automatically round it up to the block size. Note that the returned array may be a mutated version of the original array. Pre-release.

(linear snoc): The array is mutated to append an additional element to it. If there is no reserved space available in the array then it is reallocated to grow by the default chunk size, rounded up to the block size when the size becomes more than the large object threshold. Note that the returned array may be a mutated version of the original array. Performs O(n^2) copies to grow but is thrifty on memory. Pre-release.

snoc: The array is mutated to append an additional element to it. If there is no reserved space available in the array then it is reallocated to double the original size. This is useful to reduce allocations when appending an unknown number of elements. Note that the returned array may be a mutated version of the original array.
    snoc = MutArray.snocWith (* 2)
    Performs O(n * log n) copies to grow, but is liberal with memory allocation.

(unsafe poke): Really really unsafe: unboxes a Haskell type and appends the resulting bytes to the byte array, and may cause silent data corruption, or, if you are lucky, a segfault, if the array does not have enough space to append the element. Internal.

(poke skip): Skip the specified number of bytes in the array. The data in the skipped region remains uninitialized.

(pokeMay): Like pokeAppend but does not grow the array when the pre-allocated array capacity becomes full. Internal.

pokeAppend: Unbox a Haskell type and append the resulting bytes to a mutable byte array. The array is grown exponentially when more space is needed. Definition:
    pokeAppend arr x = MutArray.castUnsafe <$> MutArray.snoc (MutArray.castUnsafe arr) x
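The snoc family above differs only in its growth policy. The sketch below contrasts the default doubling policy with a custom byte sizer; the internal module path, the IO specialisation, and the availability of emptyOf/snoc/snocWith under these names are assumptions based on the descriptions in this section.

    import qualified Streamly.Internal.Data.MutArray as MutArray  -- assumed module path

    -- Append elements one by one; snoc doubles the reserved capacity when it
    -- runs out, so a sequence of n appends costs O(n log n) copying overall.
    fillTo :: Int -> IO (MutArray.MutArray Int)
    fillTo n = do
        arr0 <- MutArray.emptyOf 16  -- documented above: room for 16 items, length 0
        let go arr i
                | i > n = return arr
                | otherwise = MutArray.snoc arr i >>= \arr' -> go arr' (i + 1)
        go arr0 1

    -- A thriftier variant could use the documented snocWith with a custom
    -- sizer over the current byte size, e.g. MutArray.snocWith (+ 1024),
    -- growing linearly at the cost of more copying.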
(unsafe peek): Really really unsafe: creates a Haskell value from an unboxed byte array, does not check if the array is big enough, and may return garbage, or, if you are lucky, cause a segfault. Internal.

(peek skip): Discard the specified number of bytes in the array.

(peek): Create a Haskell value from its unboxed representation at the head of a byte array; return the value and the remaining array.

(unsafe getIndex): Return the element at the specified index without checking the bounds. Unsafe because it does not check the bounds of the array.

getIndex: O(1) Look up the element at the given index. Index starts from 0.

getIndexRev: O(1) Look up the element at the given index from the end of the array. Index starts from 0. Slightly faster than computing the forward index and using getIndex.

(indexReader): Given an unfold that generates array indices, read the elements at those indices from the supplied MutArray. An error is thrown if an index is out of bounds. Pre-release.

(unsafe getSlice): O(1) Slice an array in constant time. Unsafe: the bounds of the slice are not checked. Pre-release.

getSlice: O(1) Slice an array in constant time. Throws an error if the slice extends out of the array bounds. Pre-release.

reverse: You may not need to reverse an array because you can consume it in reverse using the reverse reader. To reverse large arrays you can read in reverse and write to another array. However, in-place reverse can be useful to take advantage of cache locality and when you do not want to allocate additional memory.

(permute): Generate the next permutation of the sequence; returns False if this is the last permutation. Unimplemented.

partitionBy: Partition an array into two halves using a partitioning predicate: one half retains the values for which the predicate fails and the other half the values for which it holds. Pre-release.

(shuffleBy): Shuffle corresponding elements from two arrays using a shuffle function; depending on the function's result the elements are either swapped or left in place. This can be used in a bottom-up fold to shuffle or reorder the elements. Unimplemented.

divideBy: divideBy level partition array performs a top down hierarchical recursive partitioning fold of the items in the container using the given function as the partition function. Level indicates the level in the tree where the fold would stop. This performs a quick sort if the partition function is 'partitionBy (< pivot)'. Unimplemented.

mergeBy: mergeBy level merge array performs a pairwise bottom up fold, recursively merging the pairs using the supplied merge function. Level indicates the level in the tree where the fold would stop. This performs a random shuffle if the merge function is random. If we stop at level 0 and repeatedly apply the function then we can do a bubble sort. Unimplemented.

byteLength: O(1) Get the byte length of the array.

length: O(1) Get the length of the array, i.e. the number of elements in the array. Note that byteLength is less expensive than this operation, as length involves a costly division operation.

byteCapacity: Get the total capacity of an array. An array may have space reserved beyond the current used length of the array. Pre-release.

bytesFree: The remaining capacity in the array for appending more elements without reallocation. Pre-release.
chunksOf: chunksOf n stream groups the elements in the input stream into arrays of n elements each. Same as the following but may be more efficient:
    chunksOf n = Stream.foldMany (MutArray.createOf n)
    Pre-release. A pinned variant creates pinned arrays.

(buffering note): When we are buffering a stream of unknown size into an array we do not know how much space to pre-allocate, so we start with the minimum size, emit the array, and then keep doubling the size each time. Thus we do not need to guess the optimum chunk size. We can incorporate this in chunksOf if the additional size parameter does not impact performance.

(buffer): Buffer the stream into arrays in memory.

(concat): Use the "reader" unfold instead. concat = unfoldMany reader. We can try this if there are any fusion issues in the unfold.

(concatRev): Use the "readerRev" unfold instead. concat = unfoldMany readerRev. We can try this if there are any fusion issues in the unfold.

producer: Resumable unfold of an array.

reader: Unfold an array into a stream.

readerRev: Unfold an array into a stream in reverse order.

toList: Convert a MutArray into a list.

read: Convert a MutArray into a stream.
    read = Stream.unfold MutArray.reader

readRev: Convert a MutArray into a stream in reverse order.
    readRev = Stream.unfold MutArray.readerRev

foldl': Strict left fold of an array.

foldr: Right fold of an array.

unsafeAppendN: unsafeAppendN n arr appends up to n input items to the supplied array. Unsafe: do not drive the fold beyond n elements, it will lead to memory corruption or a segfault. Any free space left in the array after appending n elements is lost. Internal.

appendN: Append n elements to an existing array. Any free space left in the array after appending n elements is lost.
    appendN n initial = Fold.take n (MutArray.unsafeAppendN n initial)

appendWith: appendWith realloc action mutates the array generated by action to append the input stream. If there is no reserved space available in the array, it is reallocated to a size in bytes determined by realloc oldSize, where oldSize is the current size of the array in bytes. Note that the returned array may be a mutated version of the original array.
    appendWith sizer = Fold.foldlM' (MutArray.snocWith sizer)
    Pre-release

append: append action mutates the array generated by action to append the input stream. If there is no reserved space available in the array, it is reallocated to double the size. Note that the returned array may be a mutated version of the original array.
    append = MutArray.appendWith (* 2)

unsafeCreateOfWith: Like unsafeCreateOf but takes a new array allocator (an alloc size function) as an argument.
    unsafeCreateOfWith alloc n = MutArray.unsafeAppendN (alloc n) n
    Pre-release

unsafeCreateOf: Like createOf but does not check the array bounds when writing. The fold driver must not call the step function more than n times, otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion, causing a 10x performance slowdown.
    unsafeCreateOf = MutArray.unsafeCreateOfWith MutArray.emptyOf
    A pinned variant creates a pinned array.
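A short session sketch of chunksOf described above; the qualified imports mirror the ones used in the doctests and the module re-export locations are assumptions.

    >>> import qualified Streamly.Data.MutArray as MutArray
    >>> chunks = MutArray.chunksOf 3 (Stream.fromList [1..7::Int])
    >>> Stream.toList chunks >>= mapM MutArray.toList
    [[1,2,3],[4,5,6],[7]]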
createOfWith: createOfWith alloc n folds a maximum of n elements into an array allocated using the alloc function.
    createOfWith alloc n = Fold.take n (MutArray.unsafeCreateOfWith alloc n)
    createOfWith alloc n = MutArray.appendN (alloc n) n

createOf: createOf n folds a maximum of n elements from the input stream to a MutArray.
    createOf = MutArray.createOfWith MutArray.new
    createOf n = Fold.take n (MutArray.unsafeCreateOf n)
    createOf n = MutArray.appendN n (MutArray.emptyOf n)
    A pinned variant creates a pinned array.

(reverse-order creation): Variants of unsafeCreateOfWith, createOfWith and createOf write the array in reverse order (Internal/Pre-release).

pinnedWriteNAligned: pinnedWriteNAligned align n folds a maximum of n elements from the input stream to a MutArray aligned to the given size.
    pinnedWriteNAligned align = MutArray.createOfWith (MutArray.pinnedNewAligned align)
    pinnedWriteNAligned align n = MutArray.appendN n (MutArray.pinnedNewAligned align n)
    Pre-release

buildChunks: Buffer a stream into a stream of arrays.
    buildChunks n = Fold.many (MutArray.createOf n) Fold.toStreamK
    Breaking an array into an array stream can be useful to consume a large array sequentially such that the memory of the array is released incrementally. See also: arrayStreamKFromStreamD. Unimplemented.

createWith: createWith minCount folds the whole input to a single array. The array starts at a size big enough to hold minCount elements and the size is doubled every time the array needs to be grown. Caution! Do not use this on infinite streams.
    f n = MutArray.appendWith (* 2) (MutArray.emptyOf n)
    createWith n = Fold.rmapM MutArray.rightSize (f n)
    createWith n = Fold.rmapM MutArray.fromChunksK (MutArray.buildChunks n)
    Pre-release

create: Fold the whole input to a single array. Same as createWith using an initial array size of the default chunk size in bytes rounded up to the element size. Caution! Do not use this on infinite streams. A pinned variant creates a pinned array.

fromStreamN: Use the createOf fold instead.
    fromStreamN n = Stream.fold (MutArray.createOf n)

fromListN: Create a MutArray from the first N elements of a list. The array is allocated to size N; if the list terminates before N elements then the array may hold less than N elements. A pinned variant creates a pinned array, and another variant writes the array in reverse order (Pre-release).

fromPureStream: Convert a pure stream in the Identity monad to a mutable array.

(fromChunksK and a related variant): Convert an array stream to an array. Note that this requires peak memory that is double the size of the array stream.

fromStream: Create a MutArray from a stream. This is useful when we want to create a single array from a stream of unknown size. fromStreamN is at least twice as efficient when the size is already known. Note that if the input stream is too large, the memory allocation for the array may fail. When the stream size is not known, chunksOf followed by processing of individual arrays in the resulting stream should be preferred. Pre-release.
(implementation note): We could take the approach of doubling the memory allocation on each overflow. This would result in more or less the same amount of copying as in the chunking approach. However, if we have to shrink at the end then it may result in an extra copy of the entire data.
    fromStreamD = StreamD.fold MutArray.create

fromList: Create a MutArray from a list. The list must be of finite size. A pinned variant creates a pinned array, and another variant writes the contents of the list in reverse order.

spliceCopy: Copy two arrays into a newly allocated array. If the first array is pinned, the spliced array is also pinned.

(unsafe splice): Really really unsafe: appends the second array into the first array. If the first array does not have enough space it may cause silent data corruption or, if you are lucky, a segfault.

spliceWith: spliceWith sizer dst src mutates dst to append src. If there is no reserved space available in dst, it is reallocated to a size determined by the sizer dstBytes srcBytes function, where dstBytes is the size of the first array and srcBytes is the size of the second array, in bytes. Note that the returned array may be a mutated version of the first array. Pre-release.

splice: The first array is mutated to append the second array. If there is no reserved space available in the first array, a new allocation of exactly the required size is done. Note that the returned array may be a mutated version of the first array. If the original array is pinned, the spliced array is also pinned.
    splice = MutArray.spliceWith (+)
    Pre-release

spliceExp: Like splice but the growth of the array is exponential. Whenever a new allocation is required the previous array size is at least doubled. This is useful to reduce allocations when folding many arrays together. Note that the returned array may be a mutated version of the first array.
    spliceExp = MutArray.spliceWith (\l1 l2 -> max (l1 * 2) (l1 + l2))
    Pre-release

splitOn: Generate a stream of array slices using a predicate. The array element matching the predicate (the separator) is dropped. Pre-release.

(unsafe splitAt): Like splitAt but does not check whether the index is valid.

splitAt: Create two slices of an array without copying the original array. The specified index i is the first index of the second slice.

castUnsafe: Cast an array having elements of type a into an array having elements of type b. The array size must be a multiple of the size of type b, otherwise accessing the last element of the array may result in a crash or a random value. Pre-release.

asBytes: Cast a MutArray a into a MutArray Word8.

cast: Cast an array having elements of type a into an array having elements of type b. The length of the array should be a multiple of the size of the target element, otherwise Nothing is returned.

asPtrUnsafe: Use a MutArray a as a Ptr a. This is useful when we want to pass an array as a pointer to some operating system call or to a "safe" FFI call. If the array is not pinned, it is copied to pinned memory before passing it to the monadic action. Performance notes: forces a copy if the array is not pinned. It is advised that the programmer keeps this in mind and creates a pinned array opportunistically before this operation occurs, to avoid the cost of a copy if possible. Unsafe because of direct pointer operations. The user must ensure that they write only within the legal bounds of the array. Pre-release.
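A minimal sketch of the splice operation described above, building both arrays with fromList. Because splice is marked Pre-release, the internal module path and the IO specialisation are assumptions.

    import qualified Streamly.Internal.Data.MutArray as MutArray  -- assumed path for the Pre-release API

    spliceDemo :: IO ()
    spliceDemo = do
        a <- MutArray.fromList [1, 2, 3 :: Int]
        b <- MutArray.fromList [4, 5]
        -- splice may mutate and return the first array; always use the result.
        c <- MutArray.splice a b
        MutArray.toList c >>= print  -- expected: [1,2,3,4,5]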
byteCmp: Byte compare two arrays. Compare the length of the arrays; if the lengths are equal, compare the lexicographical ordering of the two underlying byte arrays, otherwise return the result of the length comparison. Unsafe: note that an Unbox instance of a sum type with constructors of different sizes may leave some memory uninitialized, which can make byte comparison unreliable. Pre-release.

byteEq: Byte equality of two arrays.
    byteEq arr1 arr2 = (==) EQ $ MArray.byteCmp arr1 arr2
    Unsafe: see byteCmp.

pCompactLE: Parser pCompactLE maxElems coalesces adjacent arrays in the input stream only if the combined size would be less than or equal to maxElems elements. Note that it won't split an array if the original array is already larger than maxElems. maxElems must be greater than 0. Generates unpinned arrays irrespective of the pinning status of the input arrays. Internal. A pinned version is also provided.

fCompactGE: Fold fCompactGE minElems coalesces adjacent arrays in the input stream until the size becomes greater than or equal to minElems. Generates unpinned arrays irrespective of the pinning status of the input arrays. A pinned version is also provided.

lCompactGE: Like fCompactGE but for transforming folds instead of streams.
    lCompactGE n = Fold.many (MutArray.fCompactGE n)
    Generates unpinned arrays irrespective of the pinning status of the input arrays. A pinned version is also provided.

compactGE: compactGE n stream coalesces adjacent arrays in the stream until the size becomes greater than or equal to n.
    compactGE n = Stream.foldMany (MutArray.fCompactGE n)

compactEQ: 'compactEQ n' coalesces adjacent arrays in the input stream to arrays of exact size n. Unimplemented.

strip: Strip elements which match the predicate from both ends. Pre-release.

bubble: Given an array sorted in ascending order except for the last element being out of order, use bubble sort to place the last element at the right position such that the array remains sorted in ascending order. Pre-release.

Immutable arrays (Array):

asPtrUnsafe: Use an Array a as a Ptr a. See asPtrUnsafe in the mutable array module for more details. Unsafe. Pre-release.

unsafeFreeze: Makes an immutable array using the underlying memory of the mutable array. Please make sure that there are no other references to the mutable array lying around, so that it is never used after freezing it using unsafeFreeze. If the underlying array is mutated, the immutability promise is lost. Pre-release.

(freeze with shrink): Similar to unsafeFreeze but uses rightSize on the mutable array first.

unsafeThaw: Makes a mutable array using the underlying memory of the immutable array. Please make sure that there are no other references to the immutable array lying around, so that it is never used after thawing it using unsafeThaw. If the resulting array is mutated, any references to the older immutable array are mutated as well. Pre-release.

pin: Return a copy of the Array in pinned memory if unpinned, else return the original array.

unpin: Return a copy of the Array in unpinned memory if pinned, else return the original array.

isPinned: Return True if the array is allocated in pinned memory.

(splice copy): Copy two immutable arrays into a new array. If you want to splice more than two arrays then this operation would be highly inefficient because it would make a copy on every splice operation; instead use an operation that combines a whole stream of arrays (see the chunk combining operations later in this section) to combine n immutable arrays.
fromListN: Create an Array from the first N elements of a list. The array is allocated to size N; if the list terminates before N elements then the array may hold less than N elements. A pinned variant creates a pinned array, and a reverse-order variant creates the Array from the first N elements of the list in reverse order (Pre-release).

fromList: Create an Array from a list. The list must be of finite size. A pinned variant creates a pinned array, and a reverse-order variant creates the Array from the list in reverse order (Pre-release).

fromStreamN: Create an Array from the first N elements of a stream. The array is allocated to size N; if the stream terminates before N elements then the array may hold less than N elements.
    fromStreamN n = Stream.fold (Array.writeN n)
    Pre-release

fromStream: Create an Array from a stream. This is useful when we want to create a single array from a stream of unknown size. fromStreamN is at least twice as efficient when the size is already known.
    fromStream = Stream.fold Array.write
    Note that if the input stream is too large, the memory allocation for the array may fail. When the stream size is not known, chunksOf followed by processing of individual arrays in the resulting stream should be preferred. Pre-release.

chunksOf: chunksOf n stream groups the elements in the input stream into arrays of n elements each. Same as the following but may be more efficient:
    chunksOf n = Stream.foldMany (Array.writeN n)
    Pre-release. A pinned variant creates pinned arrays.

concat: Convert a stream of arrays into a stream of their elements.
    concat = Stream.unfoldMany Array.reader

concatRev: Convert a stream of arrays into a stream of their elements, reversing the contents of each array before flattening.
    concatRev = Stream.unfoldMany Array.readerRev

fCompactGE: Fold fCompactGE n coalesces adjacent arrays in the input stream until the size becomes greater than or equal to n. Generates unpinned arrays irrespective of the pinning status of the input arrays. A pinned version is also provided.

compactGE: compactGE n stream coalesces adjacent arrays in the stream until the size becomes greater than or equal to n.
    compactGE n = Stream.foldMany (Array.fCompactGE n)
    Generates unpinned arrays irrespective of the pinning status of the input arrays.

lCompactGE: Like fCompactGE but for transforming folds instead of streams.
    lCompactGE n = Fold.many (Array.fCompactGE n)
    Generates unpinned arrays irrespective of the pinning status of the input arrays. A pinned version is also provided.

(unsafe getIndex): Return the element at the specified index without checking the bounds. Unsafe because it does not check the bounds of the array.

getIndex: Return the element at the specified index.

byteLength: O(1) Get the byte length of the array.

length: O(1) Get the length of the array, i.e. the number of elements in the array.

reader: Unfold an array into a stream.

(unsafe reader): Unfold an array into a stream without checking the end of the array; the user is responsible for terminating the stream within the array bounds. For high performance applications where the end condition can be determined by a terminating fold. Written in the hope that it may be faster than "read"; however, in the case for which this was written, "read" proves to be faster even though the core generated with unsafeRead looks simpler. Pre-release.
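A short session sketch of concat as defined above (Stream.unfoldMany Array.reader); the qualified imports mirror the doctests, and the module names are assumptions.

    >>> import qualified Streamly.Data.Array as Array
    >>> arrs = map Array.fromList [[1,2],[3],[4,5,6::Int]]
    >>> Stream.toList $ Stream.unfoldMany Array.reader (Stream.fromList arrs)
    [1,2,3,4,5,6]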
readerRev: Unfold an array into a stream in reverse order.

read: Convert an Array into a stream. Pre-release. Same as the "reader" unfold driven as a stream.

readRev: Convert an Array into a stream in reverse order. Pre-release. Same as the "readerRev" unfold driven as a stream.

splitAt: Create two slices of an array without copying the original array. The specified index i is the first index of the second slice.

toList: Convert an Array into a list.

createOf (writeN): createOf n folds a maximum of n elements from the input stream to an Array. A pinned variant creates a pinned array.

pinnedWriteNAligned: pinnedWriteNAligned alignment n folds a maximum of n elements from the input stream to an Array aligned to the given size. Pre-release.

unsafeCreateOf (writeNUnsafe): Like createOf but does not check the array bounds when writing. The fold driver must not call the step function more than n times, otherwise it will corrupt the memory and crash. This function exists mainly because any conditional in the step function blocks fusion, causing a 10x performance slowdown.

create (write): Fold the whole input to a single array. Caution! Do not use this on infinite streams. A pinned variant creates a pinned array.

(implementation note): A fold's "step" has a dependency on "initial", each step depends on the previous invocation of step due to state passing, and finally extract depends on the result of step; therefore, as long as the fold is driven in the correct order the operations are correctly ordered. We need to ensure that we strictly evaluate the previous step completely before the next step. To not share the same array we need to make sure that the result of "initial" is not shared. An existential type ensures that it does not get shared across different folds. However, if we invoke "initial" multiple times for the same fold, there is a possibility of sharing, because the compiler would consider it a pure value. One such example is the chunksOf combinator, or using an array creation fold with the foldMany combinator. Is there a proper way in GHC to tell it not to share a pure expression in a particular case? For this reason array creation folds have a MonadIO constraint. Pure folds could be unsafe and dangerous, especially when used with foldMany-like operations.
    unsafePureWrite = Array.unsafeMakePure Array.write

fromPureStream: Convert a pure stream in the Identity monad to an immutable array. Same as the following but with better performance:
    fromPureStream = Array.fromList . runIdentity . Stream.toList

fromPtrN: Copy an immutable 'Ptr Word8' sequence into an array. Unsafe: the caller is responsible for safe addressing. Note that this should be evaluated strictly to ensure that we do not hold the reference to the pointer in a lazy thunk.

fromByteStr#: Copy a null terminated immutable Word8 sequence into an array. Unsafe: the caller is responsible for safe addressing. Note that this is completely safe when reading from Haskell string literals because they are guaranteed to be NULL terminated:
    >>> Array.toList $ Array.fromByteStr# "\1\2\3\0"#
    [1,2,3]
    Note that this should be evaluated strictly to ensure that we do not hold the reference to the pointer in a lazy thunk. A related pointer-based constructor carries the same strictness caveat.

(fromChunksK): Convert an array stream to an array. Note that this requires peak memory that is double the size of the array stream.

(fromChunks): Given a stream of arrays, splice them all together to generate a single array. The stream must be finite.
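A small session sketch of the creation folds named above (writeN/write, also referred to as createOf/create); the qualified imports mirror the doctests, and the module names are assumptions.

    >>> import qualified Streamly.Data.Array as Array
    >>> arr <- Stream.fold (Array.writeN 3) (Stream.fromList "abcdef")
    >>> Array.toList arr
    "abc"
    >>> whole <- Stream.fold Array.write (Stream.fromList [1..5::Int])
    >>> Array.length whole
    5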
byteCmp: Byte compare two arrays. Compare the length of the arrays; if the lengths are equal, compare the lexicographical ordering of the two underlying byte arrays, otherwise return the result of the length comparison. Unsafe: note that an Unbox instance of a sum type with constructors of different sizes may leave some memory uninitialized, which can make byte comparison unreliable. Pre-release.

byteEq: Byte equality of two arrays.
    byteEq arr1 arr2 = (==) EQ $ Array.byteCmp arr1 arr2
    Unsafe: see byteCmp.

(Semigroup note): The Semigroup append should not be used for combining many or N arrays, as it would copy the two arrays every time to a new array. For coalescing multiple arrays use the stream-of-arrays combining operations instead.

(Eq/Ord note): If the type allows a byte-by-byte comparison, this instance can be overlapped by a more specific instance that uses byteCmp. Byte comparison can be significantly faster.

Ring buffers (unboxed):

A ring buffer is a mutable array of fixed size. Initially the array is empty, with ringStart pointing at the start of the allocated memory. We call the next location to be written in the ring the ringHead. Initially ringHead == ringStart. When the first item is added, ringHead points to ringStart + sizeof item. When the buffer becomes full, ringHead wraps around to ringStart. When the buffer is full, ringHead always points at the oldest item in the ring, and the newest item added always overwrites the oldest item. When using it we should keep in mind that a ring buffer is a mutable data structure; we should not leak out references to it for immutable use.

(ringStart): Get the first address of the ring as a pointer.

new: Create a new ring buffer and return the ring buffer and the ringHead. Returns the ring and the ringHead; the ringHead is the same as ringStart.

newRing: newRing count allocates an empty array that can hold count items. The memory of the array is uninitialized and the allocation is aligned as per the Unboxed instance of the type. Unimplemented.

advance: Advance the ringHead by 1 item, wrapping around if we hit the end of the array.

moveBy: Move the ringHead by n items. The direction depends on the sign, i.e. whether n is positive or negative. Wraps around if we hit the beginning or the end of the array.

(writeN): A rolling fold that keeps the last n elements of the stream in a ring array. Unimplemented.

(cast): Cast a mutable array to a ring array.

modifyIndex: Modify a given index of a ring array using a modifier function. Unimplemented.

putIndex: O(1) Write the given element at the given index in the ring array. Performs in-place mutation of the array.
    putIndex arr ix val = Ring.modifyIndex arr ix (const (val, ()))
    Unimplemented

unsafeInsert: Insert an item at the head of the ring; when the ring is full this replaces the oldest item in the ring with the new item. This is unsafe because the supplied ringHead is not verified to be within the ring. Also, the ringStart foreignPtr must be guaranteed to be alive by the caller.

insert: Insert an item at the head of the ring; when the ring is full this replaces the oldest item in the ring with the new item. Unimplemented.

(unsafe getIndex): Return the element at the specified index without checking the bounds. Unsafe because it does not check the bounds of the ring array.

getIndex: O(1) Look up the element at the given index. Index starts from 0.

getIndexRev: O(1) Look up the element at the given index from the end of the array. Index starts from 0. Slightly faster than computing the forward index and using getIndex.
byteLength: O(1) Get the byte length of the array. Unimplemented.

length: O(1) Get the length of the array, i.e. the number of elements in the array. Note that byteLength is less expensive than this operation, as length involves a costly division operation. Unimplemented.

byteCapacity: Get the total capacity of an array. An array may have space reserved beyond the current used length of the array. Pre-release.

bytesFree: The remaining capacity in the array for appending more elements without reallocation. Pre-release.

(read from head): Read n elements from the ring starting at the supplied ring head. If n is more than the ring size, it keeps reading the ring in a circular fashion. If the ring is not full, the user must ensure that n is less than or equal to the number of valid elements in the ring. Internal.

(readRev): Unfold a ring array into a stream in reverse order. Unimplemented.

ringsOf: ringsOf n stream groups the input stream into a stream of ring arrays of size n. Each ring is a sliding window of size n. Unimplemented.

(castUnsafe): Cast an array having elements of type a into an array having elements of type b. The array size must be a multiple of the size of type b. Unimplemented.

(asBytes): Cast an Array a into an Array Word8. Unimplemented.

cast: Cast an array having elements of type a into an array having elements of type b. The length of the array should be a multiple of the size of the target element, otherwise Nothing is returned. Pre-release.

unsafeEqArrayN: Like unsafeEqArray but compares only N bytes instead of the entire length of the ring buffer. This is unsafe because the ringHead Ptr is not checked to be in range.

unsafeEqArray: Byte compare the entire length of the ring buffer with the given array, starting at the supplied ringHead pointer. Returns True if the Array and the ring buffer have identical contents. This is unsafe because the ringHead Ptr is not checked to be in range. The supplied array must be equal to or bigger than the ring buffer; ARRAY BOUNDS ARE NOT CHECKED.

unsafeFoldRing: Fold the buffer starting from ringStart up to the given pointer using a pure step function. This is useful to fold the items in the ring when the ring is not full. The supplied pointer is usually the end of the ring. Unsafe because the supplied Ptr is not checked to be in range.

unsafeFoldRingM: Like unsafeFoldRing but with a monadic step function.

unsafeFoldRingFullM: Fold the entire length of a ring buffer starting at the supplied ringHead pointer. Assuming the supplied ringHead pointer points to the oldest item, this folds the ring from the oldest item to the newest item. Note, this will crash on a ring of size 0.

unsafeFoldRingNM: Fold Int items in the ring starting at the given Ptr a. Won't fold more than the length of the ring. Note, this will crash on a ring of size 0.

slidingWindowWith: Like slidingWindow but also provides the entire ring contents as an Array. The array reflects the state of the ring after inserting the incoming element. IMPORTANT NOTE: the ring is mutable, therefore the result of the (m (Array a)) action depends on when it is executed. It does not capture a snapshot of the ring at a particular time.
This maintains n elements in the window; when a new element comes in, it slides out the oldest element, and the new element along with the old element are supplied to the collector fold. The Maybe type is for the case when the window is initially filling up and there is no old element yet.

(c) 2020 Composewell Technologies, Apache-2.0, streamly@composewell.com, experimental, GHC

Map a function on the incoming as well as the outgoing element of a rolling window fold.
lmap f = Fold.lmap (bimap f (f <$>))

Convert an incremental fold to a cumulative fold using the entire input stream as a single window.
cumulative f = Fold.lmap (\x -> (x, Nothing)) f

Apply an effectful function on the latest and the oldest element of the window.

Apply a pure function on the latest and the oldest element of the window.
windowRollingMap f = Fold.windowRollingMapM (\x y -> return $ f x y)

The sum of all the elements in a rolling window. The input elements are required to be integral numbers. This was written in the hope that it would be a tiny bit faster than sum for integral values, but it turns out that sum is 2% faster than this even for integral values! Internal

Sum of all the elements in a rolling window:
S = \sum_{i=1}^n x_{i}
This is the first power sum.
sum = powerSum 1
Uses Kahan-Babuska-Neumaier style summation for numerical stability of floating point arithmetic.
Space: \mathcal{O}(1). Time: \mathcal{O}(n).

The number of elements in the rolling window. This is the 0th power sum.
length = powerSum 0

Sum of the k-th power of all the elements in a rolling window:
S_k = \sum_{i=1}^n x_{i}^k
powerSum k = lmap (^ k) sum
Space: \mathcal{O}(1). Time: \mathcal{O}(n).

Like powerSum but the powers can be negative or fractional. This is slower than powerSum for positive integral powers.
powerSumFrac p = lmap (** p) sum

Determine the maximum and minimum in a rolling window. If you want to compute the range of the entire stream, Fold.teeWith (,) Fold.maximum Fold.minimum would be much faster.
Space: \mathcal{O}(n) where n is the window size. Time: \mathcal{O}(n*w) where w is the window size.

Find the minimum element in a rolling window. This implementation traverses the entire window buffer to compute the minimum whenever we demand it. It performs better than the dequeue based implementation in the streamly-statistics package when the window size is small (< 30). If you want to compute the minimum of the entire stream, Fold.minimum is much faster.
Time: \mathcal{O}(n*w) where w is the window size.

The maximum element in a rolling window. See the performance related comments in minimum. If you want to compute the maximum of the entire stream, Fold.maximum would be much faster.
Time: \mathcal{O}(n*w) where w is the window size.

Arithmetic mean of elements in a sliding window:
\mu = \frac{\sum_{i=1}^n x_{i}}{n}
This is also known as the Simple Moving Average (SMA) when used on a sliding window and the Cumulative Moving Average (CMA) when used on the entire stream.
mean = Fold.teeWith (/) sum length
Space: \mathcal{O}(1). Time: \mathcal{O}(n).

(c) 2019 Composewell Technologies, (c) 2013 Gabriel Gonzalez, BSD3, streamly@composewell.com, experimental, GHC

Drive a fold using the supplied Stream, reducing the resulting expression strictly at each step.
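Before moving on to the fold driver described next, here is a minimal standalone sketch of the Kahan-Babuska-Neumaier compensation scheme mentioned for the window sum above, written on a plain list to show the idea; it is not the streamly-core implementation.

import Data.List (foldl')

kbnSum :: [Double] -> Double
kbnSum = uncurry (+) . foldl' step (0, 0)
  where
    step (total, comp) x =
        let t = total + x
            -- accumulate the low order bits lost in the addition
            c | abs total >= abs x = comp + ((total - t) + x)
              | otherwise          = comp + ((x - t) + total)
        in (t, c)

-- kbnSum (replicate 10 0.1) gives a result closer to 1.0 than foldl (+) 0 does.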
Definition:drive = flip Stream.foldExample:2Fold.drive (Stream.enumerateFromTo 1 100) Fold.sum5050  streamly-coreAppend a stream to a fold to build the fold accumulator incrementally. We can repeatedly call   on the same fold to continue building the fold and finally use  : to finish the fold and extract the result. Also see the  l, operation which is a singleton version of  . Definitions:5addStream stream = Fold.drive stream . Fold.duplicate$Example, build a list incrementally::{'pure (Fold.toList :: Fold IO Int [Int]) >>= Fold.addOne 13 >>= Fold.addStream (Stream.enumerateFromTo 2 4) >>= Fold.drive Stream.nil >>= print:} [1,2,3,4]?This can be used as an O(n) list append compared to the O(n^2) ++. when used for incrementally building a list.&Example, build a stream incrementally::{9pure (Fold.toStream :: Fold IO Int (Stream Identity Int)) >>= Fold.addOne 13 >>= Fold.addStream (Stream.enumerateFromTo 2 4) >>= Fold.drive Stream.nil >>= print:}fromList [1,2,3,4]This can be used as an O(n) stream append compared to the O(n^2) <>0 when used for incrementally building a stream.&Example, build an array incrementally::{-pure (Array.write :: Fold IO Int (Array Int)) >>= Fold.addOne 13 >>= Fold.addStream (Stream.enumerateFromTo 2 4) >>= Fold.drive Stream.nil >>= print:}fromList [1,2,3,4]-Example, build an array stream incrementally::{ 2let f :: Fold IO Int (Stream Identity (Array Int))6 f = Fold.groupsOf 2 (Array.writeN 3) Fold.toStream in pure f >>= Fold.addOne 13 >>= Fold.addStream (Stream.enumerateFromTo 2 4) >>= Fold.drive Stream.nil >>= print:}(fromList [fromList [1,2],fromList [3,4]]  streamly-core4Flatten the monadic output of a fold to pure output.  streamly-core/Map a monadic function on the output of a fold.  streamly-core+mapMaybeM f = Fold.lmapM f . Fold.catMaybes  streamly-coremapMaybe f fold maps a  returning function f( on the input of the fold, filters out 1 elements, and return the values extracted from .)mapMaybe f = Fold.lmap f . Fold.catMaybes(mapMaybe f = Fold.mapMaybeM (return . f)(f x = if even x then Just x else Nothing!fld = Fold.mapMaybe f Fold.toList-Stream.fold fld (Stream.enumerateFromTo 1 10) [2,4,6,8,10]  streamly-core;Apply a monadic function on the input and return the input.Stream.fold (Fold.lmapM (Fold.tracing print) Fold.drain) $ (Stream.enumerateFromTo (1 :: Int) 2)12 Pre-release  streamly-coreApply a monadic function to each element flowing through and discard the results.Stream.fold (Fold.trace print Fold.drain) $ (Stream.enumerateFromTo (1 :: Int) 2)12%trace f = Fold.lmapM (Fold.tracing f) Pre-release  streamly-coreApply a transformation on a  using a {. Pre-release  streamly-coreScan the input of a 2 to change it in a stateful manner using another 0. The scan stops as soon as the fold terminates. Pre-release  streamly-coreScan the input of a 2 to change it in a stateful manner using another >. The scan restarts with a fresh state if the fold terminates. Pre-release  streamly-coreReturns the latest element omitting the first occurrence that satisfies the given equality predicate.Example:!input = Stream.fromList [1,3,3,5]Stream.fold Fold.toList $ Stream.scanMaybe (Fold.deleteBy (==) 3) input[1,3,5]  streamly-core.Provide a sliding window of length 2 elements.See "Streamly.Internal.Data.Fold.Window.  streamly-coreReturn the latest unique element using the supplied comparison function. 
Returns  if the current element is same as the last element otherwise returns .)Example, strip duplicate path separators: input = Stream.fromList "//a//b"f x y = x == '/' && y == '/'Stream.fold Fold.toList $ Stream.scanMaybe (Fold.uniqBy f) input"/a/b"Space: O(1) Pre-release  streamly-coreSee  . Definition:uniq = Fold.uniqBy (==)  streamly-coreStrip all leading and trailing occurrences of an element passing a predicate and make all other consecutive occurrences uniq. > prune p = Stream.dropWhileAround p $ Stream.uniqBy (x y -> p x && p y) > Stream.prune isSpace (Stream.fromList " hello world! ") "hello world!" Space: O(1) Unimplemented  streamly-core"Emit only repeated elements, once. Unimplemented  streamly-core Definitions:%drainMapM f = Fold.lmapM f Fold.drain&drainMapM f = Fold.foldMapM (void . f)Drain all input after passing it through a monadic function. This is the dual of mapM_ on stream producers.  streamly-core7Returns the latest element of the input stream, if any.!latest = Fold.foldl1' (\_ x -> x)2latest = fmap getLast $ Fold.foldMap (Last . Just)  streamly-coreTerminates with  as soon as it finds an element different than the previous one, returns  ; element if the entire input consists of the same element.  streamly-coreDetermine the sum of all elements of a stream of numbers. Returns additive identity (0) when the stream is empty. Note that this is not numerically stable for floating point numbers.$sum = Fold.cumulative Fold.windowSum)Same as following but numerically stable:sum = Fold.foldl' (+) 0This hash is often used in Rabin-Karp string search algorithm.See *https://en.wikipedia.org/wiki/Rolling_hash  streamly-core-A default salt used in the implementation of  .  streamly-core Compute an + sized polynomial rolling hash of a stream.7rollingHash = Fold.rollingHashWithSalt Fold.defaultSalt  streamly-core Compute an  sized polynomial rolling hash of the first n elements of a stream.2rollingHashFirstN n = Fold.take n Fold.rollingHash Pre-release  streamly-coreApply a function on every two successive elements of a stream. The first argument of the map function is the previous element and the second argument is the current element. When processing the very first element in the stream, the previous element is . Pre-release streamly-core8rollingMap f = Fold.rollingMapM (\x y -> return $ f x y)  streamly-coreSemigroup concat. Append the elements of an input stream to a provided starting value. Definition:sconcat = Fold.foldl' (<>)?semigroups = fmap Data.Monoid.Sum $ Stream.enumerateFromTo 1 10(Stream.fold (Fold.sconcat 10) semigroupsSum {getSum = 65}  streamly-coreMonoid concat. Fold an input stream consisting of monoidal elements using  and . Definition:mconcat = Fold.sconcat memptysatisfy f = Fold.maybe (\a -> if f a then Just a else Nothing) Pre-release  streamly-core*Take one element from the stream and stop. Definition:one = Fold.maybe JustThis is similar to the stream Tm operation.  streamly-core0Extract the first element of the stream, if any.head = Fold.one  streamly-core=Returns the first element that satisfies the given predicate. Pre-release  streamly-core=Returns the first element that satisfies the given predicate.  streamly-core!In a stream of (key-value) pairs (a, b), return the value b9 of the first pair where the key equals the given value a. Definition:0lookup x = fmap snd <$> Fold.find ((== x) . fst)  streamly-core;Returns the first index that satisfies the given predicate.  
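As a rough illustration of the polynomial rolling hash referred to above (the kind used in Rabin-Karp search), a list-based version can be written as below. The multiplier and the salt here are arbitrary illustrative values, not the ones used by streamly-core.

import Data.Char (ord)
import Data.List (foldl')

rollingHash :: Int -> String -> Int
rollingHash salt = foldl' step salt
  where
    multiplier = 31          -- arbitrary illustrative multiplier
    step h c = h * multiplier + ord c

-- rollingHash 0 "abc" == (ord 'a' * 31 + ord 'b') * 31 + ord 'c'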
streamly-coreReturns the index of the latest element if the element satisfies the given predicate.  streamly-coreReturns the index of the latest element if the element matches the given value. Definition:'elemIndices a = Fold.findIndices (== a)  streamly-coreReturns the first index where a given value is found in the stream. Definition:#elemIndex a = Fold.findIndex (== a)  streamly-coreConsume one element, return  if successful else return 5. In other words, test if the input is empty or not.WARNING! It consumes one element if the stream is not empty. If that is not what you want please use the eof parser instead. Definition:null = fmap isJust Fold.one  streamly-coreReturns 5 if any element of the input satisfies the predicate. Definition:any p = Fold.lmap p Fold.orExample:7Stream.fold (Fold.any (== 0)) $ Stream.fromList [1,0,1]True  streamly-coreReturn / if the given element is present in the stream. Definition:elem a = Fold.any (== a)  streamly-coreReturns 4 if all elements of the input satisfy the predicate. Definition:all p = Fold.lmap p Fold.andExample:7Stream.fold (Fold.all (== 0)) $ Stream.fromList [1,0,1]False  streamly-coreReturns 3 if the given element is not present in the stream. Definition:notElem a = Fold.all (/= a)  streamly-coreReturns  if all elements are ,  otherwise Definition:and = Fold.all (== True)  streamly-coreReturns  if any element is ,  otherwise Definition:or = Fold.any (== True)  streamly-coresplitAt n f1 f2 composes folds f1 and f2 such that first n- elements of its input are consumed by fold f11 and the rest of the stream is consumed by fold f2.let splitAt_ n xs = Stream.fold (Fold.splitAt n Fold.toList Fold.toList) $ Stream.fromList xssplitAt_ 6 "Hello World!"("Hello ","World!")splitAt_ (-1) [1,2,3] ([],[1,2,3])splitAt_ 0 [1,2,3] ([],[1,2,3])splitAt_ 1 [1,2,3] ([1],[2,3])splitAt_ 3 [1,2,3] ([1,2,3],[])splitAt_ 4 [1,2,3] ([1,2,3],[]) 8splitAt n f1 f2 = Fold.splitWith (,) (Fold.take n f1) f2Internal  streamly-core.takingEndBy p = Fold.takingEndByM (return . p)  streamly-core0takingEndBy_ p = Fold.takingEndByM_ (return . p)  streamly-core2droppingWhile p = Fold.droppingWhileM (return . p)  streamly-coreContinue taking the input until the input sequence matches the supplied sequence, taking the supplied sequence as well. If the pattern is empty this acts as an identity fold./s = Stream.fromList "hello there. How are you?"7f = Fold.takeEndBySeq (Array.fromList "re") Fold.toListStream.fold f s "hello there"-Stream.fold Fold.toList $ Stream.foldMany f s#["hello there",". How are"," you?"] Pre-release  streamly-coreLike  # but discards the matched sequence. Pre-release  streamly-coreDistribute one copy of the stream to each fold and zip the results.  |-------Fold m a b--------| ---stream m a---| |---m (b,c) |-------Fold m a c--------|  Definition:tee = Fold.teeWith (,)Example:!t = Fold.tee Fold.sum Fold.length0Stream.fold t (Stream.enumerateFromTo 1.0 100.0) (5050.0,100)  streamly-coreDistribute one copy of the stream to each fold and collect the results in a container.  |-------Fold m a b--------| ---stream m a---| |---m [b] |-------Fold m a b--------| | | ... Stream.fold (Fold.distribute [Fold.sum, Fold.length]) (Stream.enumerateFromTo 1 5)[15,5]distribute = Prelude.foldr (Fold.teeWith (:)) (Fold.fromPure [])4This is the consumer side dual of the producer side   operation.Stops when all the folds stop.  streamly-core,Partition the input over two folds using an  partitioning predicate.  
|-------Fold b x--------| -----stream m a --> (Either b c)----| |----(x,y) |-------Fold c y--------| ,Example, send input to either fold randomly::set -package randomimport System.Random (randomIO)randomly a = randomIO >>= \x -> return $ if x then Left a else Right a6f = Fold.partitionByM randomly Fold.length Fold.length,Stream.fold f (Stream.enumerateFromTo 1 100)... do r <- readIORef ref writeIORef ref $ tail r return $ Prelude.head r a:}:{ main = do g <- proportionately 2 14 let f = Fold.partitionByM g Fold.length Fold.length; r <- Stream.fold f (Stream.enumerateFromTo (1 :: Int) 100) print r:}main(67,33)4This is the consumer side dual of the producer side mergeBy operation.When one fold is done, any input meant for it is ignored until the other fold is also done.Stops when both the folds stop. See also:   and  . Pre-release  streamly-core Similar to  / but terminates when the first fold terminates.  streamly-core Similar to  ) but terminates when any fold terminates.  streamly-coreSame as  $ but with a pure partition function.0Example, count even and odd numbers in a stream::{ let f = Fold.partitionBy (\n -> if even n then Left n else Right n)= (fmap (("Even " ++) . show) Fold.length)= (fmap (("Odd " ++) . show) Fold.length)1 in Stream.fold f (Stream.enumerateFromTo 1 100):}("Even 50","Odd 50") Pre-release  streamly-coreCompose two folds such that the combined fold accepts a stream of  and routes the  values to the first fold and  values to the second fold. Definition:partition = Fold.partitionBy id  streamly-coreLike  & but with a monadic splitter function. Definition:4unzipWithM k f1 f2 = Fold.lmapM k (Fold.unzip f1 f2) Pre-release  streamly-core Similar to  / but terminates when the first fold terminates.  streamly-core Similar to  ) but terminates when any fold terminates.  streamly-coreSplit elements in the input stream into two parts using a pure splitter function, direct each part to a different fold and zip the results. Definitions:*unzipWith f = Fold.unzipWithM (return . f):unzipWith f fld1 fld2 = Fold.lmap f (Fold.unzip fld1 fld2)9This fold terminates when both the input folds terminate. Pre-release  streamly-coreSend the elements of tuples in a stream of tuples through two different folds.  |-------Fold m a x--------| ---------stream of (a,b)--| |----m (x,y) |-------Fold m b y--------|  Definition:unzip = Fold.unzipWith id4This is the consumer side dual of the producer side zip operation.  streamly-coreZip a stream with the input of a fold using the supplied function. Unimplemented  streamly-core&Zip a stream with the input of a fold.(zip = Fold.zipStreamWithM (curry return) Unimplemented  streamly-corePair each element of a fold input with its index, starting from index 0.  streamly-core$indexing = Fold.indexingWith 0 (+ 1)  streamly-core0indexingRev n = Fold.indexingWith n (subtract 1)  streamly-corePair each element of a fold input with its index, starting from index 0.&indexed = Fold.scanMaybe Fold.indexing  streamly-core-Change the predicate function of a Fold from a -> b& to accept an additional state input  (s, a) -> b?. Convenient to filter with an addiitonal index or time input.4filterWithIndex = Fold.with Fold.indexed Fold.filter filterWithAbsTime = with timestamped filter filterWithRelTime = with timeIndexed filter  Pre-release  streamly-coresampleFromthen offset stride samples the element at offset- index and then every element at strides of stride.  
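The sampling behaviour described for sampleFromthen offset stride can be modelled on plain lists as below; this is only a semantic sketch (the helper name is made up), not the fold implementation.

sampleFromThenList :: Int -> Int -> [a] -> [a]
sampleFromThenList offset stride xs =
    case drop offset xs of
        []       -> []
        (y : ys) -> y : go ys
  where
    -- after emitting an element, skip (stride - 1) elements before the next one
    go zs =
        case drop (stride - 1) zs of
            []       -> []
            (w : ws) -> w : go ws

-- sampleFromThenList 2 3 [0..10] == [2,5,8]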
streamly-coreconcatSequence f t applies folds from stream t7 sequentially and collects the results using the fold f. Unimplemented  streamly-core7Group the input stream into groups of elements between low and high". Collection starts in chunks of low) and then keeps doubling until we reach high8. Each chunk is folded using the provided fold function.This could be useful, for example, when we are folding a stream of unknown size to a stream of arrays and we want to minimize the number of allocations.NOTE: this would be an application of "many" using a terminating fold. Unimplemented  streamly-core/A fold that buffers its input to a pure stream.Warning! working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Data.Array instead.+toStream = fmap Stream.fromList Fold.toList Pre-release  streamly-coreBuffers the input stream to a pure stream in the reverse order of the input.1toStreamRev = fmap Stream.fromList Fold.toListRevWarning! working on large streams accumulated as buffers in memory could be very inefficient, consider using Streamly.Data.Array instead. Pre-release  streamly-core.Unfold and flatten the input stream of a fold. Stream.fold (unfoldMany u f) = Stream.fold f . Stream.unfoldMany u  Pre-release  streamly-coreGet the bottom most n1 elements using the supplied comparison function.  streamly-core Get the top n1 elements using the supplied comparison function.!To get bottom n elements instead:$bottomBy cmp = Fold.topBy (flip cmp)Example:3stream = Stream.fromList [2::Int,7,9,3,1,5,6,11,17]=Stream.fold (Fold.topBy compare 3) stream >>= MutArray.toList [17,11,9] Pre-release  streamly-core(Fold the input stream to top n elements. Definition:top = Fold.topBy compare3stream = Stream.fromList [2::Int,7,9,3,1,5,6,11,17]3Stream.fold (Fold.top 3) stream >>= MutArray.toList [17,11,9] Pre-release  streamly-core+Fold the input stream to bottom n elements. Definition:bottom = Fold.bottomBy compare3stream = Stream.fromList [2::Int,7,9,3,1,5,6,11,17]6Stream.fold (Fold.bottom 3) stream >>= MutArray.toList[1,2,3] Pre-release n!(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-coreFold the input to a set. Definition:/toSet = Fold.foldl' (flip Set.insert) Set.empty  streamly-coreFold the input to an int set. For integer inputs this performs better than  . Definition:8toIntSet = Fold.foldl' (flip IntSet.insert) IntSet.empty  streamly-coreUsed as a scan. Returns 2 for the first occurrence of an element, returns  for any other occurrences.Example:3stream = Stream.fromList [1::Int,1,2,3,4,4,5,1,5,7]:Stream.fold Fold.toList $ Stream.scanMaybe Fold.nub stream [1,2,3,4,5,7] Pre-release  streamly-coreLike   but specialized to a stream of , for better performance. Pre-release  streamly-core+Count non-duplicate elements in the stream. Definition:(countDistinct = fmap Set.size Fold.toSetcountDistinct = Fold.postscan Fold.nub $ Fold.catMaybes $ Fold.lengthThe memory used is proportional to the number of distinct elements in the stream, to guard against using too much memory use it as a scan and terminate if the count reaches more than a threshold.Space: \mathcal{O}(n) Pre-release  streamly-coreLike   but specialized to a stream of , for better performance. Definition:1countDistinctInt = fmap IntSet.size Fold.toIntSetcountDistinctInt = Fold.postscan Fold.nubInt $ Fold.catMaybes $ Fold.length Pre-release  streamly-core;This is the most general of all demux, classify operations.See   for documentation. 
 streamly-coredemux getKey getFold: In a key value stream, fold values corresponding to each key using a key specific fold. getFold is invoked to generate a key specific fold when a key is encountered for the first time in the stream.The first component of the output tuple is a key-value Map of in-progress folds. The fold returns the fold result as the second component of the output tuple whenever a fold terminates.If a fold terminates, another instance of the fold is started upon receiving an input with that key, getFold9 is invoked again whenever the key is encountered again.This can be used to scan a stream and collect the results from the scan output.Since the fold generator function is monadic we can add folds dynamically. For example, we can maintain a Map of keys to folds in an IORef and lookup the fold from that corresponding to a key. This Map can be changed dynamically, folds for new keys can be added or folds for old keys can be deleted or modified. Compare with  , the fold in   is a static fold. Pre-release  streamly-coreThis is specialized version of   that uses mutable IO cells as fold accumulators for better performance.  streamly-coreThis is specialized version of   that uses mutable IO cells as fold accumulators for better performance.Keep in mind that the values in the returned Map may be changed by the ongoing fold if you are using those concurrently in another thread. streamly-coreFold a key value stream to a key-value Map. If the same key appears multiple times, only the last value is retained.  streamly-core!This collects all the results of   in a Map.  streamly-coreSame as   but uses   for better performance.  streamly-coreFold a stream of key value pairs using a function that maps keys to folds. Definition:>demuxKvToMap f = Fold.demuxToContainer fst (Fold.lmap snd . f)Example:import Data.Map (Map):{ let f "SUM" = return Fold.sum f _ = return Fold.product input = Stream.fromList [("SUM",1),("PRODUCT",2),("SUM",3),("PRODUCT",4)] in Stream.fold (Fold.demuxKvToMap f) input :: IO (Map String Int):}"fromList [("PRODUCT",8),("SUM",4)] Pre-release  streamly-coreFolds the values for each key using the supplied fold. When scanning, as soon as the fold is complete, its result is available in the second component of the tuple. The first component of the tuple is a snapshot of the in-progress folds.Once the fold for a key is done, any future values of the key are ignored. Definition:)classify f fld = Fold.demux f (const fld)  streamly-coreSame as classify except that it uses mutable IORef cells in the Map providing better performance. Be aware that if this is used as a scan, the values in the intermediate Maps would be mutable. Definitions:-classifyIO f fld = Fold.demuxIO f (const fld)  streamly-coreSplit the input stream based on a key field and fold each split using the given fold. Useful for map/reduce, bucketizing the input in different bins or for generating histograms.Example:import Data.Map.Strict (Map):{ let input = Stream.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)]: classify = Fold.toMap fst (Fold.lmap snd Fold.toList); in Stream.fold classify input :: IO (Map String [Double]):}.fromList [("ONE",[1.0,1.1]),("TWO",[2.0,2.2])]Once the classifier fold terminates for a particular key any further inputs in that bucket are ignored.Space used is proportional to the number of keys seen till now and monotonically increases because it stores whether a key has been seen or not.See   for a more powerful version where you can use a different fold for each key. 
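As a small variant of the classify/toMap examples above, the values for each key can be summed instead of collected into a list. Note that toMap is marked Pre-release, so depending on the streamly-core version it may only be available from an internal Fold module; the imports below assume it is in scope from Streamly.Data.Fold.

import Data.Map.Strict (Map)
import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

-- Sum the values belonging to each key in a key-value stream.
sumPerKey :: IO (Map String Int)
sumPerKey =
    let input = Stream.fromList [("ONE", 1), ("TWO", 2), ("ONE", 10), ("TWO", 20)]
    in Stream.fold (Fold.toMap fst (Fold.lmap snd Fold.sum)) input

-- Expected: fromList [("ONE",11),("TWO",22)]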
A simpler version of  < retaining only the last value for a key can be written as:?toMap = Fold.foldl' (\kv (k, v) -> Map.insert k v kv) Map.empty Stops: never Pre-release  streamly-coreSame as   but maybe faster because it uses mutable cells as fold accumulators in the Map.  streamly-coreGiven an input stream of key value pairs and a fold for values, fold all the values belonging to each key. Useful for map/reduce, bucketizing the input in different bins or for generating histograms. Definition:(kvToMap = Fold.toMap fst . Fold.lmap sndExample::{ let input = Stream.fromList [("ONE",1),("ONE",1.1),("TWO",2), ("TWO",2.2)]1 in Stream.fold (Fold.kvToMap Fold.toList) input:}.fromList [("ONE",[1.0,1.1]),("TWO",[2.0,2.2])] Pre-release  streamly-core6Determine the frequency of each element in the stream.You can just collect the keys of the resulting map to get the unique elements in the stream. Definition:%frequency = Fold.toMap id Fold.length (c) 2019 Composewell Technologies (c) 2013 Gabriel GonzalezBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  o(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:P?  streamly-coreUse a { to transform a stream. Pre-release  streamly-coresequence = Stream.mapM idReplace the elements of a stream of monadic actions with the outputs of those actions.:s = Stream.fromList [putStr "a", putStr "b", putStrLn "c"]*Stream.fold Fold.drain $ Stream.sequence sabc  streamly-core-Tap the data flowing through a stream into a . For example, you may add a tap to log the contents flowing through the stream. The fold is used only for effects, its result is discarded.  Fold m a b | -----stream m a ---------------stream m a----- s = Stream.enumerateFromTo 1 2 x <= 10)0 $ Stream.postscan (Fold.tee Fold.latest avg) s:}[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,19.0]  streamly-core=Strict left scan. Scan a stream using the given monadic fold.s = Stream.fromList [1..10]Stream.fold Fold.toList $ Stream.takeWhile (< 10) $ Stream.scan Fold.sum s [0,1,3,6] See also:  usingStateT  streamly-coreLike   but restarts scanning afresh when the scanning fold terminates.  streamly-coreLike  5 but with a monadic step function and a monadic seed.  streamly-core+scanlMAfter' accumulate initial done stream is like  ( except that it provides an additional done function to be applied on the accumulator when the stream stops. The result of done is also emitted in the stream.This function can be used to allocate a resource in the beginning of the scan and release it when the stream ends or to flush the internal state of the scan at the end. Pre-release  streamly-coreStrict left scan. Like ,   too is a one to one transformation, however it adds an extra element.?Stream.toList $ Stream.scanl' (+) 0 $ Stream.fromList [1,2,3,4] [0,1,3,6,10]Stream.toList $ Stream.scanl' (flip (:)) [] $ Stream.fromList [1,2,3,4] [[],[1],[2,1],[3,2,1],[4,3,2,1]]The output of   is the initial value of the accumulator followed by all the intermediate steps and the final result of .By streaming the accumulated state after each fold step, we can share the state across multiple stages of stream composition. Each stage can modify or extend the state, do some processing with it and emit it for the next stage, thus modularizing the stream processing. 
This can be useful in stateful or event-driven programming.Consider the following monolithic example, computing the sum and the product of the elements in a stream in one go using a foldl':Stream.fold (Fold.foldl' (\(s, p) x -> (s + x, p * x)) (0,1)) $ Stream.fromList [1,2,3,4](10,24)Using scanl' we can make it modular by computing the sum in the first stage and passing it down to the next stage for computing the product::{ Stream.fold (Fold.foldl' (\(_, p) (s, x) -> (s, p * x)) (0,1))1 $ Stream.scanl' (\(s, _) x -> (s + x, x)) (0,1) $ Stream.fromList [1,2,3,4]:}(10,24) IMPORTANT:   evaluates the accumulator to WHNF. To avoid building lazy expressions inside the accumulator, it is recommended that a strict data structure is used for accumulator.0scanl' step z = Stream.scan (Fold.foldl' step z)scanl' f z xs = Stream.scanlM' (\a b -> return (f a b)) (return z) xs See also:  usingStateT  streamly-coreLike  " but with a monadic step function.  streamly-coreLike   but for a non-empty stream. The first element of the stream is used as the initial value of the accumulator. Does nothing if the stream is empty.>Stream.toList $ Stream.scanl1' (+) $ Stream.fromList [1,2,3,4] [1,3,6,10]  streamly-core Modify a Stream m a -> Stream m a1 stream transformation that accepts a predicate (a -> b) to accept  ((s, a) -> b)% instead, provided a transformation Stream m a -> Stream m (s, a)+. Convenient to filter with index or time.:filterWithIndex = Stream.with Stream.indexed Stream.filter Pre-release  streamly-coreSame as   but with a monadic predicate.>= \r -> return $ if r then Just x else Nothing"filterM p = Stream.mapMaybeM (f p)  streamly-core2Include only those elements that pass a predicate.&filter p = Stream.filterM (return . p)filter p = Stream.mapMaybe (\x -> if p x then Just x else Nothing).filter p = Stream.scanMaybe (Fold.filtering p)  streamly-coreDrop repeated elements that are adjacent to each other using the supplied comparison function.uniq = Stream.uniqBy (==)#To strip duplicate path separators: input = Stream.fromList "//a//b"f x y = x == '/' && y == '/'/Stream.fold Fold.toList $ Stream.uniqBy f input"/a/b"Space: O(1) Pre-release  streamly-core7Drop repeated elements that are adjacent to each other.uniq = Stream.uniqBy (==)  streamly-coreDeletes the first occurrence of the element in the stream that satisfies the given equality predicate.!input = Stream.fromList [1,3,3,5]6Stream.fold Fold.toList $ Stream.deleteBy (==) 3 input[1,3,5]  streamly-coreStrip all leading and trailing occurrences of an element passing a predicate and make all other consecutive occurrences uniq. > prune p = Stream.dropWhileAround p $ Stream.uniqBy (x y -> p x && p y) > Stream.prune isSpace (Stream.fromList " hello world! ") "hello world!" Space: O(1) Unimplemented  streamly-core"Emit only repeated elements, once. Unimplemented  streamly-coreTake all consecutive elements at the end of the stream for which the predicate is true.1O(n) space, where n is the number elements taken. Unimplemented  streamly-coreLike  and   combined.>O(n) space, where n is the number elements taken from the end. Unimplemented  streamly-coreDiscard first n, elements from the stream and take the rest.  streamly-coreSame as   but with a monadic predicate.  streamly-coreDrop elements in the stream as long as the predicate succeeds and then take the rest of the stream.  streamly-coreDrop n# elements at the end of the stream.3O(n) space, where n is the number elements dropped. 
Unimplemented  streamly-coreDrop all consecutive elements at the end of the stream for which the predicate is true.3O(n) space, where n is the number elements dropped. Unimplemented  streamly-coreLike   and   combined.O(n) space, where n is the number elements dropped from the end. Unimplemented  streamly-coreinsertBy cmp elem stream inserts elem before the first element in stream that is less than elem when compared using cmp.7insertBy cmp x = Stream.mergeBy cmp (Stream.fromPure x)input = Stream.fromList [1,3,5]9Stream.fold Fold.toList $ Stream.insertBy compare 2 input [1,2,3,5]  streamly-coreInsert an effect and its output before consuming an element of a stream except the first one.input = Stream.fromList "hello"Stream.fold Fold.toList $ Stream.trace putChar $ Stream.intersperseM (putChar '.' >> return ',') inputh.,e.,l.,l.,o"h,e,l,l,o"Be careful about the order of effects. In the above example we used trace after the intersperse, if we use it before the intersperse the output would be he.l.l.o."h,e,l,l,o".Stream.fold Fold.toList $ Stream.intersperseM (putChar '.' >> return ',') $ Stream.trace putChar inputhe.l.l.o."h,e,l,l,o"  streamly-core input = Stream.fromList "hello" > Stream.fold Fold.toList $ Stream.intersperseMWith 2 (return ',') input "he,ll,o" Unimplemented  streamly-coreInsert an effect and its output after consuming an element of a stream.input = Stream.fromList "hello"Stream.fold Fold.toList $ Stream.trace putChar $ Stream.intersperseMSuffix (putChar '.' >> return ',') inputh.,e.,l.,l.,o.,"h,e,l,l,o," Pre-release  streamly-coreintersperseMPrefix_ m = Stream.mapM (\x -> void m >> return x)input = Stream.fromList "hello"Stream.fold Fold.toList $ Stream.trace putChar $ Stream.intersperseMPrefix_ (putChar '.' >> return ',') input.h.e.l.l.o"hello"Same as  . Pre-release streamly-core9Block the current thread for specified number of seconds.  streamly-coreIntroduce a delay of specified seconds between elements of the stream. Definition:4sleep n = liftIO $ threadDelay $ round $ n * 1000000$delay = Stream.intersperseM_ . sleepExample:"input = Stream.enumerateFromTo 1 39Stream.fold (Fold.drainMapM print) $ Stream.delay 1 input123  streamly-coreIntroduce a delay of specified seconds after consuming an element of a stream. Definition:4sleep n = liftIO $ threadDelay $ round $ n * 1000000.delayPost = Stream.intersperseMSuffix_ . sleepExample:"input = Stream.enumerateFromTo 1 3=Stream.fold (Fold.drainMapM print) $ Stream.delayPost 1 input123 Pre-release  streamly-coreIntroduce a delay of specified seconds before consuming an element of a stream. Definition:4sleep n = liftIO $ threadDelay $ round $ n * 1000000,delayPre = Stream.intersperseMPrefix_. sleepExample:"input = Stream.enumerateFromTo 1 3>= return . Stream.fromList  streamly-coreLike  ' but several times faster, requires an  instance. O(n) space Pre-release  streamly-coreBuffer until the next element in sequence arrives. The function argument determines the difference in sequence numbers. This could be useful in implementing sequenced streams, for example, TCP reassembly. Unimplemented  streamly-core8f = Fold.foldl' (\(i, _) x -> (i + 1, x)) (-1,undefined)indexed = Stream.postscan f5indexed = Stream.zipWith (,) (Stream.enumerateFrom 0)3indexedR n = fmap (\(i, a) -> (n - i, a)) . 
indexedPair each element in a stream with its index, starting from index 0.Stream.fold Fold.toList $ Stream.indexed $ Stream.fromList "hello")[(0,'h'),(1,'e'),(2,'l'),(3,'l'),(4,'o')]  streamly-core=f n = Fold.foldl' (\(i, _) x -> (i - 1, x)) (n + 1,undefined)"indexedR n = Stream.postscan (f n)(s n = Stream.enumerateFromThen n (n - 1)%indexedR n = Stream.zipWith (,) (s n)Pair each element in a stream with its index, starting from the given index n and counting down.Stream.fold Fold.toList $ Stream.indexedR 10 $ Stream.fromList "hello"*[(10,'h'),(9,'e'),(8,'l'),(7,'l'),(6,'o')]  streamly-corePair each element in a stream with an absolute timestamp, using a clock of specified granularity. The timestamp is generated just before the element is consumed.Stream.fold Fold.toList $ Stream.timestampWith 0.01 $ Stream.delay 1 $ Stream.enumerateFromTo 1 3[(AbsTime (TimeSpec {sec = ..., nsec = ...}),1),(AbsTime (TimeSpec {sec = ..., nsec = ...}),2),(AbsTime (TimeSpec {sec = ..., nsec = ...}),3)] Pre-release  streamly-corePair each element in a stream with relative times starting from 0, using a clock with the specified granularity. The time is measured just before the element is consumed.Stream.fold Fold.toList $ Stream.timeIndexWith 0.01 $ Stream.delay 1 $ Stream.enumerateFromTo 1 3[(RelTime64 (NanoSecond64 ...),1),(RelTime64 (NanoSecond64 ...),2),(RelTime64 (NanoSecond64 ...),3)] Pre-release  streamly-corePair each element in a stream with relative times starting from 0, using a 10 ms granularity clock. The time is measured just before the element is consumed.Stream.fold Fold.toList $ Stream.timeIndexed $ Stream.delay 1 $ Stream.enumerateFromTo 1 3[(RelTime64 (NanoSecond64 ...),1),(RelTime64 (NanoSecond64 ...),2),(RelTime64 (NanoSecond64 ...),3)] Pre-release  streamly-coreFind all the indices where the element in the stream satisfies the given predicate.5findIndices p = Stream.scanMaybe (Fold.findIndices p)  streamly-coreFind all the indices where the value of the element in the stream is equal to the given value.)elemIndices a = Stream.findIndices (== a)  streamly-coreLike  $ but with an effectful map function. Pre-release  streamly-coreApply a function on every two successive elements of a stream. The first argument of the map function is the previous element and the second argument is the current element. When the current element is the first element, the previous element is . Pre-release  streamly-coreLike   but requires at least two elements in the stream, returns an empty stream otherwise.0This is the stream equivalent of the list idiom zipWith f xs (tail xs). Pre-release  streamly-coreMap a 0 returning function to a stream, filter out the 9 elements, and return a stream of values extracted from .Equivalent to:&mapMaybe f = Stream.catMaybes . fmap f  streamly-coreLike   but maps a monadic function.Equivalent to:.mapMaybeM f = Stream.catMaybes . Stream.mapM f.mapM f = Stream.mapMaybeM (\x -> Just <$> f x)  streamly-coreIn a stream of  s, discard  s and unwrap s.catMaybes = Stream.mapMaybe id0catMaybes = fmap fromJust . Stream.filter isJust Pre-release  streamly-core!Use a filtering fold on a stream.2scanMaybe f = Stream.catMaybes . Stream.postscan f  streamly-coreDiscard  s and unwrap s in an  stream.;catLefts = fmap (fromLeft undefined) . Stream.filter isLeft Pre-release  streamly-coreDiscard  s and unwrap s in an  stream.>catRights = fmap (fromRight undefined) . 
Stream.filter isRight Pre-release  streamly-coreRemove the either wrapper and flatten both lefts and as well as rights in the output stream. catEithers = fmap (either id id) Pre-release  streamly-coreSplit on an infixed separator element, dropping the separator. The supplied  is applied on the split segments. Splits the stream on separator elements determined by the supplied predicate, separator is considered as infixed between two segments:splitOn' p xs = Stream.fold Fold.toList $ Stream.splitOn p Fold.toList (Stream.fromList xs)splitOn' (== '.') "a.b" ["a","b"];An empty stream is folded to the default value of the fold:splitOn' (== '.') ""[""]If one or both sides of the separator are missing then the empty segment on that side is folded to the default output of the fold:splitOn' (== '.') "."["",""]splitOn' (== '.') ".a"["","a"]splitOn' (== '.') "a."["a",""]splitOn' (== '.') "a..b" ["a","","b"]6splitOn is an inverse of intercalating single element: Stream.intercalate (Stream.fromPure '.') Unfold.fromList . Stream.splitOn (== '.') Fold.toList === id9Assuming the input stream does not contain the separator: Stream.splitOn (== '.') Fold.toList . Stream.intercalate (Stream.fromPure '.') Unfold.fromList === id p(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:' streamly-coreICALFirstYield s1 s2 i1 streamly-coreICALFirstBuf s1 s2 i1 i2 streamly-coreInterposeFirstYield s1 i1 streamly-coreInterposeFirstBuf s1 i1 streamly-coreInterposeSuffixFirstYield s1 i1  streamly-coreWARNING! O(n^2) time complexity wrt number of streams. Suitable for statically fusing a small number of streams. Use the O(n) complexity StreamK.VF otherwise.Fuses two streams sequentially, yielding all elements from the first stream, and then all elements from the second stream.s1 = Stream.fromList [1,2]s2 = Stream.fromList [3,4]/Stream.fold Fold.toList $ s1 `Stream.append` s2 [1,2,3,4]  streamly-coreWARNING! O(n^2) time complexity wrt number of streams. Suitable for statically fusing a small number of streams. Use the O(n) complexity StreamK.Vq otherwise.Interleaves two streams, yielding one element from each stream alternately. When one stream stops the rest of the other stream is used in the output stream.  streamly-coreLike   but stops interleaving as soon as any of the two streams stops.  streamly-coreInterleaves the outputs of two streams, yielding elements from each stream alternately, starting from the first stream. As soon as the first stream finishes, the output stops, discarding the remaining part of the second stream. In this case, the last element in the resulting stream would be from the second stream. If the second stream finishes early then the first stream still continues to yield elements until it finishes.:set -XOverloadedStrings'import Data.Functor.Identity (Identity)?Stream.interleaveFstSuffix "abc" ",,,," :: Stream Identity CharfromList "a,b,c," randomly _ _ = randomIO >>= x -> return $ if x then LT else GT > Stream.toList $ Stream.mergeByM randomly (Stream.fromList [1,1,1,1]) (Stream.fromList [2,2,2,2]) [2,1,2,2,2,1,1,1] 2Example, merge two streams in a proportion of 2:1::{do' let s1 = Stream.fromList [1,1,1,1,1,1]! 
s2 = Stream.fromList [2,2,2] let proportionately m n = do ref <- newIORef $ cycle $ Prelude.concat [Prelude.replicate m LT, Prelude.replicate n GT] return $ \_ _ -> do r <- readIORef ref( writeIORef ref $ Prelude.tail r return $ Prelude.head r f <- proportionately 2 18 xs <- Stream.fold Fold.toList $ Stream.mergeByM f s1 s2 print xs:}[1,1,2,1,1,2,1,1,2]  streamly-coreWARNING! O(n^2) time complexity wrt number of streams. Suitable for statically fusing a small number of streams. Use the O(n) complexity StreamK.Vr otherwise.Merge two streams using a comparison function. The head elements of both the streams are compared and the smaller of the two elements is emitted, if both elements are equal then the element from the first stream is used first.If the streams are sorted in ascending order, the resulting stream would also remain sorted in ascending order.s1 = Stream.fromList [1,3,5]s2 = Stream.fromList [2,4,6,8]6Stream.fold Fold.toList $ Stream.mergeBy compare s1 s2[1,2,3,4,5,6,8]  streamly-coreLike  ; but stops merging as soon as any of the two streams stops. Unimplemented  streamly-coreLike  5 but stops merging as soon as the first stream stops. Unimplemented  streamly-coreThis does not pair streams like mergeMapWith, instead, it goes through each stream one by one and yields one element from each stream. After it goes to the last stream it reverses the traversal to come back to the first stream yielding elements from each stream on its way back to the first stream and so on.7lists = Stream.fromList [[1,1],[2,2],[3,3],[4,4],[5,5]];interleaved = Stream.unfoldInterleave Unfold.fromList lists#Stream.fold Fold.toList interleaved[1,2,3,4,5,5,4,3,2,1]Note that this is order of magnitude more efficient than "mergeMapWith interleave" because of fusion.  streamly-core  switches to the next stream whenever a value from a stream is yielded, it does not switch on a W. So if a stream keeps skipping for long time other streams won't get a chance to run.   switches on Skip as well. So it basically schedules each stream fairly irrespective of whether it produces a value or not.  streamly-coreUnfold the elements of a stream, append the given element after each unfolded stream and then concat them into a single stream.%unlines = Stream.interposeSuffix '\n' Pre-release  streamly-coreUnfold the elements of a stream, intersperse the given element between the unfolded streams and then concat them into a single stream.unwords = Stream.interpose ' ' Pre-release  streamly-core  followed by unfold and concat. Pre-release  streamly-core  followed by unfold and concat. Pre-release  streamly-core  followed by unfold and concat.intercalateSuffix u a = Stream.unfoldMany u . Stream.intersperseMSuffix a=intersperseMSuffix = Stream.intercalateSuffix Unfold.identity7unlines = Stream.intercalateSuffix Unfold.fromList "\n"-input = Stream.fromList ["abc", "def", "ghi"]Stream.fold Fold.toList $ Stream.intercalateSuffix Unfold.fromList "\n" input"abc\ndef\nghi\n"  streamly-core  followed by unfold and concat. Parser.takeBetween 0 2 (Fold.sconcat b)) (Sum 0) $ fmap Sum s[3,10,21,36,55,55]This is the streaming equivalent of monad like sequenced application of parsers where next parser is dependent on the previous parser. Pre-release  streamly-core1The argument order of the comparison function in   is different than that of  .In   the comparison function takes the next element as the first argument and the previous element as the second argument. 
In   the first argument is the previous element and second argument is the next element.  streamly-coreSplit the stream after stripping leading, trailing, and repeated separators as per the fold supplied. Therefore, ".a..b." with & as the separator would be parsed as  ["a","b"]. In other words, its like parsing words from whitespace separated text.  streamly-core)Split post any one of the given patterns. Unimplemented  streamly-coreSplit on a prefixed separator element, dropping the separator. The supplied " is applied on the split segments. > splitOnPrefix' p xs = Stream.toList $ Stream.splitOnPrefix p (Fold.toList) (Stream.fromList xs) > splitOnPrefix' (== ) ".a.b" ["a","b"] 4An empty stream results in an empty output stream:  > splitOnPrefix' (==  ) "" [] An empty segment consisting of only a prefix is folded to the default output of the fold: > splitOnPrefix' (== !) "." [""] > splitOnPrefix' (== -) ".a.b." ["a","b",""] > splitOnPrefix' (== ) ".a..b" ["a","","b"] 4A prefix is optional at the beginning of the stream: > splitOnPrefix' (== ") "a" ["a"] > splitOnPrefix' (== ) "a.b" ["a","b"]   is an inverse of intercalatePrefix with a single element: Stream.intercalatePrefix (Stream.fromPure '.') Unfold.fromList . Stream.splitOnPrefix (== '.') Fold.toList === id9Assuming the input stream does not contain the separator: Stream.splitOnPrefix (== '.') Fold.toList . Stream.intercalatePrefix (Stream.fromPure '.') Unfold.fromList === id Unimplemented  streamly-core'Split on any one of the given patterns. Unimplemented  streamly-core)Performs infix separator style splitting.  streamly-core)Performs infix separator style splitting.  streamly-core-Drop prefix from the input stream if present.Space: O(1) Unimplemented  streamly-coreDrop all matching infix from the input stream if present. Infix stream may be consumed multiple times.Space: O(n)$ where n is the length of the infix. Unimplemented  streamly-coreDrop suffix from the input stream if present. Suffix stream may be consumed multiple times.Space: O(n)% where n is the length of the suffix. Unimplemented s!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-corestrideFromthen offset stride takes the element at offset- index and then every element at strides of stride.Stream.fold Fold.toList $ Stream.strideFromThen 2 3 $ Stream.enumerateFromTo 0 10[2,5,8]  streamly-coreLike # but emits only those tuples where a == b( using the supplied equality predicate. Definition:joinInnerGeneric eq s1 s2 = Stream.filter (\(a, b) -> a `eq` b) $ Stream.cross s1 s2 You should almost always prefer  joinInnerOrd over   if possible.  joinInnerOrd is an order of magnitude faster but may take more space for caching the second stream.See  t& for a much faster fused alternative.Time: O(m x n) Pre-release  streamly-coreA more efficient  joinInner for sorted streams. Space: O(1)Time: O(m + n) Unimplemented  streamly-coreA more efficient joinLeft for sorted streams. Space: O(1)Time: O(m + n) Unimplemented  streamly-coreA more efficient  joinOuter for sorted streams. Space: O(1)Time: O(m + n) Unimplemented streamly-coreKeep only those elements in the second stream that are present in the first stream too. The first stream is folded to a container using the supplied fold and then the elements in the container are looked up using the supplied lookup function.3The first stream must be finite and must not block.  
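The generic inner join described above (a cross product followed by a filter on the equality predicate) can be sketched on plain lists as below; this is only a model of the O(m x n) behaviour, not the stream implementation.

joinInnerList :: (a -> b -> Bool) -> [a] -> [b] -> [(a, b)]
joinInnerList eq xs ys = [ (x, y) | x <- xs, y <- ys, x `eq` y ]

-- joinInnerList (==) [1,2,3] [2,3,4] == [(2,2),(3,3)]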
streamly-core  retains only those elements in the second stream that are present in the first stream.Stream.fold Fold.toList $ Stream.filterInStreamGenericBy (==) (Stream.fromList [1,2,2,4]) (Stream.fromList [2,1,1,3])[2,1,1]Stream.fold Fold.toList $ Stream.filterInStreamGenericBy (==) (Stream.fromList [2,1,1,3]) (Stream.fromList [1,2,2,4])[1,2,2]Similar to the list intersectBy operation but with the stream argument order flipped.The first stream must be finite and must not block. Second stream is processed only after the first stream is fully realized.Space: O(n) where n0 is the number of elements in the second stream.Time: O(m x n) where m4 is the number of elements in the first stream and n0 is the number of elements in the second stream. Pre-release  streamly-coreLike   but assumes that the input streams are sorted in ascending order. To use it on streams sorted in descending order pass an inverted comparison function returning GT for less than and LT for greater than. Space: O(1) Time: O(m+n) Pre-release  streamly-coreDelete all elements of the first stream from the seconds stream. If an element occurs multiple times in the first stream as many occurrences of it are deleted from the second stream.Stream.fold Fold.toList $ Stream.deleteInStreamGenericBy (==) (Stream.fromList [1,2,3]) (Stream.fromList [1,2,2])[2]The following laws hold: deleteInStreamGenericBy (==) s1 (s1 `append` s2) === s2 deleteInStreamGenericBy (==) s1 (s1 `interleave` s2) === s2Same as the list uv+ operation but with argument order flipped.The first stream must be finite and must not block. Second stream is processed only after the first stream is fully realized.Space: O(m) where m/ is the number of elements in the first stream.Time: O(m x n) where m4 is the number of elements in the first stream and n0 is the number of elements in the second stream. Pre-release  streamly-coreA more efficient  ' for streams sorted in ascending order. Space: O(1) Unimplemented  streamly-coreThis essentially appends to the second stream all the occurrences of elements in the first stream that are not already present in the second stream.(Equivalent to the following except that s2 is evaluated only once:unionWithStreamGenericBy eq s1 s2 = s2 `Stream.append` (Stream.deleteInStreamGenericBy eq s2 s1)Example:Stream.fold Fold.toList $ Stream.unionWithStreamGenericBy (==) (Stream.fromList [1,1,2,3]) (Stream.fromList [1,2,2,4]) [1,2,2,4,3] Space: O(n)Time: O(m x n) Pre-release  streamly-coreA more efficient   for sorted streams. Space: O(1) Unimplemented !(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-coreGenerate a stream of array slice descriptors ((index, len)) of specified length from an array, starting from the supplied array index. The last slice may be shorter than the requested length depending on the array length. Pre-release  streamly-coreGenerate a stream of slices of specified length from an array, starting from the supplied array index. The last slice may be shorter than the requested length depending on the array length. Pre-release  streamly-corecompactLE maxElems coalesces adjacent arrays in the input stream only if the combined size would be less than or equal to maxElems elements. Note that it won't split an array if the original array is already larger than maxElems.maxElems must be greater than 0.Generates unpinned arrays irrespective of the pinning status of input arrays.  streamly-corePinned version of  . 
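The coalescing behaviour described for compactLE maxElems can be modelled on a list of lists as below: adjacent chunks are merged left to right only while the combined length stays within maxElems, and a chunk that is already larger than maxElems is left untouched. This is a sketch of the semantics only, not the array-based implementation.

compactLEList :: Int -> [[a]] -> [[a]]
compactLEList maxElems = go
  where
    go (x : y : rest)
        | length x + length y <= maxElems = go ((x ++ y) : rest)
        | otherwise                       = x : go (y : rest)
    go xs = xs

-- compactLEList 4 [[1,2],[3],[4,5,6],[7]] == [[1,2,3],[4,5,6,7]]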
streamly-coreSplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.  streamly-coreSplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array.  streamly-coreLike   considers the separator in suffix position instead of infix position.  streamly-core from index streamly-corelength of the slice  streamly-core from index streamly-corelength of the slice  streamly-core from index streamly-corelength of the slice  streamly-core from index streamly-corelength of the slice               w!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789:K  streamly-coreThe   type class provides operations for serialization and deserialization of general Haskell data types to and from their byte stream representation.Unlike ,   uses variable length encoding, therefore, it can serialize recursive and variable length data types like lists, or variable length sum types where the length of the value may vary depending on a particular constructor. For variable length data types the length is encoded along with the data.The   operation reads bytes from the mutable byte array and builds a Haskell data type from these bytes, the number of bytes it reads depends on the type and the encoded value it is reading.   operation converts a Haskell data type to its binary representation which must consist of as many bytes as added by the  addSizeTo operation for that value and then stores these bytes into the mutable byte array. The programmer is expected to use the  addSizeTo operation and allocate an array of sufficient length before calling  .IMPORTANT: The serialized data's byte ordering remains the same as the host machine's byte order. Therefore, it can not be deserialized from host machines with a different byte ordering.Instances can be derived via Template Haskell, or written manually.Here is an example, for deriving an instance of this type class using template Haskell::{data Object = Object { _obj1 :: [Int] , _obj2 :: Int }:} import Streamly.Data.MutByteArray (deriveSerialize) $(deriveSerialize [d|instance Serialize Object|]) See 9x and 9y: for more information on deriving using Template Haskell.(Here is an example of a manual instance.1import Streamly.Data.MutByteArray (Serialize(..)):{ instance Serialize Object where addSizeTo acc obj = addSizeTo (addSizeTo acc (_obj1 obj)) (_obj2 obj) deserializeAt i arr len = do1 -- Check the array bounds before reading+ (i1, x0) <- deserializeAt i arr len, (i2, x1) <- deserializeAt i1 arr len pure (i2, Object x0 x1)) serializeAt i arr (Object x0 x1) = do" i1 <- serializeAt i arr x0# i2 <- serializeAt i1 arr x1 pure i2:}  streamly-coreaddSizeTo accum value returns accum> incremented by the size of the serialized representation of value> in bytes. Size cannot be zero. It should be at least 1 byte.  streamly-core(deserializeAt byte-offset array arrayLen deserializes a value from the given byte-offset in the array. Returns a tuple consisting of the next byte-offset and the deserialized value.The arrayLen passed is the entire length of the input buffer. It is to be used to check if we would overflow the input buffer when deserializing.Throws an exception if the operation would exceed the supplied arrayLen.  streamly-core#serializeAt byte-offset array value. writes the serialized representation of the value in the array at the given byte-offset. 
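The byte-offset protocol used by addSizeTo, serializeAt and deserializeAt can be illustrated with a plain list-of-bytes model: each serializer takes the current offset, writes its bytes, and returns the next offset, exactly like the offset threading in the manual Object instance shown earlier. The buffer type and helper names below are made up for illustration; the real class writes into a MutByteArray.

import Data.Word (Word8)

type Buffer = [Word8]

-- Write one byte at the given offset (modelled as appending) and return the
-- next offset along with the grown buffer.
serializeWord8At :: Int -> Buffer -> Word8 -> (Int, Buffer)
serializeWord8At off buf w = (off + 1, buf ++ [w])

-- Serializing a pair threads the offset through both components.
serializePairAt :: Int -> Buffer -> (Word8, Word8) -> (Int, Buffer)
serializePairAt off buf (a, b) =
    let (off1, buf1) = serializeWord8At off  buf  a
        (off2, buf2) = serializeWord8At off1 buf1 b
    in (off2, buf2)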
Returns the next byte-offset.This is an unsafe operation, the programmer must ensure that the array has enough space available to serialize the value as determined by the  addSizeTo operation. z!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred$ $%'.145789:1  streamly-core!Configuration to control how the   instance is generated. The configuration is opaque and is modified by composing config modifier functions, for example:(inlineSerializeAt (Just NoInline)) . (inlineSerializeAt (Just Inlinable))'The default configuration settings are:  Nothing  (Just Inline)  (Just Inline)6The following experimental options are also available:  False  False  streamly-coreHow should we inline the   function? The default is 6 which means left to the compiler. Forcing inline on  addSizeTo> function actually worsens some benchmarks and improves none.  streamly-coreHow should we inline the  serialize function? The default 'Just Inline'. However, aggressive inlining can bloat the code and increase in compilation times when there are big functions and too many nesting levels so you can change it accordingly. A , value leaves the decision to the compiler.  streamly-coreHow should we inline the  deserialize function? See guidelines in  .  streamly-core ExperimentalIn sum types, use Latin-1 encoded original constructor names rather than binary values to identify constructors. This option is not applicable to product types.+This option enables the following behavior: Reordering: Order of the fields can be changed without affecting serialization.Addition: If a field is added in the new version, the old version of the data type can still be deserialized by the new version. The new value would never occur in the old one.Deletion: If a field is deleted in the new version, deserialization of the old version will result in an error. TBD: We can possibly designate a catch-all case to handle this scenario.Note that if you change a type, change the semantics of a type, or delete a field and add a new field with the same name, deserialization of old data may result in silent unexpected behavior.This option has to be the same on both encoding and decoding side.The default is .  streamly-core ExperimentalIn explicit record types, use Latin-1 encoded record field names rather than binary values to identify the record fields. Note that this option is not applicable to sum types. Also, it does not work on a product type which is not a record, because there are no field names to begin with.+This option enables the following behavior: Reordering: Order of the fields can be changed without affecting serialization.Addition: If a  type field is added in the new version, the old version of the data type can still be deserialized by the new version, the field value in the older version is assumed to be . 
If any other type of field is added, deserialization of the older version results in an error but only when that field is actually accessed in the deserialized record.Deletion: If a field is deleted in the new version and it is encountered in a previously serialized version then the field is discarded.This option has to be the same on both encoding and decoding side.There is a constant performance overhead proportional to the total length of the record field names and the number of record fields.The default is .1 {!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred$ $%'.145789:J |!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred$ $%'.145789: }!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred$ $%'.145789:  streamly-core0deriveSerializeWith config-modifier instance-dec generates a template Haskell splice consisting of a declaration of a   instance.  instance-dec is a template Haskell declaration splice consisting of a standard Haskell instance declaration without the type class methods (e.g. 0[d|instance Serialize a => Serialize (Maybe a)|]).The type class methods for the given instance are generated according to the supplied config-modifier parameter. See  % for default configuration settings.Usage: $(deriveSerializeWith ( inlineSerializeAt (Just NoInline) . inlineDeserializeAt (Just NoInline) ) [d|instance Serialize a => Serialize (Maybe a)|])  streamly-core Given an  8 instance declaration splice without the methods (e.g. 0[d|instance Serialize a => Serialize (Maybe a)|]), generate an instance declaration including all the type class method implementations.(deriveSerialize = deriveSerializeWith idUsage: $(deriveSerialize [d|instance Serialize a => Serialize (Maybe a)|]) : !(c) 2022 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789::) )   ~(c) 2018 Composewell Technologies (c) Roman Leshchinskiy 2008-2010 BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-coreRun a Parse over a stream.  streamly-core"Parse a stream using the supplied Parser. Parsers (See Streamly.Internal.Data.Parser) are more powerful folds that add backtracking and error functionality to terminating folds. Unlike folds, parsers may not always result in a valid output, they may result in an error. For example:4Stream.parse (Parser.takeEQ 1 Fold.drain) Stream.nilLeft (ParseError "takeEQ: Expecting exactly 1 elements, input terminated on 0")Note: parse p is not the same as head . parseMany p on an empty stream.  streamly-coreRun a Parse- over a stream and return rest of the Stream.  streamly-core"Parse a stream using the supplied Parser.  streamly-core1Execute a monadic action for each element of the   streamly-coreReturns  if the first stream is the same as or a prefix of the second. A stream is a prefix of itself.Stream.isPrefixOf (Stream.fromList "hello") (Stream.fromList "hello" :: Stream IO Char)True  streamly-coreReturns  if all the elements of the first stream occur, in order, in the second stream. The elements do not have to occur consecutively. A stream is a subsequence of itself.Stream.isSubsequenceOf (Stream.fromList "hlo") (Stream.fromList "hello" :: Stream IO Char)True  streamly-corestripPrefix prefix input strips the prefix stream from the input- stream if it is a prefix of input. Returns  if the input does not start with the given prefix, stripped input otherwise. 
Returns Just nil2 when the prefix is the same as the input stream.Space: O(1)  streamly-coreReturns  if the first stream is an infix of the second. A stream is considered an infix of itself.-s = Stream.fromList "hello" :: Stream IO CharStream.isInfixOf s sTrueSpace: O(n) worst case where n is the length of the infix. Pre-release Requires  constraint  streamly-coreReturns  if the first stream is a suffix of the second. A stream is considered a suffix of itself.Stream.isSuffixOf (Stream.fromList "hello") (Stream.fromList "hello" :: Stream IO Char)TrueSpace: O(n)-, buffers entire input stream and the suffix. Pre-release Suboptimal - Help wanted.  streamly-coreMuch faster than  .  streamly-core.Drops the given suffix from a stream. Returns < if the stream does not end with the given suffix. Returns Just nil, when the suffix is the same as the stream.It may be more efficient to convert the stream to an Array and use stripSuffix on that especially if the elements have a Storable or Prim instance.Space: O(n)7, buffers the entire input stream as well as the suffix Pre-release  streamly-coreMuch faster than  ..  !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789:+           !!(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789:  streamly-coreThe internal contents of the array representing the entire array.  streamly-core!The starting index of this slice.  streamly-coreThe length of this slice.  streamly-core'Fold the whole input to a single array.-Caution! Do not use this on infinite streams.  streamly-coreO(1) Lookup the element at the given index. Index starts from 0. Does not check the bounds.  streamly-core;Lookup the element at the given index. Index starts from 0.  streamly-coreTruncate the array at the beginning and end as long as the predicate holds true. Returns a slice of the original array. !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:۹  streamly-coreThe memory used is proportional to the number of unique elements in the stream. If we want to limit the memory we can just use "take" to limit the uniq elements in the stream.  streamly-coreLike   but uses a Map for efficiency.If the input streams have duplicate keys, the behavior is undefined.For space efficiency use the smaller stream as the second stream. Space: O(n)Time: O(m + n) Pre-release  streamly-coreLike   but emit  (a, Just b), and additionally, for those a's that are not equal to any b emit  (a, Nothing).The second stream is evaluated multiple times. If the stream is a consume-once stream then the caller should cache it in an => before calling this function. Caching may also improve performance if the stream is expensive to evaluate.6joinRightGeneric eq = flip (Stream.joinLeftGeneric eq);Space: O(n) assuming the second stream is cached in memory.Time: O(m x n) Unimplemented  streamly-coreA more efficient   using a hashmap for efficiency. Space: O(n)Time: O(m + n) Pre-release  streamly-coreLike   but emits a (Just a, Just b). Like   , for those a's that are not equal to any b emit (Just a, Nothing), but additionally, for those b's that are not equal to any a emit (Nothing, Just b).For space efficiency use the smaller stream as the second stream. 
Space: O(n)Time: O(m x n) Pre-release  streamly-coreLike   but uses a Map for efficiency.Space: O(m + n)Time: O(m + n) Pre-release !(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:yUWXV UXVW              !(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:UWXV "!(c) 2019 Composewell Technologies BSD3-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:B  streamly-coreThis mutates the first array (if it has space) to append values from the second one. This would work for immutable arrays as well because an immutable array never has space so a new array is allocated instead of mutating it.| Coalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size.  streamly-coreCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes.Internal streamly-coreCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size. Note that if a single array is bigger than the specified size we do not split it to fit. When we coalesce multiple arrays if the size would exceed the specified size we do not coalesce therefore the actual array size may be less than the specified chunk size.Internal streamly-coreCoalesce adjacent arrays in incoming stream to form bigger arrays of a minimum specified size. Note that if all the arrays in the stream together are smaller than the specified size the resulting array will be smaller than the specified size. When we coalesce multiple arrays if the size would exceed the specified size we stop coalescing further.Internal  streamly-coreCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes.Internal  streamly-coreLike   but generates arrays of exactly equal to the size specified except for the last array in the stream which could be shorter. Unimplemented  streamly-coreLike   but generates arrays of size greater than or equal to the specified except for the last array in the stream which could be shorter.Internal    !(c) 2017 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789:1         #!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.comGHC Safe-Inferred" $%'.145789:  streamly-coreRead directories as Left and files as Right. Filter out "." and ".." entries.Internal  streamly-coreRead files only.Internal  streamly-core7Read directories only. Filter out "." and ".." entries.Internal  streamly-coreRaw read of a directory. Pre-release  streamly-coreRead directories as Left and files as Right. Filter out "." and ".." entries. The output contains the names of the directories and files. Pre-release  streamly-coreLike   but prefix the names of the files and directories with the supplied directory path.  streamly-coreRead files only.Internal  streamly-coreRead directories only.Internal  !(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com pre-releaseGHC Safe-Inferred" $%'.145789:  !(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-coreA continuation passing style parser representation. 
A continuation of  ;s, each step passes a state and a parse result to the next  . The resulting  . may carry a continuation that consumes input a and results in another  . Essentially, the continuation may either consume input without a result or return a result with no further input to be consumed.  streamly-coreThe parser's result.Int is the position index into the current input array. Could be negative. Cannot be beyond the input array max bound. Pre-release  streamly-coreThe intermediate result of running a parser step. The parser driver may stop with a final result, pause with a continuation to resume, or fail with an error.See ParserD docs. This is the same as the ParserD Step except that it uses a continuation in Partial and Continue constructors instead of a state in case of ParserD. Pre-release streamly-coreA parser that always yields a pure value without consuming any input. Pre-release streamly-coreSee . Pre-release streamly-coreA parser that always fails with an error message without consuming any input. Pre-release streamly-coreConvert an element Parser to a chunked  =. A chunked parser is more efficient than an element parser. Pre-release streamly-core Convert a Parser to  . Pre-release streamly-core A generic  . Similar to  but is not constrained to  types. Pre-release streamly-coreMap a function over  . streamly-core is same as , it aborts the parser.  is same as ), it selects the first succeeding parser. streamly-core p1 <|> p2 passes the input to parser p1, if it succeeds, the result is returned. However, if p1 fails, the parser driver backtracks and tries the same input on the alternative parser p2, returning the result if it succeeds. streamly-coreMonad composition can be used for lookbehind parsers, we can dynamically compose new parsers based on the results of the previously parsed values. streamly-coref <$> p1 <*> p2 applies parsers p1 and p2 sequentially to an input stream. The first parser runs and processes the input, the remaining input is then passed to the second parser. If both parsers succeed, their outputs are applied to the function f/. If either parser fails, the operation fails. streamly-core%Map a function on the result i.e. on b in  Parser a m b.  !(c) 2017 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789: streamly-coreConvert a fused  to . For example:/s1 = StreamK.fromStream $ Stream.fromList [1,2]/s2 = StreamK.fromStream $ Stream.fromList [3,4]Stream.fold Fold.toList $ StreamK.toStream $ s1 `StreamK.append` s2 [1,2,3,4] streamly-core Convert a  to a fused . streamly-core+repeatM = StreamK.sequence . StreamK.repeatrepeatM = fix . StreamK.consM%repeatM = cycle1 . 
StreamK.fromEffectGenerate a stream by repeatedly executing a monadic action forever.:{repeatAction =7 StreamK.repeatM (threadDelay 1000000 >> print 1) & StreamK.take 10 & StreamK.fold Fold.drain:} streamly-core*iterate f x = x `StreamK.cons` iterate f x!Generate an infinite stream with x as the first element and each successive element derived by applying the function f on the previous element.8StreamK.toList $ StreamK.take 5 $ StreamK.iterate (+1) 1 [1,2,3,4,5] streamly-coreiterateM f m = m >>= \a -> return a `StreamK.consM` iterateM f (f a)Generate an infinite stream with the first element generated by the action m and each successive element derived by applying the monadic function f on the previous element.:{=StreamK.iterateM (\x -> print x >> return (x + 1)) (return 0) & StreamK.take 3 & StreamK.toList:}01[0,1,2] streamly-core&Fold a stream using the supplied left  and reducing the resulting expression strictly at each step. The behavior is similar to . A  can terminate early without consuming the full stream. See the documentation of individual s for termination behavior. Definitions:'fold f = fmap fst . StreamK.foldBreak f+fold f = StreamK.parseD (Parser.fromFold f)Example:StreamK.fold Fold.sum $ StreamK.fromStream $ Stream.enumerateFromTo 1 1005050 streamly-coreFold resulting in either breaking the stream or continuation of the fold. Instead of supplying the input stream in one go we can run the fold multiple times, each time supplying the next segment of the input stream. If the fold has not yet finished it returns a fold that can be run again otherwise it returns the fold result and the residual stream.Internal streamly-coreLike  but also returns the remaining stream. The resulting stream would be U( if the stream finished before the fold. streamly-coreGenerate streams from individual elements of a stream and fold the concatenation of those streams using the supplied fold. Return the result of the fold and residual stream.For example, this can be used to efficiently fold an Array Word8 stream using Word8 folds.Internal streamly-core/Extract the last element of the stream, if any. streamly-coreApply a monadic action to each element of the stream and discard the output of the action. streamly-coreLike Streamly.Data.Stream. but with one significant difference, this function observes exceptions from the consumer of the stream as well.You can also convert  to " and use exception handling from  module:handle f s = StreamK.fromStream $ Stream.handle (\e -> StreamK.toStream (f e)) (StreamK.toStream s) streamly-coreLike Streamly.Data.Stream. but with one significant difference, this function observes exceptions from the consumer of the stream as well. Therefore, it cleans up the resource promptly when the consumer encounters an exception.You can also convert  to ! and use resource handling from  module:bracketIO bef aft bet = StreamK.fromStream $ Stream.bracketIO bef aft (StreamK.toStream . bet) streamly-core Zipping of n streams can be performed by combining the streams pair wise using 2 with O(n * log n) time complexity. If used with ! it will have O(n^2) performance. streamly-core Merging of n streams can be performed by combining the streams pair wise using 5 to give O(n * log n) time complexity. If used with ! it will have O(n^2) performance. streamly-coreRun a Parser- over a stream and return rest of the Stream. streamly-core?A continuation to extract the result when a CPS parser is done. streamly-coreRun a   over a chunked 7 and return the parse result and the remaining Stream. 
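The pairwise combining scheme described above can be used for an n-way sorted merge. A minimal sketch, assuming the mergeMapWith, mergeBy and fromFoldable combinators are available from Streamly.Data.StreamK as documented in this section:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Data.StreamK as StreamK

-- Merge three pre-sorted streams pairwise. mergeMapWith combines the
-- streams two at a time, giving O(n * log n) for n streams as noted above.
mergedList :: IO [Int]
mergedList =
      Stream.fold Fold.toList
    $ StreamK.toStream
    $ StreamK.mergeMapWith (StreamK.mergeBy compare) id
    $ StreamK.fromFoldable
        [ StreamK.fromFoldable [1, 4, 7]
        , StreamK.fromFoldable [2, 5, 8]
        , StreamK.fromFoldable [3, 6, 9 :: Int]
        ]

-- mergedList ==> [1,2,3,4,5,6,7,8,9]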
streamly-core Similar to  but works on singular elements. streamly-coreRun a   over a  . Please use ) where possible, for better performance. streamly-core Similar to  but works on generic arrays streamly-core;Sort the input stream using a supplied comparison function."Sorting can be achieved by simply:sortBy cmp = StreamK.mergeMapWith (StreamK.mergeBy cmp) StreamK.fromPureHowever, this combinator uses a parser to first split the input stream into down and up sorted segments and then merges them to optimize sorting when pre-sorted sequences exist in the input stream. O(n) spaceV!(c) 2017 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789:!!$!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:    !(c) 2019 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789: !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:3! streamly-core$null arr = Array.byteLength arr == 0 Pre-release streamly-coreLike / but indexes the array in reverse from the end. Pre-release streamly-core"last arr = Array.getIndexRev arr 0 Pre-release streamly-core writeLastN n folds a maximum of n2 elements from the end of the input stream to an  . streamly-coreGiven a sorted array, perform a binary search to find the given element. Returns the index of the element if found. Unimplemented streamly-corePerform a linear search to find all the indices where a given element is present in an array. Unimplemented streamly-core Unimplemented streamly-coreO(1)! Slice an array in constant time.1Caution: The bounds of the slice are not checked.Unsafe Pre-release streamly-coreSplit the array into a stream of slices using a predicate. The element matching the predicate is dropped. Pre-release streamly-coreGenerate a stream of slices of specified length from an array, starting from the supplied array index. The last slice may be shorter than the requested length. Pre-release/ streamly-coreO(1)< Lookup the element at the given index. Index starts from 0. streamly-coreGiven a stream of array indices, read the elements on those indices from the supplied Array. An exception is thrown if an index is out of bounds.This is the most general operation. We can implement other operations in terms of this: read = let u = lmap (arr -> (0, length arr - 1)) Unfold.enumerateFromTo in Unfold.lmap f (indexReader arr) readRev = let i = length arr - 1 in Unfold.lmap f (indexReaderFromThenTo i (i - 1) 0)  Pre-release streamly-coreUnfolds (from, then, to, array) generating a finite stream whose first element is the array value from the index from and the successive elements are from the indices in increments of then up to to. Index enumeration can occur downwards or upwards depending on whether then comes before or after from. getIndicesFromThenTo = let f (from, next, to, arr) = (Stream.enumerateFromThenTo from next to, arr) in Unfold.lmap f getIndices  Unimplemented streamly-coreTransform an array into another array using a stream transformation operation. Pre-release streamly-core&Cast an array having elements of type a( into an array having elements of type b8. The array size must be a multiple of the size of type b otherwise accessing the last element of the array may result into a crash or a random value. Pre-release streamly-coreCast an Array a into an  Array Word8. 
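As an illustration of the array folds above, here is a minimal sketch that keeps only the last few elements of a stream, assuming writeLastN is exposed from Streamly.Data.Array with the behavior described here:

import qualified Streamly.Data.Array as Array
import qualified Streamly.Data.Stream as Stream

-- Keep only the last 3 elements of the input stream in an Array using
-- the writeLastN fold described above.
lastThree :: IO [Int]
lastThree =
    fmap Array.toList
        $ Stream.fold (Array.writeLastN 3)
        $ Stream.fromList [1 .. 10 :: Int]

-- expected: lastThree ==> [8,9,10]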
streamly-core&Cast an array having elements of type a( into an array having elements of type b. The length of the array should be a multiple of the size of the target element otherwise  is returned. streamly-coreConvert an array of any type into a null terminated CString Ptr. If the array is unpinned it is first converted to a pinned array which requires a copy.Unsafe(O(n) Time: (creates a copy of the array) Pre-release streamly-coreFold an array using a . Pre-release streamly-core,Fold an array using a stream fold operation. Pre-release streamly-coreProperties: 1. Identity: deserialize . serialize == id 2. Encoded equivalence: serialize a == serialize a streamly-coreSerialize a Haskell type to a pinned byte array. The array is allocated using pinned memory so that it can be used directly in OS APIs for writing to file or sending over the network.Properties: 1. Identity: #deserialize . pinnedSerialize == id 2. Encoded equivalence: &pinnedSerialize a == pinnedSerialize a streamly-coreDecode a Haskell type from a byte array containing its serialized representation. streamly-core4Insert the given element between arrays and flatten.-interpose x = Stream.interpose x Array.reader streamly-coreInsert the given element after each array and flatten. This is similar to unlines.9interposeSuffix x = Stream.interposeSuffix x Array.reader streamly-core4Insert the given array after each array and flatten.9intercalateSuffix = Stream.intercalateSuffix Array.reader streamly-core compactLE n coalesces adjacent arrays in the input stream only if the combined size would be less than or equal to n.Generates unpinned arrays irrespective of the pinning status of input arrays. streamly-corePinned version of . streamly-coreSplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array. streamly-coreLike  considers the separator in suffix position instead of infix position. streamly-core Fold a stream of arrays using a '. This is equivalent to the following:=foldChunks f = Stream.fold f . Stream.unfoldMany Array.reader streamly-core Fold a stream of arrays using a ! and return the remaining stream.The following alternative to this function allows composing the fold using the parser Monad: foldBreakStreamK f s = fmap (first (fromRight undefined)) $ StreamK.parseBreakChunks (ParserK.adaptC (Parser.fromFold f)) s We can compare perf and remove this one or define it in terms of that. streamly-core)Parse an array stream using the supplied ?. Returns the parse result and the unconsumed stream. Throws  if the parse fails.The following alternative to this function allows composing the parser using the parser Monad:parseBreakStreamK p = StreamK.parseBreakChunks (ParserK.adaptC p)We can compare perf and remove this one or define it in terms of that.Internal streamly-corestarting index streamly-corelength of the slice streamly-core from index streamly-corelength of the slice streamly-core from index streamly-corelength of the slice streamly-core from index streamly-corelength of the slice streamly-core from index streamly-corelength of the slice  %!(c) 2018 Composewell TechnologiesBSD3streamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:L5 streamly-coreRead a  ByteArray consisting of one or more bytes from a file handle. If no data is available on the handle it blocks until at least one byte becomes available. If any data is available then it immediately returns that data without blocking. 
As a result of this behavior, it may read less than or equal to the size requested. streamly-coreRead a  ByteArray consisting of exactly the specified number of bytes from a file handle. Unimplemented streamly-coregetChunksWith size h+ reads a stream of arrays from file handle h6. The maximum size of a single array is specified by size5. The actual size read may be less than or equal to size. streamly-corereadChunksWith size handle0 reads a stream of arrays from the file handle handle4. The maximum size of a single array is limited to size5. The actual size read may be less than or equal to size.readChunksWith size h = Stream.unfold Handle.chunkReaderWith (size, h) streamly-coreUnfold the tuple (bufsize, handle) into a stream of  arrays. Read requests to the IO device are performed using a buffer of size bufsize. The size of an array in the resulting stream is always less than or equal to bufsize. streamly-coreSame as  streamly-coreThe input to the unfold is (from, to, bufferSize, handle)%. It starts reading from the offset from( in the file and reads up to the offset to. streamly-coregetChunks handle reads a stream of arrays from the specified file handle. The maximum size of a single array is limited to defaultChunkSize5. The actual size read may be less than or equal to defaultChunkSize.6readChunks = Handle.readChunksWith IO.defaultChunkSize Pre-release streamly-core"Unfolds a handle into a stream of  arrays. Requests to the IO device are performed using a buffer of size f. The size of arrays in the resulting stream are therefore less than or equal to f.chunkReader = Unfold.first IO.defaultChunkSize Handle.chunkReaderWith streamly-coreUnfolds the tuple (bufsize, handle) into a byte stream, read requests to the IO device are performed using buffers of bufsize.?@ABCDEFGHI 3428567GHI=B?@A9CDEF:<;> 9!(c) 2023 Composewell Technologies BSD3-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789:d3?@GHI 3GHI?@ '!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:nT streamly-coreArray stream fold.2An array stream fold is basically an array stream Parser that does not fail. In case of array stream folds the count in ,  and  is a count of elements that includes the leftover element count in the array that is currently being processed by the parser. If none of the elements is consumed by the parser the count is at least the whole array length. If the whole array is consumed by the parser then the count will be 0. Pre-release streamly-coreConvert an element Fold into an array stream fold. Pre-release streamly-coreConvert an element  into an array stream fold. If the parser fails the fold would throw an exception. Pre-release streamly-coreConvert an element  into an array stream fold. If the parser fails the fold would throw an exception. Pre-release streamly-coreAdapt an array stream fold. Pre-release streamly-core/Map a monadic function on the output of a fold. Pre-release streamly-coreA fold that always yields a pure value without consuming any input. Pre-release streamly-coreA fold that always yields the result of an effectful action without consuming any input. Pre-release streamly-coreApplies two folds sequentially on the input stream and combines their results using the supplied function. Pre-release streamly-coreApplies two folds sequentially on the input stream and combines their results using the supplied function. 
Pre-release streamly-coreApplies a fold on the input stream, generates the next fold from the output of the previously applied fold and then applies that fold. Pre-release streamly-coreTake n array elements (a) from a stream of arrays (Array a). streamly-coreMonad instance applies folds sequentially. Next fold can depend on the output of the previous fold. See . (>>=) = flip concatMap streamly-core form of . > ( *) = splitWith id streamly-core(Maps a function over the result of fold. Pre-release  (!(c) 2019 Composewell Technologies BSD3-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789:x  streamly-coreCoalesce adjacent arrays in incoming stream to form bigger arrays of a maximum specified size in bytes. streamly-coreGiven a stream of arrays, splice them all together to generate a single array. The stream must be finite. streamly-coreSplit a stream of arrays on a given separator byte, dropping the separator and coalescing all the arrays between two separators into a single array. streamly-core(Fold an array stream using the supplied 5. Returns the fold result and the unconsumed stream. 6foldBreak f = runArrayFoldBreak (ChunkFold.fromFold f)Instead of using this we can adapt the fold to ParserK and use parseBreakChunks instead. ParserK allows composing using Monad as well. foldBreak f s = fmap (first (fromRight undefined)) $ K.parseBreakChunks (ParserK.adaptC (PR.fromFold f)) s We can compare perf and remove this one or define it in terms of that.Internal streamly-core)Parse an array stream using the supplied Parser?. Returns the parse result and the unconsumed stream. Throws  if the parse fails. 6> parseBreak p = K.parseBreakChunks (ParserK.adaptC p)This is redundant and we can just use parseBreakChunks, as ParserK can be composed using Monad. The only advantage of this is that we do not need to adapt.We can compare perf and remove this one or define it in terms of that.Internal streamly-core*Note that this is not the same as using a Parser (Array a) m b with the regular "Streamly.Internal.Data.IsStream.parse" function. The regular parse would consume the input arrays as single unit. This parser parses in the way as described in the ChunkFold module. The input arrays are treated as n element units and can be consumed partially. The remaining elements are inserted in the source stream as an array. streamly-core5Fold an array stream using the supplied array stream . Pre-release streamly-coreLike fold' but also returns the remaining stream. Pre-release streamly-core Apply an  repeatedly on an array stream and emit the fold outputs in the output stream.5See "Streamly.Data.Stream.foldMany" for more details. Pre-release  )!(c) 2023 Composewell Technologies BSD-3-Clausestreamly@composewell.com pre-releaseGHC Safe-Inferred" $%'.145789:y,  !(c) 2019 Composewell TechnologiesBSD3streamly@composewell.comreleasedGHC Safe-Inferred" $%'.145789:y      *!(c) 2020 Composewell Technologies BSD-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789: streamly-core'Decode a byte stream to a Haskell type. streamly-coreA value of type () is encoded as 0 in binary encoding.  0 ==> ()  Pre-release streamly-coreA value of type * is encoded as follows in binary encoding. 0 ==> False 1 ==> True  Pre-release streamly-coreA value of type * is encoded as follows in binary encoding. 0 ==> LT 1 ==> EQ 2 ==> GT  Pre-release streamly-coreAccept the input byte only if it is equal to the specified value. Pre-release streamly-coreAccept any byte. 
Pre-release streamly-coreBig endian (MSB first) Word16 streamly-coreParse two bytes as a , the first byte is the MSB of the Word16 and second byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word16 streamly-coreParse two bytes as a , the first byte is the LSB of the Word16 and second byte is the MSB (little endian representation). Pre-release streamly-coreBig endian (MSB first) Word32 streamly-coreParse four bytes as a , the first byte is the MSB of the Word32 and last byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word32 streamly-coreParse four bytes as a , the first byte is the MSB of the Word32 and last byte is the LSB (big endian representation). Pre-release streamly-coreBig endian (MSB first) Word64 streamly-coreParse eight bytes as a , the first byte is the MSB of the Word64 and last byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word64 streamly-coreParse eight bytes as a , the first byte is the MSB of the Word64 and last byte is the LSB (big endian representation). Pre-release streamly-coreParse two bytes as a , the first byte is the MSB of the Int16 and second byte is the LSB (big endian representation). Pre-release streamly-coreParse two bytes as a , the first byte is the LSB of the Int16 and second byte is the MSB (little endian representation). Pre-release streamly-coreParse four bytes as a , the first byte is the MSB of the Int32 and last byte is the LSB (big endian representation). Pre-release streamly-coreParse four bytes as a , the first byte is the MSB of the Int32 and last byte is the LSB (big endian representation). Pre-release streamly-coreParse eight bytes as a , the first byte is the MSB of the Int64 and last byte is the LSB (big endian representation). Pre-release streamly-coreParse eight bytes as a , the first byte is the MSB of the Int64 and last byte is the LSB (big endian representation). Pre-release streamly-coreAccept any byte. Pre-release streamly-coreParse eight bytes as a  in the host byte order. Pre-release+!(c) 2021 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789:  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-core#Match any character that satisfies  streamly-coreMatch a specific character. streamly-core)Match a specific character ignoring case. streamly-coreMatch the input with the supplied string and return it if successful. streamly-coreMatch the input with the supplied string and return it if successful. streamly-coreDrop zero or more white space characters. 
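The character parsers above compose through the Parser monad. A minimal sketch, assuming alpha, char, dropSpace and decimal are exposed from Streamly.Unicode.Parser, and some and ParseError from Streamly.Data.Parser, as documented in this section:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Parser as Parser
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Unicode.Parser as Unicode

-- Parse a "name = number" pair by sequencing the character parsers
-- described above via the Parser Monad.
keyValue :: Monad m => Parser.Parser Char m (String, Int)
keyValue = do
    key <- Parser.some Unicode.alpha Fold.toList
    Unicode.dropSpace
    _ <- Unicode.char '='
    Unicode.dropSpace
    value <- Unicode.decimal
    pure (key, value)

parseKV :: String -> IO (Either Parser.ParseError (String, Int))
parseKV = Stream.parse keyValue . Stream.fromList

-- parseKV "count = 42" ==> Right ("count",42)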
streamly-coreDrop one or more white space characters. streamly-core5Parse and decode an unsigned integral decimal number. streamly-coreParse and decode an unsigned integral hexadecimal number. The hex digits 'a' through 'f' may be upper or lower case.,Note: This parser does not accept a leading "0x" string. streamly-coreAllow an optional leading '+' or '-'# sign character before any parser. streamly-coreA generic parser for scientific notation of numbers. Returns (mantissa, exponent) tuple. The result can be mapped to * or any other number representation e.g.  Scientific.For example, using the  scientific package: >> parserScientific = uncurry Data.Scientific.scientific  $  streamly-coreA fast, custom parser for double precision flaoting point numbers. Returns (mantissa, exponent) tuple. This is much faster than . because it assumes the number will fit in a  type and uses # representation to store mantissa.Number larger than < may overflow. Int overflow is not checked in the exponent. streamly-coremkDouble mantissa exponent( converts a mantissa and exponent to a  value equivalent to mantissa * 10^exponent. It does not check for overflow, powers more than 308 will overflow. streamly-coreParse a decimal  value. This parser accepts an optional sign (+ or -) followed by at least one decimal digit. Decimal digits are optionally followed by a decimal point and at least one decimal digit after the point. This parser accepts the maximal valid input as long as it gives a valid number. Specifcally a trailing decimal point is allowed but not consumed. This function does not accept "NaN" or "Infinity" string representations of double values. Definition:4double = uncurry Unicode.mkDouble <$> Unicode.number Examples:1p = Stream.parse Unicode.double . Stream.fromListp "-1.23e-123"Right (-1.23e-123)Trailing input examples:p "1." Right 1.0 p "1.2.3" Right 1.2p "1e" Right 1.0 p "1e2.3" Right 100.0p "1+2" Right 1.0 Error cases:p ""Left (ParseError "number: expecting sign or decimal digit, got end of input")p ".1"Left (ParseError "number: expecting sign or decimal digit, got '.'")p "+"Left (ParseError "number: expecting decimal digit, got end of input")  ,(c) 2018 Composewell Technologies (c) Bjoern Hoehrmann 2008-2009 BSD-3-Clausestreamly@composewell.com experimentalGHCNone" $%'.145789:! streamly-coreDecode a stream of bytes to Unicode characters by mapping each byte to a corresponding Unicode  in 0-255 range. streamly-coreEncode a stream of Unicode characters to bytes by mapping each character to a byte in 0-255 range. Throws an error if the input stream contains characters beyond 255. streamly-coreLike  but silently maps input codepoints beyond 255 to arbitrary Latin1 chars in 0-255 range. No error or exception is thrown when such mapping occurs. streamly-coreLike + but drops the input characters beyond 255. streamly-coreSame as  streamly-coreWe do not want to garbage collect this and free the memory, we want to keep this persistent. We don't know how to do that with GHC without having a reference in some global structure. So we use a hack, use mallocBytes so that the GC has no way to free it. streamly-coreReturn element at the specified index without checking the bounds. and without touching the foreign ptr. streamly-core Pre-release streamly-core Pre-release streamly-coreDecode a UTF-8 encoded bytestream to a stream of Unicode characters. Any invalid codepoint encountered is replaced with the unicode replacement character. streamly-coreDecode a UTF-8 encoded bytestream to a stream of Unicode characters. 
The function throws an error if an invalid codepoint is encountered. streamly-coreDecode a UTF-8 encoded bytestream to a stream of Unicode characters. Any invalid codepoint encountered is dropped. streamly-coreSame as  streamly-coreDecode a UTF-16 little endian encoded bytestream to a stream of Unicode characters. The function throws an error if an invalid codepoint is encountered. Unimplemented streamly-coreLike  but for a chunked stream. It may be slightly faster than flattening the stream and then decoding with . streamly-coreLike 'decodeUtf8'' but for a chunked stream. It may be slightly faster than flattening the stream and then decoding with 'decodeUtf8''. streamly-coreLike  but for a chunked stream. It may be slightly faster than flattening the stream and then decoding with . streamly-coreEncode a stream of Unicode characters to a UTF-8 encoded bytestream. When any invalid character (U+D800-U+D8FF) is encountered in the input stream the function errors out. streamly-core-See section "3.9 Unicode Encoding Forms" in https://www.unicode.org/versions/Unicode13.0.0/UnicodeStandard-13.0.pdf streamly-coreEncode a stream of Unicode characters to a UTF-8 encoded bytestream. Any Invalid characters (U+D800-U+D8FF) in the input stream are replaced by the Unicode replacement character U+FFFD. streamly-coreEncode a stream of Unicode characters to a UTF-8 encoded bytestream. Any Invalid characters (U+D800-U+D8FF) in the input stream are dropped. streamly-coreSame as  streamly-coreEncode a stream of Unicode characters to a UTF-16 little endian encoded bytestream. Unimplemented streamly-core*Read UTF-8 encoded bytes as chars from an  until a 0 byte is encountered, the 0 byte is not included in the stream.Unsafe:/ The caller is responsible for safe addressing.Note that this is completely safe when reading from Haskell string literals because they are guaranteed to be NULL terminated:5Stream.fold Fold.toList (Unicode.fromStr# "Haskell"#) "Haskell" streamly-coreEncode a container to  Array Word8 provided an unfold to covert it to a Char stream and an encoding function.Internal streamly-coreEncode a stream of container objects using the supplied encoding scheme. Each object is encoded as an  Array Word8.Internal streamly-coreEncode a stream of  using the supplied encoding scheme. Each string is encoded as an  Array Word8. streamly-core(Remove leading whitespace from a string. $stripHead = Stream.dropWhile isSpace Pre-release streamly-core0Fold each line of the stream using the supplied  and stream the result.Stream.fold Fold.toList $ Unicode.lines Fold.toList (Stream.fromList "lines\nthis\nstring\n\n\n")["lines","this","string","",""] &lines = Stream.splitOnSuffix (== '\n') Pre-release streamly-core,Code copied from base/Data.Char to INLINE it streamly-core0Fold each word of the stream using the supplied  and stream the result.Stream.fold Fold.toList $ Unicode.words Fold.toList (Stream.fromList "fold these words")["fold","these","words"] words = Stream.wordsBy isSpace Pre-release streamly-core8Unfold a stream to character streams using the supplied 7 and concat the results suffixing a newline character \n to each stream. unlines = Stream.interposeSuffix 'n' unlines = Stream.intercalateSuffix Unfold.fromList "n"  Pre-release streamly-coreUnfold the elements of a stream to character streams using the supplied  and concat the results with a whitespace character infixed between the streams. 
unwords = Stream.interpose ' ' unwords = Stream.intercalate Unfold.fromList " "  Pre-release00-!(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789: streamly-coreBreak a string up into a stream of strings at newline characters. The resulting strings do not contain newlines. lines = S.lines A.writeStream.fold Fold.toList $ Unicode.lines $ Stream.fromList "lines\nthis\nstring\n\n\n"[fromList "lines",fromList "this",fromList "string",fromList "",fromList ""] streamly-coreBreak a string up into a stream of strings, which were delimited by characters representing white space. words = S.words A.writeStream.fold Fold.toList $ Unicode.words $ Stream.fromList "A newline\nis considered white space?"[fromList "A",fromList "newline",fromList "is",fromList "considered",fromList "white",fromList "space?"] streamly-coreFlattens the stream of  Array Char8, after appending a terminating newline to each string. is an inverse operation to .Stream.fold Fold.toList $ Unicode.unlines $ Stream.fromList ["lines", "this", "string"]"lines\nthis\nstring\n" unlines = S.unlines A.readNote that, in general unlines . lines /= id streamly-coreFlattens the stream of  Array Char5, after appending a separating space to each string. is an inverse operation to .Stream.fold Fold.toList $ Unicode.unwords $ Stream.fromList ["unwords", "this", "string"]"unwords this string" unwords = S.unwords A.readNote that, in general unwords . words /= id.!(c) 2023 Composewell TechnologiesBSD3streamly@composewell.comGHC Safe-Inferred$ $%'.145789:-( streamly-core A member of & knows how to convert to and from the  type. streamly-coreLike & but does not check the properties of . Provides performance and simplicity when we know that the properties of the path are already verified, for example, when we get the path from the file system or the OS APIs. streamly-coreConvert a raw  to other forms of well-typed paths. It may fail if the path does not satisfy the properties of the target type.Path components may have limits. Total path length may have a limit. streamly-core#Convert a well-typed path to a raw . Never fails. streamly-core#A type representing relative paths. streamly-core#A type representing absolute paths. streamly-core%A type representing a directory path. streamly-core A type representing a file path. streamly-core?A type representing file system paths for directories or files. streamly-core%Exceptions thrown by path operations. streamly-coreConvert a path type to another path type. This operation may fail with a  when converting a less restrictive path type to a more restrictive one. streamly-coreUnsafe: On Posix, a path cannot contain null characters. On Windows, the array passed must be a multiple of 2 bytes as the underlying representation uses Word16. streamly-coreOn Posix it may fail if the byte array contains null characters. On Windows the array passed must be a multiple of 2 bytes as the underlying representation uses Word16.Throws . streamly-coreConvert  to an array of bytes. streamly-core Encode a Unicode char stream to  using strict UTF-8 encoding on Posix. On Posix it may fail if the stream contains null characters. TBD: Use UTF16LE on Windows. streamly-coreDecode the path to a stream of Unicode chars using strict UTF-8 decoding on Posix. TBD: Use UTF16LE on Windows. streamly-coreEncode a Unicode string to  using strict UTF-8 encoding on Posix. On Posix it may fail if the stream contains null characters. TBD: Use UTF16LE on Windows. 
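The path encoding entries above rely on the strict UTF-8 encoding and decoding described earlier in this section. A minimal round-trip sketch, assuming the public Streamly.Unicode.Stream module exposes encodeUtf8 and decodeUtf8 as documented:

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream
import qualified Streamly.Unicode.Stream as Unicode

-- Round trip a String through its UTF-8 byte representation using the
-- encodeUtf8 and decodeUtf8 combinators described earlier.
utf8RoundTrip :: String -> IO String
utf8RoundTrip =
      Stream.fold Fold.toList
    . Unicode.decodeUtf8
    . Unicode.encodeUtf8
    . Stream.fromList

-- utf8RoundTrip "streamly" ==> "streamly"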
streamly-coreDecode the path to a Unicode string using strict UTF-8 decoding on Posix. TBD: Use UTF16LE on Windows. streamly-core Generates a * type from an interpolated string literal. Unimplemented streamly-core Generates an Abs Path* type from an interpolated string literal. Unimplemented streamly-core Generates an Rel Path* type from an interpolated string literal. Unimplemented streamly-core Generates an Dir Path* type from an interpolated string literal. Unimplemented streamly-core Generates an  File Path* type from an interpolated string literal. Unimplemented streamly-core Generates an Abs (Dir Path)* type from an interpolated string literal. Unimplemented streamly-core Generates an Rel (Dir Path)* type from an interpolated string literal. Unimplemented streamly-core Generates an Abs (File Path)* type from an interpolated string literal. Unimplemented streamly-core Generates an Rel (File Path)* type from an interpolated string literal. Unimplemented streamly-core Generates a  type. Unimplemented streamly-core Generates an Abs Path type. Unimplemented streamly-core Generates an Rel Path type. Unimplemented streamly-core Generates an Dir Path type. Unimplemented streamly-core Generates an  File Path type. Unimplemented streamly-core Generates an Abs (Dir Path) type. Unimplemented streamly-core Generates an Rel (Dir Path) type. Unimplemented streamly-core Generates an Abs (File Path) type. Unimplemented streamly-core Generates an Rel (File Path) type. Unimplemented streamly-core"Primary path separator character, / on Posix and \ on Windows. Windows supports / too as a separator. Please use , for testing if a char is a separator char. streamly-coreOn Posix only /8 is a path separator but in windows it could be either / or \. streamly-coreLike  but for the less restrictive 6 type which will always create a syntactically valid  type but it may not be semantically valid because we may append an absolute path or we may append to a file path. The onus lies on the user to ensure that the first path is not a file and the second path is not absolute. streamly-coreExtend a directory path by appending a relative path to it. This is the equivalent to the  / operator from the filepath package.((/!(c) 2022 Composewell Technologies BSD-3-Clausestreamly@composewell.comGHC Safe-Inferred" $%'.145789: streamly-core(Convert a Haskell type to a byte stream. streamly-coreA value of type () is encoded as 0 in binary encoding.  0 ==> ()  Pre-release streamly-coreA value of type * is encoded as follows in binary encoding. 0 ==> False 1 ==> True  Pre-release streamly-coreA value of type * is encoded as follows in binary encoding. 0 ==> LT 1 ==> EQ 2 ==> GT  Pre-release streamly-core Stream a . Pre-release streamly-core Stream a  as two bytes, the first byte is the MSB of the Word16 and second byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word16 streamly-core Stream a  as two bytes, the first byte is the LSB of the Word16 and second byte is the MSB (little endian representation). Pre-release streamly-coreBig endian (MSB first) Word32 streamly-core Stream a  as four bytes, the first byte is the MSB of the Word32 and last byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word32 streamly-core Stream a  as four bytes, the first byte is the MSB of the Word32 and last byte is the LSB (big endian representation). 
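To make the big-endian layout concrete, here is a small standalone helper; it is a hypothetical illustration only (word32beBytes is not a streamly-core function) showing the byte order the Word32 encoder above is documented to produce:

import Data.Bits (shiftR, (.&.))
import Data.Word (Word32, Word8)

-- Hypothetical helper: the most significant byte is emitted first,
-- matching the big-endian (MSB first) description above.
word32beBytes :: Word32 -> [Word8]
word32beBytes w =
    [ fromIntegral (w `shiftR` 24)
    , fromIntegral ((w `shiftR` 16) .&. 0xff)
    , fromIntegral ((w `shiftR` 8) .&. 0xff)
    , fromIntegral (w .&. 0xff)
    ]

-- word32beBytes 0x01020304 ==> [1,2,3,4]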
Pre-release streamly-coreBig endian (MSB first) Word64 streamly-core Stream a  as eight bytes, the first byte is the MSB of the Word64 and last byte is the LSB (big endian representation). Pre-release streamly-core Little endian (LSB first) Word64 streamly-core Stream a  as eight bytes, the first byte is the MSB of the Word64 and last byte is the LSB (big endian representation). Pre-release streamly-core Stream a  as two bytes, the first byte is the MSB of the Int16 and second byte is the LSB (big endian representation). Pre-release streamly-core Stream a  as two bytes, the first byte is the LSB of the Int16 and second byte is the MSB (little endian representation). Pre-release streamly-core Stream a  as four bytes, the first byte is the MSB of the Int32 and last byte is the LSB (big endian representation). Pre-release streamly-core Stream a  as eight bytes, the first byte is the MSB of the Int64 and last byte is the LSB (big endian representation). Pre-release streamly-core Stream a  as eight bytes, the first byte is the LSB of the Int64 and last byte is the MSB (little endian representation). Pre-release streamly-coreBig endian (MSB first) Float streamly-coreLittle endian (LSB first) Float streamly-coreBig endian (MSB first) Double streamly-core Little endian (LSB first) Double streamly-core=Encode a Unicode character to stream of bytes in 0-255 range. streamly-core Stream a ' as eight bytes in the host byte order. Pre-release0!(c) 2018 Composewell Technologies BSD-3-Clausestreamly@composewell.com experimentalGHC Safe-Inferred" $%'.145789: streamly-core'Unfold standard input into a stream of . streamly-core'Read a byte stream from standard input. =read = Handle.read stdin read = Stream.unfold Stdio.reader () Pre-release streamly-core9Read a character stream from Utf8 encoded standard input. )readChars = Unicode.decodeUtf8 Stdio.read Pre-release streamly-core(Unfolds standard input into a stream of  arrays. streamly-coreRead a stream of chunks from standard input. The maximum size of a single chunk is limited to defaultChunkSize). The actual size read may be less than defaultChunkSize. readChunks = Handle.readChunks stdin readChunks = Stream.unfold Stdio.chunkReader () Pre-release streamly-coreFold a stream of  to standard output. streamly-coreFold a stream of  to standard error. streamly-core+Write a stream of bytes to standard output. putBytes = Handle.putBytes stdout putBytes = Stream.fold Stdio.write Pre-release streamly-coreEncode a character stream to Utf8 and write it to standard output. .putChars = Stdio.putBytes . Unicode.encodeUtf8 Pre-release streamly-coreFold a stream of  Array Word8 to standard output. streamly-coreFold a stream of  Array Word8 to standard error. streamly-core,Write a stream of chunks to standard output. putChunks = Handle.putChunks stdout putChunks = Stream.fold Stdio.writeChunks Pre-release streamly-coreWrite a stream of strings to standard output using the supplied encoding. Output is flushed to the device for each string. Pre-release streamly-coreWrite a stream of strings to standard output using UTF8 encoding. Output is flushed to the device for each string. Pre-release streamly-coreLike . but adds a newline at the end of each string.XXX This is not portable, on Windows we need to use "rn" instead. 
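Putting the standard input and output combinators above together, a minimal sketch assuming Stdio.reader and Stdio.write are exposed from Streamly.Console.Stdio as documented:

import qualified Streamly.Console.Stdio as Stdio
import qualified Streamly.Data.Stream as Stream

-- A tiny "cat": copy standard input to standard output byte by byte by
-- connecting the Stdio.reader unfold to the Stdio.write fold described
-- above. The chunked variants (chunkReader/writeChunks) would be the
-- faster choice for real use.
cat :: IO ()
cat = Stream.fold Stdio.write (Stream.unfold Stdio.reader ())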
Pre-release
(c) 2022 Composewell Technologies  BSD-3-Clause  streamly@composewell.com  experimental  GHC Safe-Inferred
streamly-core A QuasiQuoter that treats the input as a string literal:

[str|x|]
"x"

Any #{symbol} is replaced by the value of the Haskell symbol symbol which is in scope:

x = "hello"
[str|#{x} world!|]
"hello world!"

## means a literal # without the special meaning for referencing Haskell symbols:

[str|##{x} world!|]
"#{x} world!"

A # at the end of a line means the line continues to the next line without introducing a newline character:

:{
[str|hello #
world!|]
:}
"hello world!"

Bugs: because of a bug in parsers, a lone # at the end of input gets removed.
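A usage sketch for the quasiquoter above, assuming str is exposed from Streamly.Unicode.String; the greeting helper is purely illustrative:

{-# LANGUAGE QuasiQuotes #-}

import Streamly.Unicode.String (str)

-- Interpolate the in-scope symbol "name" into a string literal using
-- the #{symbol} syntax described above.
greeting :: String -> String
greeting name = [str|Hello #{name}, welcome to streamly!|]

-- greeting "world" ==> "Hello world, welcome to streamly!"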
enumerateFromThenIntegralBoundedenumerateFromToIntegralBounded"enumerateFromThenToIntegralBoundedenumerateFromFractionalenumerateFromThenFractionalenumerateFromToFractionalenumerateFromThenToFractionalenumerateFromToSmallenumerateFromThenToSmallenumerateFromSmallBoundedenumerateFromThenSmallBounded CrossStreamFoldMany FoldManyStart FoldManyFirst FoldManyLoop FoldManyYield FoldManyDone FoldManyPostFoldManyPostStartFoldManyPostLoopFoldManyPostYieldFoldManyPostDoneConcatMapUStateConcatMapUOuterConcatMapUInnerUnStreamunfold fromStreamK foldEither foldBreakfold foldAddLazyfoldAddfoldrMxeqBycmpBy takeEndByM unfoldManyconcatIterateScanconcatIterateBfsRevconcatIterateBfsconcatIterateDfsunfoldIterateDfsunfoldIterateBfsRevunfoldIterateBfsreduceIterateBfsfoldIterateBfs foldManyPostrefoldIterateM indexOnSuffix sliceOnSuffix discardFirst discardSecondswapeitherscanManyscan postscanlM' fromStreamD fromStream fromListMfromPtr replicateMrepeatMiterateM fromIndicesMdrop dropWhileM dropWhile gbracket_ gbracketIOafter_afterIO onExceptionfinally_ finallyIObracket_ runReaderT usingReaderT runStateT usingStateT liftInnerWith runInnerWithrunInnerWithStategbracket afterUnsafe bracketUnsafe bracketIO3 finallyUnsafeghandleIORefnewIORef writeIORef readIORef modifyIORef' pollIntIORefTimer asyncClock readClocktimer waitTimer resetTimer extendTimer shortenTimer readTimerunsafeInlineIObyteArrayOverheadarrayPayloadSize replicate enumerate enumerateToenumerateFromBounded timesWith absTimesWith relTimesWithtimesabsTimesrelTimes durationstimeout fromIndices generateMgeneratefromPtrN fromByteStr#fromFold fromFoldMaybepeekeofnextmaybesatisfyoneoneEqoneNotEqoneOfnoneOf takeBetweentakeEQtakeGE takeWhileP takeWhile1takeFramedByGenericblockWithQuotes takeEndByEsctakeEitherSepBy takeStartBy takeStartBy_takeFramedByEsc_ takeFramedBy_wordBy wordFramedBywordWithQuoteswordKeepQuoteswordProcessQuotesgroupBygroupByRollinggroupByRollingEitherlistEqBy streamEqBylistEq subsequenceByzipindexedmakeIndexFiltersampleFromthenspanspanBy spanByRollingtakeP lookAheaddeintercalateAll deintercalatedeintercalate1sepBysepByAllsepBy1 roundRobinsequencemanyP countBetweencount manyTillPmanyTillmanyThen retryMaxTotalretryMaxSuccessiveretrySourcesourceunreadisEmptyproducerparse parseManyD parseManysimplifyMutArray arrContents#arrStartarrLen arrTrueLenemptyOfputIndexUnsafeputIndex putIndicesmodifyIndexUnsafe modifyIndexrealloc snocUnsafesnocWithuninitgetIndexUnsafeWithgetIndexUnsafegetIndexgetSliceUnsafegetSlicereadRevunsafeCreateOf writeNUnsafecreateOfwriteN createWith writeWithcreatewrite fromStreamN fromListNfromPureStreamchunksOf producerWithreaderclonecmpeqstripRingringArrringHeadringMax createRing writeLastNunsafeInsertRingWithseek toMutArraycopyToMutArray toStreamWith SpliceState SpliceInitialSpliceBufferingSpliceYielding SpliceFinish ArrayUnsafe arrContentsarrEndarrBoundc_memchrmemcpymemcmp newArrayWithpinnedNewBytespinnedNewAligned pinnedEmptyOf modifyIndicesmodifyunsafeSwapIndices swapIndices blockSizeroundUpToPower2allocBytesToElemCountarrayChunkBytesgrowresizegrowExp resizeExp rightSizesnocMay snocLinearpokeSkipUnsafe pokeAppendMay pokeAppendpeekUnconsUnsafepeekSkipUnsafe peekUncons getIndexRevindexReaderWithgetIndicesWith indexReader getIndicespermute partitionBy shuffleBydivideBy byteLength byteCapacity bytesFreepinnedChunksOf concatWith flattenArrays concatRevWith concatRevflattenArraysRev readerRevWith readerRev toStreamKWithtoStreamRevWithtoStreamKRevWith unsafeAppendNwriteAppendNUnsafeappendN writeAppendN appendWithwriteAppendWith 
writeAppendunsafeCreateOfWithwriteNWithUnsafeunsafePinnedCreateOfpinnedWriteNUnsafe createOfWith writeNWithpinnedCreateOf pinnedWriteN revCreateOf writeRevNpinnedWriteNAligned buildChunks writeChunks pinnedCreate pinnedWrite fromStreamDNpinnedFromListN fromListRevNfromPureStreamNfromChunksRealloced fromChunksKfromArrayStreamKpinnedFromList fromListRev pinnedClone spliceCopy spliceUnsafe spliceWithsplice spliceExpsplitOnbreakOnsplitAt castUnsafeasBytescastbyteCmpbyteEq pCompactLEpPinnedCompactLE compactLeAs fCompactGEfPinnedCompactGE lCompactGElPinnedCompactGE compactGE compactEQbubble unsafeFreezeunsafeFreezeWithShrink unsafeThaw bufferChunks unsafeIndexIO unsafeIndex readerUnsafe toStreamD toStreamDRevtoStream toStreamRevunsafeMakePure fromByteStr fromChunks ringStart ringBoundstartOfnewRingadvancemoveBy fromArray unsafeInsertslideringsOfunsafeEqArrayN unsafeEqArrayunsafeFoldRingunsafeFoldRingMunsafeFoldRingFullMunsafeFoldRingNMslidingWindowWith slidingWindow windowLmap cumulativewindowRollingMapMwindowRollingMap windowSumInt windowSum windowLengthwindowPowerSumwindowPowerSumFrac windowRange windowMinimum windowMaximum windowMeandrive addStream mapMaybeMmapMaybetracingtrace transformdeleteByslide2uniqByuniqprunerepeated drainMapMlatestlastthesumproduct maximumBy minimumBymeanvariancestdDevrollingHashWithSalt defaultSalt rollingHashrollingHashFirstN rollingMapMmconcatfoldMapfoldMapM toListRevdrainN indexGenericindexheadfindMfindlookup findIndex findIndices elemIndices elemIndexanyelemallnotElemandor takingEndByM takingEndBy takingEndByM_ takingEndBy_droppingWhileM droppingWhile takeEndBySeq takeEndBySeq_ distribute partitionByMpartitionByFstMpartitionByMinM partition unzipWithM unzipWithFstM unzipWithMinM unzipWithunzipzipStreamWithM zipStream indexingWithindexing indexingRevwithconcatSequence chunksBetweenbottomBytopBytopbottomintersperseWithQuotestoSettoIntSetnubnubInt countDistinctcountDistinctInt demuxGenericdemuxdemuxGenericIOdemuxIOdemuxToContainer demuxToMapdemuxToContainerIO demuxToMapIOdemuxKvToContainer demuxKvToMapclassifyGenericclassifyclassifyGenericIO classifyIO toContainertoMap toContainerIOtoMapIOkvToMap frequencytaptapOffsetEverytrace_ prescanlM' prescanl' postscanlMx' postscanlx'scanlMx'scanlx' postscanl'postscanlMAfter' postscanlM postscanlscanlM' scanlMAfter'scanl'scanlMscanlscanl1Mscanl1scanl1M'scanl1' takeWhileLasttakeWhileArounddropLast dropWhileLastdropWhileAroundinsertBy intersperseM intersperse intersperseM_intersperseMWithintersperseMSuffixintersperseMSuffix_intersperseMSuffixWithintersperseMPrefix_delay delayPostdelayPre reverseUnbox reassembleByindexedR timestampWith timestamped timeIndexWith timeIndexedslicesBy rollingMap rollingMap2ConcatUnfoldInterleaveStateConcatUnfoldInterleaveOuterConcatUnfoldInterleaveInnerConcatUnfoldInterleaveInnerLConcatUnfoldInterleaveInnerRInterleaveStateInterleaveFirstInterleaveSecondInterleaveSecondOnlyInterleaveFirstOnly AppendState AppendFirst AppendSecondinterleaveFstSuffixmergeByM mergeMinBy mergeFstByintersectBySortedunfoldInterleaveunfoldRoundRobininterposeSuffixMinterposeSuffix interposeM interposegintercalateSuffix gintercalateintercalateSuffix intercalate foldSequence parseSequence parseManyTill parseIterateD parseIterate groupsWhilegroupsBygroupsRollingBywordsBy splitOnSeqsplitOnSuffixSeqsplitOnSuffixSeqAny splitOnPrefix splitOnAny splitInnerBysplitInnerBySuffix dropPrefix dropInfix dropSuffixstrideFromThenjoinInnerAscBy 
joinLeftAscByjoinOuterAscByfilterInStreamGenericByfilterInStreamAscBydeleteInStreamGenericBydeleteInStreamAscByunionWithStreamGenericByunionWithStreamAscBysliceIndexerFromLengenSlicesFromLen slicerFromLengetSlicesFromLen compactLEpinnedCompactLE compactOnBytecompactOnByteSuffix Serialize addSizeTo deserializeAt serializeAt TypeOfTypeUnitTypeTheType MultiType SimpleDataConFieldSerializeConfig cfgInlineSizecfgInlineSerializecfgInlineDeserializecfgConstructorTagAsStringcfgRecordSyntaxWithHeaderinlineAddSizeToinlineSerializeAtinlineDeserializeAtencodeConstrNamesencodeRecordFieldsserializeConfig_x_acc_arr_tag_initialOffset _endOffset_val mkFieldNamemakeImakeNmakeAopenConstructormatchConstructorsimplifyDataCon typeOfType isUnitTypeisRecordSyntaxint_w8int_w32w8_intw32_intc2w wListToStringxorCmpserializeW8List litIntegrallitProxyerrorUnsupportederrorUnimplementedmkDeserializeExprOnemkSerializeExprFieldsmkRecSizeOfExprmkRecSerializeExprconUpdateFuncDecmkDeserializeKeysDecmkRecDeserializeExpr newPinnedfoldr1parseD parseBreakD parseBreakheadElse!!mapM_ isPrefixOfisSubsequenceOf stripPrefix isInfixOf isSuffixOfisSuffixOfUnbox stripSuffixstripSuffixUnbox streamFold $fReadArray $fShowArray $fOrdArray $fEqArray joinInnerjoinLeftGenericjoinLeftjoinOuterGeneric joinOuterpackArraysChunksOflpackArraysChunksOfcompact eitherReadereitherReaderPaths fileReader dirReader readEitherreadEitherPathstoEither readFilestoFilesreadDirstoDirsMkParser runParser ParseResultSuccessFailureInputNoneChunkadaptCadaptadaptCG foldConcathoist parseDBreakparseBreakChunks parseChunksparseBreakChunksGenericparseChunksGenericsortBy binarySearch indexFinder findIndicesOfindexReaderFromThenTostreamTransformasCStringUnsafeencodeAs serializepinnedSerialize deserializefoldBreakChunks foldChunksfoldBreakChunksKparseBreakChunksKgetChunk getChunkOfreadChunksWithchunkReaderWithreadChunksWithBufferOfchunkReaderFromToWith readChunks chunkReader readerWithreadWithBufferOfreadWithputChunk putChunks putChunksWith putBytesWithputBytes chunkWriterwriteChunksWithwriteChunksWithBufferOfwriteWithBufferOfwriteMaybesWith writerWithwriterwithFilewriteAppendArraytoChunksWithBufferOftoChunksreadChunksFromToWithtoBytes fromBytesWithfromBytesWithBufferOf fromByteswriteAppendChunks$fSerializeInteger$fSerializeProxy$fSerializeEither$fSerializeMaybe$fSerializeLiftedInteger ChunkFold fromParserD fromParser adaptFold$fMonadChunkFold$fApplicativeChunkFold$fFunctorChunkFoldunlinestoArray splitOnSuffix foldBreakDrunArrayParserDBreak runArrayFoldrunArrayFoldBreakrunArrayFoldMany FromBytesunitboolorderingeqWord8word8word16beword16leword32beword32leword64beword64leint8int16beint16leint32beint32leint64beint64le float32be float32le double64be double64le charLatin1 word64hostspacelowerupperalphaalphaNum printabledigitoctDigithexDigitlettermarknumeric punctuationsymbol separatorasciilatin1 asciiUpper asciiLowercharcharIgnoreCasestringstringIgnoreCase dropSpace dropSpace1decimal hexadecimalsignednumber doubleParsermkDoubledoubleCodingFailureModeTransliterateCodingFailureErrorOnCodingFailureDropOnCodingFailure DecodeError DecodeState CodePoint decodeLatin1 encodeLatin1' encodeLatin1 encodeLatin1_encodeLatin1LaxresumeDecodeUtf8EitherDdecodeUtf8EitherDdecodeUtf8EitherresumeDecodeUtf8EitherwriteCharUtf8'parseCharUtf8With decodeUtf8D decodeUtf8 decodeUtf8D' decodeUtf8' decodeUtf8D_ decodeUtf8_ decodeUtf8LaxdecodeUtf16le'decodeUtf8ChunksdecodeUtf8Chunks'decodeUtf8Chunks_ readCharUtf8' encodeUtf8D' encodeUtf8' readCharUtf8 encodeUtf8D encodeUtf8 readCharUtf8_ encodeUtf8D_ 
encodeUtf8_ encodeUtf8LaxencodeUtf16le'fromStr# encodeStrings stripHeadlineswordsunwords$fShowCodingFailureMode$fShowDecodeErrorIsPathfromPathUnsafefromPathtoPathRelAbsDirFilePath adaptPathfromChunkUnsafe fromChunktoChunk fromCharstoChars fromStringtoStringpathabsreldirfileabsdirreldirabsfilerelfilemkPathmkAbsmkRelmkDirmkFilemkAbsDirmkRelDir mkAbsFile mkRelFileprimarySeparator isSeparator extendPath extendDir$fExceptionPathException $fIsPathRel $fIsPathRel0 $fIsPathAbs $fIsPathAbs0 $fIsPathRel1 $fIsPathAbs1 $fIsPathDir $fIsPathFile $fIsPathPath$fShowPathException$fEqPathExceptionToBytescharUtf8 readCharswriteErrputCharswriteErrChunksputStringsWith putStrings putStringsLnstr$fShowStrSegment$fEqStrSegment sequenceWithbase Data.EitherEitherghc-prim GHC.TypesTrueFalse $fFunctorStepGHC.BasefmapData.Bifunctor$fBifunctorStep Data.IORef mkWeakIORefrunFinalizerGC GHC.MaybeMaybeMonad$fFunctorProducerGHC.IntInt64 ghc-bignumGHC.Num.IntegerIntegerYieldKStopK Data.Foldable foldrSWith sharedMWithLeftNothingFoldableGHC.NumNum*>JustRight$fApplicativeFold Applicative $fFunctorFold Alternative$fFunctorInitial$fBifunctorInitial$fMonadIOParser$fMonadFailParser $fMonadParser$fAlternativeParser$fApplicativeParser$fFunctorParser SemigroupMonoid GHC.FloatFloatingGHC.Real Fractional $fFloatingTee$fFractionalTee$fNumTee $fMonoidTee<>$fSemigroupTee$fApplicativeTee<*>template-haskellLanguage.Haskell.TH.SyntaxConDataDelimTV TyVarBndrPlainTVKindedTVtvName tyVarBndrNameName conToDataConsGadtCRecGadtC$fFunctorUnfoldGHC.EnumEnummaxBoundBoundedoddData.Functor.IdentityIdentityData.Traversable indexerBytransformers-0.5.6.2Control.Monad.Trans.ReaderReaderT Control.Monad.Trans.State.StrictStateT_bracket_handle GHC.Conc.SyncThreadIdIntegralenumFrom enumFromThen enumFromToenumFromThenToInttoEnumminBoundForeign.StorableStorableGHC.PtrPtrGHC.PrimAddr#GHC.List.putIndexUnsafeWith MutableArray#arrayChunkSizebytesToElemCountlargeObjectThresholdroundUpLargeArrayallocBytesToBytes roundDownTo reallocWith snocNewEndpokeAppendUnsafe_chunksOfRangearrayStreamKFromStreamDAswriteRevNWithUnsafe writeRevNWith fromChunkskAs unsafeSplitAt$fSemigroupArraymappendmemptykvToMapOverwriteGenericsleepICALFirstInnerICALSecondInnerInterposeFirstInnerInterposeSecondYieldInterposeSuffixFirstInnerfilterStreamWith_compactOnByteCustomcompactLEParserD compactGEFold$fFunctorParseResult$fMonadPlusParserKmzeromplus<|>$fAlternativeParserK$fMonadParserK$fApplicativeParserK$fFunctorParserK parserDone_getChunksWithGHC.WordWord8GHC.IO.Handle.TypesHandleGHC.IO.StdHandlesopenFile usingFileGHC.IOFilePath GHC.IO.IOModeReadModeBoolOrdering word16beDWord16 word16leD word32beDWord32 word32leD word64beDWord64 word64leDInt16Int32 GHC.UnicodeisSpaceisLowerisUpperisAlpha isAlphaNumisPrintisDigit isOctDigit isHexDigit Data.CharisLetterisMarkisNumber isPunctuationisSymbolisAsciiisLatin1 isAsciiUpper isAsciiLowerDoubleCharutf8dunsafePeekElemOff encodeObject encodeObjectsString PathException InvalidPath
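To show where the public (non-Internal) modules listed above fit together, here is a minimal sketch of driving a stream with a fold via Streamly.Data.Stream and Streamly.Data.Fold. It assumes the streamly-core 0.2.x API (Stream.fromList, Stream.fold, Fold.sum); the index above lists these modules and names but does not spell out their signatures:

module Main (main) where

import qualified Streamly.Data.Fold as Fold
import qualified Streamly.Data.Stream as Stream

main :: IO ()
main = do
    -- Build a stream from a list and drive it with a summing fold.
    total <- Stream.fold Fold.sum (Stream.fromList [1 .. 10 :: Int])
    print total -- 55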