Copyright | (c) 2006-2008 Duncan Coutts |
License | BSD-style |
Maintainer | duncan@haskell.org |
Stability | provisional |
Portability | portable (H98 + FFI) |
Safe Haskell | Safe-Inferred |
Language | Haskell2010 |
Pure stream-based interface to the lower-level bzlib wrapper
Synopsis
- compress :: CompressParams -> ByteString -> ByteString
- decompress :: DecompressParams -> ByteString -> ByteString
- data CompressStream m
    = CompressInputRequired {
        compressSupplyInput :: ByteString -> m (CompressStream m)
      }
    | CompressOutputAvailable {
        compressOutput :: !ByteString,
        compressNext :: m (CompressStream m)
      }
    | CompressStreamEnd
- compressST :: CompressParams -> CompressStream (ST s)
- compressIO :: CompressParams -> CompressStream IO
- foldCompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> m a -> CompressStream m -> m a
- foldCompressStreamWithInput :: (ByteString -> a -> a) -> a -> (forall s. CompressStream (ST s)) -> ByteString -> a
- data DecompressStream m
    = DecompressInputRequired {
        decompressSupplyInput :: ByteString -> m (DecompressStream m)
      }
    | DecompressOutputAvailable {
        decompressOutput :: !ByteString,
        decompressNext :: m (DecompressStream m)
      }
    | DecompressStreamEnd { }
    | DecompressStreamError {
        decompressStreamError :: DecompressError
      }
- decompressST :: DecompressParams -> DecompressStream (ST s)
- decompressIO :: DecompressParams -> DecompressStream IO
- foldDecompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> (ByteString -> m a) -> (DecompressError -> m a) -> DecompressStream m -> m a
- foldDecompressStreamWithInput :: (ByteString -> a -> a) -> (ByteString -> a) -> (DecompressError -> a) -> (forall s. DecompressStream (ST s)) -> ByteString -> a
- data CompressParams = CompressParams {}
- defaultCompressParams :: CompressParams
- data DecompressParams = DecompressParams {}
- defaultDecompressParams :: DecompressParams
- data BlockSize
- data WorkFactor
- data MemoryLevel
Pure interface
compress :: CompressParams -> ByteString -> ByteString Source #
decompress :: DecompressParams -> ByteString -> ByteString Source #
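As a quick orientation, a round trip through the pure interface with default parameters looks like this. This is a minimal sketch assuming the bzlib package is installed; the repeated input is just there to give the compressor something compressible.

```haskell
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.BZip.Internal

main :: IO ()
main = do
  let original   = BL.pack (concat (replicate 100 "hello "))
      compressed = compress defaultCompressParams original
      restored   = decompress defaultDecompressParams compressed
  -- a lossless round trip: restored should equal original
  print (restored == original)
```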
Monadic incremental interface
Incremental compression
data CompressStream m Source #
The unfolding of the compression process, where you provide a sequence of uncompressed data chunks as input and receive a sequence of compressed data chunks as output. The process is incremental, in that the demand for input and provision of output are interleaved.
compressST :: CompressParams -> CompressStream (ST s) Source #
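The stream can also be driven by hand. The sketch below uses compressIO, feeding a list of input chunks and collecting every output chunk; it assumes the zlib-style convention that supplying an empty chunk signals end of input.

```haskell
import qualified Data.ByteString as BS
import Codec.Compression.BZip.Internal

-- Drive a CompressStream in IO: supply input chunks on demand and
-- accumulate output chunks until the stream ends.
compressChunks :: [BS.ByteString] -> IO [BS.ByteString]
compressChunks chunks0 = go chunks0 (compressIO defaultCompressParams)
  where
    go chunks (CompressInputRequired supply) =
      case chunks of
        []     -> supply BS.empty >>= go []  -- empty chunk marks end of input
        (c:cs) -> supply c >>= go cs
    go chunks (CompressOutputAvailable out next) =
      (out :) <$> (next >>= go chunks)
    go _ CompressStreamEnd = return []
```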
foldCompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> m a -> CompressStream m -> m a Source #
A fold over the CompressStream in the given monad.
One way to look at this is that it runs the stream, using callback functions for the three stream events.
foldCompressStreamWithInput :: (ByteString -> a -> a) -> a -> (forall s. CompressStream (ST s)) -> ByteString -> a Source #
A variant on foldCompressStream that is pure rather than operating in a monad, and where the input is provided by a lazy ByteString. So we only have to deal with the output and end parts, making it just like a foldr on a list of output chunks.
For example:
toChunks = foldCompressStreamWithInput (:) []
Incremental decompression
data DecompressStream m Source #
DecompressInputRequired | Supply the next chunk of uncompressed input. |
DecompressOutputAvailable | A chunk of output, plus the rest of the stream. |
DecompressStreamEnd | Includes any trailing unconsumed input data. |
DecompressStreamError | An error code. |
decompressST :: DecompressParams -> DecompressStream (ST s) Source #
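Decompression streams are driven the same way as compression streams, with two extra cases for the end-of-stream and error constructors. A sketch using decompressIO, again assuming an empty chunk signals end of input:

```haskell
import qualified Data.ByteString as BS
import Codec.Compression.BZip.Internal

-- Drive a DecompressStream in IO, collecting output chunks and
-- surfacing any stream error as a Left value.
decompressChunks :: [BS.ByteString]
                 -> IO (Either DecompressError [BS.ByteString])
decompressChunks chunks0 = go chunks0 (decompressIO defaultDecompressParams)
  where
    go chunks (DecompressInputRequired supply) =
      case chunks of
        []     -> supply BS.empty >>= go []  -- empty chunk marks end of input
        (c:cs) -> supply c >>= go cs
    go chunks (DecompressOutputAvailable out next) =
      fmap (fmap (out :)) (next >>= go chunks)
    go _ (DecompressStreamEnd {})    = return (Right [])
    go _ (DecompressStreamError err) = return (Left err)
```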
foldDecompressStream :: Monad m => ((ByteString -> m a) -> m a) -> (ByteString -> m a -> m a) -> (ByteString -> m a) -> (DecompressError -> m a) -> DecompressStream m -> m a Source #
foldDecompressStreamWithInput :: (ByteString -> a -> a) -> (ByteString -> a) -> (DecompressError -> a) -> (forall s. DecompressStream (ST s)) -> ByteString -> a Source #
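By analogy with the toChunks example for compression, a pure decompressor can be sketched with foldDecompressStreamWithInput. The helper name is hypothetical, the input is taken to be a lazy ByteString, and the error case assumes DecompressError has a Show instance:

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.BZip.Internal

-- Collect all output chunks, discard any trailing unconsumed input,
-- and fail crudely on a stream error (fine for a sketch, not for
-- production error handling).
toChunksOrFail :: DecompressParams -> BL.ByteString -> [BS.ByteString]
toChunksOrFail params =
  foldDecompressStreamWithInput
    (:)                         -- cons each output chunk
    (\_unconsumed -> [])        -- at stream end, drop leftover input
    (\err -> error (show err))  -- crude error handling
    (decompressST params)
```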
The compression parameter types
data CompressParams Source #
The full set of parameters for compression. The defaults are defaultCompressParams.

The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.
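For instance, if we expect at most about 50k of compressed output, we can size the first buffer accordingly with an ordinary record update (compressBufferSize is the field named above):

```haskell
import Codec.Compression.BZip.Internal

-- Size the first output buffer for an expected ~50k of compressed data.
bigFirstBuffer :: CompressParams
bigFirstBuffer = defaultCompressParams { compressBufferSize = 50 * 1024 }
```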
Instances
Show CompressParams Source # | |
Defined in Codec.Compression.BZip.Internal showsPrec :: Int -> CompressParams -> ShowS # show :: CompressParams -> String # showList :: [CompressParams] -> ShowS # |
defaultCompressParams :: CompressParams Source #
The default set of parameters for compression. This is typically used with the compressWith function with specific parameters overridden.
data DecompressParams Source #
The full set of parameters for decompression. The defaults are defaultDecompressParams.

The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much; the default buffer size will be used for the remaining chunks.

One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict ByteString. The compression and decompression functions use lazy ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy ByteString with exactly one chunk, which can be converted to a strict ByteString in O(1) time using concat . toChunks.
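That trick can be sketched as follows, using the decompressBufferSize field described above (the helper name is hypothetical):

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.BZip.Internal

-- When the exact decompressed size is known, one output buffer suffices,
-- so the lazy result has a single chunk and converts to strict in O(1).
decompressStrict :: Int -> BL.ByteString -> BS.ByteString
decompressStrict exactSize =
  BS.concat . BL.toChunks
    . decompress defaultDecompressParams { decompressBufferSize = exactSize }
```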
Instances
Show DecompressParams Source # | |
Defined in Codec.Compression.BZip.Internal showsPrec :: Int -> DecompressParams -> ShowS # show :: DecompressParams -> String # showList :: [DecompressParams] -> ShowS # |
defaultDecompressParams :: DecompressParams Source #
The default set of parameters for decompression. This is typically used with the decompressWith function with specific parameters overridden.
data BlockSize Source #

The block size affects both the compression ratio achieved, and the amount of memory needed for compression and decompression.

BlockSize 1 through BlockSize 9 specify the block size to be 100,000 bytes through 900,000 bytes respectively. The default is to use the maximum block size.

Larger block sizes give rapidly diminishing marginal returns. Most of the compression comes from the first two or three hundred k of block size, a fact worth bearing in mind when using bzip2 on small machines. It is also important to appreciate that the decompression memory requirement is set at compression time by the choice of block size.
- In general, try and use the largest block size memory constraints allow, since that maximises the compression achieved.
- Compression and decompression speed are virtually unaffected by block size.
Another significant point applies to files which fit in a single block - that means most files you'd encounter using a large block size. The amount of real memory touched is proportional to the size of the file, since the file is smaller than a block. For example, compressing a file 20,000 bytes long with the flag BlockSize 9 will cause the compressor to allocate around 7600k of memory, but only touch 400k + 20000 * 8 = 560 kbytes of it. Similarly, the decompressor will allocate 3700k but only touch 100k + 20000 * 4 = 180 kbytes.
DefaultBlockSize | The default block size is also the maximum. |
BlockSize Int | A specific block size between 1 and 9. |
data WorkFactor Source #
The WorkFactor parameter controls how the compression phase behaves when presented with worst case, highly repetitive, input data. If compression runs into difficulties caused by repetitive data, the library switches from the standard sorting algorithm to a fallback algorithm. The fallback is slower than the standard algorithm by perhaps a factor of three, but always behaves reasonably, no matter how bad the input.

Lower values of WorkFactor reduce the amount of effort the standard algorithm will expend before resorting to the fallback. You should set this parameter carefully; too low, and many inputs will be handled by the fallback algorithm and so compress rather slowly; too high, and your average-to-worst case compression times can become very large. The default value of 30 gives reasonable behaviour over a wide range of circumstances.
- Note that the compressed output generated is the same regardless of whether or not the fallback algorithm is used.
DefaultWorkFactor | The default work factor is 30. |
WorkFactor Int | Allowable values range from 1 to 250 inclusive. |
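Overriding the work factor might look like the following. The field name compressWorkFactor is an assumption here, so check the CompressParams record for the actual accessor; the WorkFactor constructor and its 1 to 250 range are from the docs above.

```haskell
import Codec.Compression.BZip.Internal

-- compressWorkFactor is a hypothetical accessor name; a low value
-- means the fallback algorithm is used sooner on repetitive input.
lowEffortParams :: CompressParams
lowEffortParams = defaultCompressParams { compressWorkFactor = WorkFactor 10 }
```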
Instances
Show WorkFactor Source # | |
Defined in Codec.Compression.BZip.Stream showsPrec :: Int -> WorkFactor -> ShowS # show :: WorkFactor -> String # showList :: [WorkFactor] -> ShowS # |
data MemoryLevel Source #
For files compressed with the default 900k block size, decompression will require about 3700k to decompress. To support decompression of any file in less than 4Mb there is the option to decompress using approximately half this amount of memory, about 2300k. Decompression speed is also halved, so you should use this option only where necessary.
DefaultMemoryLevel | The default. |
MinMemoryLevel | Use minimum memory during decompression. This halves the memory needed but also halves the decompression speed. |
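Selecting the low-memory mode might be sketched as below. The field name decompressMemoryLevel is an assumption, so check the DecompressParams record for the actual accessor.

```haskell
import Codec.Compression.BZip.Internal

-- decompressMemoryLevel is a hypothetical accessor name; MinMemoryLevel
-- halves decompression memory at the cost of halved speed.
smallMemoryParams :: DecompressParams
smallMemoryParams =
  defaultDecompressParams { decompressMemoryLevel = MinMemoryLevel }
```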
Instances
Show MemoryLevel Source # | |
Defined in Codec.Compression.BZip.Stream showsPrec :: Int -> MemoryLevel -> ShowS # show :: MemoryLevel -> String # showList :: [MemoryLevel] -> ShowS # |