Copyright | (c) 2006-2014 Duncan Coutts |
---|---|
License | BSD-style |
Maintainer | duncan@community.haskell.org |
Safe Haskell | Safe-Inferred |
Language | Haskell2010 |
Compression and decompression of data streams in the raw deflate format.
The format is described in detail in RFC #1951: http://www.ietf.org/rfc/rfc1951.txt
See also the zlib home page: http://zlib.net/
Synopsis
- compress :: ByteString -> ByteString
- decompress :: ByteString -> ByteString
- data DecompressError
- compressWith :: CompressParams -> ByteString -> ByteString
- decompressWith :: DecompressParams -> ByteString -> ByteString
- data CompressParams = CompressParams {}
- defaultCompressParams :: CompressParams
- data DecompressParams = DecompressParams {}
- defaultDecompressParams :: DecompressParams
- newtype CompressionLevel = CompressionLevel Int
- defaultCompression :: CompressionLevel
- noCompression :: CompressionLevel
- bestSpeed :: CompressionLevel
- bestCompression :: CompressionLevel
- compressionLevel :: Int -> CompressionLevel
- data Method
- deflateMethod :: Method
- newtype WindowBits = WindowBits Int
- defaultWindowBits :: WindowBits
- windowBits :: Int -> WindowBits
- newtype MemoryLevel = MemoryLevel Int
- defaultMemoryLevel :: MemoryLevel
- minMemoryLevel :: MemoryLevel
- maxMemoryLevel :: MemoryLevel
- memoryLevel :: Int -> MemoryLevel
- data CompressionStrategy
- defaultStrategy :: CompressionStrategy
- filteredStrategy :: CompressionStrategy
- huffmanOnlyStrategy :: CompressionStrategy
- rleStrategy :: CompressionStrategy
- fixedStrategy :: CompressionStrategy
Simple compression and decompression
compress :: ByteString -> ByteString Source #
Compress a stream of data into the raw deflate format.
decompress :: ByteString -> ByteString Source #
Decompress a stream of data in the raw deflate format.
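A minimal round-trip sketch, assuming this page documents the Codec.Compression.Zlib.Raw module from the zlib package:

```haskell
import qualified Data.ByteString.Lazy.Char8 as BL
import Codec.Compression.Zlib.Raw (compress, decompress)

-- Round trip through the raw deflate format; for well-formed input,
-- decompress (compress xs) == xs.
main :: IO ()
main = BL.putStrLn (decompress (compress (BL.pack "hello, raw deflate")))
```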
data DecompressError Source #
The possible error cases when decompressing a stream.
This can be shown to give a human-readable error message.
TruncatedInput | The compressed data stream ended prematurely. This may happen if the input data stream was truncated. |
DictionaryRequired | It is possible to do zlib compression with a custom dictionary. This allows slightly higher compression ratios for short files. However such compressed streams require the same dictionary when decompressing. This error is for when we encounter a compressed stream that needs a dictionary, and it's not provided. |
DictionaryMismatch | If the stream requires a dictionary and the dictionary you provide does not match, you will get this error. |
DataFormatError String | If the compressed data stream is corrupted in any way then you will get this error, for example if the input data just isn't a compressed zlib data stream. In particular if the data checksum turns out to be wrong then you will get all the decompressed data but this error at the end, instead of the normal successful result. |
Instances
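A hedged sketch of recovering from a bad stream: it assumes DecompressError has an Exception instance (true of recent zlib releases; older releases reported errors via error), in which case the error is raised when the lazy output is forced:

```haskell
import Control.Exception (try, evaluate)
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib.Raw (decompress, DecompressError)

-- Force the lazy output so that any error embedded in it is raised here,
-- then catch it as a DecompressError (assumes the Exception instance).
decompressEither :: BL.ByteString -> IO (Either DecompressError BL.ByteString)
decompressEither input =
    try (evaluate (BL.length out) >> pure out)
  where
    out = decompress input
```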
Extended API with control over compression parameters
compressWith :: CompressParams -> ByteString -> ByteString Source #
Like compress but with the ability to specify various compression parameters.
decompressWith :: DecompressParams -> ByteString -> ByteString Source #
Like decompress but with the ability to specify various decompression parameters.
data CompressParams Source #
The full set of parameters for compression. The defaults are defaultCompressParams.
The compressBufferSize is the size of the first output buffer containing the compressed data. If you know an approximate upper bound on the size of the compressed data then setting this parameter can save memory. The default compression output buffer size is 16k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
Instances
defaultCompressParams :: CompressParams Source #
The default set of parameters for compression. This is typically used with compressWith with specific parameters overridden.
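For instance, a sketch (same module assumption as above) that overrides the compressBufferSize field documented above:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib.Raw
  (compressWith, defaultCompressParams, CompressParams (..))

-- Start the compressed output with a single 1Mb buffer instead of the
-- default 16k first buffer, useful when an upper bound on the compressed
-- size is known.
compressBigBuffer :: BL.ByteString -> BL.ByteString
compressBigBuffer =
  compressWith defaultCompressParams { compressBufferSize = 1024 * 1024 }
```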
data DecompressParams Source #
The full set of parameters for decompression. The defaults are defaultDecompressParams.
The decompressBufferSize is the size of the first output buffer, containing the uncompressed data. If you know an exact or approximate upper bound on the size of the decompressed data then setting this parameter can save memory. The default decompression output buffer size is 32k. If your estimate is wrong it does not matter too much, the default buffer size will be used for the remaining chunks.
One particular use case for setting the decompressBufferSize is if you know the exact size of the decompressed data and want to produce a strict ByteString. The compression and decompression functions use lazy ByteStrings, but if you set the decompressBufferSize correctly then you can generate a lazy ByteString with exactly one chunk, which can be converted to a strict ByteString in O(1) time using concat . toChunks, as sketched below.
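A sketch of that single-chunk use case, assuming the exact decompressed size is known:

```haskell
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib.Raw
  (decompressWith, defaultDecompressParams, DecompressParams (..))

-- With decompressBufferSize set to the exact decompressed size, the lazy
-- result has a single chunk, so concat . toChunks runs in O(1).
decompressToStrict :: Int -> BL.ByteString -> BS.ByteString
decompressToStrict decompressedSize input =
  BS.concat (BL.toChunks (decompressWith params input))
  where
    params = defaultDecompressParams { decompressBufferSize = decompressedSize }
```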
Instances
defaultDecompressParams :: DecompressParams Source #
The default set of parameters for decompression. This is typically used with decompressWith with specific parameters overridden.
The compression parameter types
newtype CompressionLevel Source #
The compression level parameter controls the amount of compression. This is a trade-off between the amount of compression and the time required to do the compression.
Instances
defaultCompression :: CompressionLevel Source #
The default CompressionLevel.
noCompression :: CompressionLevel Source #
No compression, just a block copy.
bestSpeed :: CompressionLevel Source #
The fastest compression method (less compression).
bestCompression :: CompressionLevel Source #
The slowest compression method (best compression).
compressionLevel :: Int -> CompressionLevel Source #
A specific compression level in the range 0..9.
Throws an error for arguments outside of this range.
- 0 stands for noCompression,
- 1 stands for bestSpeed,
- 6 stands for defaultCompression,
- 9 stands for bestCompression.
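For example, a sketch of requesting maximum compression; the compressLevel field name is an assumption, since the record fields are not listed on this page:

```haskell
import qualified Data.ByteString.Lazy as BL
import Codec.Compression.Zlib.Raw
  (compressWith, defaultCompressParams, CompressParams (..), compressionLevel)

-- compressionLevel 9 is the same as bestCompression.
compressSmallest :: BL.ByteString -> BL.ByteString
compressSmallest =
  compressWith defaultCompressParams { compressLevel = compressionLevel 9 }
```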
The compression method
deflateMethod :: Method Source #
The only method supported in this version of zlib. Indeed it is likely to be the only method that ever will be supported.
newtype WindowBits Source #
This specifies the size of the compression window. Larger values of this parameter result in better compression at the expense of higher memory usage.
The compression window size is 2 raised to the power of the window bits. The window bits must be in the range 9..15, which corresponds to compression window sizes of 512b to 32Kb. The default is 15, which is also the maximum size.
The total amount of memory used depends on the window bits and the MemoryLevel. See the MemoryLevel documentation for the details.
Instances
defaultWindowBits :: WindowBits Source #
The default WindowBits. Equivalent to windowBits 15, which is also the maximum size.
windowBits :: Int -> WindowBits Source #
A specific compression window size, specified in bits in the range 9..15.
Throws an error for arguments outside of this range.
newtype MemoryLevel Source #
The MemoryLevel parameter specifies how much memory should be allocated for the internal compression state. It is a trade-off between memory usage, compression ratio and compression speed. Using more memory allows faster compression and a better compression ratio.
The total amount of memory used for compression depends on the WindowBits and the MemoryLevel. For decompression it depends only on the WindowBits. The totals are given by the functions:
compressTotal windowBits memLevel = 4 * 2^windowBits + 512 * 2^memLevel
decompressTotal windowBits = 2^windowBits
For example, compression with the default windowBits = 15 and memLevel = 8 uses 256Kb. So a network server with 100 concurrent compressed streams would use 25Mb. The memory per stream can be halved (at the cost of somewhat degraded and slower compression) by reducing the windowBits and memLevel by one.
Decompression takes less memory, the default windowBits = 15 corresponds to just 32Kb.
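A small sketch that reproduces these totals (values in bytes) using the formulas above:

```haskell
-- Memory needed for the internal compression and decompression state, in bytes.
compressTotal :: Int -> Int -> Int
compressTotal windowBits memLevel = 4 * 2 ^ windowBits + 512 * 2 ^ memLevel

decompressTotal :: Int -> Int
decompressTotal windowBits = 2 ^ windowBits

-- compressTotal 15 8 == 262144 (256Kb, the compression default)
-- decompressTotal 15 ==  32768 (32Kb, the decompression default)
```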
Instances
defaultMemoryLevel :: MemoryLevel Source #
The default MemoryLevel. Equivalent to memoryLevel 8.
minMemoryLevel :: MemoryLevel Source #
Use minimum memory. This is slow and reduces the compression ratio.
Equivalent to memoryLevel 1.
maxMemoryLevel :: MemoryLevel Source #
Use maximum memory for optimal compression speed.
Equivalent to memoryLevel 9.
memoryLevel :: Int -> MemoryLevel Source #
A specific memory level in the range 1..9.
Throws an error for arguments outside of this range.
data CompressionStrategy Source #
The strategy parameter is used to tune the compression algorithm.
The strategy parameter affects only the compression ratio; it does not affect the correctness of the compressed output, even if it is not set appropriately.
Instances
defaultStrategy :: CompressionStrategy Source #
Use this default compression strategy for normal data.
filteredStrategy :: CompressionStrategy Source #
Use the filtered compression strategy for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. The effect of this strategy is to force more Huffman coding and less string matching; it is somewhat intermediate between defaultStrategy and huffmanOnlyStrategy.
huffmanOnlyStrategy :: CompressionStrategy Source #
Use the Huffman-only compression strategy to force Huffman encoding only (no string match).
rleStrategy :: CompressionStrategy Source #
Use rleStrategy to limit match distances to one (run-length encoding). rleStrategy is designed to be almost as fast as huffmanOnlyStrategy, but gives better compression for PNG image data.
Since: 0.7.0.0
fixedStrategy :: CompressionStrategy Source #
fixedStrategy prevents the use of dynamic Huffman codes, allowing for a simpler decoder for special applications.
Since: 0.7.0.0