bytezap: Bytestring builder with zero intermediate allocation

[ data, generics, library, mit, serialization ]
Versions 1.0.0, 1.1.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.6.0
Change log CHANGELOG.md
Dependencies base (>=4.18.0.0 && <4.21), bytestring (>=0.11.5.3 && <0.13.0.0), defun-core (>=0.1 && <0.2), generic-type-functions (>=0.1.0 && <0.2), primitive (>=0.8.0.0 && <0.10.0.0), text (>=2.0.2 && <2.2), type-level-bytestrings (>=0.1.0 && <0.3)
Tested with ghc ==9.6.4
License MIT
Author Ben Orchard
Maintainer Ben Orchard <thefirstmuffinman@gmail.com>
Category Data, Serialization, Generics
Home page https://github.com/raehik/bytezap#readme
Bug tracker https://github.com/raehik/bytezap/issues
Source repo head: git clone https://github.com/raehik/bytezap
Uploaded by raehik at 2024-09-28T01:29:01Z
Distributions LTSHaskell:1.6.0, NixOS:1.6.0
Reverse Dependencies 1 direct, 2 indirect
Downloads 224 total (17 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs uploaded by user
Build status unknown [no reports yet]

Readme for bytezap-1.6.0


bytezap

Build strict bytestrings with zero intermediate allocation.

If you're looking for general high-performance serialization, you probably want mason. But if you're dealing with data that is already "shaped" like binary data, e.g. using types defined in binrep, and you want the best performance possible, read on...

Why?

Most binary serialization libraries tend towards a model where the serializer itself handles allocation. In the plumbing, each serialization operation is bracketed by a check that the current buffer has enough space for it; if not, we obtain more space in some way, and serialization continues. This design is nice because it lets us chunk the output:

  • for writing to a lazy bytestring, we can emit a new chunk and clear our buffer
  • for writing to a handle, we can write, flush and clear our buffer
  • for writing to a strict bytestring, we must grow our current buffer (meh)

But if we know the size of the serialized data before serializing it, we need neither the space checks nor the intermediate steps. We can allocate a single buffer of the required size upfront, then fill it however we like.
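A minimal sketch of the technique (hypothetical names such as `Write` and `runWrite`; this is not bytezap's actual API): a serializer is a byte count known up front, paired with a raw action that fills exactly that many bytes.

```haskell
import qualified Data.ByteString          as B
import           Data.Bits                (shiftR)
import           Data.ByteString.Internal (unsafeCreate)
import           Data.Word                (Word8, Word32)
import           Foreign.Ptr              (Ptr)
import           Foreign.Storable         (pokeByteOff)

-- Exact serialized length, plus an action that fills the buffer at a given
-- offset. (Hypothetical sketch, not bytezap's real types.)
data Write = Write
  { writeLen  :: Int
  , writePoke :: Ptr Word8 -> Int -> IO ()
  }

instance Semigroup Write where
  Write l1 p1 <> Write l2 p2 =
    Write (l1 + l2) (\buf off -> p1 buf off >> p2 buf (off + l1))

instance Monoid Write where
  mempty = Write 0 (\_ _ -> pure ())

-- | One allocation of exactly the right size; no space checks, no growing,
--   no intermediate chunks.
runWrite :: Write -> B.ByteString
runWrite (Write len poke) = unsafeCreate len (\buf -> poke buf 0)

word8 :: Word8 -> Write
word8 w = Write 1 (\buf off -> pokeByteOff buf off w)

word32LE :: Word32 -> Write
word32LE w = mconcat
  [ word8 (fromIntegral  w)
  , word8 (fromIntegral (w `shiftR`  8))
  , word8 (fromIntegral (w `shiftR` 16))
  , word8 (fromIntegral (w `shiftR` 24))
  ]
```

Because the length travels alongside the poke, concatenation just adds lengths and offsets the second write; the space check disappears entirely.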

Great, you say, but most data isn't so simple that we can easily calculate its serialized length without actually performing the serialization. This is true. bytezap is designed specifically for cases where

  • it's easy to calculate the serialized length of your data, and
  • you want to write to a strict bytestring (one big contiguous block of memory)

That last point may notably limit usage for serializing large data, depending on memory constraints: in most cases we'll use more peak memory than a buffering library such as mason.
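To make those two conditions concrete, here's a usage example built on the sketch above (still hypothetical names): a fixed-layout record whose serialized length is trivially 8 bytes, so one exact-size buffer suffices.

```haskell
-- Continuing the sketch above: the length of a fixed-layout record is known
-- without serializing anything.
data Header = Header { magic :: Word32, count :: Word32 }

putHeader :: Header -> Write
putHeader (Header m c) = word32LE m <> word32LE c   -- always 8 bytes

encodeHeader :: Header -> B.ByteString
encodeHeader = runWrite . putHeader

-- ghci> B.unpack (encodeHeader (Header 0xCAFEBABE 2))
-- [190,186,254,202,2,0,0,0]
```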

So... why?

Well, bytezap will be slightly faster where it's applicable, and the implementation is extremely simple. It's a fun niche to fill, and it's convenient for my binrep library.

Struct handling

We also define even simpler parser & serializer types which handle only "C struct"-like types: a single constructor whose fields all have a length known at compile time. Thanks to the way these work, GHC is pretty much guaranteed to generate the fastest code possible. They're very experimental, but see binrep for example usage.
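A hedged sketch of the "struct" idea (hypothetical class and method names, not bytezap's actual API): each field's size is a type-level constant, so the record's total size is too, and serialization is a straight-line sequence of pokes at fixed offsets.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, FlexibleContexts,
             ScopedTypeVariables, TypeApplications #-}

import qualified Data.ByteString          as B
import           Data.ByteString.Internal (unsafeCreate)
import           Data.Proxy               (Proxy (..))
import           Data.Word                (Word8, Word32)
import           Foreign.Ptr              (Ptr)
import           Foreign.Storable         (pokeByteOff)
import           GHC.TypeNats             (KnownNat, Nat, natVal)

-- | Types whose serialized size is a compile-time constant.
--   (Hypothetical sketch, not bytezap's real classes.)
class ConstPut a where
  type CSize a :: Nat
  -- | Write exactly @CSize a@ bytes starting at the given offset.
  cpoke :: Ptr Word8 -> Int -> a -> IO ()

instance ConstPut Word8 where
  type CSize Word8 = 1
  cpoke = pokeByteOff

instance ConstPut Word32 where
  type CSize Word32 = 4
  cpoke = pokeByteOff  -- host byte order, purely for illustration

-- A "C struct"-like type: one constructor, every field constant-size.
data Pair = Pair Word32 Word8

instance ConstPut Pair where
  type CSize Pair = 5  -- Word32 (4 bytes) + Word8 (1 byte)
  cpoke buf off (Pair a b) = do
    cpoke buf  off      a
    cpoke buf (off + 4) b

-- | Allocate exactly the right number of bytes and fill them in one pass:
--   no bounds checks, no branching, just a fixed run of pokes.
cput :: forall a. (ConstPut a, KnownNat (CSize a)) => a -> B.ByteString
cput a =
  unsafeCreate (fromIntegral (natVal (Proxy @(CSize a))))
               (\buf -> cpoke buf 0 a)
```

With everything known statically, the generated code is just a fixed sequence of stores, which is what makes this shape of type so cheap to serialize.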

Non-features

Serialize to ByteString (pinned byte arrays) only

No ShortByteStrings, no writing directly to handles.

(One could support writing to ShortByteStrings (unpinned byte arrays) via a bunch of class indirection, but it's a lot of extra work for a use case I don't see much demand for. Check the Git history for an early implementation.)

License

Provided under the MIT license. See LICENSE for license text.