megaparsec: Monadic parser combinators

[ bsd2, library, parsing ]

This is an industrial-strength monadic parser combinator library. Megaparsec is a feature-rich package that tries to find a nice balance between speed, flexibility, and quality of parse errors.


Flags

Manual Flags

NameDescriptionDefault
dev

Turn on development settings.

Disabled

Use -f <flag> to enable a flag, or -f -<flag> to disable that flag.

Downloads

Note: This package has metadata revisions in the cabal description newer than included in the tarball. To unpack the package including the revisions, use 'cabal get'.

Versions [RSS] 4.0.0, 4.1.0, 4.1.1, 4.2.0, 4.3.0, 4.4.0, 5.0.0, 5.0.1, 5.1.0, 5.1.1, 5.1.2, 5.2.0, 5.3.0, 5.3.1, 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.1.1, 6.2.0, 6.3.0, 6.4.0, 6.4.1, 6.5.0, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.0.5, 8.0.0, 9.0.0, 9.0.1, 9.1.0, 9.2.0, 9.2.1, 9.2.2, 9.3.0, 9.3.1, 9.4.0, 9.4.1, 9.5.0, 9.6.0, 9.6.1 (info)
Change log CHANGELOG.md
Dependencies base (>=4.13 && <5.0), bytestring (>=0.2 && <0.12), case-insensitive (>=1.2 && <1.3), containers (>=0.5 && <0.7), deepseq (>=1.3 && <1.5), mtl (>=2.2.2 && <3.0), parser-combinators (>=1.0 && <2.0), scientific (>=0.3.7 && <0.4), text (>=0.2 && <2.1), transformers (>=0.4 && <0.7) [details]
Tested with ghc ==8.10.7, ghc ==9.0.2, ghc ==9.2.1
License BSD-2-Clause
Author Megaparsec contributors, Paolo Martini <paolo@nemail.it>, Daan Leijen <daan@microsoft.com>
Maintainer Mark Karpov <markkarpov92@gmail.com>
Revised Revision 1 made by mrkkrp at 2022-08-16T08:16:07Z
Category Parsing
Home page https://github.com/mrkkrp/megaparsec
Bug tracker https://github.com/mrkkrp/megaparsec/issues
Source repo head: git clone https://github.com/mrkkrp/megaparsec.git
Uploaded by mrkkrp at 2022-05-09T08:26:35Z
Distributions Arch:9.6.1, Debian:8.0.0, Fedora:9.3.1, LTSHaskell:9.5.0, NixOS:9.5.0, Stackage:9.6.1, openSUSE:9.6.1
Reverse Dependencies 282 direct, 448 indirect [details]
Downloads 103662 total (629 in the last 30 days)
Rating 2.75 (votes: 24) [estimated by Bayesian average]
Status Docs available [build log]
Last success reported on 2022-05-09 [all 1 reports]

Readme for megaparsec-9.2.1

Megaparsec

This is an industrial-strength monadic parser combinator library. Megaparsec is a feature-rich package that tries to find a nice balance between speed, flexibility, and quality of parse errors.

Features

The project provides flexible solutions to satisfy common parsing needs. This section describes them briefly. If you're looking for comprehensive documentation, see the section about documentation.

Core features

The package is built around MonadParsec, an MTL-style monad transformer. Most features work with all instances of MonadParsec. One can achieve various effects by combining monad transformers, i.e. by building a monadic stack. Since the common monad transformers like WriterT, StateT, ReaderT, and others are instances of the MonadParsec type class, one can also wrap ParsecT in these monads, achieving, for example, backtracking state.

On the other hand, ParsecT is itself an instance of many type classes as well. The most useful ones are Monad, Applicative, Alternative, and MonadParsec.
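
For example, here is a minimal sketch of this idea (the Parser, pParens, and runParens names are illustrative, not part of the library): ParsecT wrapped in StateT, so the counter is rolled back together with the input whenever the parser backtracks.

import Control.Monad.State.Strict (StateT, evalStateT, get, modify)
import Data.Text (Text)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

-- StateT on the outside: state changes made in a failed branch are discarded.
type Parser = StateT Int (Parsec Void Text)

-- Parse balanced parentheses, counting the opening ones.
pParens :: Parser ()
pParens = () <$ between (char '(' *> modify (+ 1)) (char ')') (many pParens)

-- Run the stack and return the final count;
-- parsing "(()())" yields Right 3.
runParens :: Text -> Either (ParseErrorBundle Text Void) Int
runParens = parse (evalStateT (many pParens *> get <* eof) 0) ""

Flipping the stack (ParsecT over State) would instead make state changes survive backtracking.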

Megaparsec includes all functionality that is typically available in Parsec-like libraries and also features some special combinators:

  • parseError allows us to end parsing and report an arbitrary parse error.
  • withRecovery can be used to recover from parse errors “on the fly” and continue parsing (see the sketch after this list). Once parsing is finished, several parse errors may be reported or ignored altogether.
  • observing makes it possible to “observe” parse errors without ending parsing.
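
For illustration, here is a minimal sketch of these combinators (the Parser, Stmt, and pStmt names are made up for the example): withRecovery together with registerParseError collects errors and keeps parsing, while observing reflects a failure into the result.

import Data.Text (Text)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void Text

data Stmt = Stmt String | BadStmt deriving (Show)

pStmt :: Parser Stmt
pStmt = Stmt <$> some letterChar <* char ';'

-- If pStmt fails, remember the error, skip to the next ';' and continue;
-- every registered error ends up in the final ParseErrorBundle.
pStmtRecovering :: Parser Stmt
pStmtRecovering = withRecovery recover pStmt
  where
    recover err =
      BadStmt <$ registerParseError err
              <* takeWhileP Nothing (/= ';')
              <* optional (char ';')

-- observing turns a parse error into a value instead of ending the parse.
pTryStmt :: Parser (Either (ParseError Text Void) Stmt)
pTryStmt = observing pStmt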

In addition to that, Megaparsec features high-performance combinators similar to those found in Attoparsec:

  • tokens makes it easy to parse several tokens in a row (string and string' are built on top of this primitive). This is about 100 times faster than matching a string token by token. tokens returns a “chunk” of the original input, meaning that if you parse Text, it'll return Text without repacking.
  • takeWhileP and takeWhile1P are about 150 times faster than approaches involving many, manyTill, and other similar combinators.
  • takeP allows us to grab n tokens from the stream and return them as a “chunk” of the stream (all three are shown in the sketch after this list).
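
A small sketch of these combinators in action (the parser names are illustrative):

{-# LANGUAGE OverloadedStrings #-}

import Data.Char (isAlphaNum)
import Data.Text (Text)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void Text

-- takeWhile1P consumes a run of tokens and returns it as one Text chunk:
pIdentifier :: Parser Text
pIdentifier = takeWhile1P (Just "identifier character") isAlphaNum

-- string is built on top of tokens and also returns a chunk of the input:
pKeyword :: Parser Text
pKeyword = string "let"

-- takeP grabs exactly n tokens, here a fixed-width four-character field:
pField :: Parser Text
pField = takeP (Just "4-character field") 4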

Megaparsec is about as fast as Attoparsec if you write your parser carefully (see also the section about performance).

The library can currently work with the following types of input stream out-of-the-box:

  • String = [Char]
  • ByteString (strict and lazy)
  • Text (strict and lazy)

It's also possible to make it work with custom token streams by making them an instance of the Stream type class.
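
For instance, assuming the conventional Void placeholder for the custom error component, parser types over the built-in streams look like this (the type synonyms are illustrative):

import Data.Void (Void)
import Text.Megaparsec
import qualified Data.ByteString as B
import qualified Data.Text as T

-- Void means “no custom error component”.
type StringParser = Parsec Void String
type TextParser   = Parsec Void T.Text
type BytesParser  = Parsec Void B.ByteString
-- Lazy Text and lazy ByteString work the same way.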

Error messages

  • Megaparsec has typed error messages and the ability to signal custom parse errors that better suit the user's domain of interest.

  • Since version 8, the location of a parse error can be independent of the current offset in the input stream. This is useful when you want a parse error to point to a particular position after performing some checks.

  • Instead of a single parse error, Megaparsec produces the so-called ParseErrorBundle data type, which helps to manage multi-error messages and pretty-print them. Since version 8, reporting multiple parse errors at once has become easier (see the sketch after this list).
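
As a minimal sketch of a typed custom error (the CustomError and pPort names are made up for this example):

import Data.Text (Text)
import Text.Megaparsec
import qualified Text.Megaparsec.Char.Lexer as L

-- A domain-specific error component:
newtype CustomError = PortOutOfRange Int
  deriving (Eq, Ord, Show)

instance ShowErrorComponent CustomError where
  showErrorComponent (PortOutOfRange n) = "port out of range: " ++ show n

type Parser = Parsec CustomError Text

pPort :: Parser Int
pPort = do
  n <- L.decimal
  if n > 65535
    then customFailure (PortOutOfRange n)  -- signal the typed custom error
    else pure n

-- parse pPort "" "70000" produces a ParseErrorBundle that errorBundlePretty
-- renders with the “port out of range: 70000” message.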

External lexers

Megaparsec works well with streams of tokens produced by tools like Alex. The design of the Stream type class has changed significantly in recent versions, but users can still work with custom streams of tokens.

Character and binary parsing

Megaparsec has decent support for Unicode-aware character parsing. Functions for character parsing live in the Text.Megaparsec.Char module. Similarly, there is Text.Megaparsec.Byte module for parsing streams of bytes.
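
For example, a small sketch of Unicode-aware character parsing (pPair is an illustrative name):

import Data.Text (Text)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char

type Parser = Parsec Void Text

-- A key=value pair; letterChar matches any Unicode letter, so keys like
-- “ключ” or “日本語” parse just as well as “key”.
pPair :: Parser (String, String)
pPair = do
  key <- some letterChar
  _   <- space *> char '=' <* space
  val <- some alphaNumChar
  pure (key, val)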

Lexer

Text.Megaparsec.Char.Lexer is a module that should help you write your lexer. If you have used Parsec in the past, this module “fixes” its particularly inflexible Text.Parsec.Token.

Text.Megaparsec.Char.Lexer is intended to be imported qualified; it's not included in Text.Megaparsec. The module doesn't impose how you should write your parser, but certain approaches may be more elegant than others. An especially important theme is the parsing of white space, comments, and indentation.

The design of the module allows one to quickly solve simple tasks without getting in the way when the need to implement something less standard arises.
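
Here is a minimal sketch of the usual approach (the sc, lexeme, symbol, and integer names follow common convention and are not dictated by the library):

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)
import Data.Void (Void)
import Text.Megaparsec
import Text.Megaparsec.Char
import qualified Text.Megaparsec.Char.Lexer as L

type Parser = Parsec Void Text

-- The space consumer: the single place that decides what counts as white space.
sc :: Parser ()
sc = L.space space1 (L.skipLineComment "--") (L.skipBlockComment "{-" "-}")

-- Lexemes and symbols eat trailing white space themselves.
lexeme :: Parser a -> Parser a
lexeme = L.lexeme sc

symbol :: Text -> Parser Text
symbol = L.symbol sc

integer :: Parser Int
integer = lexeme L.decimal

-- A parenthesized integer, e.g. "( 42 )  -- trailing comment"
pInt :: Parser Int
pInt = between (symbol "(") (symbol ")") integer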

Text.Megaparsec.Byte.Lexer is also available for users who wish to parse binary data.

Documentation

Megaparsec is well-documented. See the current version of Megaparsec documentation on Hackage.

Tutorials

You can find the most complete Megaparsec tutorial here. It should provide sufficient guidance to help you start with your parsing tasks.

Performance

Despite being flexible, Megaparsec is also fast. Here is how Megaparsec compares to Attoparsec (the fastest widely used parsing library in the Haskell ecosystem):

Test case            Execution time   Allocated (bytes)   Max residency (bytes)
CSV (Attoparsec)     76.50 μs         397,784             10,544
CSV (Megaparsec)     64.69 μs         352,408             9,104
Log (Attoparsec)     302.8 μs         1,150,032           10,912
Log (Megaparsec)     337.8 μs         1,246,496           10,912
JSON (Attoparsec)    18.20 μs         128,368             9,032
JSON (Megaparsec)    25.45 μs         203,824             9,176

You can run the benchmarks yourself by executing:

$ nix-build -A benches.parsers-bench
$ cd result/bench
$ ./bench-memory
$ ./bench-speed

More information about benchmarking and development can be found here.

Comparison with other solutions

There are quite a few libraries that can be used for parsing in Haskell; let's compare Megaparsec with some of them.

Megaparsec vs Attoparsec

Attoparsec is another prominent Haskell library for parsing. Although both libraries deal with parsing, it's usually easy to decide which one you will need in a particular project:

  • Attoparsec is sometimes faster but not that feature-rich. It should be used when you want to process large amounts of data where performance matters more than quality of error messages.

  • Megaparsec is good for parsing of source code or other human-readable texts. It has better error messages and it's implemented as a monad transformer.

So, if you work with something human-readable where the size of input data is moderate, it makes sense to go with Megaparsec, otherwise Attoparsec may be a better choice.

Megaparsec vs Parsec

Since Megaparsec is a fork of Parsec, we are bound to list the main differences between the two libraries:

  • Better error messages. Megaparsec has typed error messages and custom error messages; it can also report multiple parse errors at once.

  • Megaparsec can show the line on which a parse error happened as part of the parse error. This makes it a lot easier to figure out where the error happened.

  • Some quirks and bugs of Parsec are fixed.

  • Better support for Unicode parsing in Text.Megaparsec.Char.

  • Megaparsec has more powerful combinators and can parse languages where indentation matters.

  • Better documentation.

  • Megaparsec can recover from parse errors “on the fly” and continue parsing.

  • Megaparsec allows us to conditionally process parse errors inside a running parser. In particular, it's possible to define regions in which parse errors, should they happen, will get a “context tag”, e.g. we could build a context stack like “in function definition foo”, “in expression x”, etc.

  • Megaparsec is faster and supports efficient operations such as tokens, takeWhileP, takeWhile1P, and takeP, like Attoparsec.

If you want to see a detailed change log, CHANGELOG.md may be helpful. Also see this original announcement for another comparison.

Megaparsec vs Trifecta

Trifecta is another Haskell library featuring good error messages. These are the common reasons why Trifecta may be problematic to use:

  • It is complicated, has no tutorials available, and its documentation doesn't help much.

  • Trifecta can parse String and ByteString natively, but not Text.

  • Depends on lens, which is a very heavy dependency. If you're not into lens, you may not like the API.

Idris has switched from Trifecta to Megaparsec, which allowed it to have better error messages and fewer dependencies.

Megaparsec vs Earley

Earley is a newer library that allows us to safely parse context-free grammars (CFG). Megaparsec is a lower-level library compared to Earley, but there are still enough reasons to choose it:

  • Megaparsec is faster.

  • Your grammar may not be context-free, or you may want to introduce some sort of state into the parsing process. Almost all non-trivial parsers require state. Even if your grammar is context-free, state may allow for additional niceties. Earley does not support that.

  • Megaparsec's error messages are more flexible, allowing you to include arbitrary data in them, return multiple error messages, mark regions that affect any error happening in those regions, etc.

In other words, Megaparsec is less safe but also more powerful.

The following packages are designed to be used with Megaparsec (open a PR if you want to add something to the list):

Prominent projects that use Megaparsec

Some prominent projects that use Megaparsec:

  • Idris—a general-purpose functional programming language with dependent types
  • Dhall—an advanced configuration language
  • hnix—re-implementation of the Nix language in Haskell
  • Hledger—an accounting tool
  • MMark—strict markdown processor for writers

Here are some blog posts mainly announcing new features of the project and describing what sort of things are now possible:

Contribution

Issues (bugs, feature requests or otherwise feedback) may be reported in the GitHub issue tracker for this project.

Pull requests are also welcome. If you would like to contribute to the project, you may find this document helpful.

License

Copyright © 2015–present Megaparsec contributors
Copyright © 2007 Paolo Martini
Copyright © 1999–2000 Daan Leijen

Distributed under FreeBSD license.