frisby-0.2.2: Linear time composable parser for PEG grammars

Safe Haskell: None
Language: Haskell98

Text.Parsers.Frisby

Description

Linear time composable parser for PEG grammars.

frisby is a parser library that can parse arbitrary PEG grammars in linear time. Unlike other parsers of PEG grammars, frisby need not be supplied with all possible rules up front, allowing composition of smaller parsers.

PEG parsers are never ambiguous and allow infinite lookahead with no backtracking penalty. Since PEG parsers can look ahead arbitrarily, they can easily express rules such as the maximal munch rule used in lexers, meaning no separate lexer is needed.

In addition to many standard combinators, frisby provides routines to translate standard regex syntax into frisby parsers.

PEG based parsers have a number of advantages over other parsing strategies:

  • PEG parsers are never ambiguous
  • PEG is a generalization of regexes; PEGs can be thought of as extended regexes with recursion, predicates, and ordered choice
  • you never need a separate lexing pass with PEG parsers; since you have arbitrary lookahead, there is no need to break the stream into tokens to accommodate the limited lookahead of LALR or LL parsers
  • rules such as maximal munch and minimal munch are trivial to specify with PEGs, yet tricky with other parsers
  • since you have ordered choice, ambiguities like the classic if-then-else ambiguity are nonexistent
  • parsers are very fast, guaranteeing time linear in the size of the input, at the cost of greater memory consumption
  • the ability to make local choices about whether to accept something makes parsers that deal gracefully with errors easy to write; no more uninformative "parse error" messages
  • PEG parsers can be fully lazy: only as much of the input is read as is needed to satisfy the demand on the output, and once the output has been processed the memory is immediately reclaimed, since a PEG parser never backtracks
  • PEG parsers can deal with infinite input, acting in a streaming manner
  • PEG parsers support predicates, letting you decide which rules to follow based on whether other rules apply, so you can have a rule that matches only if another rule does not match, or a rule that matches only if two other rules both match the same input (see the sketch below)
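For instance, here is a minimal sketch of such a predicate-guarded rule built from this module's combinators; identifier, keyword, and identChar are hypothetical names, not part of the library:

-- match an identifier only when it is not a reserved keyword, using the
-- doesNotMatch predicate (a fuller version would also require that the
-- keyword not be followed immediately by another identifier character)
identifier :: P s String
identifier = doesNotMatch keyword ->> many1 identChar
  where
    keyword   = choice (map text ["if", "then", "else"])
    identChar = oneOf (['a' .. 'z'] ++ ['A' .. 'Z'] ++ "_")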

Traditionally, PEG parsers have suffered from two major flaws:

  • A global table of all productions must be generated or written by hand, disallowing composable parsers implemented as libraries and in general requiring the use of a parser generator tool like pappy
  • Although memory consumption is linear in the size of the input, the constant factor is very large.

frisby attempts to address both these concerns.

frisby parsers achieve composability by having a compilation pass. Recursive parsers are specified using the recursive do notation (mdo), which builds up a description of your parser in which the recursive calls that need memoized entries are explicit. runPeg then takes this description and compiles it into a form that can be applied; during this compilation step it examines your composed parser and collects the global table of rules needed for a packrat parser to work.
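As an illustrative sketch of the two phases (parens is a hypothetical grammar that counts the nesting depth of balanced parentheses):

{-# LANGUAGE RecursiveDo #-}

-- the PM monad builds the grammar description; runPeg compiles and applies it
parens :: PM s (P s Int)
parens = mdo
    p <- newRule $ char '(' ->> p <<- char ')' ## (+ 1) // unit 0
    return p

-- runPeg parens "((()))"  ==  3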

Memory consumption is much less of an issue on modern machines; tests show it is not a major concern. However, frisby uses a couple of techniques to reduce its impact. First, it attempts to create parsers that are as lazy as possible: no more of the file is read into memory than is needed, and, more importantly, memory used by the parser can be reclaimed as you process its output.

frisby also attempts to optimize your parser, using specialized strategies where possible to reduce the number of entries in your memoization tables.

frisby also attempts to be lazy in reading the results of parsers. Parsers tend to work by sending out 'feeler' predicates to get an idea of what the rest of the file looks like before deciding which path to take; frisby optimizes these feeler predicates with extra laziness so that they do not force the actual computation of the results, but rather compute just enough to determine whether the predicate would have succeeded or not.

(It is interesting to note that the memory efficiency of frisby depends vitally on being as lazy as possible, in contrast to traditional thinking about memory consumption.)

frisby is a work in progress. It has a darcs repository at http://repetae.net/repos/frisby which may be browsed at http://repetae.net/dw/darcsweb.cgi?r=frisby;a=summary

Its homepage is at http://repetae.net/computer/frisby

To learn more about PEG parsers, see this paper http://pdos.csail.mit.edu/~baford/packrat/popl04 and Bryan Ford's packrat parsing page http://pdos.csail.mit.edu/~baford/packrat/

The basic types

The type of primitive parsers

data P s a Source #

Instances

Functor (P s) Source # 

Methods

fmap :: (a -> b) -> P s a -> P s b #

(<$) :: a -> P s b -> P s a #

Applicative (P s) Source # 

Methods

pure :: a -> P s a #

(<*>) :: P s (a -> b) -> P s a -> P s b #

liftA2 :: (a -> b -> c) -> P s a -> P s b -> P s c #

(*>) :: P s a -> P s b -> P s b #

(<*) :: P s a -> P s b -> P s a #

Alternative (P s) Source # 

Methods

empty :: P s a #

(<|>) :: P s a -> P s a -> P s a #

some :: P s a -> P s [a] #

many :: P s a -> P s [a] #

Semigroup (P s a) Source # 

Methods

(<>) :: P s a -> P s a -> P s a #

sconcat :: NonEmpty (P s a) -> P s a #

stimes :: Integral b => b -> P s a -> P s a #

Monoid (P s a) Source # 

Methods

mempty :: P s a #

mappend :: P s a -> P s a -> P s a #

mconcat :: [P s a] -> P s a #

The monad used to create recursive parsers via rules

data PM s a Source #

Instances

Monad (PM s) Source # 

Methods

(>>=) :: PM s a -> (a -> PM s b) -> PM s b #

(>>) :: PM s a -> PM s b -> PM s b #

return :: a -> PM s a #

fail :: String -> PM s a #

Functor (PM s) Source # 

Methods

fmap :: (a -> b) -> PM s a -> PM s b #

(<$) :: a -> PM s b -> PM s a #

MonadFix (PM s) Source # 

Methods

mfix :: (a -> PM s a) -> PM s a #

Applicative (PM s) Source # 

Methods

pure :: a -> PM s a #

(<*>) :: PM s (a -> b) -> PM s a -> PM s b #

liftA2 :: (a -> b -> c) -> PM s a -> PM s b -> PM s c #

(*>) :: PM s a -> PM s b -> PM s b #

(<*) :: PM s a -> PM s b -> PM s a #

newRule :: P s a -> PM s (P s a) Source #

Create a new rule, which may be used recursively and caches its results.

This is intended to be used in an mdo block, such as the following.

additive = mdo
    additive <- newRule $ multitive <> char '+' ->> additive ## uncurry (+) // multitive
    multitive <- newRule $ primary <> char '*' ->> multitive ## uncurry (*) // primary
    primary <- newRule $ char '(' ->> additive <<- char ')' // decimal
    decimal <- newRule $ many1 (oneOf ['0' .. '9']) ## read
    return additive

All recursive calls must be bound via a rule. Left recursion should be avoided.
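The grammar above can then be run with runPeg; in this sketch the numeric result type defaults to Integer:

main :: IO ()
main = print (runPeg additive "2*(3+4)")   -- prints 14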

runPeg :: (forall s. PM s (P s a)) -> String -> a Source #

Run a PEG grammar. Takes the rank-2 argument in order to ensure a rule created in one PM session isn't returned and used in another PEG parser.

There is no need for special error handling, as it can be trivially implemented via

 -- parse complete file, returning 'Nothing' if parse fails
 fmap Just (myParser <<- eof) // unit Nothing

There is also no need for the parser to return its unused input, as that can be retrieved via rest.

-- Now this returns (a,String) where String is the unconsumed input.
myParser <> rest
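Putting these pieces together, a sketch of the whole-input pattern, reusing the hypothetical additive grammar from the newRule example above:

parseExpr :: String -> Maybe Integer
parseExpr = runPeg $ do
    e <- additive
    return (fmap Just (e <<- eof) // unit Nothing)

-- parseExpr "2*(3+4)" == Just 14
-- parseExpr "2*+"     == Nothing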

Basic operators

(//) :: P s a -> P s a -> P s a infixl 1 Source #

Ordered choice; try the left argument, and if it fails, try the right one. This does not introduce any backtracking or penalty.

(<>) :: P s a -> P s b -> P s (a, b) infixl 3 Source #

Match the first argument, then match the second, returning both in a tuple.

(<++>) :: P s [a] -> P s [a] -> P s [a] infixl 3 Source #

Match a pair of lists and concatenate them.

Derived operators

(->>) :: P s a -> P s b -> P s b infixl 4 Source #

Match first argument, then match the second, returning only the value on the right.

x ->> y = x <> y ## snd

(<<-) :: P s a -> P s b -> P s a infixl 4 Source #

Match first argument, then match the second, returning only the value on the left.

x <<- y = x <> y ## fst

(//>) :: P s a -> a -> P s a infixl 1 Source #

Ordered choice, try left argument, if it fails then return right argument.
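As a small sketch of how these operators combine with ## from the next section (signed is a hypothetical name):

-- an optionally negated decimal parser
signed :: P s Integer -> P s Integer
signed decimal = char '-' ->> decimal ## negate // decimal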

Modification operators

(##) :: P s a -> (a -> b) -> P s b infix 2 Source #

Map a parser through a function; a fancy version of fmap.

(##>) :: P s a -> b -> P s b infix 2 Source #

Parse left argument and return the right argument.

Basic combinators

anyChar :: P s Char Source #

Match any character; fails at end of input.

bof :: P s () Source #

Succeeds only at the beginning of the input.

eof :: P s () Source #

Succeeds only at the end of the input.

getPos :: P s Int Source #

Get current position in file as number of characters since the beginning.

char :: Char -> P s Char Source #

Match a specified character.

noneOf :: [Char] -> P s Char Source #

Match any character other than the ones in the list.

oneOf :: [Char] -> P s Char Source #

Match one of the set of characters.

text :: String -> P s String Source #

Match a literal string of text.

unit :: a -> P s a Source #

Return a value; always succeeds.

rest :: P s String Source #

Immediately consume and return the rest of the input. Equivalent to many anyChar, but more efficient.

discard :: P s a -> P s () Source #

Throw away the result of something.

discard p = p ->> unit ()

parseFailure :: P s a Source #

Always fails. It is the identity of (//) and the unit of (<>).

Speculative combinators

These are how a frisby parser decides which path to take. Whereas a backtracking parser might try a path and then backtrack if it got it wrong, a frisby parser can look at all possible paths before deciding which one to take via these predicates. This is what allows much of the power of packrat parsing: a parser is free to evaluate every alternative fully before committing to a particular path.

Packrat parsers have no past, but can 'see' arbitrarily far into all of their potential futures; traditional monadic parsers can accumulate a history, but cannot see more than a token or two into the future, and evaluating multiple futures to any degree imposes a significant run-time penalty due to backtracking.
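A sketch of this style (Line, line, and restOfLine are hypothetical names): classify a line by speculatively checking its first characters, committing to a branch without backtracking:

data Line = Comment String | Code String

-- a line starting with "--" is a comment, anything else is code; matches
-- peeks without consuming, so the "--" remains part of the comment text
line :: P s Line
line = matches (text "--") ->> restOfLine ## Comment
    // restOfLine ## Code
  where restOfLine = many (noneOf "\n") <<- (discard (char '\n') // eof)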

peek :: P s a -> P s a Source #

Parse something and return it, but do not advance the input stream.

doesNotMatch :: P s a -> P s () Source #

Succeeds when the argument does not.

isMatch :: P s a -> P s Bool Source #

Always succeeds, returning True if it consumed something.

onlyIf :: P s a -> (a -> Bool) -> P s a Source #

Succeed only if the parsed value satisfies the predicate.
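For example (byte is a hypothetical name):

-- a decimal literal accepted only when it fits in one byte
byte :: P s Int
byte = (many1 (oneOf ['0' .. '9']) ## read) `onlyIf` (< 256)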

matches :: P s a -> P s () Source #

Succeeds when the argument does, but consumes no input. Equivalent to matches p = discard (peek p)

Looping combinators

many :: P s a -> P s [a] Source #

Parse many of something. Behaves like * in regexes. This follows the maximal munch rule, eating as much as it possibly can; if you want a minimal munch rule, use manyUntil, which stops before a given terminating parser.

many1 :: P s a -> P s [a] Source #

Match one or more of something via maximal munch rule.

manyUntil :: P s b -> P s a -> PM s (P s [a]) Source #

Parse many of something via the minimal munch rule. Behaves like *? in Perl regexes. The final item is not consumed.
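A sketch of minimal munch in practice (blockComment is a hypothetical name): the body of a C-style block comment, stopping at the first */:

blockComment :: PM s (P s String)
blockComment = do
    body <- manyUntil (text "*/") anyChar
    return (text "/*" ->> body <<- text "*/")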

Various utility combinators

between :: P s a -> P s b -> P s c -> P s c Source #

Equivalent to

between open close thing = open ->> thing <<- close

choice :: [P s a] -> P s a Source #

The first matching parse wins; a simple iteration of (//).

option :: a -> P s a -> P s a Source #

Parse something if you can; otherwise return the given value.

option a p = p // unit a

optional :: P s a -> P s () Source #

Parse something if you can, discarding the result.

optional p = discard p // unit ()

Regular expression syntax

Take a string in extended regex format and return a frisby parser with the same behavior. The behavior differs slightly from POSIX regular expressions: frisby regular expressions always follow the true maximal or minimal munch rules, rather than the (unuseful and inefficient) greedy rule of POSIX regexes. What this means is that something like x*x will never match, because the x* will munch every x available, so the second x cannot match. Minimal munching can be expressed as in Perl: .*?y will grab everything up to the next y, whereas in POSIX it would grab everything up to the last y in the input. These semantics are more natural and much more efficient to implement.
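A sketch, assuming the extended-regex syntax accepted here includes character classes (parseIdent is a hypothetical name, built with the regex function below):

parseIdent :: String -> String
parseIdent = runPeg (regex "[a-zA-Z_][a-zA-Z0-9_]*")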

newRegex :: Monad m => String -> m (PM s (P s String)) Source #

Create a new regular-expression-matching parser. It returns its result in a possibly failing monad so it can indicate an error in the regular expression itself.

regex :: String -> PM s (P s String) Source #

Make a new regex but abort on an error in the regex string itself.

showRegex :: String -> IO () Source #

Show a representation of the parsed regex, mainly for debugging.