maxent-learner-hw-0.2.0: Hayes and Wilson's maxent learning algorithm for phonotactic grammars.

Copyright: © 2016-2017 George Steel and Peter Jurgec
License: GPL-2+
Maintainer: george.steel@gmail.com
Safe Haskell: None
Language: Haskell2010

Text.PhonotacticLearner

Description

Main learning algorithm for inferring a grammar from a constraint set and a lexicon. To set up the parameters, you will need to use sortLexicon to prepare the text input and some way of generating a set of constraint candidates (represented as DFSTs). The PhonotacticConstraints module and its Generators submodule will be useful here.

Documentation

generateGrammarIO

Arguments

:: (Show clabel, Ix sigma, NFData sigma, NFData clabel, Eq clabel) 
=> Int

Monte Carlo sample size

-> [Double]

List of accuracy thresholds

-> [(clabel, ShortDFST sigma)]

List of candidate constraints. Labels must be unique. All DFSTs must share the same input bounds.

-> [([sigma], Int)]

Corpus of sample words and their relative frequencies

-> IO (Array Length Int, [clabel], MulticountDFST sigma, Vec)

Computed grammar

Infer a phonotactic grammar from a list of candidate constraints and a corpus of texts.

This algorithm works by keeping a running grammar (starting with an empty one) and repeatedly adding the next constraint on the candidate list that improves the grammar by a large enough margin. A constraint is selected for inclusion when its observed number of violations is less than a threshold factor times its expected number of violations under the running grammar. After each constraint is added, the weights of the running grammar are re-optimized. Multiple passes are made over the candidate list with an increasing sequence of thresholds.

For maximum flexibility, constraints are represented as DFSTs which count the number of violations of the constraint in an input string. To keep the output understandable, each candidate DFST is paired with a unique label; the final grammar is output as a merged DFST, a list of selected labels, and a weight vector. If the constraints are generated from a list of globs, applying (id &&& matchCounter) to the list will produce suitable candidates, as sketched below.
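A rough sketch of that idiom follows; the glob type and the matchCounter compiler are placeholders for whatever the Generators submodule provides. The point is simply that each glob is paired with its compiled DFST so that the glob itself serves as the constraint label.

    import Control.Arrow ((&&&))
    import Text.PhonotacticLearner.DFST (ShortDFST)  -- assumed location of ShortDFST

    -- Pair each glob with its violation-counting DFST. 'matchCounter' is a
    -- stand-in for the actual glob-to-DFST compiler in use.
    candidatesFrom :: (glob -> ShortDFST sigma) -> [glob] -> [(glob, ShortDFST sigma)]
    candidatesFrom matchCounter = map (id &&& matchCounter)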

Since we need to process these words using DFSTs, the set of valid segments must form a contiguous Ix range, such as ['a'..'z']. If the segments do not, please replace them with indices into a lookup table.

Since the algorithm works by continuous refinement, this action will catch SIGINT and terminate early if the signal is received.
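A minimal usage sketch, fixing the segment type to Char and the label type to String; the sample size and threshold schedule are arbitrary example values, and ShortDFST is assumed to come from the DFST module:

    import Text.PhonotacticLearner
    import Text.PhonotacticLearner.DFST (ShortDFST)  -- assumed location of ShortDFST

    -- Run the learner over a prepared candidate list and a corpus of words
    -- with frequencies, then print the labels of the selected constraints.
    learnExample :: [(String, ShortDFST Char)]  -- candidate constraints
                 -> [(String, Int)]             -- corpus with frequencies
                 -> IO ()
    learnExample candidates corpus = do
        (_lengthDist, chosen, _grammarDFA, _weights) <-
            generateGrammarIO 3000 [0.01, 0.1, 0.2, 0.3] candidates corpus
        mapM_ putStrLn chosen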

generateGrammarCB

Arguments

:: (Show clabel, Ix sigma, NFData sigma, NFData clabel, Eq clabel) 
=> (Int -> Int -> IO ())

Callback for reporting progress

-> (Array Length Int -> [clabel] -> MulticountDFST sigma -> Vec -> IO ())

Callback for reporting intermediate grammars

-> Int

Monte Carlo sample size

-> [Double]

List of accuracy thresholds

-> [(clabel, ShortDFST sigma)]

List of candidate constraints. Labels must be unique. All DFSTs must share the same input bounds.

-> [([sigma], Int)]

Corpus of sample words and their relative frequencies

-> IO ([clabel], MulticountDFST sigma, Vec)

Computed grammar

Like generateGrammarIO, but calls callbacks to report progress and non-final grammars. Useful for running in the background and displaying intermediate results. Does not catch SIGINT.
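For illustration, a sketch with simple console callbacks, again fixing segments to Char and labels to String. Reading the two Int arguments of the first callback as a completed/total pair is an assumption based on the progress-reporting description above.

    import Text.PhonotacticLearner
    import Text.PhonotacticLearner.DFST (ShortDFST)  -- assumed location of ShortDFST

    -- Run the learner while printing progress counters and the size of each
    -- intermediate grammar as it is reported.
    learnVerbose :: [(String, ShortDFST Char)] -> [(String, Int)] -> IO ()
    learnVerbose candidates corpus = do
        let progressCB done total =    -- progress counters (assumed completed/total)
                putStrLn ("Progress: " ++ show done ++ " / " ++ show total)
            grammarCB _lengthDist labels _grammarDFA _weights =
                putStrLn ("Intermediate grammar: " ++ show (length labels) ++ " constraints")
        (chosen, _dfa, _weights) <-
            generateGrammarCB progressCB grammarCB 3000 [0.01, 0.1, 0.2, 0.3] candidates corpus
        putStrLn ("Final grammar (" ++ show (length chosen) ++ " constraints):")
        mapM_ putStrLn chosen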