Z-Data-0.1.7.2: Array, vector and text

Copyright: (c) Dong Han 2017-2018
License: BSD
Maintainer: winterland1989@gmail.com
Stability: experimental
Portability: non-portable
Safe Haskell: None
Language: Haskell2010

Z.Data.Text.Base

Contents

Description

A Text wraps a Bytes which will be interpreted using UTF-8 encoding. Users should always use validate to construct a Text (instead of using the constructor directly or coercing), otherwise illegal UTF-8 encoded codepoints will cause undefined behaviour.
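For example, a minimal GHCi sketch (it assumes Bytes values built with Z.Data.Vector.pack; only validate and validateMaybe come from this module, and the later examples on this page assume the same qualified import):

>>> import qualified Z.Data.Text.Base as T
>>> import qualified Z.Data.Vector as V
>>> T.validate (V.pack [0x68, 0x69])     -- the UTF-8 bytes of "hi"
"hi"
>>> T.validateMaybe (V.pack [0xFF])      -- 0xFF can never appear in valid UTF-8
Nothing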

Synopsis

Text type

newtype Text Source #

Text represented as UTF-8 encoded Bytes

Constructors

Text 

Fields

Instances
Eq Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

(==) :: Text -> Text -> Bool #

(/=) :: Text -> Text -> Bool #

Ord Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

compare :: Text -> Text -> Ordering #

(<) :: Text -> Text -> Bool #

(<=) :: Text -> Text -> Bool #

(>) :: Text -> Text -> Bool #

(>=) :: Text -> Text -> Bool #

max :: Text -> Text -> Text #

min :: Text -> Text -> Text #

Show Text Source #

The escaping rules are different from String's Show instance, see "Z.Data.Text.Builder.escapeTextJSON"

Instance details

Defined in Z.Data.Text.Base

Methods

showsPrec :: Int -> Text -> ShowS #

show :: Text -> String #

showList :: [Text] -> ShowS #

IsString Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

fromString :: String -> Text #

Semigroup Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

(<>) :: Text -> Text -> Text #

sconcat :: NonEmpty Text -> Text #

stimes :: Integral b => b -> Text -> Text #

Monoid Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

mempty :: Text #

mappend :: Text -> Text -> Text #

mconcat :: [Text] -> Text #

Arbitrary Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

arbitrary :: Gen Text #

shrink :: Text -> [Text] #

CoArbitrary Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

coarbitrary :: Text -> Gen b -> Gen b #

NFData Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

rnf :: Text -> () #

Hashable Text Source # 
Instance details

Defined in Z.Data.Text.Base

Methods

hashWithSalt :: Int -> Text -> Int #

hash :: Text -> Int #

ShowT Text Source #

The escaping rules are the same as the Show instance: we reuse the JSON escaping rules here, so it will be faster.

Instance details

Defined in Z.Data.Text.ShowT

FromValue Text Source # 
Instance details

Defined in Z.Data.JSON.Base

EncodeJSON Text Source # 
Instance details

Defined in Z.Data.JSON.Base

Methods

encodeJSON :: Text -> Builder () Source #

ToValue Text Source # 
Instance details

Defined in Z.Data.JSON.Base

Methods

toValue :: Text -> Value Source #

FromValue a => FromValue (HashMap Text a) Source #

The default instance prefers the later key.

Instance details

Defined in Z.Data.JSON.Base

FromValue a => FromValue (FlatMap Text a) Source #

The default instance prefers the later key.

Instance details

Defined in Z.Data.JSON.Base

EncodeJSON a => EncodeJSON (HashMap Text a) Source # 
Instance details

Defined in Z.Data.JSON.Base

EncodeJSON a => EncodeJSON (FlatMap Text a) Source # 
Instance details

Defined in Z.Data.JSON.Base

ToValue a => ToValue (HashMap Text a) Source # 
Instance details

Defined in Z.Data.JSON.Base

Methods

toValue :: HashMap Text a -> Value Source #

ToValue a => ToValue (FlatMap Text a) Source # 
Instance details

Defined in Z.Data.JSON.Base

Methods

toValue :: FlatMap Text a -> Value Source #

Building text

validate :: HasCallStack => Bytes -> Text Source #

O(n) Validate that a sequence of bytes is UTF-8 encoded.

Throw InvalidUTF8Exception in case of an invalid codepoint.

validateASCII :: HasCallStack => Bytes -> Text Source #

O(n) Validate that a sequence of bytes consists only of ASCII bytes (< 128).

Throw InvalidASCIIException in case of an invalid byte. It's not faster than validate; use it only if you want to ensure the input is an ASCII sequence.

validateMaybe :: Bytes -> Maybe Text Source #

O(n) Validate that a sequence of bytes is UTF-8 encoded.

Return Nothing in case of an invalid codepoint.

validateASCIIMaybe :: Bytes -> Maybe Text Source #

O(n) Validate that a sequence of bytes consists only of ASCII bytes (< 128).

Return Nothing in case of an invalid byte.

index :: HasCallStack => Text -> Int -> Char Source #

O(n) Get the nth codepoint from Text, throw IndexOutOfTextRange when out of bounds.

indexMaybe :: Text -> Int -> Maybe Char Source #

O(n) Get the nth codepoint from Text.

charByteIndex :: Text -> Int -> Int Source #

O(n) Find the nth codepoint's byte index (pointing to the nth char's beginning byte).

The index is only meaningful relative to the whole byte slice; if there are fewer than n codepoints, the index will point to the next byte after the end.
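For example, a sketch with a two-byte codepoint ('é' is U+00E9; the byte offset shown assumes a freshly packed text whose slice starts at byte 0):

>>> let t = T.pack "héllo"
>>> T.indexMaybe t 1
Just '\233'
>>> T.indexMaybe t 10
Nothing
>>> T.charByteIndex t 2     -- 'h' is one byte, 'é' is two, so the third char starts at byte 3
3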

indexR :: HasCallStack => Text -> Int -> Char Source #

O(n) Get the nth codepoint from Text counting from the end, throw IndexOutOfVectorRange when out of bounds.

indexMaybeR :: Text -> Int -> Maybe Char Source #

O(n) Get the nth codepoint from Text counting from the end.

charByteIndexR :: Text -> Int -> Int Source #

O(n) Find the nth codepoint's byte index counting from the end (pointing to the previous char's ending byte).

The index is only meaningful relative to the whole byte slice; if there are fewer than n codepoints, the index will point to the previous byte before the start.

Basic creating

empty :: Text Source #

O(1). Empty text.

singleton :: Char -> Text Source #

O(1). Single char text.

copy :: Text -> Text Source #

O(n). Copy a text from a slice.

replicate :: Int -> Char -> Text Source #

O(n) Replicate a char n times.

cycleN :: Int -> Text -> Text Source #

O(n*m) Repeat a text n times.
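For example:

>>> T.replicate 3 'x'
"xxx"
>>> T.cycleN 2 (T.pack "ab")
"abab"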

Conversion between list

pack :: String -> Text Source #

O(n) Convert a string into a text.

Alias for packN defaultInitSize; will be rewritten to a memcpy if possible.

packN :: Int -> String -> Text Source #

O(n) Convert a list into a text with an approximate size (in bytes, not codepoints).

If the encoded bytes length is larger than the size given, we simply double the buffer size and continue building.

This function is a good consumer in the sense of build/foldr fusion.
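For example (the size hint only affects the initial buffer, not the result):

>>> T.pack "hello"
"hello"
>>> T.packN 4 "a longer string than the hint"
"a longer string than the hint"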

packR :: String -> Text Source #

O(n) Alias for packRN defaultInitSize.

packRN :: Int -> String -> Text Source #

O(n) packN in reverse order.

This function is a good consumer in the sense of build/foldr fusion.

unpack :: Text -> String Source #

O(n) Convert text to a char list.

Unpacking is done lazily, i.e. we will retain a reference to the array until all elements are consumed.

This function is a good producer in the sense of build/foldr fusion.
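For example:

>>> T.unpack (T.pack "abc")
"abc"
>>> T.unpackR (T.pack "abc")
"cba"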

unpackR :: Text -> String Source #

O(n) Convert text to a list in reverse order.

This function is a good producer in the sense of build/foldr fusion.

Conversion between codepoint vector

fromVector :: PrimVector Char -> Text Source #

O(n) Convert from a char vector.

toVector :: Text -> PrimVector Char Source #

O(n) Convert to a char vector.
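For example, a sketch (it assumes Z.Data.Vector.pack to build the char vector):

>>> import qualified Z.Data.Vector as V
>>> T.fromVector (V.pack "hi")
"hi"
>>> T.fromVector (T.toVector (T.pack "héllo")) == T.pack "héllo"
True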

Basic interface

null :: Text -> Bool Source #

O(1) Test whether a text is empty.

length :: Text -> Int Source #

O(n) The char length of a text.

append :: Text -> Text -> Text Source #

O(m+n)

There's no need to guard against empty text because we do that for you: appending an empty text is a no-op.
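For example:

>>> T.append (T.pack "foo") T.empty
"foo"
>>> T.pack "foo" <> T.pack "bar"
"foobar"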

map' :: (Char -> Char) -> Text -> Text Source #

O(n) map f t is the Text obtained by applying f to each char of t. Performs replacement on invalid scalar values.

imap' :: (Int -> Char -> Char) -> Text -> Text Source #

Strict mapping with index.

foldl' :: (b -> Char -> b) -> b -> Text -> b Source #

Strict left to right fold.

ifoldl' :: (b -> Int -> Char -> b) -> b -> Text -> b Source #

Strict left to right fold with index.

foldr' :: (Char -> b -> b) -> b -> Text -> b Source #

Strict right to left fold.

ifoldr' :: (Int -> Char -> b -> b) -> b -> Text -> b Source #

Strict right to left fold with index.

NOTE: the index counts from 0, not backwards.
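For example (note that ifoldr' still hands out left-based indices):

>>> T.foldl' (\n _ -> n + 1) (0 :: Int) (T.pack "héllo")   -- counts chars, like length
5
>>> T.ifoldr' (\i c acc -> (i, c) : acc) [] (T.pack "ab")
[(0,'a'),(1,'b')]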

concat :: [Text] -> Text Source #

O(n) Concatenate a list of text.

Note: concat has to force the entire list to filter out empty texts and calculate the length for allocation.

concatMap :: (Char -> Text) -> Text -> Text Source #

Map a function over a text and concatenate the results.
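For example:

>>> T.concat [T.pack "a", T.empty, T.pack "bc"]
"abc"
>>> T.concatMap (T.replicate 2) (T.pack "ab")
"aabb"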

Special folds

count :: Char -> Text -> Int Source #

O(n) count returns the number of occurrences of a char in a text.

all :: (Char -> Bool) -> Text -> Bool Source #

O(n) Applied to a predicate and text, all determines if all chars of the text satisfy the predicate.

any :: (Char -> Bool) -> Text -> Bool Source #

O(n) Applied to a predicate and a text, any determines if any chars of the text satisfy the predicate.
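For example:

>>> T.count 'l' (T.pack "hello")
2
>>> T.all (/= 'z') (T.pack "hello")
True
>>> T.any (== 'z') (T.pack "hello")
False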

Normalization

data NormalizationResult Source #

Instances
Eq NormalizationResult Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Ord NormalizationResult Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Show NormalizationResult Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Generic NormalizationResult Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Associated Types

type Rep NormalizationResult :: Type -> Type #

type Rep NormalizationResult Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

type Rep NormalizationResult = D1 (MetaData "NormalizationResult" "Z.Data.Text.UTF8Rewind" "Z-Data-0.1.7.2-HYBgokB2u6K9HVDoyDxkSZ" False) (C1 (MetaCons "NormalizedYes" PrefixI False) (U1 :: Type -> Type) :+: (C1 (MetaCons "NormalizedMaybe" PrefixI False) (U1 :: Type -> Type) :+: C1 (MetaCons "NormalizedNo" PrefixI False) (U1 :: Type -> Type)))

data NormalizeMode Source #

These are the Unicode Normalization Forms:

Form                         | Description
---------------------------- | ---------------------------------------------
Normalization Form D (NFD)   | Canonical decomposition
Normalization Form C (NFC)   | Canonical decomposition, followed by canonical composition
Normalization Form KD (NFKD) | Compatibility decomposition
Normalization Form KC (NFKC) | Compatibility decomposition, followed by canonical composition

Constructors

NFC 
NFKC 
NFD 
NFKD 
Instances
Eq NormalizeMode Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Ord NormalizeMode Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Show NormalizeMode Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Generic NormalizeMode Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

Associated Types

type Rep NormalizeMode :: Type -> Type #

type Rep NormalizeMode Source # 
Instance details

Defined in Z.Data.Text.UTF8Rewind

type Rep NormalizeMode = D1 (MetaData "NormalizeMode" "Z.Data.Text.UTF8Rewind" "Z-Data-0.1.7.2-HYBgokB2u6K9HVDoyDxkSZ" False) ((C1 (MetaCons "NFC" PrefixI False) (U1 :: Type -> Type) :+: C1 (MetaCons "NFKC" PrefixI False) (U1 :: Type -> Type)) :+: (C1 (MetaCons "NFD" PrefixI False) (U1 :: Type -> Type) :+: C1 (MetaCons "NFKD" PrefixI False) (U1 :: Type -> Type)))

isNormalized :: Text -> NormalizationResult Source #

Check if a string is stable in the NFC (Normalization Form C).

isNormalizedTo :: NormalizeMode -> Text -> NormalizationResult Source #

Check if a string is stable in the specified Unicode Normalization Form.

This function can be used as a preprocessing step, before attempting to normalize a string. Normalization is a very expensive process; it is often cheaper to first determine whether the string is unstable in the requested normalization form.

The result of the check will be NormalizedYes if the string is stable, and NormalizedMaybe or NormalizedNo if it is unstable. If the result is NormalizedMaybe, the string does not necessarily have to be normalized.

For more information, please review Unicode Standard Annex #15 - Unicode Normalization Forms.
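For example, a sketch (it assumes the NormalizeMode and NormalizationResult constructors are in scope; plain ASCII is stable in every normalization form):

>>> T.isNormalizedTo T.NFC (T.pack "abc")
NormalizedYes
>>> T.isNormalizedTo T.NFD (T.pack "abc")
NormalizedYes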

normalize :: Text -> Text Source #

Normalize a string to NFC (Normalization Form C).

normalizeTo :: NormalizeMode -> Text -> Text Source #

Normalize a string to the specified Unicode Normalization Form.

The Unicode standard defines two standards for equivalence between characters: canonical and compatibility equivalence. Canonically equivalent characters and sequences represent the same abstract character and must be rendered with the same appearance and behavior. Compatibility equivalent characters have a weaker equivalence and may be rendered differently.

Unicode Normalization Forms are formally defined standards that can be used to test whether any two strings of characters are equivalent to each other. This equivalence may be canonical or compatibility.

The algorithm puts all combining marks into a specified order and uses the rules for decomposition and composition to transform the string into one of four Unicode Normalization Forms. A binary comparison can then be used to determine equivalence.
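For example, a sketch with a decomposed character (U+0065 followed by U+0301 composes to U+00E9 under NFC):

>>> let decomposed = T.pack "e\x0301"
>>> T.normalizeTo T.NFC decomposed == T.pack "\x00e9"
True
>>> T.normalize decomposed == T.normalizeTo T.NFC decomposed   -- normalize defaults to NFC
True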

Case conversion

envLocale :: IO Locale Source #

Get environment locale

caseFold :: Text -> Text Source #

Remove case distinction from UTF-8 encoded text with default locale.

caseFoldWith :: Locale -> Text -> Text Source #

Remove case distinction from UTF-8 encoded text.

Case folding is the process of eliminating differences between code points concerning case mapping. It is most commonly used for comparing strings in a case-insensitive manner. Conversion is fully compliant with the Unicode 7.0 standard.

Although similar to lowercasing text, there are significant differences. For one, case folding does _not_ take locale into account when converting. In some cases, case folding can be up to 20% faster than lowercasing the same text, but the result cannot be treated as correct lowercased text.

Only two locale-specific exceptions are made when case folding text. In Turkish, U+0049 LATIN CAPITAL LETTER I maps to U+0131 LATIN SMALL LETTER DOTLESS I and U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE maps to U+0069 LATIN SMALL LETTER I.

Although most code points can be case folded without changing length, there are notable exceptions. For example, U+0130 (LATIN CAPITAL LETTER I WITH DOT ABOVE) maps to "U+0069 U+0307" (LATIN SMALL LETTER I and COMBINING DOT ABOVE) when converted to lowercase.

Only a handful of scripts make a distinction between upper- and lowercase. In addition to modern scripts, such as Latin, Greek, Armenian and Cyrillic, a few historic or archaic scripts have case. The vast majority of scripts do not have case distinctions.
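For example, a sketch (it assumes the full, length-changing case folding described above, where ß folds to "ss"):

>>> T.caseFold (T.pack "Hello World")
"hello world"
>>> T.caseFold (T.pack "STRASSE") == T.caseFold (T.pack "Straße")
True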

toLower :: Text -> Text Source #

Convert UTF-8 encoded text to lowercase with default locale.

toLowerWith :: Locale -> Text -> Text Source #

Convert UTF-8 encoded text to lowercase.

This function allows conversion of UTF-8 encoded strings to lowercase without first changing the encoding to UTF-32. Conversion is fully compliant with the Unicode 7.0 standard.

Although most code points can be converted to lowercase without changing length, there are notable exceptions. For example, U+0130 (LATIN CAPITAL LETTER I WITH DOT ABOVE) maps to "U+0069 U+0307" (LATIN SMALL LETTER I and COMBINING DOT ABOVE) when converted to lowercase.

Only a handful of scripts make a distinction between upper- and lowercase. In addition to modern scripts, such as Latin, Greek, Armenian and Cyrillic, a few historic or archaic scripts have case. The vast majority of scripts do not have case distinctions.

Case mapping is not reversible. That is, toUpper(toLower(x)) != toLower(toUpper(x)).

Certain code points (or combinations of code points) apply rules based on the locale. For more information about these exceptional code points, please refer to the Unicode standard: ftp://ftp.unicode.org/Public/UNIDATA/SpecialCasing.txt

toUpper :: Text -> Text Source #

Convert UTF-8 encoded text to uppercase with default locale.

toUpperWith :: Locale -> Text -> Text Source #

Convert UTF-8 encoded text to uppercase.

Conversion is fully compliant with the Unicode 7.0 standard.

Although most code points can be converted without changing length, there are notable exceptions. For example, U+00DF (LATIN SMALL LETTER SHARP S) maps to "U+0053 U+0053" (LATIN CAPITAL LETTER S and LATIN CAPITAL LETTER S) when converted to uppercase.

Only a handful of scripts make a distinction between upper and lowercase. In addition to modern scripts, such as Latin, Greek, Armenian and Cyrillic, a few historic or archaic scripts have case. The vast majority of scripts do not have case distinctions.

Case mapping is not reversible. That is, toUpper(toLower(x)) != toLower(toUpper(x)).

Certain code points (or combinations of code points) apply rules based on the locale. For more information about these exceptional code points, please refer to the Unicode standard: ftp://ftp.unicode.org/Public/UNIDATA/SpecialCasing.txt
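For example, the length-changing mapping of ß makes round-tripping fail:

>>> T.toUpper (T.pack "straße")
"STRASSE"
>>> T.toLower (T.toUpper (T.pack "straße")) == T.pack "straße"
False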

toTitle :: Text -> Text Source #

Convert UTF-8 encoded text to titlecase with default locale.

toTitleWith :: Locale -> Text -> Text Source #

Convert UTF-8 encoded text to titlecase.

This function allows conversion of UTF-8 encoded strings to titlecase. Conversion is fully compliant with the Unicode 7.0 standard.

Titlecase requires a bit more explanation than uppercase and lowercase, because it is not a common text transformation. Titlecase uses uppercase for the first letter of each word and lowercase for the rest. Words are defined as "collections of code points with general category Lu, Ll, Lt, Lm or Lo according to the Unicode database".

Effectively, any type of punctuation can break up a word, even if this is not grammatically valid. This happens because the titlecasing algorithm does not and cannot take grammar rules into account.

Text                                 | Titlecase
-------------------------------------|-------------------------------------
The running man                      | The Running Man
NATO Alliance                        | Nato Alliance
You're amazing at building libraries | You'Re Amazing At Building Libraries

Although most code points can be converted to titlecase without changing length, there are notable exceptions. For example, U+00DF (LATIN SMALL LETTER SHARP S) maps to "U+0053 U+0073" (LATIN CAPITAL LETTER S and LATIN SMALL LETTER S) when converted to titlecase.

Certain code points (or combinations of code points) apply rules based on the locale. For more information about these exceptional code points, please refer to the Unicode standard: ftp://ftp.unicode.org/Public/UNIDATA/SpecialCasing.txt
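For example (mirroring the table above):

>>> T.toTitle (T.pack "the running man")
"The Running Man"
>>> T.toTitle (T.pack "you're amazing")   -- punctuation breaks words, as noted above
"You'Re Amazing"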

Unicode category

isCategory :: Category -> Text -> Bool Source #

Check if the input string conforms to the category specified by the flags.

This function can be used to check if the code points in a string are part of a category. Valid flags are members of the "list of categories". The category for a code point is defined as part of the entry in UnicodeData.txt, the data file for the Unicode code point database.

By default, the function will treat grapheme clusters as a single code point. This means that the following string:

Code point | Canonical combining class | General category      | Name
---------- | ------------------------- | --------------------- | ----------------------
U+0045     | 0                         | Lu (Uppercase letter) | LATIN CAPITAL LETTER E
U+0300     | 230                       | Mn (Non-spacing mark) | COMBINING GRAVE ACCENT

Will match with CategoryLetterUppercase in its entirety, because the COMBINING GRAVE ACCENT is treated as part of the grapheme cluster. This is useful when e.g. creating a text parser, because you do not have to normalize the text first.

If this is undesired behavior, specify the CategoryIgnoreGraphemeCluster flag.

In order to maintain backwards compatibility with POSIX functions like isdigit and isspace, compatibility flags have been provided. Note, however, that the result is only guaranteed to be correct for code points in the Basic Latin range, between U+0000 and U+007F. Combining a compatibility flag with a regular category flag will result in undefined behavior.

spanCategory :: Category -> Text -> (Text, Text) Source #

Try to match as many code points with the matching category flags as possible and return the prefix and suffix.
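For example, a sketch. The Category constants themselves are listed elsewhere in the package; the name letterUppercase below is assumed for illustration and may differ from the real export:

>>> T.isCategory T.letterUppercase (T.pack "ABC")   -- letterUppercase is an assumed constant name
True
>>> T.spanCategory T.letterUppercase (T.pack "ABCdef")
("ABC","def")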

Constants

Locale

type Locale = CSize Source #

Locale for case mapping.

Category

type Category = CSize Source #

Unicode categories.

See isCategory; you can combine categories with bitwise or.

Misc