amazonka-comprehend-2.0: Amazon Comprehend SDK.
Copyright: (c) 2013-2023 Brendan Hay
License: Mozilla Public License, v. 2.0
Maintainer: Brendan Hay
Stability: auto-generated
Portability: non-portable (GHC extensions)
Safe Haskell: Safe-Inferred
Language: Haskell2010

Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics


Documentation

data EntityRecognizerEvaluationMetrics Source #

Detailed information about the accuracy of an entity recognizer.

See: newEntityRecognizerEvaluationMetrics smart constructor.

Constructors

EntityRecognizerEvaluationMetrics' 

Fields

  • f1Score :: Maybe Double

    A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. For plain text entity recognizer models, the range is 0 to 100, where 100 is the best score. For PDF/Word entity recognizer models, the range is 0 to 1, where 1 is the best score.

  • precision :: Maybe Double

    A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

  • recall :: Maybe Double

    A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
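The field documentation above notes that the F1 score is the harmonic average of precision and recall. A minimal, base-only sketch of that relationship (not part of the library; the real values are computed by Amazon Comprehend and returned in this record):

```haskell
-- Harmonic mean of precision and recall, as described for f1Score.
-- Undefined when both inputs are zero, hence the Maybe.
f1FromPrecisionRecall :: Double -> Double -> Maybe Double
f1FromPrecisionRecall p r
  | p + r == 0 = Nothing
  | otherwise  = Just (2 * p * r / (p + r))
```

For a plain-text model with precision 90 and recall 80 (on the 0–100 scale), this yields roughly 84.7.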

Instances

All instances are defined in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics.

  • FromJSON EntityRecognizerEvaluationMetrics

  • Generic EntityRecognizerEvaluationMetrics

    Associated type: type Rep EntityRecognizerEvaluationMetrics :: Type -> Type

  • Read EntityRecognizerEvaluationMetrics

  • Show EntityRecognizerEvaluationMetrics

  • NFData EntityRecognizerEvaluationMetrics

  • Eq EntityRecognizerEvaluationMetrics

  • Hashable EntityRecognizerEvaluationMetrics

  • type Rep EntityRecognizerEvaluationMetrics
type Rep EntityRecognizerEvaluationMetrics =
  D1
    ('MetaData "EntityRecognizerEvaluationMetrics" "Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics" "amazonka-comprehend-2.0-Ko6GCjAQF2RARapSdPn69F" 'False)
    (C1
      ('MetaCons "EntityRecognizerEvaluationMetrics'" 'PrefixI 'True)
      (S1 ('MetaSel ('Just "f1Score") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))
        :*: (S1 ('MetaSel ('Just "precision") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))
          :*: S1 ('MetaSel ('Just "recall") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))

newEntityRecognizerEvaluationMetrics :: EntityRecognizerEvaluationMetrics Source #

Create a value of EntityRecognizerEvaluationMetrics with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:f1Score:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_f1Score - A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. For plain text entity recognizer models, the range is 0 to 100, where 100 is the best score. For PDF/Word entity recognizer models, the range is 0 to 1, where 1 is the best score.

$sel:precision:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_precision - A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

$sel:recall:EntityRecognizerEvaluationMetrics', entityRecognizerEvaluationMetrics_recall - A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
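The smart-constructor pattern above can be sketched with a self-contained stand-in record (the real type and newEntityRecognizerEvaluationMetrics live in Amazonka.Comprehend.Types.EntityRecognizerEvaluationMetrics; this local copy is for illustration only):

```haskell
-- Local stand-in mirroring the generated record.
data EntityRecognizerEvaluationMetrics = EntityRecognizerEvaluationMetrics
  { f1Score   :: Maybe Double
  , precision :: Maybe Double
  , recall    :: Maybe Double
  } deriving (Show, Eq)

-- Mirrors the smart constructor: every optional field starts as Nothing.
newEntityRecognizerEvaluationMetrics :: EntityRecognizerEvaluationMetrics
newEntityRecognizerEvaluationMetrics =
  EntityRecognizerEvaluationMetrics Nothing Nothing Nothing

-- Fill in only the fields you need via record update syntax.
example :: EntityRecognizerEvaluationMetrics
example = newEntityRecognizerEvaluationMetrics
  { precision = Just 90
  , recall    = Just 80
  }
```

Fields left out of the update remain Nothing, which is why the constructor is described as creating a value "with all optional fields omitted".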

entityRecognizerEvaluationMetrics_f1Score :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. For plain text entity recognizer models, the range is 0 to 100, where 100 is the best score. For PDF/Word entity recognizer models, the range is 0 to 1, where 1 is the best score.

entityRecognizerEvaluationMetrics_precision :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.

entityRecognizerEvaluationMetrics_recall :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double) Source #

A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
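How these generated lenses behave can be sketched with minimal van Laarhoven machinery. In real code the Lens' type and view/set come from the lens or microlens packages, and entityRecognizerEvaluationMetrics_recall is generated by amazonka; the record here is a local stand-in:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- Minimal van Laarhoven lens machinery.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens' s a -> s -> a
view l s = getConst (l Const s)

set :: Lens' s a -> a -> s -> s
set l b s = runIdentity (l (Identity . const b) s)

-- Local stand-in for the generated record.
data EntityRecognizerEvaluationMetrics = EntityRecognizerEvaluationMetrics
  { f1Score   :: Maybe Double
  , precision :: Maybe Double
  , recall    :: Maybe Double
  } deriving (Show, Eq)

-- Same shape as the generated lens: focus on the recall field.
entityRecognizerEvaluationMetrics_recall
  :: Lens' EntityRecognizerEvaluationMetrics (Maybe Double)
entityRecognizerEvaluationMetrics_recall f m =
  (\r -> m { recall = r }) <$> f (recall m)

demo :: EntityRecognizerEvaluationMetrics
demo = EntityRecognizerEvaluationMetrics Nothing Nothing Nothing
```

With this in scope, `view entityRecognizerEvaluationMetrics_recall demo` reads the field and `set entityRecognizerEvaluationMetrics_recall (Just 80) demo` updates it, which is the same interface the generated lenses expose.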