amazonka-transcribe-2.0: Amazon Transcribe Service SDK.
Copyright    (c) 2013-2023 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell Safe-Inferred
Language     Haskell2010

Amazonka.Transcribe.Types

Service Configuration

defaultService :: Service Source #

API version 2017-10-26 of the Amazon Transcribe Service SDK configuration.
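
Every operation generated in this package runs against this configuration. As a minimal sketch of wiring it up (not part of this module): newListTranscriptionJobs is assumed here to be the nullary smart constructor for the ListTranscriptionJobs operation; substitute whichever request you actually send.

import qualified Amazonka
import qualified Amazonka.Transcribe as Transcribe
import Control.Monad.Trans.Resource (runResourceT)

main :: IO ()
main = do
  -- Build an Env by discovering credentials from the environment,
  -- configuration files, or instance metadata. The generated operations
  -- use defaultService (API version 2017-10-26) under the hood.
  env <- Amazonka.newEnv Amazonka.discover
  -- newListTranscriptionJobs is assumed; any AWSRequest from this package works.
  resp <- runResourceT (Amazonka.send env Transcribe.newListTranscriptionJobs)
  print resp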

Errors

_BadRequestException :: AsError a => Fold a ServiceError Source #

Your request didn't pass one or more validation tests. This can occur when the entity you're trying to delete doesn't exist or if it's in a non-terminal state (such as IN PROGRESS). See the exception message field for more information.

_ConflictException :: AsError a => Fold a ServiceError Source #

A resource already exists with this name. Resource names must be unique within an Amazon Web Services account.

_InternalFailureException :: AsError a => Fold a ServiceError Source #

There was an internal error. Check the error message, correct the issue, and try your request again.

_LimitExceededException :: AsError a => Fold a ServiceError Source #

You've either sent too many requests or your input file is too long. Wait before retrying your request, or use a smaller file and try your request again.

_NotFoundException :: AsError a => Fold a ServiceError Source #

We can't find the requested resource. Check that the specified name is correct and try your request again.
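
These Folds match against the corresponding service errors. A small sketch of how they are typically consumed, assuming the Error type and its AsError/Exception instances from amazonka-core; someTranscribeCall below is a placeholder, not a function in this package:

import qualified Amazonka
import Amazonka.Transcribe.Types (_LimitExceededException, _NotFoundException)
import Control.Exception (try)
import Control.Lens (has)

-- Classify an Amazonka error using the Folds exported above.
classify :: Amazonka.Error -> String
classify err
  | has _NotFoundException err      = "resource does not exist"
  | has _LimitExceededException err = "throttled or input too long; retry later"
  | otherwise                       = "some other error"

run :: IO ()
run = do
  result <- try someTranscribeCall :: IO (Either Amazonka.Error ())
  either (putStrLn . classify) pure result

-- Placeholder for a real request (e.g. runResourceT (send env ...)).
someTranscribeCall :: IO ()
someTranscribeCall = pure ()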

BaseModelName

newtype BaseModelName Source #

Constructors

BaseModelName' 

Instances

Instances details
FromJSON BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

FromJSONKey BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToJSON BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToJSONKey BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToByteString BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToHeader BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToLog BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToQuery BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

FromText BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToText BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Methods

toText :: BaseModelName -> Text #

FromXML BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

ToXML BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Methods

toXML :: BaseModelName -> XML #

Generic BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Associated Types

type Rep BaseModelName :: Type -> Type #

Read BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Show BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

NFData BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Methods

rnf :: BaseModelName -> () #

Eq BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Ord BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

Hashable BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

type Rep BaseModelName Source # 
Instance details

Defined in Amazonka.Transcribe.Types.BaseModelName

type Rep BaseModelName = D1 ('MetaData "BaseModelName" "Amazonka.Transcribe.Types.BaseModelName" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "BaseModelName'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBaseModelName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CLMLanguageCode

newtype CLMLanguageCode Source #

Constructors

CLMLanguageCode' 

Instances

Instances details
FromJSON CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

FromJSONKey CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToJSON CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToJSONKey CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToByteString CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToHeader CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToLog CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToQuery CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

FromText CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToText CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

FromXML CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

ToXML CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Methods

toXML :: CLMLanguageCode -> XML #

Generic CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Associated Types

type Rep CLMLanguageCode :: Type -> Type #

Read CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Show CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

NFData CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Methods

rnf :: CLMLanguageCode -> () #

Eq CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Ord CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

Hashable CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

type Rep CLMLanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CLMLanguageCode

type Rep CLMLanguageCode = D1 ('MetaData "CLMLanguageCode" "Amazonka.Transcribe.Types.CLMLanguageCode" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "CLMLanguageCode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCLMLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

CallAnalyticsJobStatus

newtype CallAnalyticsJobStatus Source #

Instances

Instances details
FromJSON CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

FromJSONKey CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToJSON CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToJSONKey CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToByteString CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToHeader CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToLog CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToQuery CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

FromText CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToText CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

FromXML CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

ToXML CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Generic CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Associated Types

type Rep CallAnalyticsJobStatus :: Type -> Type #

Read CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Show CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

NFData CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Methods

rnf :: CallAnalyticsJobStatus -> () #

Eq CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Ord CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

Hashable CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

type Rep CallAnalyticsJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobStatus

type Rep CallAnalyticsJobStatus = D1 ('MetaData "CallAnalyticsJobStatus" "Amazonka.Transcribe.Types.CallAnalyticsJobStatus" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "CallAnalyticsJobStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromCallAnalyticsJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

InputType

newtype InputType Source #

Constructors

InputType' 

Fields

fromInputType :: Text

Bundled Patterns

pattern InputType_POST_CALL :: InputType 
pattern InputType_REAL_TIME :: InputType 

Instances

Instances details
FromJSON InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

FromJSONKey InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToJSON InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToJSONKey InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToByteString InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Methods

toBS :: InputType -> ByteString #

ToHeader InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToLog InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToQuery InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

FromText InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToText InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Methods

toText :: InputType -> Text #

FromXML InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

ToXML InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Methods

toXML :: InputType -> XML #

Generic InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Associated Types

type Rep InputType :: Type -> Type #

Read InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Show InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

NFData InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Methods

rnf :: InputType -> () #

Eq InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Ord InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

Hashable InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

type Rep InputType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputType

type Rep InputType = D1 ('MetaData "InputType" "Amazonka.Transcribe.Types.InputType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "InputType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromInputType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
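
A small sketch of how the bundled patterns interact with the JSON instances above. The wire strings ("POST_CALL", "REAL_TIME") are assumed from the pattern names rather than stated in this documentation:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types (InputType (..))
import Data.Aeson (decode, encode)

demo :: IO ()
demo = do
  -- Serialise a known value; the enum is encoded as its raw string.
  print (encode InputType_POST_CALL)                 -- expected: "\"POST_CALL\""
  -- Decode back into the REAL_TIME value.
  print (decode "\"REAL_TIME\"" :: Maybe InputType)
  -- Because InputType is a newtype over Text, values the SDK does not yet
  -- know about remain representable:
  print (InputType' "SOME_FUTURE_TYPE")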

LanguageCode

newtype LanguageCode Source #

Constructors

LanguageCode' 

Bundled Patterns

pattern LanguageCode_Af_ZA :: LanguageCode 
pattern LanguageCode_Ar_AE :: LanguageCode 
pattern LanguageCode_Ar_SA :: LanguageCode 
pattern LanguageCode_Da_DK :: LanguageCode 
pattern LanguageCode_De_CH :: LanguageCode 
pattern LanguageCode_De_DE :: LanguageCode 
pattern LanguageCode_En_AB :: LanguageCode 
pattern LanguageCode_En_AU :: LanguageCode 
pattern LanguageCode_En_GB :: LanguageCode 
pattern LanguageCode_En_IE :: LanguageCode 
pattern LanguageCode_En_IN :: LanguageCode 
pattern LanguageCode_En_NZ :: LanguageCode 
pattern LanguageCode_En_US :: LanguageCode 
pattern LanguageCode_En_WL :: LanguageCode 
pattern LanguageCode_En_ZA :: LanguageCode 
pattern LanguageCode_Es_ES :: LanguageCode 
pattern LanguageCode_Es_US :: LanguageCode 
pattern LanguageCode_Fa_IR :: LanguageCode 
pattern LanguageCode_Fr_CA :: LanguageCode 
pattern LanguageCode_Fr_FR :: LanguageCode 
pattern LanguageCode_He_IL :: LanguageCode 
pattern LanguageCode_Hi_IN :: LanguageCode 
pattern LanguageCode_Id_ID :: LanguageCode 
pattern LanguageCode_It_IT :: LanguageCode 
pattern LanguageCode_Ja_JP :: LanguageCode 
pattern LanguageCode_Ko_KR :: LanguageCode 
pattern LanguageCode_Ms_MY :: LanguageCode 
pattern LanguageCode_Nl_NL :: LanguageCode 
pattern LanguageCode_Pt_BR :: LanguageCode 
pattern LanguageCode_Pt_PT :: LanguageCode 
pattern LanguageCode_Ru_RU :: LanguageCode 
pattern LanguageCode_Sv_SE :: LanguageCode 
pattern LanguageCode_Ta_IN :: LanguageCode 
pattern LanguageCode_Te_IN :: LanguageCode 
pattern LanguageCode_Th_TH :: LanguageCode 
pattern LanguageCode_Tr_TR :: LanguageCode 
pattern LanguageCode_Vi_VN :: LanguageCode 
pattern LanguageCode_Zh_CN :: LanguageCode 
pattern LanguageCode_Zh_TW :: LanguageCode 

Instances

Instances details
FromJSON LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

FromJSONKey LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToJSON LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToJSONKey LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToByteString LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToHeader LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToLog LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToQuery LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

FromText LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToText LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Methods

toText :: LanguageCode -> Text #

FromXML LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

ToXML LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Methods

toXML :: LanguageCode -> XML #

Generic LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Associated Types

type Rep LanguageCode :: Type -> Type #

Read LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Show LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

NFData LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Methods

rnf :: LanguageCode -> () #

Eq LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Ord LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

Hashable LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

type Rep LanguageCode Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCode

type Rep LanguageCode = D1 ('MetaData "LanguageCode" "Amazonka.Transcribe.Types.LanguageCode" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "LanguageCode'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromLanguageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))
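
A sketch of round-tripping language codes through the ToText/FromText instances listed above, assuming the helpers are importable from Amazonka.Data.Text as in amazonka-core and that the wire form of LanguageCode_En_US is "en-US":

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Data.Text (fromText, toText)
import Amazonka.Transcribe.Types (LanguageCode (..))

demo :: IO ()
demo = do
  -- Render a bundled pattern to its wire text.
  print (toText LanguageCode_En_US)                        -- expected: "en-US"
  -- Parse text back into a LanguageCode; unknown codes still succeed
  -- because the type is an open newtype over Text.
  print (fromText "fr-CA" :: Either String LanguageCode)
  print (fromText "xx-XX" :: Either String LanguageCode)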

MediaFormat

newtype MediaFormat Source #

Constructors

MediaFormat' 

Instances

Instances details
FromJSON MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

FromJSONKey MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToJSON MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToJSONKey MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToByteString MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToHeader MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToLog MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToQuery MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

FromText MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToText MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Methods

toText :: MediaFormat -> Text #

FromXML MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

ToXML MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Methods

toXML :: MediaFormat -> XML #

Generic MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Associated Types

type Rep MediaFormat :: Type -> Type #

Read MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Show MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

NFData MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Methods

rnf :: MediaFormat -> () #

Eq MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Ord MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

Hashable MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

type Rep MediaFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MediaFormat

type Rep MediaFormat = D1 ('MetaData "MediaFormat" "Amazonka.Transcribe.Types.MediaFormat" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "MediaFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMediaFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MedicalContentIdentificationType

newtype MedicalContentIdentificationType Source #

Instances

Instances details
FromJSON MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

FromJSONKey MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToJSON MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToJSONKey MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToByteString MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToHeader MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToLog MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToQuery MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

FromText MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToText MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

FromXML MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

ToXML MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Generic MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Associated Types

type Rep MedicalContentIdentificationType :: Type -> Type #

Read MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Show MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

NFData MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Eq MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Ord MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

Hashable MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

type Rep MedicalContentIdentificationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalContentIdentificationType

type Rep MedicalContentIdentificationType = D1 ('MetaData "MedicalContentIdentificationType" "Amazonka.Transcribe.Types.MedicalContentIdentificationType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "MedicalContentIdentificationType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMedicalContentIdentificationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ModelStatus

newtype ModelStatus Source #

Constructors

ModelStatus' 

Instances

Instances details
FromJSON ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

FromJSONKey ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToJSON ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToJSONKey ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToByteString ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToHeader ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToLog ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToQuery ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

FromText ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToText ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Methods

toText :: ModelStatus -> Text #

FromXML ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

ToXML ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Methods

toXML :: ModelStatus -> XML #

Generic ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Associated Types

type Rep ModelStatus :: Type -> Type #

Read ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Show ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

NFData ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Methods

rnf :: ModelStatus -> () #

Eq ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Ord ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

Hashable ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

type Rep ModelStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelStatus

type Rep ModelStatus = D1 ('MetaData "ModelStatus" "Amazonka.Transcribe.Types.ModelStatus" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "ModelStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromModelStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

OutputLocationType

newtype OutputLocationType Source #

Instances

Instances details
FromJSON OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

FromJSONKey OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToJSON OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToJSONKey OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToByteString OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToHeader OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToLog OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToQuery OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

FromText OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToText OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

FromXML OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

ToXML OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Generic OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Associated Types

type Rep OutputLocationType :: Type -> Type #

Read OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Show OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

NFData OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Methods

rnf :: OutputLocationType -> () #

Eq OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Ord OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

Hashable OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

type Rep OutputLocationType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.OutputLocationType

type Rep OutputLocationType = D1 ('MetaData "OutputLocationType" "Amazonka.Transcribe.Types.OutputLocationType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "OutputLocationType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromOutputLocationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

ParticipantRole

newtype ParticipantRole Source #

Constructors

ParticipantRole' 

Instances

Instances details
FromJSON ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

FromJSONKey ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToJSON ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToJSONKey ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToByteString ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToHeader ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToLog ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToQuery ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

FromText ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToText ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

FromXML ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

ToXML ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Methods

toXML :: ParticipantRole -> XML #

Generic ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Associated Types

type Rep ParticipantRole :: Type -> Type #

Read ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Show ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

NFData ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Methods

rnf :: ParticipantRole -> () #

Eq ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Ord ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

Hashable ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

type Rep ParticipantRole Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ParticipantRole

type Rep ParticipantRole = D1 ('MetaData "ParticipantRole" "Amazonka.Transcribe.Types.ParticipantRole" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "ParticipantRole'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromParticipantRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

PiiEntityType

newtype PiiEntityType Source #

Constructors

PiiEntityType' 

Instances

Instances details
FromJSON PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

FromJSONKey PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToJSON PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToJSONKey PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToByteString PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToHeader PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToLog PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToQuery PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

FromText PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToText PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Methods

toText :: PiiEntityType -> Text #

FromXML PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

ToXML PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Methods

toXML :: PiiEntityType -> XML #

Generic PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Associated Types

type Rep PiiEntityType :: Type -> Type #

Read PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Show PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

NFData PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Methods

rnf :: PiiEntityType -> () #

Eq PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Ord PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

Hashable PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

type Rep PiiEntityType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.PiiEntityType

type Rep PiiEntityType = D1 ('MetaData "PiiEntityType" "Amazonka.Transcribe.Types.PiiEntityType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "PiiEntityType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromPiiEntityType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

RedactionOutput

newtype RedactionOutput Source #

Constructors

RedactionOutput' 

Instances

Instances details
FromJSON RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

FromJSONKey RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToJSON RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToJSONKey RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToByteString RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToHeader RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToLog RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToQuery RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

FromText RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToText RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

FromXML RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

ToXML RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Methods

toXML :: RedactionOutput -> XML #

Generic RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Associated Types

type Rep RedactionOutput :: Type -> Type #

Read RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Show RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

NFData RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Methods

rnf :: RedactionOutput -> () #

Eq RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Ord RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

Hashable RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

type Rep RedactionOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionOutput

type Rep RedactionOutput = D1 ('MetaData "RedactionOutput" "Amazonka.Transcribe.Types.RedactionOutput" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "RedactionOutput'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromRedactionOutput") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

RedactionType

newtype RedactionType Source #

Constructors

RedactionType' 

Bundled Patterns

pattern RedactionType_PII :: RedactionType 

Instances

Instances details
FromJSON RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

FromJSONKey RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToJSON RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToJSONKey RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToByteString RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToHeader RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToLog RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToQuery RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

FromText RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToText RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Methods

toText :: RedactionType -> Text #

FromXML RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

ToXML RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Methods

toXML :: RedactionType -> XML #

Generic RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Associated Types

type Rep RedactionType :: Type -> Type #

Read RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Show RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

NFData RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Methods

rnf :: RedactionType -> () #

Eq RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Ord RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

Hashable RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

type Rep RedactionType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RedactionType

type Rep RedactionType = D1 ('MetaData "RedactionType" "Amazonka.Transcribe.Types.RedactionType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "RedactionType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromRedactionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SentimentValue

newtype SentimentValue Source #

Constructors

SentimentValue' 

Instances

Instances details
FromJSON SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

FromJSONKey SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToJSON SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToJSONKey SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToByteString SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToHeader SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToLog SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToQuery SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

FromText SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToText SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

FromXML SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

ToXML SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Methods

toXML :: SentimentValue -> XML #

Generic SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Associated Types

type Rep SentimentValue :: Type -> Type #

Read SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Show SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

NFData SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Methods

rnf :: SentimentValue -> () #

Eq SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Ord SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

Hashable SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

type Rep SentimentValue Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentValue

type Rep SentimentValue = D1 ('MetaData "SentimentValue" "Amazonka.Transcribe.Types.SentimentValue" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "SentimentValue'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSentimentValue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Specialty

newtype Specialty Source #

Constructors

Specialty' 

Fields

fromSpecialty :: Text

Bundled Patterns

pattern Specialty_PRIMARYCARE :: Specialty 

Instances

Instances details
FromJSON Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

FromJSONKey Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToJSON Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToJSONKey Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToByteString Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Methods

toBS :: Specialty -> ByteString #

ToHeader Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToLog Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToQuery Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

FromText Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToText Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Methods

toText :: Specialty -> Text #

FromXML Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

ToXML Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Methods

toXML :: Specialty -> XML #

Generic Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Associated Types

type Rep Specialty :: Type -> Type #

Read Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Show Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

NFData Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Methods

rnf :: Specialty -> () #

Eq Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Ord Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

Hashable Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

type Rep Specialty Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Specialty

type Rep Specialty = D1 ('MetaData "Specialty" "Amazonka.Transcribe.Types.Specialty" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "Specialty'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSpecialty") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SubtitleFormat

newtype SubtitleFormat Source #

Constructors

SubtitleFormat' 

Bundled Patterns

pattern SubtitleFormat_Srt :: SubtitleFormat 
pattern SubtitleFormat_Vtt :: SubtitleFormat 

Instances

Instances details
FromJSON SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

FromJSONKey SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToJSON SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToJSONKey SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToByteString SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToHeader SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToLog SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToQuery SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

FromText SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToText SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

FromXML SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

ToXML SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Methods

toXML :: SubtitleFormat -> XML #

Generic SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Associated Types

type Rep SubtitleFormat :: Type -> Type #

Read SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Show SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

NFData SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Methods

rnf :: SubtitleFormat -> () #

Eq SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Ord SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

Hashable SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

type Rep SubtitleFormat Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitleFormat

type Rep SubtitleFormat = D1 ('MetaData "SubtitleFormat" "Amazonka.Transcribe.Types.SubtitleFormat" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "SubtitleFormat'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSubtitleFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TranscriptFilterType

newtype TranscriptFilterType Source #

Instances

Instances details
FromJSON TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

FromJSONKey TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToJSON TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToJSONKey TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToByteString TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToHeader TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToLog TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToQuery TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

FromText TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToText TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

FromXML TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

ToXML TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Generic TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Associated Types

type Rep TranscriptFilterType :: Type -> Type #

Read TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Show TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

NFData TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Methods

rnf :: TranscriptFilterType -> () #

Eq TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Ord TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

Hashable TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

type Rep TranscriptFilterType Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilterType

type Rep TranscriptFilterType = D1 ('MetaData "TranscriptFilterType" "Amazonka.Transcribe.Types.TranscriptFilterType" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "TranscriptFilterType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTranscriptFilterType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TranscriptionJobStatus

newtype TranscriptionJobStatus Source #

Instances

Instances details
FromJSON TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

FromJSONKey TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToJSON TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToJSONKey TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToByteString TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToHeader TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToLog TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToQuery TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

FromText TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToText TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

FromXML TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

ToXML TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Generic TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Associated Types

type Rep TranscriptionJobStatus :: Type -> Type #

Read TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Show TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

NFData TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Methods

rnf :: TranscriptionJobStatus -> () #

Eq TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Ord TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

Hashable TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

type Rep TranscriptionJobStatus Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobStatus

type Rep TranscriptionJobStatus = D1 ('MetaData "TranscriptionJobStatus" "Amazonka.Transcribe.Types.TranscriptionJobStatus" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "TranscriptionJobStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTranscriptionJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

Type

newtype Type Source #

Constructors

Type' 

Fields

fromType :: Text

Bundled Patterns

pattern Type_CONVERSATION :: Type 
pattern Type_DICTATION :: Type 

Instances

Instances details
FromJSON Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

FromJSONKey Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

ToJSON Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

ToJSONKey Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

ToByteString Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

toBS :: Type -> ByteString #

ToHeader Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

toHeader :: HeaderName -> Type -> [Header] #

ToLog Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

ToQuery Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

toQuery :: Type -> QueryString #

FromText Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

ToText Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

toText :: Type -> Text #

FromXML Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

parseXML :: [Node] -> Either String Type #

ToXML Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

toXML :: Type -> XML #

Generic Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Associated Types

type Rep Type :: Type -> Type #

Methods

from :: Type -> Rep Type x #

to :: Rep Type x -> Type #

Read Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Show Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

showsPrec :: Int -> Type -> ShowS #

show :: Type -> String #

showList :: [Type] -> ShowS #

NFData Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

rnf :: Type -> () #

Eq Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

(==) :: Type -> Type -> Bool #

(/=) :: Type -> Type -> Bool #

Ord Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

compare :: Type -> Type -> Ordering #

(<) :: Type -> Type -> Bool #

(<=) :: Type -> Type -> Bool #

(>) :: Type -> Type -> Bool #

(>=) :: Type -> Type -> Bool #

max :: Type -> Type -> Type #

min :: Type -> Type -> Type #

Hashable Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

Methods

hashWithSalt :: Int -> Type -> Int #

hash :: Type -> Int #

type Rep Type Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Type

type Rep Type = D1 ('MetaData "Type" "Amazonka.Transcribe.Types.Type" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "Type'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

VocabularyFilterMethod

newtype VocabularyFilterMethod Source #

Instances

Instances details
FromJSON VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

FromJSONKey VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToJSON VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToJSONKey VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToByteString VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToHeader VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToLog VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToQuery VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

FromText VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToText VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

FromXML VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

ToXML VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Generic VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Associated Types

type Rep VocabularyFilterMethod :: Type -> Type #

Read VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Show VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

NFData VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Methods

rnf :: VocabularyFilterMethod -> () #

Eq VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Ord VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

Hashable VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

type Rep VocabularyFilterMethod Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterMethod

type Rep VocabularyFilterMethod = D1 ('MetaData "VocabularyFilterMethod" "Amazonka.Transcribe.Types.VocabularyFilterMethod" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "VocabularyFilterMethod'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVocabularyFilterMethod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

VocabularyState

newtype VocabularyState Source #

Constructors

VocabularyState' 

Instances

Instances details
FromJSON VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

FromJSONKey VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToJSON VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToJSONKey VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToByteString VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToHeader VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToLog VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToQuery VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

FromText VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToText VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

FromXML VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

ToXML VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Methods

toXML :: VocabularyState -> XML #

Generic VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Associated Types

type Rep VocabularyState :: Type -> Type #

Read VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Show VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

NFData VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Methods

rnf :: VocabularyState -> () #

Eq VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Ord VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

Hashable VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

type Rep VocabularyState Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyState

type Rep VocabularyState = D1 ('MetaData "VocabularyState" "Amazonka.Transcribe.Types.VocabularyState" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'True) (C1 ('MetaCons "VocabularyState'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromVocabularyState") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

AbsoluteTimeRange

data AbsoluteTimeRange Source #

A time range, in milliseconds, between two points in your media file.

You can use StartTime and EndTime to search a custom segment. For example, setting StartTime to 10000 and EndTime to 50000 only searches for your specified criteria in the audio contained between the 10,000 millisecond mark and the 50,000 millisecond mark of your media file. You must use StartTime and EndTime as a set; that is, if you include one, you must include both.

You can also use First to search from the start of the audio until the time that you specify, or Last to search from the time that you specify until the end of the audio. For example, setting First to 50000 only searches for your specified criteria in the audio contained between the start of the media file and the 50,000 millisecond mark. You can use First and Last independently of each other.

If you prefer to use percentage values instead of milliseconds, see RelativeTimeRange.

See: newAbsoluteTimeRange smart constructor.
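
A minimal usage sketch (not part of the generated documentation), assuming the Control.Lens operators: build a range that restricts the search to the audio between the 10,000 and 50,000 millisecond marks, mirroring the example above.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Search only between the 10,000 ms and 50,000 ms marks.
searchWindow :: AbsoluteTimeRange
searchWindow =
  newAbsoluteTimeRange
    & absoluteTimeRange_startTime ?~ 10000
    & absoluteTimeRange_endTime ?~ 50000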

Constructors

AbsoluteTimeRange' 

Fields

  • endTime :: Maybe Natural

    The time, in milliseconds, when Amazon Transcribe stops searching for the specified criteria in your audio. If you include EndTime in your request, you must also include StartTime.

  • first :: Maybe Natural

    The time, in milliseconds, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

  • last :: Maybe Natural

    The time, in milliseconds, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

  • startTime :: Maybe Natural

    The time, in milliseconds, when Amazon Transcribe starts searching for the specified criteria in your audio. If you include StartTime in your request, you must also include EndTime.

Instances

Instances details
FromJSON AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

ToJSON AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

Generic AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

Associated Types

type Rep AbsoluteTimeRange :: Type -> Type #

Read AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

Show AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

NFData AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

Methods

rnf :: AbsoluteTimeRange -> () #

Eq AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

Hashable AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

type Rep AbsoluteTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.AbsoluteTimeRange

type Rep AbsoluteTimeRange = D1 ('MetaData "AbsoluteTimeRange" "Amazonka.Transcribe.Types.AbsoluteTimeRange" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "AbsoluteTimeRange'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "endTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "first") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "last") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newAbsoluteTimeRange :: AbsoluteTimeRange Source #

Create a value of AbsoluteTimeRange with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:endTime:AbsoluteTimeRange', absoluteTimeRange_endTime - The time, in milliseconds, when Amazon Transcribe stops searching for the specified criteria in your audio. If you include EndTime in your request, you must also include StartTime.

$sel:first:AbsoluteTimeRange', absoluteTimeRange_first - The time, in milliseconds, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

$sel:last:AbsoluteTimeRange', absoluteTimeRange_last - The time, in milliseconds, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

$sel:startTime:AbsoluteTimeRange', absoluteTimeRange_startTime - The time, in milliseconds, when Amazon Transcribe starts searching for the specified criteria in your audio. If you include StartTime in your request, you must also include EndTime.

absoluteTimeRange_endTime :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

The time, in milliseconds, when Amazon Transcribe stops searching for the specified criteria in your audio. If you include EndTime in your request, you must also include StartTime.

absoluteTimeRange_first :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

The time, in milliseconds, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

absoluteTimeRange_last :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

The time, in milliseconds, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

absoluteTimeRange_startTime :: Lens' AbsoluteTimeRange (Maybe Natural) Source #

The time, in milliseconds, when Amazon Transcribe starts searching for the specified criteria in your audio. If you include StartTime in your request, you must also include EndTime.

CallAnalyticsJob

data CallAnalyticsJob Source #

Provides detailed information about a Call Analytics job.

To view the job's status, refer to CallAnalyticsJobStatus. If the status is COMPLETED, the job is finished. You can find your completed transcript at the URI specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

If you enabled personally identifiable information (PII) redaction, the redacted transcript appears at the location specified in RedactedTranscriptFileUri.

If you chose to redact the audio in your media file, you can find your redacted media file at the location specified in the RedactedMediaFileUri field of your response.

See: newCallAnalyticsJob smart constructor.
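
As an illustrative sketch only (not part of the SDK), a helper that summarises a job's outcome using the lenses documented below; CallAnalyticsJobStatus_COMPLETED and CallAnalyticsJobStatus_FAILED are the assumed generated pattern names for the COMPLETED and FAILED enum values.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Text (Text)

-- Report whether the job finished, failed (and why), or is still running.
describeOutcome :: CallAnalyticsJob -> Text
describeOutcome job =
  case job ^. callAnalyticsJob_callAnalyticsJobStatus of
    Just CallAnalyticsJobStatus_COMPLETED -> "finished; see TranscriptFileUri"
    Just CallAnalyticsJobStatus_FAILED ->
      maybe "failed" ("failed: " <>) (job ^. callAnalyticsJob_failureReason)
    _ -> "still queued or in progress"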

Constructors

CallAnalyticsJob' 

Fields

  • callAnalyticsJobName :: Maybe Text

    The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • callAnalyticsJobStatus :: Maybe CallAnalyticsJobStatus

    Provides the status of the specified Call Analytics job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

  • channelDefinitions :: Maybe (NonEmpty ChannelDefinition)

    Indicates which speaker is on which channel.

  • completionTime :: Maybe POSIX

    The date and time the specified Call Analytics job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

  • creationTime :: Maybe POSIX

    The date and time the specified Call Analytics job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

  • dataAccessRoleArn :: Maybe Text

    The Amazon Resource Name (ARN) you included in your request.

  • failureReason :: Maybe Text

    If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job request failed.

    The FailureReason field contains one of the following values:

    • Unsupported media format.

      The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

    • The media format provided does not match the detected media format.

      The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

    • Invalid sample rate for audio file.

      The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

    • The sample rate provided does not match the detected sample rate.

      The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

    • Invalid file size: file size too large.

      The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

    • Invalid number of channels: number of channels too large.

      Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

  • identifiedLanguageScore :: Maybe Double

    The confidence score associated with the language identified in your media file.

    Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

  • languageCode :: Maybe LanguageCode

    The language code used to create your Call Analytics job. For a list of supported languages and their associated language codes, refer to the Supported languages table.

    If you don't know the language spoken in your media file, you can omit this field and let Amazon Transcribe automatically identify the language of your media. To improve the accuracy of language identification, you can include several language codes and Amazon Transcribe chooses the closest match for your transcription.

  • media :: Maybe Media

    Provides the Amazon S3 location of the media file you used in your Call Analytics request.

  • mediaFormat :: Maybe MediaFormat

    The format of the input media file.

  • mediaSampleRateHertz :: Maybe Natural

    The sample rate, in hertz, of the audio track in your input media file.

  • settings :: Maybe CallAnalyticsJobSettings

    Provides information on any additional settings that were included in your request. Additional settings include content redaction and language identification settings.

  • startTime :: Maybe POSIX

    The date and time the specified Call Analytics job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • transcript :: Maybe Transcript
     

Instances

Instances details
FromJSON CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

Generic CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

Associated Types

type Rep CallAnalyticsJob :: Type -> Type #

Read CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

Show CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

NFData CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

Methods

rnf :: CallAnalyticsJob -> () #

Eq CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

Hashable CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

type Rep CallAnalyticsJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJob

type Rep CallAnalyticsJob = D1 ('MetaData "CallAnalyticsJob" "Amazonka.Transcribe.Types.CallAnalyticsJob" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "CallAnalyticsJob'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "callAnalyticsJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "callAnalyticsJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CallAnalyticsJobStatus)) :*: S1 ('MetaSel ('Just "channelDefinitions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty ChannelDefinition))))) :*: ((S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "dataAccessRoleArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: (((S1 ('MetaSel ('Just "identifiedLanguageScore") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode))) :*: (S1 ('MetaSel ('Just "media") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Media)) :*: S1 ('MetaSel ('Just "mediaFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MediaFormat)))) :*: ((S1 ('MetaSel ('Just "mediaSampleRateHertz") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CallAnalyticsJobSettings))) :*: (S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "transcript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Transcript)))))))

newCallAnalyticsJob :: CallAnalyticsJob Source #

Create a value of CallAnalyticsJob with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:callAnalyticsJobName:CallAnalyticsJob', callAnalyticsJob_callAnalyticsJobName - The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:callAnalyticsJobStatus:CallAnalyticsJob', callAnalyticsJob_callAnalyticsJobStatus - Provides the status of the specified Call Analytics job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

$sel:channelDefinitions:CallAnalyticsJob', callAnalyticsJob_channelDefinitions - Indicates which speaker is on which channel.

$sel:completionTime:CallAnalyticsJob', callAnalyticsJob_completionTime - The date and time the specified Call Analytics job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:creationTime:CallAnalyticsJob', callAnalyticsJob_creationTime - The date and time the specified Call Analytics job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

$sel:dataAccessRoleArn:CallAnalyticsJob', callAnalyticsJob_dataAccessRoleArn - The Amazon Resource Name (ARN) you included in your request.

$sel:failureReason:CallAnalyticsJob', callAnalyticsJob_failureReason - If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

$sel:identifiedLanguageScore:CallAnalyticsJob', callAnalyticsJob_identifiedLanguageScore - The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

$sel:languageCode:CallAnalyticsJob', callAnalyticsJob_languageCode - The language code used to create your Call Analytics job. For a list of supported languages and their associated language codes, refer to the Supported languages table.

If you don't know the language spoken in your media file, you can omit this field and let Amazon Transcribe automatically identify the language of your media. To improve the accuracy of language identification, you can include several language codes and Amazon Transcribe chooses the closest match for your transcription.

$sel:media:CallAnalyticsJob', callAnalyticsJob_media - Provides the Amazon S3 location of the media file you used in your Call Analytics request.

$sel:mediaFormat:CallAnalyticsJob', callAnalyticsJob_mediaFormat - The format of the input media file.

$sel:mediaSampleRateHertz:CallAnalyticsJob', callAnalyticsJob_mediaSampleRateHertz - The sample rate, in hertz, of the audio track in your input media file.

$sel:settings:CallAnalyticsJob', callAnalyticsJob_settings - Provides information on any additional settings that were included in your request. Additional settings include content redaction and language identification settings.

$sel:startTime:CallAnalyticsJob', callAnalyticsJob_startTime - The date and time the specified Call Analytics job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:transcript:CallAnalyticsJob', callAnalyticsJob_transcript - Undocumented member.

callAnalyticsJob_callAnalyticsJobName :: Lens' CallAnalyticsJob (Maybe Text) Source #

The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

callAnalyticsJob_callAnalyticsJobStatus :: Lens' CallAnalyticsJob (Maybe CallAnalyticsJobStatus) Source #

Provides the status of the specified Call Analytics job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

callAnalyticsJob_channelDefinitions :: Lens' CallAnalyticsJob (Maybe (NonEmpty ChannelDefinition)) Source #

Indicates which speaker is on which channel.

callAnalyticsJob_completionTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

The date and time the specified Call Analytics job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

callAnalyticsJob_creationTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

The date and time the specified Call Analytics job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

callAnalyticsJob_dataAccessRoleArn :: Lens' CallAnalyticsJob (Maybe Text) Source #

The Amazon Resource Name (ARN) you included in your request.

callAnalyticsJob_failureReason :: Lens' CallAnalyticsJob (Maybe Text) Source #

If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

callAnalyticsJob_identifiedLanguageScore :: Lens' CallAnalyticsJob (Maybe Double) Source #

The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

callAnalyticsJob_languageCode :: Lens' CallAnalyticsJob (Maybe LanguageCode) Source #

The language code used to create your Call Analytics job. For a list of supported languages and their associated language codes, refer to the Supported languages table.

If you don't know the language spoken in your media file, you can omit this field and let Amazon Transcribe automatically identify the language of your media. To improve the accuracy of language identification, you can include several language codes and Amazon Transcribe chooses the closest match for your transcription.

callAnalyticsJob_media :: Lens' CallAnalyticsJob (Maybe Media) Source #

Provides the Amazon S3 location of the media file you used in your Call Analytics request.

callAnalyticsJob_mediaFormat :: Lens' CallAnalyticsJob (Maybe MediaFormat) Source #

The format of the input media file.

callAnalyticsJob_mediaSampleRateHertz :: Lens' CallAnalyticsJob (Maybe Natural) Source #

The sample rate, in hertz, of the audio track in your input media file.

callAnalyticsJob_settings :: Lens' CallAnalyticsJob (Maybe CallAnalyticsJobSettings) Source #

Provides information on any additional settings that were included in your request. Additional settings include content redaction and language identification settings.

callAnalyticsJob_startTime :: Lens' CallAnalyticsJob (Maybe UTCTime) Source #

The date and time the specified Call Analytics job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

callAnalyticsJob_transcript :: Lens' CallAnalyticsJob (Maybe Transcript) Source #

Undocumented member.

CallAnalyticsJobSettings

data CallAnalyticsJobSettings Source #

Provides additional optional settings for your request, including content redaction and automatic language identification. It also allows you to apply custom language models, custom vocabulary filters, and custom vocabularies.

See: newCallAnalyticsJobSettings smart constructor.

Constructors

CallAnalyticsJobSettings' 

Fields

  • contentRedaction :: Maybe ContentRedaction
     
  • languageIdSettings :: Maybe (HashMap LanguageCode LanguageIdSettings)

    If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

    LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.

    It's recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.

    If you want to include a custom language model, custom vocabulary, or custom vocabulary filter with your request but do not want to use automatic language identification, specify the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters directly instead.

    For a list of languages supported with Call Analytics, refer to Supported languages and language-specific features.

  • languageModelName :: Maybe Text

    The name of the custom language model you want to use when processing your Call Analytics job. Note that custom language model names are case sensitive.

    The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

  • languageOptions :: Maybe (NonEmpty LanguageCode)

    You can specify two or more language codes that represent the languages you think may be present in your media. Including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.

    Including language options can improve the accuracy of language identification.

    For a list of languages supported with Call Analytics, refer to the Supported languages table.

    To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

  • vocabularyFilterMethod :: Maybe VocabularyFilterMethod

    Specify how you want your custom vocabulary filter applied to your transcript.

    To replace words with ***, choose mask.

    To delete words, choose remove.

    To flag words without changing them, choose tag.

  • vocabularyFilterName :: Maybe Text

    The name of the custom vocabulary filter you want to include in your Call Analytics transcription request. Custom vocabulary filter names are case sensitive.

    Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

  • vocabularyName :: Maybe Text

    The name of the custom vocabulary you want to include in your Call Analytics transcription request. Custom vocabulary names are case sensitive.

Instances

Instances details
FromJSON CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

ToJSON CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

Generic CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

Associated Types

type Rep CallAnalyticsJobSettings :: Type -> Type #

Read CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

Show CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

NFData CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

Eq CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

Hashable CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

type Rep CallAnalyticsJobSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSettings

type Rep CallAnalyticsJobSettings = D1 ('MetaData "CallAnalyticsJobSettings" "Amazonka.Transcribe.Types.CallAnalyticsJobSettings" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "CallAnalyticsJobSettings'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "contentRedaction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ContentRedaction)) :*: (S1 ('MetaSel ('Just "languageIdSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap LanguageCode LanguageIdSettings))) :*: S1 ('MetaSel ('Just "languageModelName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "languageOptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty LanguageCode))) :*: S1 ('MetaSel ('Just "vocabularyFilterMethod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VocabularyFilterMethod))) :*: (S1 ('MetaSel ('Just "vocabularyFilterName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "vocabularyName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newCallAnalyticsJobSettings :: CallAnalyticsJobSettings Source #

Create a value of CallAnalyticsJobSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:contentRedaction:CallAnalyticsJobSettings', callAnalyticsJobSettings_contentRedaction - Undocumented member.

$sel:languageIdSettings:CallAnalyticsJobSettings', callAnalyticsJobSettings_languageIdSettings - If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.

It's recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.

If you want to include a custom language model, custom vocabulary, or custom vocabulary filter with your request but do not want to use automatic language identification, specify the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters directly instead.

For a list of languages supported with Call Analytics, refer to Supported languages and language-specific features.

CallAnalyticsJobSettings, callAnalyticsJobSettings_languageModelName - The name of the custom language model you want to use when processing your Call Analytics job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

$sel:languageOptions:CallAnalyticsJobSettings', callAnalyticsJobSettings_languageOptions - You can specify two or more language codes that represent the languages you think may be present in your media. Including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.

Including language options can improve the accuracy of language identification.

For a list of languages supported with Call Analytics, refer to the Supported languages table.

To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

$sel:vocabularyFilterMethod:CallAnalyticsJobSettings', callAnalyticsJobSettings_vocabularyFilterMethod - Specify how you want your custom vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

CallAnalyticsJobSettings, callAnalyticsJobSettings_vocabularyFilterName - The name of the custom vocabulary filter you want to include in your Call Analytics transcription request. Custom vocabulary filter names are case sensitive.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

CallAnalyticsJobSettings, callAnalyticsJobSettings_vocabularyName - The name of the custom vocabulary you want to include in your Call Analytics transcription request. Custom vocabulary names are case sensitive.

callAnalyticsJobSettings_contentRedaction :: Lens' CallAnalyticsJobSettings (Maybe ContentRedaction) Source #

Undocumented member.

callAnalyticsJobSettings_languageIdSettings :: Lens' CallAnalyticsJobSettings (Maybe (HashMap LanguageCode LanguageIdSettings)) Source #

If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.

It's recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.

If you want to include a custom language model, custom vocabulary, or custom vocabulary filter with your request but do not want to use automatic language identification, specify the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters directly instead.

For a list of languages supported with Call Analytics, refer to Supported languages and language-specific features.

callAnalyticsJobSettings_languageModelName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of the custom language model you want to use when processing your Call Analytics job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

callAnalyticsJobSettings_languageOptions :: Lens' CallAnalyticsJobSettings (Maybe (NonEmpty LanguageCode)) Source #

You can specify two or more language codes that represent the languages you think may be present in your media. Including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.

Including language options can improve the accuracy of language identification.

For a list of languages supported with Call Analytics, refer to the Supported languages table.

To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.
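
For illustration, a hedged sketch of supplying two candidate dialects for language identification; LanguageCode_En_US and LanguageCode_En_AU are the assumed generated pattern names for the en-US and en-AU codes.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))
import Data.List.NonEmpty (NonEmpty (..))

-- Narrow automatic language identification to US and Australian English.
withEnglishDialects :: CallAnalyticsJobSettings -> CallAnalyticsJobSettings
withEnglishDialects settings =
  settings
    & callAnalyticsJobSettings_languageOptions ?~ (LanguageCode_En_US :| [LanguageCode_En_AU])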

callAnalyticsJobSettings_vocabularyFilterMethod :: Lens' CallAnalyticsJobSettings (Maybe VocabularyFilterMethod) Source #

Specify how you want your custom vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

callAnalyticsJobSettings_vocabularyFilterName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of the custom vocabulary filter you want to include in your Call Analytics transcription request. Custom vocabulary filter names are case sensitive.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

callAnalyticsJobSettings_vocabularyName :: Lens' CallAnalyticsJobSettings (Maybe Text) Source #

The name of the custom vocabulary you want to include in your Call Analytics transcription request. Custom vocabulary names are case sensitive.
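
Putting the settings above together, a minimal sketch; the vocabulary and filter names are hypothetical, and VocabularyFilterMethod_Mask is the assumed generated pattern name for the mask value.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Apply a custom vocabulary and mask any words caught by a vocabulary filter.
maskedSettings :: CallAnalyticsJobSettings
maskedSettings =
  newCallAnalyticsJobSettings
    & callAnalyticsJobSettings_vocabularyName ?~ "product-names"
    & callAnalyticsJobSettings_vocabularyFilterName ?~ "profanity-filter"
    & callAnalyticsJobSettings_vocabularyFilterMethod ?~ VocabularyFilterMethod_Mask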

CallAnalyticsJobSummary

data CallAnalyticsJobSummary Source #

Provides detailed information about a specific Call Analytics job.

See: newCallAnalyticsJobSummary smart constructor.

Constructors

CallAnalyticsJobSummary' 

Fields

  • callAnalyticsJobName :: Maybe Text

    The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • callAnalyticsJobStatus :: Maybe CallAnalyticsJobStatus

    Provides the status of your Call Analytics job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

  • completionTime :: Maybe POSIX

    The date and time the specified Call Analytics job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

  • creationTime :: Maybe POSIX

    The date and time the specified Call Analytics job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job failed. See also: Common Errors.

  • languageCode :: Maybe LanguageCode

    The language code used to create your Call Analytics transcription.

  • startTime :: Maybe POSIX

    The date and time your Call Analytics job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

Instances

Instances details
FromJSON CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

Generic CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

Associated Types

type Rep CallAnalyticsJobSummary :: Type -> Type #

Read CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

Show CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

NFData CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

Methods

rnf :: CallAnalyticsJobSummary -> () #

Eq CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

Hashable CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

type Rep CallAnalyticsJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CallAnalyticsJobSummary

type Rep CallAnalyticsJobSummary = D1 ('MetaData "CallAnalyticsJobSummary" "Amazonka.Transcribe.Types.CallAnalyticsJobSummary" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "CallAnalyticsJobSummary'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "callAnalyticsJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "callAnalyticsJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe CallAnalyticsJobStatus)) :*: S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))) :*: ((S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))))))

newCallAnalyticsJobSummary :: CallAnalyticsJobSummary Source #

Create a value of CallAnalyticsJobSummary with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:callAnalyticsJobName:CallAnalyticsJobSummary', callAnalyticsJobSummary_callAnalyticsJobName - The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:callAnalyticsJobStatus:CallAnalyticsJobSummary', callAnalyticsJobSummary_callAnalyticsJobStatus - Provides the status of your Call Analytics job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

$sel:completionTime:CallAnalyticsJobSummary', callAnalyticsJobSummary_completionTime - The date and time the specified Call Analytics job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:creationTime:CallAnalyticsJobSummary', callAnalyticsJobSummary_creationTime - The date and time the specified Call Analytics job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:CallAnalyticsJobSummary', callAnalyticsJobSummary_failureReason - If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job failed. See also: Common Errors.

$sel:languageCode:CallAnalyticsJobSummary', callAnalyticsJobSummary_languageCode - The language code used to create your Call Analytics transcription.

$sel:startTime:CallAnalyticsJobSummary', callAnalyticsJobSummary_startTime - The date and time your Call Analytics job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

callAnalyticsJobSummary_callAnalyticsJobName :: Lens' CallAnalyticsJobSummary (Maybe Text) Source #

The name of the Call Analytics job. Job names are case sensitive and must be unique within an Amazon Web Services account.

callAnalyticsJobSummary_callAnalyticsJobStatus :: Lens' CallAnalyticsJobSummary (Maybe CallAnalyticsJobStatus) Source #

Provides the status of your Call Analytics job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

callAnalyticsJobSummary_completionTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

The date and time the specified Call Analytics job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

callAnalyticsJobSummary_creationTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

The date and time the specified Call Analytics job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request submitted at 12:32 PM UTC-7 on May 4, 2022.

callAnalyticsJobSummary_failureReason :: Lens' CallAnalyticsJobSummary (Maybe Text) Source #

If CallAnalyticsJobStatus is FAILED, FailureReason contains information about why the Call Analytics job failed. See also: Common Errors.

callAnalyticsJobSummary_languageCode :: Lens' CallAnalyticsJobSummary (Maybe LanguageCode) Source #

The language code used to create your Call Analytics transcription.

callAnalyticsJobSummary_startTime :: Lens' CallAnalyticsJobSummary (Maybe UTCTime) Source #

The date and time your Call Analytics job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

CategoryProperties

data CategoryProperties Source #

Provides you with the properties of the Call Analytics category you specified in your request. This includes the list of rules that define the specified category.

See: newCategoryProperties smart constructor.

Constructors

CategoryProperties' 

Fields

  • categoryName :: Maybe Text

    The name of the Call Analytics category. Category names are case sensitive and must be unique within an Amazon Web Services account.

  • createTime :: Maybe POSIX

    The date and time the specified Call Analytics category was created.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

  • inputType :: Maybe InputType

    The input type associated with the specified category. POST_CALL refers to a category that is applied to batch transcriptions; REAL_TIME refers to a category that is applied to streaming transcriptions.

  • lastUpdateTime :: Maybe POSIX

    The date and time the specified Call Analytics category was last updated.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-05T12:45:32.691000-07:00 represents 12:45 PM UTC-7 on May 5, 2022.

  • rules :: Maybe (NonEmpty Rule)

    The rules used to define a Call Analytics category. Each category can have between 1 and 20 rules.

Instances

Instances details
FromJSON CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

Generic CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

Associated Types

type Rep CategoryProperties :: Type -> Type #

Read CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

Show CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

NFData CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

Methods

rnf :: CategoryProperties -> () #

Eq CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

Hashable CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

type Rep CategoryProperties Source # 
Instance details

Defined in Amazonka.Transcribe.Types.CategoryProperties

type Rep CategoryProperties = D1 ('MetaData "CategoryProperties" "Amazonka.Transcribe.Types.CategoryProperties" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "CategoryProperties'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "categoryName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "createTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "inputType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InputType)) :*: (S1 ('MetaSel ('Just "lastUpdateTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "rules") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty Rule)))))))

newCategoryProperties :: CategoryProperties Source #

Create a value of CategoryProperties with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:categoryName:CategoryProperties', categoryProperties_categoryName - The name of the Call Analytics category. Category names are case sensitive and must be unique within an Amazon Web Services account.

$sel:createTime:CategoryProperties', categoryProperties_createTime - The date and time the specified Call Analytics category was created.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

$sel:inputType:CategoryProperties', categoryProperties_inputType - The input type associated with the specified category. POST_CALL refers to a category that is applied to batch transcriptions; REAL_TIME refers to a category that is applied to streaming transcriptions.

$sel:lastUpdateTime:CategoryProperties', categoryProperties_lastUpdateTime - The date and time the specified Call Analytics category was last updated.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-05T12:45:32.691000-07:00 represents 12:45 PM UTC-7 on May 5, 2022.

$sel:rules:CategoryProperties', categoryProperties_rules - The rules used to define a Call Analytics category. Each category can have between 1 and 20 rules.

categoryProperties_categoryName :: Lens' CategoryProperties (Maybe Text) Source #

The name of the Call Analytics category. Category names are case sensitive and must be unique within an Amazon Web Services account.

categoryProperties_createTime :: Lens' CategoryProperties (Maybe UTCTime) Source #

The date and time the specified Call Analytics category was created.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

categoryProperties_inputType :: Lens' CategoryProperties (Maybe InputType) Source #

The input type associated with the specified category. POST_CALL refers to a category that is applied to batch transcriptions; REAL_TIME refers to a category that is applied to streaming transcriptions.

categoryProperties_lastUpdateTime :: Lens' CategoryProperties (Maybe UTCTime) Source #

The date and time the specified Call Analytics category was last updated.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-05T12:45:32.691000-07:00 represents 12:45 PM UTC-7 on May 5, 2022.

categoryProperties_rules :: Lens' CategoryProperties (Maybe (NonEmpty Rule)) Source #

The rules used to define a Call Analytics category. Each category can have between 1 and 20 rules.
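
For orientation only (this note is not part of the generated reference): a minimal sketch of reading a CategoryProperties value returned by the service, using the lenses above with standard van Laarhoven lens operators (the lens or microlens package both work, since Lens' here is the usual van Laarhoven type). The helper names are illustrative.

  import Amazonka.Transcribe.Types
  import Control.Lens ((^.))
  import Data.Text (Text)
  import qualified Data.List.NonEmpty as NE

  -- Count how many of the 1-20 allowed rules a returned category defines.
  ruleCount :: CategoryProperties -> Int
  ruleCount props = maybe 0 NE.length (props ^. categoryProperties_rules)

  -- The category's name, if the service returned one.
  categoryName :: CategoryProperties -> Maybe Text
  categoryName props = props ^. categoryProperties_categoryName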

ChannelDefinition

data ChannelDefinition Source #

Makes it possible to specify which speaker is on which channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it's the agent speaking).

See: newChannelDefinition smart constructor.

Constructors

ChannelDefinition' 

Fields

Instances

Instances details
FromJSON ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

ToJSON ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

Generic ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

Associated Types

type Rep ChannelDefinition :: Type -> Type #

Read ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

Show ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

NFData ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

Methods

rnf :: ChannelDefinition -> () #

Eq ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

Hashable ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

type Rep ChannelDefinition Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ChannelDefinition

type Rep ChannelDefinition = D1 ('MetaData "ChannelDefinition" "Amazonka.Transcribe.Types.ChannelDefinition" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "ChannelDefinition'" 'PrefixI 'True) (S1 ('MetaSel ('Just "channelId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "participantRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ParticipantRole))))

newChannelDefinition :: ChannelDefinition Source #

Create a value of ChannelDefinition with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channelId:ChannelDefinition', channelDefinition_channelId - Specify the audio channel you want to define.

$sel:participantRole:ChannelDefinition', channelDefinition_participantRole - Specify the speaker you want to define. Omitting this parameter is equivalent to specifying both participants.

channelDefinition_channelId :: Lens' ChannelDefinition (Maybe Natural) Source #

Specify the audio channel you want to define.

channelDefinition_participantRole :: Lens' ChannelDefinition (Maybe ParticipantRole) Source #

Specify the speaker you want to define. Omitting this parameter is equivalent to specifying both participants.
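
As a sketch of the agent-on-channel-0 example described above (not part of the generated reference; the ParticipantRole_AGENT and ParticipantRole_CUSTOMER pattern names follow amazonka's generated enum naming and should be checked against your installed version):

  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- The agent speaks first, so the agent is on channel 0 and the customer on channel 1.
  agentChannel, customerChannel :: ChannelDefinition
  agentChannel =
    newChannelDefinition
      & channelDefinition_channelId ?~ 0
      & channelDefinition_participantRole ?~ ParticipantRole_AGENT
  customerChannel =
    newChannelDefinition
      & channelDefinition_channelId ?~ 1
      & channelDefinition_participantRole ?~ ParticipantRole_CUSTOMER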

ContentRedaction

data ContentRedaction Source #

Makes it possible to redact or flag specified personally identifiable information (PII) in your transcript. If you use ContentRedaction, you must also include the sub-parameters: PiiEntityTypes, RedactionOutput, and RedactionType.

See: newContentRedaction smart constructor.

Constructors

ContentRedaction' 

Fields

  • piiEntityTypes :: Maybe [PiiEntityType]

    Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.

  • redactionType :: RedactionType

    Specify the category of information you want to redact; PII (personally identifiable information) is the only valid value. You can use PiiEntityTypes to choose which types of PII you want to redact.

  • redactionOutput :: RedactionOutput

    Specify if you want only a redacted transcript, or if you want a redacted and an unredacted transcript.

    When you choose redacted, Amazon Transcribe creates only a redacted transcript.

    When you choose redacted_and_unredacted, Amazon Transcribe creates a redacted and an unredacted transcript (as two separate files).

Instances

Instances details
FromJSON ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

ToJSON ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

Generic ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

Associated Types

type Rep ContentRedaction :: Type -> Type #

Read ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

Show ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

NFData ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

Methods

rnf :: ContentRedaction -> () #

Eq ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

Hashable ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

type Rep ContentRedaction Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ContentRedaction

type Rep ContentRedaction = D1 ('MetaData "ContentRedaction" "Amazonka.Transcribe.Types.ContentRedaction" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "ContentRedaction'" 'PrefixI 'True) (S1 ('MetaSel ('Just "piiEntityTypes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [PiiEntityType])) :*: (S1 ('MetaSel ('Just "redactionType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 RedactionType) :*: S1 ('MetaSel ('Just "redactionOutput") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 RedactionOutput))))

newContentRedaction Source #

Create a value of ContentRedaction with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:piiEntityTypes:ContentRedaction', contentRedaction_piiEntityTypes - Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.

$sel:redactionType:ContentRedaction', contentRedaction_redactionType - Specify the category of information you want to redact; PII (personally identifiable information) is the only valid value. You can use PiiEntityTypes to choose which types of PII you want to redact.

$sel:redactionOutput:ContentRedaction', contentRedaction_redactionOutput - Specify if you want only a redacted transcript, or if you want a redacted and an unredacted transcript.

When you choose redacted, Amazon Transcribe creates only a redacted transcript.

When you choose redacted_and_unredacted, Amazon Transcribe creates a redacted and an unredacted transcript (as two separate files).

contentRedaction_piiEntityTypes :: Lens' ContentRedaction (Maybe [PiiEntityType]) Source #

Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.

contentRedaction_redactionType :: Lens' ContentRedaction RedactionType Source #

Specify the category of information you want to redact; PII (personally identifiable information) is the only valid value. You can use PiiEntityTypes to choose which types of PII you want to redact.

contentRedaction_redactionOutput :: Lens' ContentRedaction RedactionOutput Source #

Specify if you want only a redacted transcript, or if you want a redacted and an unredacted transcript.

When you choose redacted, Amazon Transcribe creates only a redacted transcript.

When you choose redacted_and_unredacted, Amazon Transcribe creates a redacted and an unredacted transcript (as two separate files).
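
A minimal sketch (not part of the generated reference) of a configuration that redacts all PII and keeps both transcripts. It assumes newContentRedaction takes the two required fields, RedactionType and RedactionOutput, in declaration order, and that the generated pattern names RedactionType_PII, RedactionOutput_REDACTED_AND_UNREDACTED, and PiiEntityType_ALL exist in your installed version:

  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- Redact every supported PII type and keep both the redacted and
  -- unredacted transcripts.
  redactAllPii :: ContentRedaction
  redactAllPii =
    newContentRedaction RedactionType_PII RedactionOutput_REDACTED_AND_UNREDACTED
      & contentRedaction_piiEntityTypes ?~ [PiiEntityType_ALL]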

InputDataConfig

data InputDataConfig Source #

Contains the Amazon S3 location of the training data you want to use to create a new custom language model, and permissions to access this location.

When using InputDataConfig, you must include these sub-parameters: S3Uri and DataAccessRoleArn. You can optionally include TuningDataS3Uri.

See: newInputDataConfig smart constructor.

Constructors

InputDataConfig' 

Fields

  • tuningDataS3Uri :: Maybe Text

    The Amazon S3 location (URI) of the text files you want to use to tune your custom language model.

    Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-tuning-data/

  • s3Uri :: Text

    The Amazon S3 location (URI) of the text files you want to use to train your custom language model.

    Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-training-data/

  • dataAccessRoleArn :: Text

    The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

    IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin.

    For more information, see IAM ARNs.

Instances

Instances details
FromJSON InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

ToJSON InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

Generic InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

Associated Types

type Rep InputDataConfig :: Type -> Type #

Read InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

Show InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

NFData InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

Methods

rnf :: InputDataConfig -> () #

Eq InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

Hashable InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

type Rep InputDataConfig Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InputDataConfig

type Rep InputDataConfig = D1 ('MetaData "InputDataConfig" "Amazonka.Transcribe.Types.InputDataConfig" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "InputDataConfig'" 'PrefixI 'True) (S1 ('MetaSel ('Just "tuningDataS3Uri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "s3Uri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "dataAccessRoleArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))

newInputDataConfig Source #

Create a value of InputDataConfig with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:tuningDataS3Uri:InputDataConfig', inputDataConfig_tuningDataS3Uri - The Amazon S3 location (URI) of the text files you want to use to tune your custom language model.

Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-tuning-data/

$sel:s3Uri:InputDataConfig', inputDataConfig_s3Uri - The Amazon S3 location (URI) of the text files you want to use to train your custom language model.

Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-training-data/

$sel:dataAccessRoleArn:InputDataConfig', inputDataConfig_dataAccessRoleArn - The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin.

For more information, see IAM ARNs.

inputDataConfig_tuningDataS3Uri :: Lens' InputDataConfig (Maybe Text) Source #

The Amazon S3 location (URI) of the text files you want to use to tune your custom language model.

Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-tuning-data/

inputDataConfig_s3Uri :: Lens' InputDataConfig Text Source #

The Amazon S3 location (URI) of the text files you want to use to train your custom language model.

Here's an example URI path: s3://DOC-EXAMPLE-BUCKET/my-model-training-data/

inputDataConfig_dataAccessRoleArn :: Lens' InputDataConfig Text Source #

The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin.

For more information, see IAM ARNs.
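
As a sketch only (assuming newInputDataConfig takes the two required fields, S3Uri and DataAccessRoleArn, in declaration order; the bucket and role values are the placeholder examples used in the field descriptions above):

  {-# LANGUAGE OverloadedStrings #-}
  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  trainingData :: InputDataConfig
  trainingData =
    newInputDataConfig
      "s3://DOC-EXAMPLE-BUCKET/my-model-training-data/" -- s3Uri (required)
      "arn:aws:iam::111122223333:role/Admin"            -- dataAccessRoleArn (required)
      & inputDataConfig_tuningDataS3Uri ?~ "s3://DOC-EXAMPLE-BUCKET/my-model-tuning-data/"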

InterruptionFilter

data InterruptionFilter Source #

Flag the presence or absence of interruptions in your Call Analytics transcription output.

Rules using InterruptionFilter are designed to match:

  • Instances where an agent interrupts a customer
  • Instances where a customer interrupts an agent
  • Either participant interrupting the other
  • A lack of interruptions

See Rule criteria for batch categories for usage examples.

See: newInterruptionFilter smart constructor.

Constructors

InterruptionFilter' 

Fields

  • absoluteTimeRange :: Maybe AbsoluteTimeRange

    Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for an interruption. See AbsoluteTimeRange for more detail.

  • negate :: Maybe Bool

    Set to TRUE to flag speech that does not contain interruptions. Set to FALSE to flag speech that contains interruptions.

  • participantRole :: Maybe ParticipantRole

    Specify the interrupter that you want to flag. Omitting this parameter is equivalent to specifying both participants.

  • relativeTimeRange :: Maybe RelativeTimeRange

    Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for an interruption. See RelativeTimeRange for more detail.

  • threshold :: Maybe Natural

    Specify the duration of the interruptions in milliseconds. For example, you can flag speech that contains more than 10,000 milliseconds of interruptions.

Instances

Instances details
FromJSON InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

ToJSON InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

Generic InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

Associated Types

type Rep InterruptionFilter :: Type -> Type #

Read InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

Show InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

NFData InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

Methods

rnf :: InterruptionFilter -> () #

Eq InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

Hashable InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

type Rep InterruptionFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.InterruptionFilter

type Rep InterruptionFilter = D1 ('MetaData "InterruptionFilter" "Amazonka.Transcribe.Types.InterruptionFilter" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "InterruptionFilter'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "absoluteTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AbsoluteTimeRange)) :*: S1 ('MetaSel ('Just "negate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool))) :*: (S1 ('MetaSel ('Just "participantRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ParticipantRole)) :*: (S1 ('MetaSel ('Just "relativeTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RelativeTimeRange)) :*: S1 ('MetaSel ('Just "threshold") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))))

newInterruptionFilter :: InterruptionFilter Source #

Create a value of InterruptionFilter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:absoluteTimeRange:InterruptionFilter', interruptionFilter_absoluteTimeRange - Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for an interruption. See AbsoluteTimeRange for more detail.

$sel:negate:InterruptionFilter', interruptionFilter_negate - Set to TRUE to flag speech that does not contain interruptions. Set to FALSE to flag speech that contains interruptions.

$sel:participantRole:InterruptionFilter', interruptionFilter_participantRole - Specify the interrupter that you want to flag. Omitting this parameter is equivalent to specifying both participants.

$sel:relativeTimeRange:InterruptionFilter', interruptionFilter_relativeTimeRange - Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for an interruption. See RelativeTimeRange for more detail.

$sel:threshold:InterruptionFilter', interruptionFilter_threshold - Specify the duration of the interruptions in milliseconds. For example, you can flag speech that contains more than 10,000 milliseconds of interruptions.

interruptionFilter_absoluteTimeRange :: Lens' InterruptionFilter (Maybe AbsoluteTimeRange) Source #

Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for an interruption. See AbsoluteTimeRange for more detail.

interruptionFilter_negate :: Lens' InterruptionFilter (Maybe Bool) Source #

Set to TRUE to flag speech that does not contain interruptions. Set to FALSE to flag speech that contains interruptions.

interruptionFilter_participantRole :: Lens' InterruptionFilter (Maybe ParticipantRole) Source #

Specify the interrupter that you want to flag. Omitting this parameter is equivalent to specifying both participants.

interruptionFilter_relativeTimeRange :: Lens' InterruptionFilter (Maybe RelativeTimeRange) Source #

Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for an interruption. See RelativeTimeRange for more detail.

interruptionFilter_threshold :: Lens' InterruptionFilter (Maybe Natural) Source #

Specify the duration of the interruptions in milliseconds. For example, you can flag speech that contains more than 10,000 milliseconds of interruptions.
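
A minimal sketch (not part of the generated reference, and assuming the generated ParticipantRole_CUSTOMER pattern name) of the example described above, flagging speech that contains more than 10,000 milliseconds of customer interruptions:

  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- Match speech in which the customer interrupts for more than 10,000 ms in total.
  customerInterruptions :: InterruptionFilter
  customerInterruptions =
    newInterruptionFilter
      & interruptionFilter_participantRole ?~ ParticipantRole_CUSTOMER
      & interruptionFilter_threshold ?~ 10000
      & interruptionFilter_negate ?~ False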

JobExecutionSettings

data JobExecutionSettings Source #

Makes it possible to control how your transcription job is processed. Currently, the only JobExecutionSettings modification you can choose is enabling job queueing using the AllowDeferredExecution sub-parameter.

If you include JobExecutionSettings in your request, you must also include the sub-parameters: AllowDeferredExecution and DataAccessRoleArn.

See: newJobExecutionSettings smart constructor.

Constructors

JobExecutionSettings' 

Fields

  • allowDeferredExecution :: Maybe Bool

    Makes it possible to enable job queuing when your concurrent request limit is exceeded. When AllowDeferredExecution is set to true, transcription job requests are placed in a queue until the number of jobs falls below the concurrent request limit. If AllowDeferredExecution is set to false and the number of transcription job requests exceeds the concurrent request limit, you get a LimitExceededException error.

    Note that job queuing is enabled by default for Call Analytics jobs.

    If you include AllowDeferredExecution in your request, you must also include DataAccessRoleArn.

  • dataAccessRoleArn :: Maybe Text

    The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

    IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin. For more information, see IAM ARNs.

    Note that if you include DataAccessRoleArn in your request, you must also include AllowDeferredExecution.

Instances

Instances details
FromJSON JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

ToJSON JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

Generic JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

Associated Types

type Rep JobExecutionSettings :: Type -> Type #

Read JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

Show JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

NFData JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

Methods

rnf :: JobExecutionSettings -> () #

Eq JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

Hashable JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

type Rep JobExecutionSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.JobExecutionSettings

type Rep JobExecutionSettings = D1 ('MetaData "JobExecutionSettings" "Amazonka.Transcribe.Types.JobExecutionSettings" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "JobExecutionSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "allowDeferredExecution") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "dataAccessRoleArn") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newJobExecutionSettings :: JobExecutionSettings Source #

Create a value of JobExecutionSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:allowDeferredExecution:JobExecutionSettings', jobExecutionSettings_allowDeferredExecution - Makes it possible to enable job queuing when your concurrent request limit is exceeded. When AllowDeferredExecution is set to true, transcription job requests are placed in a queue until the number of jobs falls below the concurrent request limit. If AllowDeferredExecution is set to false and the number of transcription job requests exceeds the concurrent request limit, you get a LimitExceededException error.

Note that job queuing is enabled by default for Call Analytics jobs.

If you include AllowDeferredExecution in your request, you must also include DataAccessRoleArn.

$sel:dataAccessRoleArn:JobExecutionSettings', jobExecutionSettings_dataAccessRoleArn - The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin. For more information, see IAM ARNs.

Note that if you include DataAccessRoleArn in your request, you must also include AllowDeferredExecution.

jobExecutionSettings_allowDeferredExecution :: Lens' JobExecutionSettings (Maybe Bool) Source #

Makes it possible to enable job queuing when your concurrent request limit is exceeded. When AllowDeferredExecution is set to true, transcription job requests are placed in a queue until the number of jobs falls below the concurrent request limit. If AllowDeferredExecution is set to false and the number of transcription job requests exceeds the concurrent request limit, you get a LimitExceededException error.

Note that job queuing is enabled by default for Call Analytics jobs.

If you include AllowDeferredExecution in your request, you must also include DataAccessRoleArn.

jobExecutionSettings_dataAccessRoleArn :: Lens' JobExecutionSettings (Maybe Text) Source #

The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role that you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin. For more information, see IAM ARNs.

Note that if you include DataAccessRoleArn in your request, you must also include AllowDeferredExecution.
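
A sketch (not part of the generated reference; the role ARN is the placeholder example used above) showing the two sub-parameters set together, as the note requires:

  {-# LANGUAGE OverloadedStrings #-}
  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- Queue jobs instead of failing with LimitExceededException when the
  -- concurrent request limit is exceeded.
  queuedJobs :: JobExecutionSettings
  queuedJobs =
    newJobExecutionSettings
      & jobExecutionSettings_allowDeferredExecution ?~ True
      & jobExecutionSettings_dataAccessRoleArn ?~ "arn:aws:iam::111122223333:role/Admin"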

LanguageCodeItem

data LanguageCodeItem Source #

Provides information on the speech contained in a discrete utterance when multi-language identification is enabled in your request. This utterance represents a block of speech consisting of one language, preceded or followed by a block of speech in a different language.

See: newLanguageCodeItem smart constructor.

Constructors

LanguageCodeItem' 

Fields

Instances

Instances details
FromJSON LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

Generic LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

Associated Types

type Rep LanguageCodeItem :: Type -> Type #

Read LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

Show LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

NFData LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

Methods

rnf :: LanguageCodeItem -> () #

Eq LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

Hashable LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

type Rep LanguageCodeItem Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageCodeItem

type Rep LanguageCodeItem = D1 ('MetaData "LanguageCodeItem" "Amazonka.Transcribe.Types.LanguageCodeItem" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "LanguageCodeItem'" 'PrefixI 'True) (S1 ('MetaSel ('Just "durationInSeconds") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode))))

newLanguageCodeItem :: LanguageCodeItem Source #

Create a value of LanguageCodeItem with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:durationInSeconds:LanguageCodeItem', languageCodeItem_durationInSeconds - Provides the total time, in seconds, each identified language is spoken in your media.

$sel:languageCode:LanguageCodeItem', languageCodeItem_languageCode - Provides the language code for each language identified in your media.

languageCodeItem_durationInSeconds :: Lens' LanguageCodeItem (Maybe Double) Source #

Provides the total time, in seconds, each identified language is spoken in your media.

languageCodeItem_languageCode :: Lens' LanguageCodeItem (Maybe LanguageCode) Source #

Provides the language code for each language identified in your media.
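
For orientation only (not part of the generated reference), a small sketch aggregating the per-language durations returned in a job's LanguageCodeItem list:

  import Amazonka.Transcribe.Types
  import Control.Lens ((^.))
  import Data.Maybe (mapMaybe)

  -- Total identified speech time, in seconds, across every LanguageCodeItem
  -- returned for a job.
  totalIdentifiedSeconds :: [LanguageCodeItem] -> Double
  totalIdentifiedSeconds = sum . mapMaybe (^. languageCodeItem_durationInSeconds)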

LanguageIdSettings

data LanguageIdSettings Source #

If using automatic language identification in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName). Note that multi-language identification (IdentifyMultipleLanguages) doesn't support custom language models.

LanguageIdSettings supports two to five language codes. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The language codes that you specify must match the languages of the associated custom language models, custom vocabularies, and custom vocabulary filters.

It's recommended that you include LanguageOptions when using LanguageIdSettings to ensure that the correct language dialect is identified. For example, if you specify a custom vocabulary that is in en-US but Amazon Transcribe determines that the language spoken in your media is en-AU, your custom vocabulary is not applied to your transcription. If you include LanguageOptions and include en-US as the only English language dialect, your custom vocabulary is applied to your transcription.

If you want to include a custom language model with your request but do not want to use automatic language identification, instead use the parameter that contains the LanguageModelName sub-parameter. If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, instead use the parameter that contains the VocabularyName or VocabularyFilterName (or both) sub-parameters.

See: newLanguageIdSettings smart constructor.

Constructors

LanguageIdSettings' 

Fields

  • languageModelName :: Maybe Text

    The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

    The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

  • vocabularyFilterName :: Maybe Text

    The name of the custom vocabulary filter you want to use when processing your transcription job. Custom vocabulary filter names are case sensitive.

    The language of the specified custom vocabulary filter must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary filter isn't applied. There are no errors or warnings associated with a language mismatch.

    Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

  • vocabularyName :: Maybe Text

    The name of the custom vocabulary you want to use when processing your transcription job. Custom vocabulary names are case sensitive.

    The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch.

Instances

Instances details
FromJSON LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

ToJSON LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

Generic LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

Associated Types

type Rep LanguageIdSettings :: Type -> Type #

Read LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

Show LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

NFData LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

Methods

rnf :: LanguageIdSettings -> () #

Eq LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

Hashable LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

type Rep LanguageIdSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageIdSettings

type Rep LanguageIdSettings = D1 ('MetaData "LanguageIdSettings" "Amazonka.Transcribe.Types.LanguageIdSettings" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "LanguageIdSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "languageModelName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "vocabularyFilterName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "vocabularyName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newLanguageIdSettings :: LanguageIdSettings Source #

Create a value of LanguageIdSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:languageModelName:LanguageIdSettings', languageIdSettings_languageModelName - The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

$sel:vocabularyFilterName:LanguageIdSettings', languageIdSettings_vocabularyFilterName - The name of the custom vocabulary filter you want to use when processing your transcription job. Custom vocabulary filter names are case sensitive.

The language of the specified custom vocabulary filter must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary filter isn't applied. There are no errors or warnings associated with a language mismatch.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

$sel:vocabularyName:LanguageIdSettings', languageIdSettings_vocabularyName - The name of the custom vocabulary you want to use when processing your transcription job. Custom vocabulary names are case sensitive.

The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch.

languageIdSettings_languageModelName :: Lens' LanguageIdSettings (Maybe Text) Source #

The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

languageIdSettings_vocabularyFilterName :: Lens' LanguageIdSettings (Maybe Text) Source #

The name of the custom vocabulary filter you want to use when processing your transcription job. Custom vocabulary filter names are case sensitive.

The language of the specified custom vocabulary filter must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary filter isn't applied. There are no errors or warnings associated with a language mismatch.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

languageIdSettings_vocabularyName :: Lens' LanguageIdSettings (Maybe Text) Source #

The name of the custom vocabulary you want to use when processing your transcription job. Custom vocabulary names are case sensitive.

The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch.
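
A minimal sketch (not part of the generated reference; the resource names are hypothetical and must match the language code this entry is paired with in your request):

  {-# LANGUAGE OverloadedStrings #-}
  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- Settings to associate with the en-US language code when using
  -- automatic language identification.
  enUsIdSettings :: LanguageIdSettings
  enUsIdSettings =
    newLanguageIdSettings
      & languageIdSettings_vocabularyName ?~ "my-en-US-vocabulary"
      & languageIdSettings_vocabularyFilterName ?~ "my-en-US-vocabulary-filter"
      & languageIdSettings_languageModelName ?~ "my-en-US-language-model"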

LanguageModel

data LanguageModel Source #

Provides information about a custom language model, including the base model name, when the model was created, the location of the files used to train the model, when the model was last modified, the name you chose for the model, its language, its processing state, and if there is an upgrade available for the base model.

See: newLanguageModel smart constructor.

Constructors

LanguageModel' 

Fields

  • baseModelName :: Maybe BaseModelName

    The Amazon Transcribe standard language model, or base model, used to create your custom language model.

  • createTime :: Maybe POSIX

    The date and time the specified custom language model was created.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If ModelStatus is FAILED, FailureReason contains information about why the custom language model request failed. See also: Common Errors.

  • inputDataConfig :: Maybe InputDataConfig

    The Amazon S3 location of the input files used to train and tune your custom language model, in addition to the data access role ARN (Amazon Resource Name) that has permissions to access these data.

  • languageCode :: Maybe CLMLanguageCode

    The language code used to create your custom language model. Each custom language model must contain terms in only one language, and the language you select for your custom language model must match the language of your training and tuning data.

    For a list of supported languages and their associated language codes, refer to the Supported languages table. Note that U.S. English (en-US) is the only language supported with Amazon Transcribe Medical.

  • lastModifiedTime :: Maybe POSIX

    The date and time the specified custom language model was last modified.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

  • modelName :: Maybe Text

    A unique name, chosen by you, for your custom language model.

    This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

  • modelStatus :: Maybe ModelStatus

    The status of the specified custom language model. When the status displays as COMPLETED the model is ready for use.

  • upgradeAvailability :: Maybe Bool

    Shows if a more current base model is available for use with the specified custom language model.

    If false, your custom language model is using the most up-to-date base model.

    If true, there is a newer base model available than the one your language model is using.

    Note that to update a base model, you must recreate the custom language model using the new base model. Base model upgrades for existing custom language models are not supported.

Instances

Instances details
FromJSON LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

Generic LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

Associated Types

type Rep LanguageModel :: Type -> Type #

Read LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

Show LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

NFData LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

Methods

rnf :: LanguageModel -> () #

Eq LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

Hashable LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

type Rep LanguageModel Source # 
Instance details

Defined in Amazonka.Transcribe.Types.LanguageModel

newLanguageModel :: LanguageModel Source #

Create a value of LanguageModel with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:baseModelName:LanguageModel', languageModel_baseModelName - The Amazon Transcribe standard language model, or base model, used to create your custom language model.

$sel:createTime:LanguageModel', languageModel_createTime - The date and time the specified custom language model was created.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:LanguageModel', languageModel_failureReason - If ModelStatus is FAILED, FailureReason contains information about why the custom language model request failed. See also: Common Errors.

$sel:inputDataConfig:LanguageModel', languageModel_inputDataConfig - The Amazon S3 location of the input files used to train and tune your custom language model, in addition to the data access role ARN (Amazon Resource Name) that has permissions to access these data.

$sel:languageCode:LanguageModel', languageModel_languageCode - The language code used to create your custom language model. Each custom language model must contain terms in only one language, and the language you select for your custom language model must match the language of your training and tuning data.

For a list of supported languages and their associated language codes, refer to the Supported languages table. Note that U.S. English (en-US) is the only language supported with Amazon Transcribe Medical.

$sel:lastModifiedTime:LanguageModel', languageModel_lastModifiedTime - The date and time the specified custom language model was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

$sel:modelName:LanguageModel', languageModel_modelName - A unique name, chosen by you, for your custom language model.

This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

$sel:modelStatus:LanguageModel', languageModel_modelStatus - The status of the specified custom language model. When the status displays as COMPLETED the model is ready for use.

$sel:upgradeAvailability:LanguageModel', languageModel_upgradeAvailability - Shows if a more current base model is available for use with the specified custom language model.

If false, your custom language model is using the most up-to-date base model.

If true, there is a newer base model available than the one your language model is using.

Note that to update a base model, you must recreate the custom language model using the new base model. Base model upgrades for existing custom language models are not supported.

languageModel_baseModelName :: Lens' LanguageModel (Maybe BaseModelName) Source #

The Amazon Transcribe standard language model, or base model, used to create your custom language model.

languageModel_createTime :: Lens' LanguageModel (Maybe UTCTime) Source #

The date and time the specified custom language model was created.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

languageModel_failureReason :: Lens' LanguageModel (Maybe Text) Source #

If ModelStatus is FAILED, FailureReason contains information about why the custom language model request failed. See also: Common Errors.

languageModel_inputDataConfig :: Lens' LanguageModel (Maybe InputDataConfig) Source #

The Amazon S3 location of the input files used to train and tune your custom language model, in addition to the data access role ARN (Amazon Resource Name) that has permissions to access these data.

languageModel_languageCode :: Lens' LanguageModel (Maybe CLMLanguageCode) Source #

The language code used to create your custom language model. Each custom language model must contain terms in only one language, and the language you select for your custom language model must match the language of your training and tuning data.

For a list of supported languages and their associated language codes, refer to the Supported languages table. Note that U.S. English (en-US) is the only language supported with Amazon Transcribe Medical.

languageModel_lastModifiedTime :: Lens' LanguageModel (Maybe UTCTime) Source #

The date and time the specified custom language model was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

languageModel_modelName :: Lens' LanguageModel (Maybe Text) Source #

A unique name, chosen by you, for your custom language model.

This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

languageModel_modelStatus :: Lens' LanguageModel (Maybe ModelStatus) Source #

The status of the specified custom language model. When the status displays as COMPLETED the model is ready for use.

languageModel_upgradeAvailability :: Lens' LanguageModel (Maybe Bool) Source #

Shows if a more current base model is available for use with the specified custom language model.

If false, your custom language model is using the most up-to-date base model.

If true, there is a newer base model available than the one your language model is using.

Note that to update a base model, you must recreate the custom language model using the new base model. Base model upgrades for existing custom language models are not supported.
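
For orientation only (not part of the generated reference; the ModelStatus_COMPLETED pattern name follows amazonka's generated enum naming), a sketch that combines the two fields just described to decide whether a model would have to be recreated to pick up a newer base model:

  import Amazonka.Transcribe.Types
  import Control.Lens ((^.))

  -- True when the model has finished building and a newer base model is
  -- available; such a model must be recreated to use the upgraded base model.
  needsRecreation :: LanguageModel -> Bool
  needsRecreation lm =
    lm ^. languageModel_modelStatus == Just ModelStatus_COMPLETED
      && lm ^. languageModel_upgradeAvailability == Just True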

Media

data Media Source #

Describes the Amazon S3 location of the media file you want to use in your request.

For information on supported media formats, refer to the MediaFormat parameter or the Media formats section in the Amazon Transcribe Developer Guide.

See: newMedia smart constructor.

Constructors

Media' 

Fields

  • mediaFileUri :: Maybe Text

    The Amazon S3 location of the media file you want to transcribe. For example:

    • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
    • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

    Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

  • redactedMediaFileUri :: Maybe Text

    The Amazon S3 location of the media file you want to redact. For example:

    • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
    • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

    Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

    RedactedMediaFileUri produces a redacted audio file in addition to a redacted transcript. It is only supported for Call Analytics (StartCallAnalyticsJob) transcription requests.

Instances

Instances details
FromJSON Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

ToJSON Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Generic Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Associated Types

type Rep Media :: Type -> Type #

Methods

from :: Media -> Rep Media x #

to :: Rep Media x -> Media #

Read Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Show Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Methods

showsPrec :: Int -> Media -> ShowS #

show :: Media -> String #

showList :: [Media] -> ShowS #

NFData Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Methods

rnf :: Media -> () #

Eq Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Methods

(==) :: Media -> Media -> Bool #

(/=) :: Media -> Media -> Bool #

Hashable Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

Methods

hashWithSalt :: Int -> Media -> Int #

hash :: Media -> Int #

type Rep Media Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Media

type Rep Media = D1 ('MetaData "Media" "Amazonka.Transcribe.Types.Media" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Media'" 'PrefixI 'True) (S1 ('MetaSel ('Just "mediaFileUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "redactedMediaFileUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newMedia :: Media Source #

Create a value of Media with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:mediaFileUri:Media', media_mediaFileUri - The Amazon S3 location of the media file you want to transcribe. For example:

  • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
  • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

$sel:redactedMediaFileUri:Media', media_redactedMediaFileUri - The Amazon S3 location of the media file you want to redact. For example:

  • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
  • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

RedactedMediaFileUri produces a redacted audio file in addition to a redacted transcript. It is only supported for Call Analytics (StartCallAnalyticsJob) transcription requests.

media_mediaFileUri :: Lens' Media (Maybe Text) Source #

The Amazon S3 location of the media file you want to transcribe. For example:

  • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
  • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

media_redactedMediaFileUri :: Lens' Media (Maybe Text) Source #

The Amazon S3 location of the media file you want to redact. For example:

  • s3://DOC-EXAMPLE-BUCKET/my-media-file.flac
  • s3://DOC-EXAMPLE-BUCKET/media-files/my-media-file.flac

Note that the Amazon S3 bucket that contains your input media must be located in the same Amazon Web Services Region where you're making your transcription request.

RedactedMediaFileUri produces a redacted audio file in addition to a redacted transcript. It is only supported for Call Analytics (StartCallAnalyticsJob) transcription requests.
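
A minimal sketch (not part of the generated reference; the bucket and file name are the placeholder examples used above) of pointing a request at an input file:

  {-# LANGUAGE OverloadedStrings #-}
  import Amazonka.Transcribe.Types
  import Control.Lens ((&), (?~))

  -- The media file must live in the same Region as the transcription request.
  inputMedia :: Media
  inputMedia =
    newMedia
      & media_mediaFileUri ?~ "s3://DOC-EXAMPLE-BUCKET/my-media-file.flac"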

MedicalTranscript

data MedicalTranscript Source #

Provides you with the Amazon S3 URI you can use to access your transcript.

See: newMedicalTranscript smart constructor.

Constructors

MedicalTranscript' 

Fields

  • transcriptFileUri :: Maybe Text

    The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

    Note that this is the Amazon S3 location you specified in your request using the OutputBucketName parameter.

Instances

Instances details
FromJSON MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

Generic MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

Associated Types

type Rep MedicalTranscript :: Type -> Type #

Read MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

Show MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

NFData MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

Methods

rnf :: MedicalTranscript -> () #

Eq MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

Hashable MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

type Rep MedicalTranscript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscript

type Rep MedicalTranscript = D1 ('MetaData "MedicalTranscript" "Amazonka.Transcribe.Types.MedicalTranscript" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "MedicalTranscript'" 'PrefixI 'True) (S1 ('MetaSel ('Just "transcriptFileUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newMedicalTranscript :: MedicalTranscript Source #

Create a value of MedicalTranscript with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:transcriptFileUri:MedicalTranscript', medicalTranscript_transcriptFileUri - The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

Note that this is the Amazon S3 location you specified in your request using the OutputBucketName parameter.

medicalTranscript_transcriptFileUri :: Lens' MedicalTranscript (Maybe Text) Source #

The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

Note that this is the Amazon S3 location you specified in your request using the OutputBucketName parameter.

MedicalTranscriptionJob

data MedicalTranscriptionJob Source #

Provides detailed information about a medical transcription job.

To view the status of the specified medical transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

See: newMedicalTranscriptionJob smart constructor.

Constructors

MedicalTranscriptionJob' 

Fields

  • completionTime :: Maybe POSIX

    The date and time the specified medical transcription job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

  • contentIdentificationType :: Maybe MedicalContentIdentificationType

    Indicates whether content identification was enabled for your transcription request.

  • creationTime :: Maybe POSIX

    The date and time the specified medical transcription job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

    The FailureReason field contains one of the following values:

    • Unsupported media format.

      The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

    • The media format provided does not match the detected media format.

      The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

    • Invalid sample rate for audio file.

      The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 16,000 and 48,000 hertz.

    • The sample rate provided does not match the detected sample rate.

      The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

    • Invalid file size: file size too large.

      The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

    • Invalid number of channels: number of channels too large.

      Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

  • languageCode :: Maybe LanguageCode

    The language code used to create your medical transcription job. US English (en-US) is the only supported language for medical transcriptions.

  • media :: Maybe Media
     
  • mediaFormat :: Maybe MediaFormat

    The format of the input media file.

  • mediaSampleRateHertz :: Maybe Natural

    The sample rate, in hertz, of the audio track in your input media file.

  • medicalTranscriptionJobName :: Maybe Text

    The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • settings :: Maybe MedicalTranscriptionSetting

    Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

  • specialty :: Maybe Specialty

    Describes the medical specialty represented in your media.

  • startTime :: Maybe POSIX

    The date and time the specified medical transcription job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • tags :: Maybe (NonEmpty Tag)

    The tags, each in the form of a key:value pair, assigned to the specified medical transcription job.

  • transcript :: Maybe MedicalTranscript

    Provides you with the Amazon S3 URI you can use to access your transcript.

  • transcriptionJobStatus :: Maybe TranscriptionJobStatus

    Provides the status of the specified medical transcription job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

  • type' :: Maybe Type

    Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.

Instances

Instances details
FromJSON MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

Generic MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

Associated Types

type Rep MedicalTranscriptionJob :: Type -> Type #

Read MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

Show MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

NFData MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

Methods

rnf :: MedicalTranscriptionJob -> () #

Eq MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

Hashable MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

type Rep MedicalTranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJob

type Rep MedicalTranscriptionJob = D1 ('MetaData "MedicalTranscriptionJob" "Amazonka.Transcribe.Types.MedicalTranscriptionJob" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "MedicalTranscriptionJob'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "contentIdentificationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MedicalContentIdentificationType))) :*: (S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: S1 ('MetaSel ('Just "media") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Media))) :*: (S1 ('MetaSel ('Just "mediaFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MediaFormat)) :*: S1 ('MetaSel ('Just "mediaSampleRateHertz") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))) :*: (((S1 ('MetaSel ('Just "medicalTranscriptionJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MedicalTranscriptionSetting))) :*: (S1 ('MetaSel ('Just "specialty") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Specialty)) :*: S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))) :*: ((S1 ('MetaSel ('Just "tags") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty Tag))) :*: S1 ('MetaSel ('Just "transcript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MedicalTranscript))) :*: (S1 ('MetaSel ('Just "transcriptionJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TranscriptionJobStatus)) :*: S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Type)))))))

newMedicalTranscriptionJob :: MedicalTranscriptionJob Source #

Create a value of MedicalTranscriptionJob with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:completionTime:MedicalTranscriptionJob', medicalTranscriptionJob_completionTime - The date and time the specified medical transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that started processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:contentIdentificationType:MedicalTranscriptionJob', medicalTranscriptionJob_contentIdentificationType - Indicates whether content identification was enabled for your transcription request.

$sel:creationTime:MedicalTranscriptionJob', medicalTranscriptionJob_creationTime - The date and time the specified medical transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:MedicalTranscriptionJob', medicalTranscriptionJob_failureReason - If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 16,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

$sel:languageCode:MedicalTranscriptionJob', medicalTranscriptionJob_languageCode - The language code used to create your medical transcription job. US English (en-US) is the only supported language for medical transcriptions.

$sel:media:MedicalTranscriptionJob', medicalTranscriptionJob_media - Undocumented member.

$sel:mediaFormat:MedicalTranscriptionJob', medicalTranscriptionJob_mediaFormat - The format of the input media file.

$sel:mediaSampleRateHertz:MedicalTranscriptionJob', medicalTranscriptionJob_mediaSampleRateHertz - The sample rate, in hertz, of the audio track in your input media file.

$sel:medicalTranscriptionJobName:MedicalTranscriptionJob', medicalTranscriptionJob_medicalTranscriptionJobName - The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:settings:MedicalTranscriptionJob', medicalTranscriptionJob_settings - Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

$sel:specialty:MedicalTranscriptionJob', medicalTranscriptionJob_specialty - Describes the medical specialty represented in your media.

$sel:startTime:MedicalTranscriptionJob', medicalTranscriptionJob_startTime - The date and time the specified medical transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:tags:MedicalTranscriptionJob', medicalTranscriptionJob_tags - The tags, each in the form of a key:value pair, assigned to the specified medical transcription job.

$sel:transcript:MedicalTranscriptionJob', medicalTranscriptionJob_transcript - Provides you with the Amazon S3 URI you can use to access your transcript.

$sel:transcriptionJobStatus:MedicalTranscriptionJob', medicalTranscriptionJob_transcriptionJobStatus - Provides the status of the specified medical transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

$sel:type':MedicalTranscriptionJob', medicalTranscriptionJob_type - Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.

medicalTranscriptionJob_completionTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

The date and time the specified medical transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that started processing at 12:33 PM UTC-7 on May 4, 2022.

medicalTranscriptionJob_contentIdentificationType :: Lens' MedicalTranscriptionJob (Maybe MedicalContentIdentificationType) Source #

Indicates whether content identification was enabled for your transcription request.

medicalTranscriptionJob_creationTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

The date and time the specified medical transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

medicalTranscriptionJob_failureReason :: Lens' MedicalTranscriptionJob (Maybe Text) Source #

If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 16,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

medicalTranscriptionJob_languageCode :: Lens' MedicalTranscriptionJob (Maybe LanguageCode) Source #

The language code used to create your medical transcription job. US English (en-US) is the only supported language for medical transcriptions.

medicalTranscriptionJob_media :: Lens' MedicalTranscriptionJob (Maybe Media) Source #

Undocumented member.

medicalTranscriptionJob_mediaFormat :: Lens' MedicalTranscriptionJob (Maybe MediaFormat) Source #

The format of the input media file.

medicalTranscriptionJob_mediaSampleRateHertz :: Lens' MedicalTranscriptionJob (Maybe Natural) Source #

The sample rate, in hertz, of the audio track in your input media file.

medicalTranscriptionJob_medicalTranscriptionJobName :: Lens' MedicalTranscriptionJob (Maybe Text) Source #

The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

medicalTranscriptionJob_settings :: Lens' MedicalTranscriptionJob (Maybe MedicalTranscriptionSetting) Source #

Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

medicalTranscriptionJob_specialty :: Lens' MedicalTranscriptionJob (Maybe Specialty) Source #

Describes the medical specialty represented in your media.

medicalTranscriptionJob_startTime :: Lens' MedicalTranscriptionJob (Maybe UTCTime) Source #

The date and time the specified medical transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

medicalTranscriptionJob_tags :: Lens' MedicalTranscriptionJob (Maybe (NonEmpty Tag)) Source #

The tags, each in the form of a key:value pair, assigned to the specified medical transcription job.

medicalTranscriptionJob_transcript :: Lens' MedicalTranscriptionJob (Maybe MedicalTranscript) Source #

Provides you with the Amazon S3 URI you can use to access your transcript.

medicalTranscriptionJob_transcriptionJobStatus :: Lens' MedicalTranscriptionJob (Maybe TranscriptionJobStatus) Source #

Provides the status of the specified medical transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

medicalTranscriptionJob_type :: Lens' MedicalTranscriptionJob (Maybe Type) Source #

Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.
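A small, hedged example of consuming a MedicalTranscriptionJob returned by the API: it checks whether the job is finished and, if a transcript is attached, extracts its S3 URI. It assumes the lens package and the generated TranscriptionJobStatus_COMPLETED pattern synonym (the amazonka 2.0 naming convention); adjust for your preferred optics library.

import Amazonka.Transcribe.Types
import Control.Lens ((^.), (^?), _Just)
import Data.Text (Text)

-- True once TranscriptionJobStatus reports COMPLETED.
-- TranscriptionJobStatus_COMPLETED is the generated pattern synonym.
isFinished :: MedicalTranscriptionJob -> Bool
isFinished job =
  job ^. medicalTranscriptionJob_transcriptionJobStatus
    == Just TranscriptionJobStatus_COMPLETED

-- The transcript's S3 URI, when both the transcript and its URI are present.
transcriptUri :: MedicalTranscriptionJob -> Maybe Text
transcriptUri job =
  job ^? medicalTranscriptionJob_transcript . _Just
       . medicalTranscript_transcriptFileUri . _Just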

MedicalTranscriptionJobSummary

data MedicalTranscriptionJobSummary Source #

Provides detailed information about a specific medical transcription job.

See: newMedicalTranscriptionJobSummary smart constructor.

Constructors

MedicalTranscriptionJobSummary' 

Fields

  • completionTime :: Maybe POSIX

    The date and time the specified medical transcription job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that started processing at 12:33 PM UTC-7 on May 4, 2022.

  • contentIdentificationType :: Maybe MedicalContentIdentificationType

    Labels all personal health information (PHI) identified in your transcript. For more information, see Identifying personal health information (PHI) in a transcription.

  • creationTime :: Maybe POSIX

    The date and time the specified medical transcription job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

  • languageCode :: Maybe LanguageCode

    The language code used to create your medical transcription. US English (en-US) is the only supported language for medical transcriptions.

  • medicalTranscriptionJobName :: Maybe Text

    The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • outputLocationType :: Maybe OutputLocationType

    Indicates where the specified medical transcription output is stored.

    If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

    If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri field.

  • specialty :: Maybe Specialty

    Provides the medical specialty represented in your media.

  • startTime :: Maybe POSIX

    The date and time your medical transcription job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • transcriptionJobStatus :: Maybe TranscriptionJobStatus

    Provides the status of your medical transcription job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

  • type' :: Maybe Type

    Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.

Instances

Instances details
FromJSON MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

Generic MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

Associated Types

type Rep MedicalTranscriptionJobSummary :: Type -> Type #

Read MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

Show MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

NFData MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

Eq MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

Hashable MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

type Rep MedicalTranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary

type Rep MedicalTranscriptionJobSummary = D1 ('MetaData "MedicalTranscriptionJobSummary" "Amazonka.Transcribe.Types.MedicalTranscriptionJobSummary" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "MedicalTranscriptionJobSummary'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "contentIdentificationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MedicalContentIdentificationType))) :*: (S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode))))) :*: ((S1 ('MetaSel ('Just "medicalTranscriptionJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "outputLocationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OutputLocationType)) :*: S1 ('MetaSel ('Just "specialty") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Specialty)))) :*: (S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "transcriptionJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TranscriptionJobStatus)) :*: S1 ('MetaSel ('Just "type'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Type)))))))

newMedicalTranscriptionJobSummary :: MedicalTranscriptionJobSummary Source #

Create a value of MedicalTranscriptionJobSummary with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:completionTime:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_completionTime - The date and time the specified medical transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that started processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:contentIdentificationType:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_contentIdentificationType - Labels all personal health information (PHI) identified in your transcript. For more information, see Identifying personal health information (PHI) in a transcription.

$sel:creationTime:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_creationTime - The date and time the specified medical transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_failureReason - If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

$sel:languageCode:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_languageCode - The language code used to create your medical transcription. US English (en-US) is the only supported language for medical transcriptions.

$sel:medicalTranscriptionJobName:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_medicalTranscriptionJobName - The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:outputLocationType:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_outputLocationType - Indicates where the specified medical transcription output is stored.

If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri field.

$sel:specialty:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_specialty - Provides the medical specialty represented in your media.

$sel:startTime:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_startTime - The date and time your medical transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:transcriptionJobStatus:MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_transcriptionJobStatus - Provides the status of your medical transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

$sel:type':MedicalTranscriptionJobSummary', medicalTranscriptionJobSummary_type - Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.

medicalTranscriptionJobSummary_completionTime :: Lens' MedicalTranscriptionJobSummary (Maybe UTCTime) Source #

The date and time the specified medical transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that started processing at 12:33 PM UTC-7 on May 4, 2022.

medicalTranscriptionJobSummary_contentIdentificationType :: Lens' MedicalTranscriptionJobSummary (Maybe MedicalContentIdentificationType) Source #

Labels all personal health information (PHI) identified in your transcript. For more information, see Identifying personal health information (PHI) in a transcription.

medicalTranscriptionJobSummary_creationTime :: Lens' MedicalTranscriptionJobSummary (Maybe UTCTime) Source #

The date and time the specified medical transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

medicalTranscriptionJobSummary_failureReason :: Lens' MedicalTranscriptionJobSummary (Maybe Text) Source #

If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

medicalTranscriptionJobSummary_languageCode :: Lens' MedicalTranscriptionJobSummary (Maybe LanguageCode) Source #

The language code used to create your medical transcription. US English (en-US) is the only supported language for medical transcriptions.

medicalTranscriptionJobSummary_medicalTranscriptionJobName :: Lens' MedicalTranscriptionJobSummary (Maybe Text) Source #

The name of the medical transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

medicalTranscriptionJobSummary_outputLocationType :: Lens' MedicalTranscriptionJobSummary (Maybe OutputLocationType) Source #

Indicates where the specified medical transcription output is stored.

If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri field.

medicalTranscriptionJobSummary_specialty :: Lens' MedicalTranscriptionJobSummary (Maybe Specialty) Source #

Provides the medical specialty represented in your media.

medicalTranscriptionJobSummary_startTime :: Lens' MedicalTranscriptionJobSummary (Maybe UTCTime) Source #

The date and time your medical transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

medicalTranscriptionJobSummary_transcriptionJobStatus :: Lens' MedicalTranscriptionJobSummary (Maybe TranscriptionJobStatus) Source #

Provides the status of your medical transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

medicalTranscriptionJobSummary_type :: Lens' MedicalTranscriptionJobSummary (Maybe Type) Source #

Indicates whether the input media is a dictation or a conversation, as specified in the StartMedicalTranscriptionJob request.
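A brief, hedged sketch of working with job summaries, for instance as returned by a list operation: collect the names of jobs whose status is FAILED. It assumes the lens package and the generated TranscriptionJobStatus_FAILED pattern synonym.

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Text (Text)

-- Names of the summaries whose status is FAILED and that carry a job name.
failedJobNames :: [MedicalTranscriptionJobSummary] -> [Text]
failedJobNames summaries =
  [ name
  | s <- summaries
  , s ^. medicalTranscriptionJobSummary_transcriptionJobStatus
      == Just TranscriptionJobStatus_FAILED
  , Just name <- [s ^. medicalTranscriptionJobSummary_medicalTranscriptionJobName]
  ]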

MedicalTranscriptionSetting

data MedicalTranscriptionSetting Source #

Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can also use these settings to apply custom vocabularies to your medical transcription job.

See: newMedicalTranscriptionSetting smart constructor.

Constructors

MedicalTranscriptionSetting' 

Fields

  • channelIdentification :: Maybe Bool

    Enables channel identification in multi-channel audio.

    Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

    If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

    You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

    For more information, see Transcribing multi-channel audio.

  • maxAlternatives :: Maybe Natural

    Indicate the maximum number of alternative transcriptions you want Amazon Transcribe Medical to include in your transcript.

    If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe Medical, only the actual number of alternative transcriptions are included.

    If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

    For more information, see Alternative transcriptions.

  • maxSpeakerLabels :: Maybe Natural

    Specify the maximum number of speakers you want to partition in your media.

    Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

    If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

  • showAlternatives :: Maybe Bool

    To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

    If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe Medical to generate.

    For more information, see Alternative transcriptions.

  • showSpeakerLabels :: Maybe Bool

    Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

    If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

    You can't include ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

    For more information, see Partitioning speakers (diarization).

  • vocabularyName :: Maybe Text

    The name of the custom vocabulary you want to use when processing your medical transcription job. Custom vocabulary names are case sensitive.

    The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch. US English (en-US) is the only valid language for Amazon Transcribe Medical.

Instances

Instances details
FromJSON MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

ToJSON MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

Generic MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

Associated Types

type Rep MedicalTranscriptionSetting :: Type -> Type #

Read MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

Show MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

NFData MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

Eq MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

Hashable MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

type Rep MedicalTranscriptionSetting Source # 
Instance details

Defined in Amazonka.Transcribe.Types.MedicalTranscriptionSetting

type Rep MedicalTranscriptionSetting = D1 ('MetaData "MedicalTranscriptionSetting" "Amazonka.Transcribe.Types.MedicalTranscriptionSetting" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "MedicalTranscriptionSetting'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "channelIdentification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: (S1 ('MetaSel ('Just "maxAlternatives") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "maxSpeakerLabels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))) :*: (S1 ('MetaSel ('Just "showAlternatives") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: (S1 ('MetaSel ('Just "showSpeakerLabels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "vocabularyName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newMedicalTranscriptionSetting :: MedicalTranscriptionSetting Source #

Create a value of MedicalTranscriptionSetting with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channelIdentification:MedicalTranscriptionSetting', medicalTranscriptionSetting_channelIdentification - Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Transcribing multi-channel audio.

$sel:maxAlternatives:MedicalTranscriptionSetting', medicalTranscriptionSetting_maxAlternatives - Indicate the maximum number of alternative transcriptions you want Amazon Transcribe Medical to include in your transcript.

If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe Medical, only the actual number of alternative transcriptions are included.

If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

For more information, see Alternative transcriptions.

$sel:maxSpeakerLabels:MedicalTranscriptionSetting', medicalTranscriptionSetting_maxSpeakerLabels - Specify the maximum number of speakers you want to partition in your media.

Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

$sel:showAlternatives:MedicalTranscriptionSetting', medicalTranscriptionSetting_showAlternatives - To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe Medical to generate.

For more information, see Alternative transcriptions.

$sel:showSpeakerLabels:MedicalTranscriptionSetting', medicalTranscriptionSetting_showSpeakerLabels - Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

You can't include ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Partitioning speakers (diarization).

$sel:vocabularyName:MedicalTranscriptionSetting', medicalTranscriptionSetting_vocabularyName - The name of the custom vocabulary you want to use when processing your medical transcription job. Custom vocabulary names are case sensitive.

The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch. US English (en-US) is the only valid language for Amazon Transcribe Medical.

medicalTranscriptionSetting_channelIdentification :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Transcribing multi-channel audio.

medicalTranscriptionSetting_maxAlternatives :: Lens' MedicalTranscriptionSetting (Maybe Natural) Source #

Indicate the maximum number of alternative transcriptions you want Amazon Transcribe Medical to include in your transcript.

If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe Medical, only the actual number of alternative transcriptions are included.

If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

For more information, see Alternative transcriptions.

medicalTranscriptionSetting_maxSpeakerLabels :: Lens' MedicalTranscriptionSetting (Maybe Natural) Source #

Specify the maximum number of speakers you want to partition in your media.

Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

medicalTranscriptionSetting_showAlternatives :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe Medical to generate.

For more information, see Alternative transcriptions.

medicalTranscriptionSetting_showSpeakerLabels :: Lens' MedicalTranscriptionSetting (Maybe Bool) Source #

Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

You can't include ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Partitioning speakers (diarization).

medicalTranscriptionSetting_vocabularyName :: Lens' MedicalTranscriptionSetting (Maybe Text) Source #

The name of the custom vocabulary you want to use when processing your medical transcription job. Custom vocabulary names are case sensitive.

The language of the specified custom vocabulary must match the language code that you specify in your transcription request. If the languages don't match, the custom vocabulary isn't applied. There are no errors or warnings associated with a language mismatch. US English (en-US) is the only valid language for Amazon Transcribe Medical.
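A hedged sketch tying the constraints above together: the settings below enable speaker partitioning, so they set both ShowSpeakerLabels and MaxSpeakerLabels, while leaving ChannelIdentification unset (the two are mutually exclusive). The speaker count is an arbitrary example value.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Speaker partitioning for up to four speakers. ChannelIdentification is
-- deliberately left unset because it cannot be combined with ShowSpeakerLabels.
diarizationSettings :: MedicalTranscriptionSetting
diarizationSettings =
  newMedicalTranscriptionSetting
    & medicalTranscriptionSetting_showSpeakerLabels ?~ True
    & medicalTranscriptionSetting_maxSpeakerLabels ?~ 4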

ModelSettings

data ModelSettings Source #

Provides the name of the custom language model that was included in the specified transcription job.

Only use ModelSettings with the LanguageModelName sub-parameter if you're not using automatic language identification. If you're using LanguageIdSettings in your request, that parameter contains a LanguageModelName sub-parameter instead.

See: newModelSettings smart constructor.

Constructors

ModelSettings' 

Fields

  • languageModelName :: Maybe Text

    The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

    The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

Instances

Instances details
FromJSON ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

ToJSON ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

Generic ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

Associated Types

type Rep ModelSettings :: Type -> Type #

Read ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

Show ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

NFData ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

Methods

rnf :: ModelSettings -> () #

Eq ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

Hashable ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

type Rep ModelSettings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.ModelSettings

type Rep ModelSettings = D1 ('MetaData "ModelSettings" "Amazonka.Transcribe.Types.ModelSettings" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "ModelSettings'" 'PrefixI 'True) (S1 ('MetaSel ('Just "languageModelName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newModelSettings :: ModelSettings Source #

Create a value of ModelSettings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:languageModelName:ModelSettings', modelSettings_languageModelName - The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

modelSettings_languageModelName :: Lens' ModelSettings (Maybe Text) Source #

The name of the custom language model you want to use when processing your transcription job. Note that custom language model names are case sensitive.

The language of the specified custom language model must match the language code that you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.
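A minimal, hedged sketch: attaching a custom language model by name. "my-custom-model" is a placeholder; the model's language must match the request's language code, as noted above.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- ModelSettings pointing at a (placeholder) custom language model.
customModel :: ModelSettings
customModel =
  newModelSettings
    & modelSettings_languageModelName ?~ "my-custom-model"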

NonTalkTimeFilter

data NonTalkTimeFilter Source #

Flag the presence or absence of periods of silence in your Call Analytics transcription output.

Rules using NonTalkTimeFilter are designed to match:

  • The presence of silence at specified periods throughout the call
  • The presence of speech at specified periods throughout the call

See Rule criteria for batch categories for usage examples.

See: newNonTalkTimeFilter smart constructor.

Constructors

NonTalkTimeFilter' 

Fields

  • absoluteTimeRange :: Maybe AbsoluteTimeRange

    Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for a period of silence. See AbsoluteTimeRange for more detail.

  • negate :: Maybe Bool

    Set to TRUE to flag periods of speech. Set to FALSE to flag periods of silence.

  • relativeTimeRange :: Maybe RelativeTimeRange

    Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for a period of silence. See RelativeTimeRange for more detail.

  • threshold :: Maybe Natural

    Specify the duration, in milliseconds, of the period of silence that you want to flag. For example, you can flag a silent period that lasts 30,000 milliseconds.

Instances

Instances details
FromJSON NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

ToJSON NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

Generic NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

Associated Types

type Rep NonTalkTimeFilter :: Type -> Type #

Read NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

Show NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

NFData NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

Methods

rnf :: NonTalkTimeFilter -> () #

Eq NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

Hashable NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

type Rep NonTalkTimeFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.NonTalkTimeFilter

type Rep NonTalkTimeFilter = D1 ('MetaData "NonTalkTimeFilter" "Amazonka.Transcribe.Types.NonTalkTimeFilter" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "NonTalkTimeFilter'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "absoluteTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AbsoluteTimeRange)) :*: S1 ('MetaSel ('Just "negate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool))) :*: (S1 ('MetaSel ('Just "relativeTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RelativeTimeRange)) :*: S1 ('MetaSel ('Just "threshold") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newNonTalkTimeFilter :: NonTalkTimeFilter Source #

Create a value of NonTalkTimeFilter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:absoluteTimeRange:NonTalkTimeFilter', nonTalkTimeFilter_absoluteTimeRange - Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for a period of silence. See AbsoluteTimeRange for more detail.

$sel:negate:NonTalkTimeFilter', nonTalkTimeFilter_negate - Set to TRUE to flag periods of speech. Set to FALSE to flag periods of silence.

$sel:relativeTimeRange:NonTalkTimeFilter', nonTalkTimeFilter_relativeTimeRange - Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for a period of silence. See RelativeTimeRange for more detail.

$sel:threshold:NonTalkTimeFilter', nonTalkTimeFilter_threshold - Specify the duration, in milliseconds, of the period of silence that you want to flag. For example, you can flag a silent period that lasts 30,000 milliseconds.

nonTalkTimeFilter_absoluteTimeRange :: Lens' NonTalkTimeFilter (Maybe AbsoluteTimeRange) Source #

Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for a period of silence. See AbsoluteTimeRange for more detail.

nonTalkTimeFilter_negate :: Lens' NonTalkTimeFilter (Maybe Bool) Source #

Set to TRUE to flag periods of speech. Set to FALSE to flag periods of silence.

nonTalkTimeFilter_relativeTimeRange :: Lens' NonTalkTimeFilter (Maybe RelativeTimeRange) Source #

Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for a period of silence. See RelativeTimeRange for more detail.

nonTalkTimeFilter_threshold :: Lens' NonTalkTimeFilter (Maybe Natural) Source #

Specify the duration, in milliseconds, of the period of silence that you want to flag. For example, you can flag a silent period that lasts 30,000 milliseconds.
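A hedged sketch of a complete filter: flag any silent period of at least 30 seconds, anywhere in the call. The threshold value is illustrative.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Flags silences of 30,000 ms or longer. Negate is set to False explicitly,
-- which, per the field docs, means periods of silence (not speech) are flagged.
longSilenceFilter :: NonTalkTimeFilter
longSilenceFilter =
  newNonTalkTimeFilter
    & nonTalkTimeFilter_threshold ?~ 30000
    & nonTalkTimeFilter_negate ?~ False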

RelativeTimeRange

data RelativeTimeRange Source #

A time range, in percentage, between two points in your media file.

You can use StartPercentage and EndPercentage to search a custom segment. For example, setting StartPercentage to 10 and EndPercentage to 50 only searches for your specified criteria in the audio contained between the 10 percent mark and the 50 percent mark of your media file.

You can also use First to search from the start of the media file until the time that you specify. Or use Last to search from the time that you specify until the end of the media file. For example, setting First to 10 only searches for your specified criteria in the audio contained in the first 10 percent of the media file.

If you prefer to use milliseconds instead of percentage, see AbsoluteTimeRange.

See: newRelativeTimeRange smart constructor.

Constructors

RelativeTimeRange' 

Fields

  • endPercentage :: Maybe Natural

    The time, in percentage, when Amazon Transcribe stops searching for the specified criteria in your media file. If you include EndPercentage in your request, you must also include StartPercentage.

  • first :: Maybe Natural

    The time, in percentage, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

  • last :: Maybe Natural

    The time, in percentage, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

  • startPercentage :: Maybe Natural

    The time, in percentage, when Amazon Transcribe starts searching for the specified criteria in your media file. If you include StartPercentage in your request, you must also include EndPercentage.

Instances

Instances details
FromJSON RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

ToJSON RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

Generic RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

Associated Types

type Rep RelativeTimeRange :: Type -> Type #

Read RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

Show RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

NFData RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

Methods

rnf :: RelativeTimeRange -> () #

Eq RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

Hashable RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

type Rep RelativeTimeRange Source # 
Instance details

Defined in Amazonka.Transcribe.Types.RelativeTimeRange

type Rep RelativeTimeRange = D1 ('MetaData "RelativeTimeRange" "Amazonka.Transcribe.Types.RelativeTimeRange" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "RelativeTimeRange'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "endPercentage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "first") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "last") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "startPercentage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)))))

newRelativeTimeRange :: RelativeTimeRange Source #

Create a value of RelativeTimeRange with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:endPercentage:RelativeTimeRange', relativeTimeRange_endPercentage - The time, in percentage, when Amazon Transcribe stops searching for the specified criteria in your media file. If you include EndPercentage in your request, you must also include StartPercentage.

$sel:first:RelativeTimeRange', relativeTimeRange_first - The time, in percentage, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

$sel:last:RelativeTimeRange', relativeTimeRange_last - The time, in percentage, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

$sel:startPercentage:RelativeTimeRange', relativeTimeRange_startPercentage - The time, in percentage, when Amazon Transcribe starts searching for the specified criteria in your media file. If you include StartPercentage in your request, you must also include EndPercentage.

relativeTimeRange_endPercentage :: Lens' RelativeTimeRange (Maybe Natural) Source #

The time, in percentage, when Amazon Transcribe stops searching for the specified criteria in your media file. If you include EndPercentage in your request, you must also include StartPercentage.

relativeTimeRange_first :: Lens' RelativeTimeRange (Maybe Natural) Source #

The time, in percentage, from the start of your media file until the specified value. Amazon Transcribe searches for your specified criteria in this time segment.

relativeTimeRange_last :: Lens' RelativeTimeRange (Maybe Natural) Source #

The time, in percentage, from the specified value until the end of your media file. Amazon Transcribe searches for your specified criteria in this time segment.

relativeTimeRange_startPercentage :: Lens' RelativeTimeRange (Maybe Natural) Source #

The time, in percentage, when Amazon Transcribe starts searching for the specified criteria in your media file. If you include StartPercentage in your request, you must also include EndPercentage.
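As an illustrative sketch (not part of the generated reference), the snippet below combines newRelativeTimeRange with the lenses above via Control.Lens to describe the first 10 percent of a media file; the value name firstTenPercent is made up for the example.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Search only the first 10 percent of the media file. Per the field notes
-- above, StartPercentage and EndPercentage must be supplied together.
firstTenPercent :: RelativeTimeRange
firstTenPercent =
  newRelativeTimeRange
    & relativeTimeRange_startPercentage ?~ 0
    & relativeTimeRange_endPercentage ?~ 10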

Rule

data Rule Source #

A rule is a set of criteria that you can specify to flag an attribute in your Call Analytics output. Rules define a Call Analytics category.

Rules can include these parameters: InterruptionFilter, NonTalkTimeFilter, SentimentFilter, and TranscriptFilter.

To learn more about Call Analytics rules and categories, see Creating categories for batch transcriptions and Creating categories for streaming transcriptions.

To learn more about Call Analytics, see Analyzing call center audio with Call Analytics.

See: newRule smart constructor.

Constructors

Rule' 

Fields

Instances

Instances details
FromJSON Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

ToJSON Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Generic Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Associated Types

type Rep Rule :: Type -> Type #

Methods

from :: Rule -> Rep Rule x #

to :: Rep Rule x -> Rule #

Read Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Show Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Methods

showsPrec :: Int -> Rule -> ShowS #

show :: Rule -> String #

showList :: [Rule] -> ShowS #

NFData Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Methods

rnf :: Rule -> () #

Eq Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Methods

(==) :: Rule -> Rule -> Bool #

(/=) :: Rule -> Rule -> Bool #

Hashable Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

Methods

hashWithSalt :: Int -> Rule -> Int #

hash :: Rule -> Int #

type Rep Rule Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Rule

type Rep Rule = D1 ('MetaData "Rule" "Amazonka.Transcribe.Types.Rule" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Rule'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "interruptionFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe InterruptionFilter)) :*: S1 ('MetaSel ('Just "nonTalkTimeFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe NonTalkTimeFilter))) :*: (S1 ('MetaSel ('Just "sentimentFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SentimentFilter)) :*: S1 ('MetaSel ('Just "transcriptFilter") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TranscriptFilter)))))

newRule :: Rule Source #

Create a value of Rule with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:interruptionFilter:Rule', rule_interruptionFilter - Flag the presence or absence of interruptions in your Call Analytics transcription output. Refer to InterruptionFilter for more detail.

$sel:nonTalkTimeFilter:Rule', rule_nonTalkTimeFilter - Flag the presence or absence of periods of silence in your Call Analytics transcription output. Refer to NonTalkTimeFilter for more detail.

$sel:sentimentFilter:Rule', rule_sentimentFilter - Flag the presence or absence of specific sentiments in your Call Analytics transcription output. Refer to SentimentFilter for more detail.

$sel:transcriptFilter:Rule', rule_transcriptFilter - Flag the presence or absence of specific words or phrases in your Call Analytics transcription output. Refer to TranscriptFilter for more detail.

rule_interruptionFilter :: Lens' Rule (Maybe InterruptionFilter) Source #

Flag the presence or absence of interruptions in your Call Analytics transcription output. Refer to InterruptionFilter for more detail.

rule_nonTalkTimeFilter :: Lens' Rule (Maybe NonTalkTimeFilter) Source #

Flag the presence or absence of periods of silence in your Call Analytics transcription output. Refer to NonTalkTimeFilter for more detail.

rule_sentimentFilter :: Lens' Rule (Maybe SentimentFilter) Source #

Flag the presence or absence of specific sentiments in your Call Analytics transcription output. Refer to SentimentFilter for more detail.

rule_transcriptFilter :: Lens' Rule (Maybe TranscriptFilter) Source #

Flag the presence or absence of specific words or phrases in your Call Analytics transcription output. Refer to TranscriptFilter for more detail.
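For orientation only (this is not part of the generated reference), a minimal sketch of building a Rule: it wraps a NonTalkTimeFilter built with newNonTalkTimeFilter, the smart constructor documented elsewhere in this module (assumed to take no required arguments, since all of that type's fields are optional); silenceRule is a made-up name.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- A rule that flags periods of silence using the filter's default settings.
silenceRule :: Rule
silenceRule =
  newRule
    & rule_nonTalkTimeFilter ?~ newNonTalkTimeFilter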

SentimentFilter

data SentimentFilter Source #

Flag the presence or absence of specific sentiments detected in your Call Analytics transcription output.

Rules using SentimentFilter are designed to match:

  • The presence or absence of a positive sentiment felt by the customer, agent, or both at specified points in the call
  • The presence or absence of a negative sentiment felt by the customer, agent, or both at specified points in the call
  • The presence or absence of a neutral sentiment felt by the customer, agent, or both at specified points in the call
  • The presence or absence of a mixed sentiment felt by the customer, the agent, or both at specified points in the call

See Rule criteria for batch categories for usage examples.

See: newSentimentFilter smart constructor.

Constructors

SentimentFilter' 

Fields

  • absoluteTimeRange :: Maybe AbsoluteTimeRange

    Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified sentiments. See AbsoluteTimeRange for more detail.

  • negate :: Maybe Bool

    Set to TRUE to flag the sentiments that you didn't include in your request. Set to FALSE to flag the sentiments that you specified in your request.

  • participantRole :: Maybe ParticipantRole

    Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

  • relativeTimeRange :: Maybe RelativeTimeRange

    Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified sentiments. See RelativeTimeRange for more detail.

  • sentiments :: NonEmpty SentimentValue

    Specify the sentiments that you want to flag.

Instances

Instances details
FromJSON SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

ToJSON SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

Generic SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

Associated Types

type Rep SentimentFilter :: Type -> Type #

Read SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

Show SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

NFData SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

Methods

rnf :: SentimentFilter -> () #

Eq SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

Hashable SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

type Rep SentimentFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SentimentFilter

type Rep SentimentFilter = D1 ('MetaData "SentimentFilter" "Amazonka.Transcribe.Types.SentimentFilter" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "SentimentFilter'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "absoluteTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AbsoluteTimeRange)) :*: S1 ('MetaSel ('Just "negate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool))) :*: (S1 ('MetaSel ('Just "participantRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ParticipantRole)) :*: (S1 ('MetaSel ('Just "relativeTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RelativeTimeRange)) :*: S1 ('MetaSel ('Just "sentiments") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (NonEmpty SentimentValue))))))

newSentimentFilter Source #

Create a value of SentimentFilter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:absoluteTimeRange:SentimentFilter', sentimentFilter_absoluteTimeRange - Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified sentiments. See AbsoluteTimeRange for more detail.

$sel:negate:SentimentFilter', sentimentFilter_negate - Set to TRUE to flag the sentiments that you didn't include in your request. Set to FALSE to flag the sentiments that you specified in your request.

$sel:participantRole:SentimentFilter', sentimentFilter_participantRole - Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

$sel:relativeTimeRange:SentimentFilter', sentimentFilter_relativeTimeRange - Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified sentiments. See RelativeTimeRange for more detail.

$sel:sentiments:SentimentFilter', sentimentFilter_sentiments - Specify the sentiments that you want to flag.

sentimentFilter_absoluteTimeRange :: Lens' SentimentFilter (Maybe AbsoluteTimeRange) Source #

Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified sentiments. See AbsoluteTimeRange for more detail.

sentimentFilter_negate :: Lens' SentimentFilter (Maybe Bool) Source #

Set to TRUE to flag the sentiments that you didn't include in your request. Set to FALSE to flag the sentiments that you specified in your request.

sentimentFilter_participantRole :: Lens' SentimentFilter (Maybe ParticipantRole) Source #

Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

sentimentFilter_relativeTimeRange :: Lens' SentimentFilter (Maybe RelativeTimeRange) Source #

Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified sentiments. See RelativeTimeRange for more detail.

sentimentFilter_sentiments :: Lens' SentimentFilter (NonEmpty SentimentValue) Source #

Specify the sentiments that you want to flag.
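As a hedged sketch (not generated documentation), the following builds a SentimentFilter that flags negative customer sentiment. The enum pattern names SentimentValue_NEGATIVE and ParticipantRole_CUSTOMER are assumed to follow the usual amazonka naming for these types; negativeCustomer is a made-up name.

import Data.List.NonEmpty (NonEmpty ((:|)))
import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Flag negative sentiment expressed by the customer anywhere in the call.
-- The sentiments list is the only required field of newSentimentFilter.
negativeCustomer :: SentimentFilter
negativeCustomer =
  newSentimentFilter (SentimentValue_NEGATIVE :| [])
    & sentimentFilter_participantRole ?~ ParticipantRole_CUSTOMER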

Settings

data Settings Source #

Allows additional optional settings in your request, including channel identification, alternative transcriptions, and speaker partitioning. You can also use these settings to apply custom vocabularies to your transcription job.

See: newSettings smart constructor.

Constructors

Settings' 

Fields

  • channelIdentification :: Maybe Bool

    Enables channel identification in multi-channel audio.

    Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

    You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

    For more information, see Transcribing multi-channel audio.

  • maxAlternatives :: Maybe Natural

    Indicate the maximum number of alternative transcriptions you want Amazon Transcribe to include in your transcript.

    If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe, only the actual number of alternative transcriptions is included.

    If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

    For more information, see Alternative transcriptions.

  • maxSpeakerLabels :: Maybe Natural

    Specify the maximum number of speakers you want to partition in your media.

    Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

    If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

  • showAlternatives :: Maybe Bool

    To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

    If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

    If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe to generate.

    For more information, see Alternative transcriptions.

  • showSpeakerLabels :: Maybe Bool

    Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

    If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

    You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

    For more information, see Partitioning speakers (diarization).

  • vocabularyFilterMethod :: Maybe VocabularyFilterMethod

    Specify how you want your custom vocabulary filter applied to your transcript.

    To replace words with ***, choose mask.

    To delete words, choose remove.

    To flag words without changing them, choose tag.

  • vocabularyFilterName :: Maybe Text

    The name of the custom vocabulary filter you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

    Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

  • vocabularyName :: Maybe Text

    The name of the custom vocabulary you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

Instances

Instances details
FromJSON Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

ToJSON Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Generic Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Associated Types

type Rep Settings :: Type -> Type #

Methods

from :: Settings -> Rep Settings x #

to :: Rep Settings x -> Settings #

Read Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Show Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

NFData Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Methods

rnf :: Settings -> () #

Eq Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Hashable Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

Methods

hashWithSalt :: Int -> Settings -> Int #

hash :: Settings -> Int #

type Rep Settings Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Settings

type Rep Settings = D1 ('MetaData "Settings" "Amazonka.Transcribe.Types.Settings" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Settings'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "channelIdentification") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "maxAlternatives") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))) :*: (S1 ('MetaSel ('Just "maxSpeakerLabels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "showAlternatives") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)))) :*: ((S1 ('MetaSel ('Just "showSpeakerLabels") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "vocabularyFilterMethod") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VocabularyFilterMethod))) :*: (S1 ('MetaSel ('Just "vocabularyFilterName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "vocabularyName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newSettings :: Settings Source #

Create a value of Settings with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:channelIdentification:Settings', settings_channelIdentification - Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Transcribing multi-channel audio.

$sel:maxAlternatives:Settings', settings_maxAlternatives - Indicate the maximum number of alternative transcriptions you want Amazon Transcribe to include in your transcript.

If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe, only the actual number of alternative transcriptions is included.

If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

For more information, see Alternative transcriptions.

$sel:maxSpeakerLabels:Settings', settings_maxSpeakerLabels - Specify the maximum number of speakers you want to partition in your media.

Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

$sel:showAlternatives:Settings', settings_showAlternatives - To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe to generate.

For more information, see Alternative transcriptions.

$sel:showSpeakerLabels:Settings', settings_showSpeakerLabels - Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Partitioning speakers (diarization).

$sel:vocabularyFilterMethod:Settings', settings_vocabularyFilterMethod - Specify how you want your custom vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

$sel:vocabularyFilterName:Settings', settings_vocabularyFilterName - The name of the custom vocabulary filter you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

$sel:vocabularyName:Settings', settings_vocabularyName - The name of the custom vocabulary you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

settings_channelIdentification :: Lens' Settings (Maybe Bool) Source #

Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Transcribing multi-channel audio.

settings_maxAlternatives :: Lens' Settings (Maybe Natural) Source #

Indicate the maximum number of alternative transcriptions you want Amazon Transcribe to include in your transcript.

If you select a number greater than the number of alternative transcriptions generated by Amazon Transcribe, only the actual number of alternative transcriptions is included.

If you include MaxAlternatives in your request, you must also include ShowAlternatives with a value of true.

For more information, see Alternative transcriptions.

settings_maxSpeakerLabels :: Lens' Settings (Maybe Natural) Source #

Specify the maximum number of speakers you want to partition in your media.

Note that if your media contains more speakers than the specified number, multiple speakers are treated as a single speaker.

If you specify the MaxSpeakerLabels field, you must set the ShowSpeakerLabels field to true.

settings_showAlternatives :: Lens' Settings (Maybe Bool) Source #

To include alternative transcriptions within your transcription output, include ShowAlternatives in your transcription request.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript does not separate the speech by channel.

If you include ShowAlternatives, you must also include MaxAlternatives, which is the maximum number of alternative transcriptions you want Amazon Transcribe to generate.

For more information, see Alternative transcriptions.

settings_showSpeakerLabels :: Lens' Settings (Maybe Bool) Source #

Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

If you enable ShowSpeakerLabels in your request, you must also include MaxSpeakerLabels.

You can't include both ShowSpeakerLabels and ChannelIdentification in the same request. Including both parameters returns a BadRequestException.

For more information, see Partitioning speakers (diarization).

settings_vocabularyFilterMethod :: Lens' Settings (Maybe VocabularyFilterMethod) Source #

Specify how you want your custom vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

settings_vocabularyFilterName :: Lens' Settings (Maybe Text) Source #

The name of the custom vocabulary filter you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

Note that if you include VocabularyFilterName in your request, you must also include VocabularyFilterMethod.

settings_vocabularyName :: Lens' Settings (Maybe Text) Source #

The name of the custom vocabulary you want to use in your transcription job request. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.
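The sketch below (illustrative only, not generated documentation) enables speaker partitioning through the lenses above; diarizationSettings is a made-up name.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Partition up to four speakers. ShowSpeakerLabels requires MaxSpeakerLabels,
-- and neither may be combined with ChannelIdentification in the same request.
diarizationSettings :: Settings
diarizationSettings =
  newSettings
    & settings_showSpeakerLabels ?~ True
    & settings_maxSpeakerLabels ?~ 4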

Subtitles

data Subtitles Source #

Generate subtitles for your media file with your transcription request.

You can choose a start index of 0 or 1, and you can specify either WebVTT or SubRip (or both) as your output format.

Note that your subtitle files are placed in the same location as your transcription output.

See: newSubtitles smart constructor.

Constructors

Subtitles' 

Fields

  • formats :: Maybe [SubtitleFormat]

    Specify the output format for your subtitle file; if you select both WebVTT (vtt) and SubRip (srt) formats, two output files are generated.

  • outputStartIndex :: Maybe Natural

    Specify the starting value that is assigned to the first subtitle segment.

    The default start index for Amazon Transcribe is 0, which differs from the more widely used standard of 1. If you're uncertain which value to use, we recommend choosing 1, as this may improve compatibility with other services.

Instances

Instances details
ToJSON Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

Generic Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

Associated Types

type Rep Subtitles :: Type -> Type #

Read Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

Show Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

NFData Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

Methods

rnf :: Subtitles -> () #

Eq Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

Hashable Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

type Rep Subtitles Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Subtitles

type Rep Subtitles = D1 ('MetaData "Subtitles" "Amazonka.Transcribe.Types.Subtitles" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Subtitles'" 'PrefixI 'True) (S1 ('MetaSel ('Just "formats") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [SubtitleFormat])) :*: S1 ('MetaSel ('Just "outputStartIndex") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural))))

newSubtitles :: Subtitles Source #

Create a value of Subtitles with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:formats:Subtitles', subtitles_formats - Specify the output format for your subtitle file; if you select both WebVTT (vtt) and SubRip (srt) formats, two output files are generated.

$sel:outputStartIndex:Subtitles', subtitles_outputStartIndex - Specify the starting value that is assigned to the first subtitle segment.

The default start index for Amazon Transcribe is 0, which differs from the more widely used standard of 1. If you're uncertain which value to use, we recommend choosing 1, as this may improve compatibility with other services.

subtitles_formats :: Lens' Subtitles (Maybe [SubtitleFormat]) Source #

Specify the output format for your subtitle file; if you select both WebVTT (vtt) and SubRip (srt) formats, two output files are generated.

subtitles_outputStartIndex :: Lens' Subtitles (Maybe Natural) Source #

Specify the starting value that is assigned to the first subtitle segment.

The default start index for Amazon Transcribe is 0, which differs from the more widely used standard of 1. If you're uncertain which value to use, we recommend choosing 1, as this may improve compatibility with other services.
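As an illustrative sketch (not generated documentation), the following requests both subtitle formats with a start index of 1, as recommended above. The pattern names SubtitleFormat_Srt and SubtitleFormat_Vtt are assumed to follow the usual amazonka enum naming; bothFormats is a made-up name.

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Produce SubRip and WebVTT files, numbering subtitle segments from 1.
bothFormats :: Subtitles
bothFormats =
  newSubtitles
    & subtitles_formats ?~ [SubtitleFormat_Srt, SubtitleFormat_Vtt]
    & subtitles_outputStartIndex ?~ 1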

SubtitlesOutput

data SubtitlesOutput Source #

Provides information about your subtitle file, including format, start index, and Amazon S3 location.

See: newSubtitlesOutput smart constructor.

Constructors

SubtitlesOutput' 

Fields

  • formats :: Maybe [SubtitleFormat]

    Provides the format of your subtitle files. If your request included both WebVTT (vtt) and SubRip (srt) formats, both formats are shown.

  • outputStartIndex :: Maybe Natural

    Provides the start index value for your subtitle files. If you did not specify a value in your request, the default value of 0 is used.

  • subtitleFileUris :: Maybe [Text]

    The Amazon S3 location of your transcript. You can use this URI to access or download your subtitle file. Your subtitle file is stored in the same location as your transcript. If you specified both WebVTT and SubRip subtitle formats, two URIs are provided.

    If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

    If you didn't include OutputBucketName in your transcription job request, your subtitle file is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your subtitle file.

    Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

Instances

Instances details
FromJSON SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

Generic SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

Associated Types

type Rep SubtitlesOutput :: Type -> Type #

Read SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

Show SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

NFData SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

Methods

rnf :: SubtitlesOutput -> () #

Eq SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

Hashable SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

type Rep SubtitlesOutput Source # 
Instance details

Defined in Amazonka.Transcribe.Types.SubtitlesOutput

type Rep SubtitlesOutput = D1 ('MetaData "SubtitlesOutput" "Amazonka.Transcribe.Types.SubtitlesOutput" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "SubtitlesOutput'" 'PrefixI 'True) (S1 ('MetaSel ('Just "formats") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [SubtitleFormat])) :*: (S1 ('MetaSel ('Just "outputStartIndex") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: S1 ('MetaSel ('Just "subtitleFileUris") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [Text])))))

newSubtitlesOutput :: SubtitlesOutput Source #

Create a value of SubtitlesOutput with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:formats:SubtitlesOutput', subtitlesOutput_formats - Provides the format of your subtitle files. If your request included both WebVTT (vtt) and SubRip (srt) formats, both formats are shown.

$sel:outputStartIndex:SubtitlesOutput', subtitlesOutput_outputStartIndex - Provides the start index value for your subtitle files. If you did not specify a value in your request, the default value of 0 is used.

$sel:subtitleFileUris:SubtitlesOutput', subtitlesOutput_subtitleFileUris - The Amazon S3 location of your transcript. You can use this URI to access or download your subtitle file. Your subtitle file is stored in the same location as your transcript. If you specified both WebVTT and SubRip subtitle formats, two URIs are provided.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your subtitle file is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your subtitle file.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

subtitlesOutput_formats :: Lens' SubtitlesOutput (Maybe [SubtitleFormat]) Source #

Provides the format of your subtitle files. If your request included both WebVTT (vtt) and SubRip (srt) formats, both formats are shown.

subtitlesOutput_outputStartIndex :: Lens' SubtitlesOutput (Maybe Natural) Source #

Provides the start index value for your subtitle files. If you did not specify a value in your request, the default value of 0 is used.

subtitlesOutput_subtitleFileUris :: Lens' SubtitlesOutput (Maybe [Text]) Source #

The Amazon S3 location of your transcript. You can use this URI to access or download your subtitle file. Your subtitle file is stored in the same location as your transcript. If you specified both WebVTT and SubRip subtitle formats, two URIs are provided.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your subtitle file is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your subtitle file.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.
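Since SubtitlesOutput is returned by the service rather than built by callers, a small consumer sketch (not generated documentation) is shown instead; subtitleUris is a made-up helper.

import Data.Maybe (fromMaybe)
import Data.Text (Text)
import Amazonka.Transcribe.Types
import Control.Lens ((^.))

-- Collect whatever subtitle URIs the job reported, treating an absent field
-- as an empty list.
subtitleUris :: SubtitlesOutput -> [Text]
subtitleUris out = fromMaybe [] (out ^. subtitlesOutput_subtitleFileUris)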

Tag

data Tag Source #

Adds metadata, in the form of a key:value pair, to the specified resource.

For example, you could add the tag Department:Sales to a resource to indicate that it pertains to your organization's sales department. You can also use tags for tag-based access control.

To learn more about tagging, see Tagging resources.

See: newTag smart constructor.

Constructors

Tag' 

Fields

  • key :: Text

    The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the key is 'Department'.

  • value :: Text

    The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the value is 'Sales'.

    Note that you can set the value of a tag to an empty string, but you can't set the value of a tag to null. Omitting the tag value is the same as using an empty string.

Instances

Instances details
FromJSON Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

ToJSON Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Generic Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Associated Types

type Rep Tag :: Type -> Type #

Methods

from :: Tag -> Rep Tag x #

to :: Rep Tag x -> Tag #

Read Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Show Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Methods

showsPrec :: Int -> Tag -> ShowS #

show :: Tag -> String #

showList :: [Tag] -> ShowS #

NFData Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Methods

rnf :: Tag -> () #

Eq Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Methods

(==) :: Tag -> Tag -> Bool #

(/=) :: Tag -> Tag -> Bool #

Hashable Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

Methods

hashWithSalt :: Int -> Tag -> Int #

hash :: Tag -> Int #

type Rep Tag Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Tag

type Rep Tag = D1 ('MetaData "Tag" "Amazonka.Transcribe.Types.Tag" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Tag'" 'PrefixI 'True) (S1 ('MetaSel ('Just "key") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "value") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))

newTag Source #

Create a value of Tag with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:key:Tag', tag_key - The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the key is 'Department'.

$sel:value:Tag', tag_value - The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the value is 'Sales'.

Note that you can set the value of a tag to an empty string, but you can't set the value of a tag to null. Omitting the tag value is the same as using an empty string.

tag_key :: Lens' Tag Text Source #

The first part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the key is 'Department'.

tag_value :: Lens' Tag Text Source #

The second part of a key:value pair that forms a tag associated with a given resource. For example, in the tag Department:Sales, the value is 'Sales'.

Note that you can set the value of a tag to an empty string, but you can't set the value of a tag to null. Omitting the tag value is the same as using an empty string.
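For orientation (not generated documentation), the Department:Sales example above expressed with the smart constructor; both fields are required, key first. salesTag is a made-up name.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types

-- The Department:Sales tag described above.
salesTag :: Tag
salesTag = newTag "Department" "Sales"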

Transcript

data Transcript Source #

Provides you with the Amazon S3 URI you can use to access your transcript.

See: newTranscript smart constructor.

Constructors

Transcript' 

Fields

  • redactedTranscriptFileUri :: Maybe Text

    The Amazon S3 location of your redacted transcript. You can use this URI to access or download your transcript.

    If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

    If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and RedactedTranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

    Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

  • transcriptFileUri :: Maybe Text

    The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

    If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

    If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

    Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

Instances

Instances details
FromJSON Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

Generic Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

Associated Types

type Rep Transcript :: Type -> Type #

Read Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

Show Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

NFData Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

Methods

rnf :: Transcript -> () #

Eq Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

Hashable Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

type Rep Transcript Source # 
Instance details

Defined in Amazonka.Transcribe.Types.Transcript

type Rep Transcript = D1 ('MetaData "Transcript" "Amazonka.Transcribe.Types.Transcript" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "Transcript'" 'PrefixI 'True) (S1 ('MetaSel ('Just "redactedTranscriptFileUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "transcriptFileUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newTranscript :: Transcript Source #

Create a value of Transcript with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:redactedTranscriptFileUri:Transcript', transcript_redactedTranscriptFileUri - The Amazon S3 location of your redacted transcript. You can use this URI to access or download your transcript.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and RedactedTranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

$sel:transcriptFileUri:Transcript', transcript_transcriptFileUri - The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

transcript_redactedTranscriptFileUri :: Lens' Transcript (Maybe Text) Source #

The Amazon S3 location of your redacted transcript. You can use this URI to access or download your transcript.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and RedactedTranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.

transcript_transcriptFileUri :: Lens' Transcript (Maybe Text) Source #

The Amazon S3 location of your transcript. You can use this URI to access or download your transcript.

If you included OutputBucketName in your transcription job request, this is the URI of that bucket. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If you didn't include OutputBucketName in your transcription job request, your transcript is stored in a service-managed bucket, and TranscriptFileUri provides you with a temporary URI you can use for secure access to your transcript.

Temporary URIs for service-managed Amazon S3 buckets are only valid for 15 minutes. If you get an AccessDenied error, you can get a new temporary URI by running a GetTranscriptionJob or ListTranscriptionJobs request.
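As a hedged consumer sketch (not generated documentation), preferredUri picks the redacted transcript URI when redaction was enabled and falls back to the plain one otherwise; the helper name is made up.

import Control.Applicative ((<|>))
import Data.Text (Text)
import Amazonka.Transcribe.Types
import Control.Lens ((^.))

-- Nothing means neither URI is available yet (for example, the job is still
-- in progress).
preferredUri :: Transcript -> Maybe Text
preferredUri t =
  (t ^. transcript_redactedTranscriptFileUri)
    <|> (t ^. transcript_transcriptFileUri)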

TranscriptFilter

data TranscriptFilter Source #

Flag the presence or absence of specific words or phrases detected in your Call Analytics transcription output.

Rules using TranscriptFilter are designed to match:

  • Custom words or phrases spoken by the agent, the customer, or both
  • Custom words or phrases not spoken by the agent, the customer, or either
  • Custom words or phrases that occur at a specific time frame

See Rule criteria for batch categories and Rule criteria for streaming categories for usage examples.

See: newTranscriptFilter smart constructor.

Constructors

TranscriptFilter' 

Fields

  • absoluteTimeRange :: Maybe AbsoluteTimeRange

    Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified key words or phrases. See AbsoluteTimeRange for more detail.

  • negate :: Maybe Bool

    Set to TRUE to flag the absence of the phrase that you specified in your request. Set to FALSE to flag the presence of the phrase that you specified in your request.

  • participantRole :: Maybe ParticipantRole

    Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

  • relativeTimeRange :: Maybe RelativeTimeRange

    Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified key words or phrases. See RelativeTimeRange for more detail.

  • transcriptFilterType :: TranscriptFilterType

    Flag the presence or absence of an exact match to the phrases that you specify. For example, if you specify the phrase "speak to a manager" as your Targets value, only that exact phrase is flagged.

    Note that semantic matching is not supported. For example, if your customer says "speak to the manager", instead of "speak to a manager", your content is not flagged.

  • targets :: NonEmpty Text

    Specify the phrases that you want to flag.

Instances

Instances details
FromJSON TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

ToJSON TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

Generic TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

Associated Types

type Rep TranscriptFilter :: Type -> Type #

Read TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

Show TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

NFData TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

Methods

rnf :: TranscriptFilter -> () #

Eq TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

Hashable TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

type Rep TranscriptFilter Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptFilter

type Rep TranscriptFilter = D1 ('MetaData "TranscriptFilter" "Amazonka.Transcribe.Types.TranscriptFilter" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "TranscriptFilter'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "absoluteTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe AbsoluteTimeRange)) :*: (S1 ('MetaSel ('Just "negate") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "participantRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ParticipantRole)))) :*: (S1 ('MetaSel ('Just "relativeTimeRange") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RelativeTimeRange)) :*: (S1 ('MetaSel ('Just "transcriptFilterType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 TranscriptFilterType) :*: S1 ('MetaSel ('Just "targets") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (NonEmpty Text))))))

newTranscriptFilter Source #

Create a value of TranscriptFilter with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:absoluteTimeRange:TranscriptFilter', transcriptFilter_absoluteTimeRange - Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified key words or phrases. See AbsoluteTimeRange for more detail.

$sel:negate:TranscriptFilter', transcriptFilter_negate - Set to TRUE to flag the absence of the phrase that you specified in your request. Set to FALSE to flag the presence of the phrase that you specified in your request.

$sel:participantRole:TranscriptFilter', transcriptFilter_participantRole - Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

$sel:relativeTimeRange:TranscriptFilter', transcriptFilter_relativeTimeRange - Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified key words or phrases. See RelativeTimeRange for more detail.

$sel:transcriptFilterType:TranscriptFilter', transcriptFilter_transcriptFilterType - Flag the presence or absence of an exact match to the phrases that you specify. For example, if you specify the phrase "speak to a manager" as your Targets value, only that exact phrase is flagged.

Note that semantic matching is not supported. For example, if your customer says "speak to the manager", instead of "speak to a manager", your content is not flagged.

$sel:targets:TranscriptFilter', transcriptFilter_targets - Specify the phrases that you want to flag.

transcriptFilter_absoluteTimeRange :: Lens' TranscriptFilter (Maybe AbsoluteTimeRange) Source #

Makes it possible to specify a time range (in milliseconds) in your audio, during which you want to search for the specified key words or phrases. See AbsoluteTimeRange for more detail.

transcriptFilter_negate :: Lens' TranscriptFilter (Maybe Bool) Source #

Set to TRUE to flag the absence of the phrase that you specified in your request. Set to FALSE to flag the presence of the phrase that you specified in your request.

transcriptFilter_participantRole :: Lens' TranscriptFilter (Maybe ParticipantRole) Source #

Specify the participant that you want to flag. Omitting this parameter is equivalent to specifying both participants.

transcriptFilter_relativeTimeRange :: Lens' TranscriptFilter (Maybe RelativeTimeRange) Source #

Makes it possible to specify a time range (in percentage) in your media file, during which you want to search for the specified key words or phrases. See RelativeTimeRange for more detail.

transcriptFilter_transcriptFilterType :: Lens' TranscriptFilter TranscriptFilterType Source #

Flag the presence or absence of an exact match to the phrases that you specify. For example, if you specify the phrase "speak to a manager" as your Targets value, only that exact phrase is flagged.

Note that semantic matching is not supported. For example, if your customer says "speak to the manager", instead of "speak to a manager", your content is not flagged.

transcriptFilter_targets :: Lens' TranscriptFilter (NonEmpty Text) Source #

Specify the phrases that you want to flag.
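A final illustrative sketch (not generated documentation): an exact-match filter for the "speak to a manager" example above. TranscriptFilterType_EXACT is assumed to follow the usual amazonka enum naming, and the argument order of newTranscriptFilter (filter type, then targets) is assumed from the required fields shown above; escalationPhrase is a made-up name.

{-# LANGUAGE OverloadedStrings #-}

import Data.List.NonEmpty (NonEmpty ((:|)))
import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- Flag calls in which the exact phrase "speak to a manager" is spoken by
-- either participant.
escalationPhrase :: TranscriptFilter
escalationPhrase =
  newTranscriptFilter TranscriptFilterType_EXACT ("speak to a manager" :| [])
    & transcriptFilter_negate ?~ False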

TranscriptionJob

data TranscriptionJob Source #

Provides detailed information about a transcription job.

To view the status of the specified transcription job, check the TranscriptionJobStatus field. If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri. If the status is FAILED, FailureReason provides details on why your transcription job failed.

If you enabled content redaction, the redacted transcript can be found at the location specified in RedactedTranscriptFileUri.

See: newTranscriptionJob smart constructor.

Constructors

TranscriptionJob' 

Fields

  • completionTime :: Maybe POSIX

    The date and time the specified transcription job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

  • contentRedaction :: Maybe ContentRedaction

    Indicates whether redaction was enabled in your transcript.

  • creationTime :: Maybe POSIX

    The date and time the specified transcription job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

    The FailureReason field contains one of the following values:

    • Unsupported media format.

      The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

    • The media format provided does not match the detected media format.

      The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

    • Invalid sample rate for audio file.

      The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

    • The sample rate provided does not match the detected sample rate.

      The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

    • Invalid file size: file size too large.

      The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

    • Invalid number of channels: number of channels too large.

      Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

  • identifiedLanguageScore :: Maybe Double

    The confidence score associated with the language identified in your media file.

    Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

  • identifyLanguage :: Maybe Bool

    Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

  • identifyMultipleLanguages :: Maybe Bool

    Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

  • jobExecutionSettings :: Maybe JobExecutionSettings

    Provides information about how your transcription job was processed. This parameter shows if your request was queued and what data access role was used.

  • languageCode :: Maybe LanguageCode

    The language code used to create your transcription job. This parameter is used with single-language identification. For multi-language identification requests, refer to the plural version of this parameter, LanguageCodes.

  • languageCodes :: Maybe [LanguageCodeItem]

    The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification requests, refer to the singular version of this parameter, LanguageCode.

  • languageIdSettings :: Maybe (HashMap LanguageCode LanguageIdSettings)

    Provides the name and language of all custom language models, custom vocabularies, and custom vocabulary filters that you included in your request.

  • languageOptions :: Maybe (NonEmpty LanguageCode)

    Provides the language codes you specified in your request.

  • media :: Maybe Media

    Provides the Amazon S3 location of the media file you used in your request.

  • mediaFormat :: Maybe MediaFormat

    The format of the input media file.

  • mediaSampleRateHertz :: Maybe Natural

    The sample rate, in hertz, of the audio track in your input media file.

  • modelSettings :: Maybe ModelSettings

    Provides information on the custom language model you included in your request.

  • settings :: Maybe Settings

    Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

  • startTime :: Maybe POSIX

    The date and time the specified transcription job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • subtitles :: Maybe SubtitlesOutput

    Indicates whether subtitles were generated with your transcription.

  • tags :: Maybe (NonEmpty Tag)

    The tags, each in the form of a key:value pair, assigned to the specified transcription job.

  • transcript :: Maybe Transcript

    Provides you with the Amazon S3 URI you can use to access your transcript.

  • transcriptionJobName :: Maybe Text

    The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • transcriptionJobStatus :: Maybe TranscriptionJobStatus

    Provides the status of the specified transcription job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

Instances

Instances details
FromJSON TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

Generic TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

Associated Types

type Rep TranscriptionJob :: Type -> Type #

Read TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

Show TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

NFData TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

Methods

rnf :: TranscriptionJob -> () #

Eq TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

Hashable TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

type Rep TranscriptionJob Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJob

type Rep TranscriptionJob = D1 ('MetaData "TranscriptionJob" "Amazonka.Transcribe.Types.TranscriptionJob" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "TranscriptionJob'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "contentRedaction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ContentRedaction))) :*: (S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "identifiedLanguageScore") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))))) :*: ((S1 ('MetaSel ('Just "identifyLanguage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: (S1 ('MetaSel ('Just "identifyMultipleLanguages") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "jobExecutionSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe JobExecutionSettings)))) :*: (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: (S1 ('MetaSel ('Just "languageCodes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [LanguageCodeItem])) :*: S1 ('MetaSel ('Just "languageIdSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap LanguageCode LanguageIdSettings))))))) :*: (((S1 ('MetaSel ('Just "languageOptions") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty LanguageCode))) :*: (S1 ('MetaSel ('Just "media") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Media)) :*: S1 ('MetaSel ('Just "mediaFormat") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MediaFormat)))) :*: (S1 ('MetaSel ('Just "mediaSampleRateHertz") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Natural)) :*: (S1 ('MetaSel ('Just "modelSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ModelSettings)) :*: S1 ('MetaSel ('Just "settings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Settings))))) :*: ((S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "subtitles") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe SubtitlesOutput)) :*: S1 ('MetaSel ('Just "tags") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (NonEmpty Tag))))) :*: (S1 ('MetaSel ('Just "transcript") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Transcript)) :*: (S1 ('MetaSel ('Just "transcriptionJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "transcriptionJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TranscriptionJobStatus))))))))

newTranscriptionJob :: TranscriptionJob Source #

Create a value of TranscriptionJob with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
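For illustration, here is a minimal sketch (not part of the generated API) of building a TranscriptionJob value locally with newTranscriptionJob and the generated lenses. It assumes the lens package for the (&) and (?~) operators and the OverloadedStrings extension for the Text literal:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types
import Control.Lens ((&), (?~))

-- A locally constructed value; in practice TranscriptionJob values are
-- returned by the service, but the same lenses work for reading and writing.
exampleJob :: TranscriptionJob
exampleJob =
  newTranscriptionJob
    & transcriptionJob_transcriptionJobName ?~ "my-first-job"
    & transcriptionJob_identifyLanguage ?~ True
    & transcriptionJob_mediaSampleRateHertz ?~ 16000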

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:completionTime:TranscriptionJob', transcriptionJob_completionTime - The date and time the specified transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:contentRedaction:TranscriptionJob', transcriptionJob_contentRedaction - Indicates whether redaction was enabled in your transcript.

$sel:creationTime:TranscriptionJob', transcriptionJob_creationTime - The date and time the specified transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:TranscriptionJob', transcriptionJob_failureReason - If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

$sel:identifiedLanguageScore:TranscriptionJob', transcriptionJob_identifiedLanguageScore - The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

$sel:identifyLanguage:TranscriptionJob', transcriptionJob_identifyLanguage - Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

$sel:identifyMultipleLanguages:TranscriptionJob', transcriptionJob_identifyMultipleLanguages - Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

$sel:jobExecutionSettings:TranscriptionJob', transcriptionJob_jobExecutionSettings - Provides information about how your transcription job was processed. This parameter shows if your request was queued and what data access role was used.

TranscriptionJob, transcriptionJob_languageCode - The language code used to create your transcription job. This parameter is used with single-language identification. For multi-language identification requests, refer to the plural version of this parameter, LanguageCodes.

$sel:languageCodes:TranscriptionJob', transcriptionJob_languageCodes - The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification requests, refer to the singular version of this parameter, LanguageCode.

$sel:languageIdSettings:TranscriptionJob', transcriptionJob_languageIdSettings - Provides the name and language of all custom language models, custom vocabularies, and custom vocabulary filters that you included in your request.

$sel:languageOptions:TranscriptionJob', transcriptionJob_languageOptions - Provides the language codes you specified in your request.

$sel:media:TranscriptionJob', transcriptionJob_media - Provides the Amazon S3 location of the media file you used in your request.

$sel:mediaFormat:TranscriptionJob', transcriptionJob_mediaFormat - The format of the input media file.

$sel:mediaSampleRateHertz:TranscriptionJob', transcriptionJob_mediaSampleRateHertz - The sample rate, in hertz, of the audio track in your input media file.

$sel:modelSettings:TranscriptionJob', transcriptionJob_modelSettings - Provides information on the custom language model you included in your request.

$sel:settings:TranscriptionJob', transcriptionJob_settings - Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

$sel:startTime:TranscriptionJob', transcriptionJob_startTime - The date and time the specified transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:subtitles:TranscriptionJob', transcriptionJob_subtitles - Indicates whether subtitles were generated with your transcription.

$sel:tags:TranscriptionJob', transcriptionJob_tags - The tags, each in the form of a key:value pair, assigned to the specified transcription job.

$sel:transcript:TranscriptionJob', transcriptionJob_transcript - Provides you with the Amazon S3 URI you can use to access your transcript.

$sel:transcriptionJobName:TranscriptionJob', transcriptionJob_transcriptionJobName - The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:transcriptionJobStatus:TranscriptionJob', transcriptionJob_transcriptionJobStatus - Provides the status of the specified transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

transcriptionJob_completionTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

The date and time the specified transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

transcriptionJob_contentRedaction :: Lens' TranscriptionJob (Maybe ContentRedaction) Source #

Indicates whether redaction was enabled in your transcript.

transcriptionJob_creationTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

The date and time the specified transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

transcriptionJob_failureReason :: Lens' TranscriptionJob (Maybe Text) Source #

If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job request failed.

The FailureReason field contains one of the following values:

  • Unsupported media format.

    The media format specified in MediaFormat isn't valid. Refer to MediaFormat for a list of supported formats.

  • The media format provided does not match the detected media format.

    The media format specified in MediaFormat doesn't match the format of the input file. Check the media format of your media file and correct the specified value.

  • Invalid sample rate for audio file.

    The sample rate specified in MediaSampleRateHertz isn't valid. The sample rate must be between 8,000 and 48,000 hertz.

  • The sample rate provided does not match the detected sample rate.

    The sample rate specified in MediaSampleRateHertz doesn't match the sample rate detected in your input media file. Check the sample rate of your media file and correct the specified value.

  • Invalid file size: file size too large.

    The size of your media file is larger than what Amazon Transcribe can process. For more information, refer to Guidelines and quotas.

  • Invalid number of channels: number of channels too large.

    Your audio contains more channels than Amazon Transcribe is able to process. For more information, refer to Guidelines and quotas.

transcriptionJob_identifiedLanguageScore :: Lens' TranscriptionJob (Maybe Double) Source #

The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.
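As a sketch of how this score might be consumed (assuming the lens package for (^.)), the hypothetical helper below returns the identified language only when the score clears a caller-chosen threshold:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))

-- Accept the identified language only when the reported confidence clears the
-- threshold; a call such as confidentLanguage 0.8 job uses an arbitrary cutoff.
confidentLanguage :: Double -> TranscriptionJob -> Maybe LanguageCode
confidentLanguage threshold job = do
  score <- job ^. transcriptionJob_identifiedLanguageScore
  if score >= threshold
    then job ^. transcriptionJob_languageCode
    else Nothing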

transcriptionJob_identifyLanguage :: Lens' TranscriptionJob (Maybe Bool) Source #

Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

transcriptionJob_identifyMultipleLanguages :: Lens' TranscriptionJob (Maybe Bool) Source #

Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

transcriptionJob_jobExecutionSettings :: Lens' TranscriptionJob (Maybe JobExecutionSettings) Source #

Provides information about how your transcription job was processed. This parameter shows if your request was queued and what data access role was used.

transcriptionJob_languageCode :: Lens' TranscriptionJob (Maybe LanguageCode) Source #

The language code used to create your transcription job. This parameter is used with single-language identification. For multi-language identification requests, refer to the plural version of this parameter, LanguageCodes.

transcriptionJob_languageCodes :: Lens' TranscriptionJob (Maybe [LanguageCodeItem]) Source #

The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification requests, refer to the singular version of this parameter, LanguageCode.

transcriptionJob_languageIdSettings :: Lens' TranscriptionJob (Maybe (HashMap LanguageCode LanguageIdSettings)) Source #

Provides the name and language of all custom language models, custom vocabularies, and custom vocabulary filters that you included in your request.

transcriptionJob_languageOptions :: Lens' TranscriptionJob (Maybe (NonEmpty LanguageCode)) Source #

Provides the language codes you specified in your request.

transcriptionJob_media :: Lens' TranscriptionJob (Maybe Media) Source #

Provides the Amazon S3 location of the media file you used in your request.

transcriptionJob_mediaFormat :: Lens' TranscriptionJob (Maybe MediaFormat) Source #

The format of the input media file.

transcriptionJob_mediaSampleRateHertz :: Lens' TranscriptionJob (Maybe Natural) Source #

The sample rate, in hertz, of the audio track in your input media file.

transcriptionJob_modelSettings :: Lens' TranscriptionJob (Maybe ModelSettings) Source #

Provides information on the custom language model you included in your request.

transcriptionJob_settings :: Lens' TranscriptionJob (Maybe Settings) Source #

Provides information on any additional settings that were included in your request. Additional settings include channel identification, alternative transcriptions, speaker partitioning, custom vocabularies, and custom vocabulary filters.

transcriptionJob_startTime :: Lens' TranscriptionJob (Maybe UTCTime) Source #

The date and time the specified transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.
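A small sketch (assuming the lens and time packages) of deriving the elapsed processing time from this lens together with transcriptionJob_completionTime; both timestamps must be present for a result. processingDuration is a hypothetical helper, not part of the SDK:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Time (NominalDiffTime, diffUTCTime)

-- Elapsed processing time, available only when the service has reported both
-- a start and a completion timestamp.
processingDuration :: TranscriptionJob -> Maybe NominalDiffTime
processingDuration job =
  diffUTCTime
    <$> job ^. transcriptionJob_completionTime
    <*> job ^. transcriptionJob_startTime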

transcriptionJob_subtitles :: Lens' TranscriptionJob (Maybe SubtitlesOutput) Source #

Indicates whether subtitles were generated with your transcription.

transcriptionJob_tags :: Lens' TranscriptionJob (Maybe (NonEmpty Tag)) Source #

The tags, each in the form of a key:value pair, assigned to the specified transcription job.

transcriptionJob_transcript :: Lens' TranscriptionJob (Maybe Transcript) Source #

Provides you with the Amazon S3 URI you can use to access your transcript.

transcriptionJob_transcriptionJobName :: Lens' TranscriptionJob (Maybe Text) Source #

The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

transcriptionJob_transcriptionJobStatus :: Lens' TranscriptionJob (Maybe TranscriptionJobStatus) Source #

Provides the status of the specified transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.
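A sketch of acting on this status (assuming the lens package and the SDK's generated pattern synonym TranscriptionJobStatus_FAILED): print the FailureReason for failed jobs and ignore everything else. reportFailure is a hypothetical helper, not part of the SDK:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import qualified Data.Text.IO as Text

-- Report a failed job's reason; for any other status, do nothing.
reportFailure :: TranscriptionJob -> IO ()
reportFailure job =
  case job ^. transcriptionJob_transcriptionJobStatus of
    Just TranscriptionJobStatus_FAILED ->
      Text.putStrLn $
        "Transcription failed: "
          <> fromMaybe "no failure reason reported" (job ^. transcriptionJob_failureReason)
    _ -> pure ()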

TranscriptionJobSummary

data TranscriptionJobSummary Source #

Provides detailed information about a specific transcription job.

See: newTranscriptionJobSummary smart constructor.

Constructors

TranscriptionJobSummary' 

Fields

  • completionTime :: Maybe POSIX

    The date and time the specified transcription job finished processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

  • contentRedaction :: Maybe ContentRedaction

    The content redaction settings of the transcription job.

  • creationTime :: Maybe POSIX

    The date and time the specified transcription job request was made.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

  • failureReason :: Maybe Text

    If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

  • identifiedLanguageScore :: Maybe Double

    The confidence score associated with the language identified in your media file.

    Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

  • identifyLanguage :: Maybe Bool

    Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

  • identifyMultipleLanguages :: Maybe Bool

    Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

  • languageCode :: Maybe LanguageCode

    The language code used to create your transcription.

  • languageCodes :: Maybe [LanguageCodeItem]

    The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification, the singular version of this parameter, LanguageCode, is present.

  • modelSettings :: Maybe ModelSettings
     
  • outputLocationType :: Maybe OutputLocationType

    Indicates where the specified transcription output is stored.

    If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

    If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri or RedactedTranscriptFileUri field.

  • startTime :: Maybe POSIX

    The date and time your transcription job began processing.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

  • transcriptionJobName :: Maybe Text

    The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

  • transcriptionJobStatus :: Maybe TranscriptionJobStatus

    Provides the status of your transcription job.

    If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

Instances

Instances details
FromJSON TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

Generic TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

Associated Types

type Rep TranscriptionJobSummary :: Type -> Type #

Read TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

Show TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

NFData TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

Methods

rnf :: TranscriptionJobSummary -> () #

Eq TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

Hashable TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

type Rep TranscriptionJobSummary Source # 
Instance details

Defined in Amazonka.Transcribe.Types.TranscriptionJobSummary

type Rep TranscriptionJobSummary = D1 ('MetaData "TranscriptionJobSummary" "Amazonka.Transcribe.Types.TranscriptionJobSummary" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "TranscriptionJobSummary'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "completionTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "contentRedaction") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ContentRedaction)) :*: S1 ('MetaSel ('Just "creationTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))) :*: ((S1 ('MetaSel ('Just "failureReason") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "identifiedLanguageScore") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double))) :*: (S1 ('MetaSel ('Just "identifyLanguage") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "identifyMultipleLanguages") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool))))) :*: ((S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: (S1 ('MetaSel ('Just "languageCodes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe [LanguageCodeItem])) :*: S1 ('MetaSel ('Just "modelSettings") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe ModelSettings)))) :*: ((S1 ('MetaSel ('Just "outputLocationType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe OutputLocationType)) :*: S1 ('MetaSel ('Just "startTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "transcriptionJobName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "transcriptionJobStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe TranscriptionJobStatus)))))))

newTranscriptionJobSummary :: TranscriptionJobSummary Source #

Create a value of TranscriptionJobSummary with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.
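For illustration, a sketch (assuming the lens package and the generated pattern synonym TranscriptionJobStatus_COMPLETED) that collects the names of completed jobs from a list of summaries, such as one returned by ListTranscriptionJobs. completedJobNames is a hypothetical helper:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Data.Text (Text)

-- Names of all jobs in a summary list that have finished successfully.
completedJobNames :: [TranscriptionJobSummary] -> [Text]
completedJobNames = mapMaybe nameIfCompleted
  where
    nameIfCompleted s
      | s ^. transcriptionJobSummary_transcriptionJobStatus
          == Just TranscriptionJobStatus_COMPLETED =
          s ^. transcriptionJobSummary_transcriptionJobName
      | otherwise = Nothing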

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:completionTime:TranscriptionJobSummary', transcriptionJobSummary_completionTime - The date and time the specified transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

$sel:contentRedaction:TranscriptionJobSummary', transcriptionJobSummary_contentRedaction - The content redaction settings of the transcription job.

$sel:creationTime:TranscriptionJobSummary', transcriptionJobSummary_creationTime - The date and time the specified transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

$sel:failureReason:TranscriptionJobSummary', transcriptionJobSummary_failureReason - If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

$sel:identifiedLanguageScore:TranscriptionJobSummary', transcriptionJobSummary_identifiedLanguageScore - The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

$sel:identifyLanguage:TranscriptionJobSummary', transcriptionJobSummary_identifyLanguage - Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

$sel:identifyMultipleLanguages:TranscriptionJobSummary', transcriptionJobSummary_identifyMultipleLanguages - Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

TranscriptionJobSummary, transcriptionJobSummary_languageCode - The language code used to create your transcription.

$sel:languageCodes:TranscriptionJobSummary', transcriptionJobSummary_languageCodes - The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification, the singular version of this parameter, LanguageCode, is present.

$sel:modelSettings:TranscriptionJobSummary', transcriptionJobSummary_modelSettings - Undocumented member.

$sel:outputLocationType:TranscriptionJobSummary', transcriptionJobSummary_outputLocationType - Indicates where the specified transcription output is stored.

If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri or RedactedTranscriptFileUri field.

$sel:startTime:TranscriptionJobSummary', transcriptionJobSummary_startTime - The date and time your transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

$sel:transcriptionJobName:TranscriptionJobSummary', transcriptionJobSummary_transcriptionJobName - The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

$sel:transcriptionJobStatus:TranscriptionJobSummary', transcriptionJobSummary_transcriptionJobStatus - Provides the status of your transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

transcriptionJobSummary_completionTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

The date and time the specified transcription job finished processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:33:13.922000-07:00 represents a transcription job that finished processing at 12:33 PM UTC-7 on May 4, 2022.

transcriptionJobSummary_contentRedaction :: Lens' TranscriptionJobSummary (Maybe ContentRedaction) Source #

The content redaction settings of the transcription job.

transcriptionJobSummary_creationTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

The date and time the specified transcription job request was made.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents a transcription job request that was made at 12:32 PM UTC-7 on May 4, 2022.

transcriptionJobSummary_failureReason :: Lens' TranscriptionJobSummary (Maybe Text) Source #

If TranscriptionJobStatus is FAILED, FailureReason contains information about why the transcription job failed. See also: Common Errors.

transcriptionJobSummary_identifiedLanguageScore :: Lens' TranscriptionJobSummary (Maybe Double) Source #

The confidence score associated with the language identified in your media file.

Confidence scores are values between 0 and 1; a larger value indicates a higher probability that the identified language correctly matches the language spoken in your media.

transcriptionJobSummary_identifyLanguage :: Lens' TranscriptionJobSummary (Maybe Bool) Source #

Indicates whether automatic language identification was enabled (TRUE) for the specified transcription job.

transcriptionJobSummary_identifyMultipleLanguages :: Lens' TranscriptionJobSummary (Maybe Bool) Source #

Indicates whether automatic multi-language identification was enabled (TRUE) for the specified transcription job.

transcriptionJobSummary_languageCode :: Lens' TranscriptionJobSummary (Maybe LanguageCode) Source #

The language code used to create your transcription.

transcriptionJobSummary_languageCodes :: Lens' TranscriptionJobSummary (Maybe [LanguageCodeItem]) Source #

The language codes used to create your transcription job. This parameter is used with multi-language identification. For single-language identification, the singular version of this parameter, LanguageCode, is present.

transcriptionJobSummary_modelSettings :: Lens' TranscriptionJobSummary (Maybe ModelSettings) Source #

Undocumented member.

transcriptionJobSummary_outputLocationType :: Lens' TranscriptionJobSummary (Maybe OutputLocationType) Source #

Indicates where the specified transcription output is stored.

If the value is CUSTOMER_BUCKET, the location is the Amazon S3 bucket you specified using the OutputBucketName parameter in your request. If you also included OutputKey in your request, your output is located in the path you specified in your request.

If the value is SERVICE_BUCKET, the location is a service-managed Amazon S3 bucket. To access a transcript stored in a service-managed bucket, use the URI shown in the TranscriptFileUri or RedactedTranscriptFileUri field.
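A sketch of branching on this value (assuming the lens package and the generated pattern synonyms OutputLocationType_CUSTOMER_BUCKET and OutputLocationType_SERVICE_BUCKET); describeOutputLocation is a hypothetical helper:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))

-- Human-readable description of where a job's transcript ended up.
describeOutputLocation :: TranscriptionJobSummary -> String
describeOutputLocation summary =
  case summary ^. transcriptionJobSummary_outputLocationType of
    Just OutputLocationType_CUSTOMER_BUCKET ->
      "Stored in the S3 bucket named by OutputBucketName in the request"
    Just OutputLocationType_SERVICE_BUCKET ->
      "Stored in a service-managed bucket; fetch it via TranscriptFileUri"
    _ ->
      "Output location not reported"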

transcriptionJobSummary_startTime :: Lens' TranscriptionJobSummary (Maybe UTCTime) Source #

The date and time your transcription job began processing.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.789000-07:00 represents a transcription job that started processing at 12:32 PM UTC-7 on May 4, 2022.

transcriptionJobSummary_transcriptionJobName :: Lens' TranscriptionJobSummary (Maybe Text) Source #

The name of the transcription job. Job names are case sensitive and must be unique within an Amazon Web Services account.

transcriptionJobSummary_transcriptionJobStatus :: Lens' TranscriptionJobSummary (Maybe TranscriptionJobStatus) Source #

Provides the status of your transcription job.

If the status is COMPLETED, the job is finished and you can find the results at the location specified in TranscriptFileUri (or RedactedTranscriptFileUri, if you requested transcript redaction). If the status is FAILED, FailureReason provides details on why your transcription job failed.

VocabularyFilterInfo

data VocabularyFilterInfo Source #

Provides information about a custom vocabulary filter, including the language of the filter, when it was last modified, and its name.

See: newVocabularyFilterInfo smart constructor.

Constructors

VocabularyFilterInfo' 

Fields

  • languageCode :: Maybe LanguageCode

    The language code that represents the language of the entries in your vocabulary filter. Each custom vocabulary filter must contain terms in only one language.

    A custom vocabulary filter can only be used to transcribe files in the same language as the filter. For example, if you create a custom vocabulary filter using US English (en-US), you can only apply this filter to files that contain English audio.

    For a list of supported languages and their associated language codes, refer to the Supported languages table.

  • lastModifiedTime :: Maybe POSIX

    The date and time the specified custom vocabulary filter was last modified.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

  • vocabularyFilterName :: Maybe Text

    A unique name, chosen by you, for your custom vocabulary filter. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

Instances

Instances details
FromJSON VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

Generic VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

Associated Types

type Rep VocabularyFilterInfo :: Type -> Type #

Read VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

Show VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

NFData VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

Methods

rnf :: VocabularyFilterInfo -> () #

Eq VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

Hashable VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

type Rep VocabularyFilterInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyFilterInfo

type Rep VocabularyFilterInfo = D1 ('MetaData "VocabularyFilterInfo" "Amazonka.Transcribe.Types.VocabularyFilterInfo" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "VocabularyFilterInfo'" 'PrefixI 'True) (S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: (S1 ('MetaSel ('Just "lastModifiedTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "vocabularyFilterName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newVocabularyFilterInfo :: VocabularyFilterInfo Source #

Create a value of VocabularyFilterInfo with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:languageCode:VocabularyFilterInfo', vocabularyFilterInfo_languageCode - The language code that represents the language of the entries in your vocabulary filter. Each custom vocabulary filter must contain terms in only one language.

A custom vocabulary filter can only be used to transcribe files in the same language as the filter. For example, if you create a custom vocabulary filter using US English (en-US), you can only apply this filter to files that contain English audio.

For a list of supported languages and their associated language codes, refer to the Supported languages table.

$sel:lastModifiedTime:VocabularyFilterInfo', vocabularyFilterInfo_lastModifiedTime - The date and time the specified custom vocabulary filter was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

$sel:vocabularyFilterName:VocabularyFilterInfo', vocabularyFilterInfo_vocabularyFilterName - A unique name, chosen by you, for your custom vocabulary filter. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.
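For illustration, a sketch (assuming the lens package) that picks the most recently modified filter from a list of VocabularyFilterInfo values using the lenses below; newestFilter is a hypothetical helper, not part of the SDK:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.List (sortOn)
import Data.Maybe (listToMaybe)
import Data.Ord (Down (..))

-- The filter with the latest lastModifiedTime; entries without a timestamp
-- sort last because Down Nothing compares greater than Down (Just _).
newestFilter :: [VocabularyFilterInfo] -> Maybe VocabularyFilterInfo
newestFilter =
  listToMaybe . sortOn (Down . (^. vocabularyFilterInfo_lastModifiedTime))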

vocabularyFilterInfo_languageCode :: Lens' VocabularyFilterInfo (Maybe LanguageCode) Source #

The language code that represents the language of the entries in your vocabulary filter. Each custom vocabulary filter must contain terms in only one language.

A custom vocabulary filter can only be used to transcribe files in the same language as the filter. For example, if you create a custom vocabulary filter using US English (en-US), you can only apply this filter to files that contain English audio.

For a list of supported languages and their associated language codes, refer to the Supported languages table.

vocabularyFilterInfo_lastModifiedTime :: Lens' VocabularyFilterInfo (Maybe UTCTime) Source #

The date and time the specified custom vocabulary filter was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

vocabularyFilterInfo_vocabularyFilterName :: Lens' VocabularyFilterInfo (Maybe Text) Source #

A unique name, chosen by you, for your custom vocabulary filter. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

VocabularyInfo

data VocabularyInfo Source #

Provides information about a custom vocabulary, including the language of the custom vocabulary, when it was last modified, its name, and the processing state.

See: newVocabularyInfo smart constructor.

Constructors

VocabularyInfo' 

Fields

  • languageCode :: Maybe LanguageCode

    The language code used to create your custom vocabulary. Each custom vocabulary must contain terms in only one language.

    A custom vocabulary can only be used to transcribe files in the same language as the custom vocabulary. For example, if you create a custom vocabulary using US English (en-US), you can only apply this custom vocabulary to files that contain English audio.

  • lastModifiedTime :: Maybe POSIX

    The date and time the specified custom vocabulary was last modified.

    Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

  • vocabularyName :: Maybe Text

    A unique name, chosen by you, for your custom vocabulary. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

  • vocabularyState :: Maybe VocabularyState

    The processing state of your custom vocabulary. If the state is READY, you can use the custom vocabulary in a StartTranscriptionJob request.

Instances

Instances details
FromJSON VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

Generic VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

Associated Types

type Rep VocabularyInfo :: Type -> Type #

Read VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

Show VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

NFData VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

Methods

rnf :: VocabularyInfo -> () #

Eq VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

Hashable VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

type Rep VocabularyInfo Source # 
Instance details

Defined in Amazonka.Transcribe.Types.VocabularyInfo

type Rep VocabularyInfo = D1 ('MetaData "VocabularyInfo" "Amazonka.Transcribe.Types.VocabularyInfo" "amazonka-transcribe-2.0-4BjJqzwmm0K94bsEMe1tnF" 'False) (C1 ('MetaCons "VocabularyInfo'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "languageCode") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe LanguageCode)) :*: S1 ('MetaSel ('Just "lastModifiedTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "vocabularyName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "vocabularyState") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe VocabularyState)))))

newVocabularyInfo :: VocabularyInfo Source #

Create a value of VocabularyInfo with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:languageCode:VocabularyInfo', vocabularyInfo_languageCode - The language code used to create your custom vocabulary. Each custom vocabulary must contain terms in only one language.

A custom vocabulary can only be used to transcribe files in the same language as the custom vocabulary. For example, if you create a custom vocabulary using US English (en-US), you can only apply this custom vocabulary to files that contain English audio.

$sel:lastModifiedTime:VocabularyInfo', vocabularyInfo_lastModifiedTime - The date and time the specified custom vocabulary was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

$sel:vocabularyName:VocabularyInfo', vocabularyInfo_vocabularyName - A unique name, chosen by you, for your custom vocabulary. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

$sel:vocabularyState:VocabularyInfo', vocabularyInfo_vocabularyState - The processing state of your custom vocabulary. If the state is READY, you can use the custom vocabulary in a StartTranscriptionJob request.

vocabularyInfo_languageCode :: Lens' VocabularyInfo (Maybe LanguageCode) Source #

The language code used to create your custom vocabulary. Each custom vocabulary must contain terms in only one language.

A custom vocabulary can only be used to transcribe files in the same language as the custom vocabulary. For example, if you create a custom vocabulary using US English (en-US), you can only apply this custom vocabulary to files that contain English audio.

vocabularyInfo_lastModifiedTime :: Lens' VocabularyInfo (Maybe UTCTime) Source #

The date and time the specified custom vocabulary was last modified.

Timestamps are in the format YYYY-MM-DD'T'HH:MM:SS.SSSSSS-UTC. For example, 2022-05-04T12:32:58.761000-07:00 represents 12:32 PM UTC-7 on May 4, 2022.

vocabularyInfo_vocabularyName :: Lens' VocabularyInfo (Maybe Text) Source #

A unique name, chosen by you, for your custom vocabulary. This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account.

vocabularyInfo_vocabularyState :: Lens' VocabularyInfo (Maybe VocabularyState) Source #

The processing state of your custom vocabulary. If the state is READY, you can use the custom vocabulary in a StartTranscriptionJob request.
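For illustration, a sketch (assuming the lens package and the generated pattern synonym VocabularyState_READY) that collects the names of custom vocabularies that are ready to be referenced from a StartTranscriptionJob request; readyVocabularyNames is a hypothetical helper:

import Amazonka.Transcribe.Types
import Control.Lens ((^.))
import Data.Maybe (mapMaybe)
import Data.Text (Text)

-- Names of the custom vocabularies that can currently be used in a
-- StartTranscriptionJob request.
readyVocabularyNames :: [VocabularyInfo] -> [Text]
readyVocabularyNames = mapMaybe readyName
  where
    readyName v
      | v ^. vocabularyInfo_vocabularyState == Just VocabularyState_READY =
          v ^. vocabularyInfo_vocabularyName
      | otherwise = Nothing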