amazonka-ml-2.0: Amazon Machine Learning SDK.

Copyright    (c) 2013-2023 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell Safe-Inferred
Language     Haskell2010

Amazonka.MachineLearning.Types


Service Configuration

defaultService :: Service Source #

API version 2014-12-12 of the Amazon Machine Learning SDK configuration.
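
Requests are sent against this configuration by default, but it can be adjusted per environment. A minimal sketch, assuming the amazonka-core 2.0 helpers newEnv, discover, overrideService, and setEndpoint; the localhost endpoint is a hypothetical local stub, not part of this module:

{-# LANGUAGE OverloadedStrings #-}

import qualified Amazonka

-- Build an Env whose Machine Learning requests are redirected to a
-- local test endpoint instead of the one derived from 'defaultService'.
main :: IO ()
main = do
  env <- Amazonka.newEnv Amazonka.discover
  let _env' =
        Amazonka.overrideService
          (Amazonka.setEndpoint False "localhost" 8080) -- hypothetical stub
          env
  pure ()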

Errors

_IdempotentParameterMismatchException :: AsError a => Fold a ServiceError Source #

A second request to use or change an object was not allowed. This can result from retrying a request using a parameter that was not present in the original request.

_InternalServerException :: AsError a => Fold a ServiceError Source #

An error on the server occurred when trying to process a request.

_InvalidInputException :: AsError a => Fold a ServiceError Source #

An error on the client occurred. Typically, the cause is an invalid input value.

_InvalidTagException :: AsError a => Fold a ServiceError Source #

Prism for InvalidTagException errors.

_LimitExceededException :: AsError a => Fold a ServiceError Source #

The subscriber exceeded the maximum number of operations. This exception can occur when listing objects such as DataSource.

_PredictorNotMountedException :: AsError a => Fold a ServiceError Source #

The exception is thrown when a predict request is made to an unmounted MLModel.

_ResourceNotFoundException :: AsError a => Fold a ServiceError Source #

A specified resource cannot be located.

_TagLimitExceededException :: AsError a => Fold a ServiceError Source #

Prism for TagLimitExceededException errors.
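
Each matcher above is a Fold into ServiceError, so it composes with trying from Control.Exception.Lens through the AsError instance for SomeException. A hedged sketch, assuming newGetMLModel from Amazonka.MachineLearning and a hypothetical model ID:

{-# LANGUAGE OverloadedStrings #-}

import Control.Exception.Lens (trying)
import Control.Monad.Trans.Resource (runResourceT)
import qualified Amazonka
import Amazonka.MachineLearning (newGetMLModel)
import Amazonka.MachineLearning.Types (_ResourceNotFoundException)

-- True when the model exists, False when the service answers with
-- ResourceNotFoundException; any other exception still propagates.
modelExists :: Amazonka.Env -> IO Bool
modelExists env = do
  result <- trying _ResourceNotFoundException $
    runResourceT (Amazonka.send env (newGetMLModel "ml-EXAMPLE")) -- hypothetical ID
  pure (either (const False) (const True) result)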

Algorithm

newtype Algorithm Source #

The function used to train an MLModel. Training choices supported by Amazon ML include the following:

  • SGD - Stochastic Gradient Descent.
  • RandomForest - Random forest of decision trees.

Constructors

Algorithm' 

Fields

Bundled Patterns

pattern Algorithm_Sgd :: Algorithm 
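
A short sketch of how the newtype behaves at the wire level, assuming toText and fromText from Amazonka.Data.Text and that Algorithm_Sgd renders as "sgd"; parsing never fails on unknown values because the newtype is open over Text:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Data.Text (fromText, toText)
import Amazonka.MachineLearning.Types (Algorithm (..))

-- Round-trip the bundled pattern through its wire representation.
demo :: IO ()
demo = do
  print (toText Algorithm_Sgd)                      -- "sgd"
  print (fromText "sgd" :: Either String Algorithm) -- Right Algorithm_Sgd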

Instances

Instances details
FromJSON Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

FromJSONKey Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToJSON Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToJSONKey Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToByteString Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Methods

toBS :: Algorithm -> ByteString #

ToHeader Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToLog Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToQuery Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

FromText Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToText Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Methods

toText :: Algorithm -> Text #

FromXML Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

ToXML Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Methods

toXML :: Algorithm -> XML #

Generic Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Associated Types

type Rep Algorithm :: Type -> Type #

Read Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Show Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

NFData Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Methods

rnf :: Algorithm -> () #

Eq Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Ord Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

Hashable Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

type Rep Algorithm Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Algorithm

type Rep Algorithm = D1 ('MetaData "Algorithm" "Amazonka.MachineLearning.Types.Algorithm" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "Algorithm'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromAlgorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BatchPredictionFilterVariable

newtype BatchPredictionFilterVariable Source #

A list of the variables to use in searching or filtering BatchPrediction.

  • CreatedAt - Sets the search criteria to BatchPrediction creation date.
  • Status - Sets the search criteria to BatchPrediction status.
  • Name - Sets the search criteria to the contents of BatchPrediction Name.
  • IAMUser - Sets the search criteria to the user account that invoked the BatchPrediction creation.
  • MLModelId - Sets the search criteria to the MLModel used in the BatchPrediction.
  • DataSourceId - Sets the search criteria to the DataSource used in the BatchPrediction.
  • DataURI - Sets the search criteria to the data file(s) used in the BatchPrediction. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
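
These variables are typically supplied to the DescribeBatchPredictions operation. A minimal sketch, assuming the generated names newDescribeBatchPredictions, describeBatchPredictions_filterVariable, and describeBatchPredictions_eq, which do not appear on this page:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MachineLearning
import Amazonka.MachineLearning.Types (BatchPredictionFilterVariable (..))

-- Ask only for batch predictions whose Status equals COMPLETED.
completedOnly :: DescribeBatchPredictions
completedOnly =
  newDescribeBatchPredictions
    & describeBatchPredictions_filterVariable ?~ BatchPredictionFilterVariable_Status
    & describeBatchPredictions_eq ?~ "COMPLETED"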

Instances

Instances details
FromJSON BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

FromJSONKey BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToJSON BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToJSONKey BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToByteString BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToHeader BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToLog BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToQuery BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

FromText BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToText BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

FromXML BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

ToXML BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Generic BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Associated Types

type Rep BatchPredictionFilterVariable :: Type -> Type #

Read BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Show BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

NFData BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Eq BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Ord BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

Hashable BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

type Rep BatchPredictionFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPredictionFilterVariable

type Rep BatchPredictionFilterVariable = D1 ('MetaData "BatchPredictionFilterVariable" "Amazonka.MachineLearning.Types.BatchPredictionFilterVariable" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "BatchPredictionFilterVariable'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromBatchPredictionFilterVariable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DataSourceFilterVariable

newtype DataSourceFilterVariable Source #

A list of the variables to use in searching or filtering DataSource.

  • CreatedAt - Sets the search criteria to DataSource creation date.
  • Status - Sets the search criteria to DataSource status.
  • Name - Sets the search criteria to the contents of DataSource Name.
  • DataUri - Sets the search criteria to the URI of data files used to create the DataSource. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
  • IAMUser - Sets the search criteria to the user account that invoked the DataSource creation.

Note: The variable names should match the variable names in the DataSource.

Instances

Instances details
FromJSON DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

FromJSONKey DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToJSON DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToJSONKey DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToByteString DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToHeader DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToLog DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToQuery DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

FromText DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToText DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

FromXML DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

ToXML DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Generic DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Associated Types

type Rep DataSourceFilterVariable :: Type -> Type #

Read DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Show DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

NFData DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Eq DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Ord DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

Hashable DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

type Rep DataSourceFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSourceFilterVariable

type Rep DataSourceFilterVariable = D1 ('MetaData "DataSourceFilterVariable" "Amazonka.MachineLearning.Types.DataSourceFilterVariable" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "DataSourceFilterVariable'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDataSourceFilterVariable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

DetailsAttributes

newtype DetailsAttributes Source #

Contains the key values of DetailsMap:

  • PredictiveModelType - Indicates the type of the MLModel.
  • Algorithm - Indicates the algorithm that was used for the MLModel.

Instances

Instances details
FromJSON DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

FromJSONKey DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToJSON DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToJSONKey DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToByteString DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToHeader DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToLog DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToQuery DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

FromText DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToText DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

FromXML DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

ToXML DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Generic DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Associated Types

type Rep DetailsAttributes :: Type -> Type #

Read DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Show DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

NFData DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Methods

rnf :: DetailsAttributes -> () #

Eq DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Ord DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

Hashable DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

type Rep DetailsAttributes Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DetailsAttributes

type Rep DetailsAttributes = D1 ('MetaData "DetailsAttributes" "Amazonka.MachineLearning.Types.DetailsAttributes" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "DetailsAttributes'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromDetailsAttributes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

EntityStatus

newtype EntityStatus Source #

Object status with the following possible values:

  • PENDING
  • INPROGRESS
  • FAILED
  • COMPLETED
  • DELETED

Constructors

EntityStatus' 
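
Because the bundled patterns sit over an open Text newtype, matching should keep a catch-all case. A sketch assuming the generated pattern names EntityStatus_FAILED and EntityStatus_DELETED:

import Amazonka.MachineLearning.Types (EntityStatus (..))

-- FAILED and DELETED objects are unusable; the catch-all also covers
-- any value the service may add later, since EntityStatus is open.
isUsable :: EntityStatus -> Bool
isUsable EntityStatus_FAILED  = False
isUsable EntityStatus_DELETED = False
isUsable _                    = True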

Instances

Instances details
FromJSON EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

FromJSONKey EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToJSON EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToJSONKey EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToByteString EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToHeader EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToLog EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToQuery EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

FromText EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToText EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Methods

toText :: EntityStatus -> Text #

FromXML EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

ToXML EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Methods

toXML :: EntityStatus -> XML #

Generic EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Associated Types

type Rep EntityStatus :: Type -> Type #

Read EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Show EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

NFData EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Methods

rnf :: EntityStatus -> () #

Eq EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Ord EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

Hashable EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

type Rep EntityStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EntityStatus

type Rep EntityStatus = D1 ('MetaData "EntityStatus" "Amazonka.MachineLearning.Types.EntityStatus" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "EntityStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEntityStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

EvaluationFilterVariable

newtype EvaluationFilterVariable Source #

A list of the variables to use in searching or filtering Evaluation.

  • CreatedAt - Sets the search criteria to Evaluation creation date.
  • Status - Sets the search criteria to Evaluation status.
  • Name - Sets the search criteria to the contents of Evaluation Name.
  • IAMUser - Sets the search criteria to the user account that invoked an evaluation.
  • MLModelId - Sets the search criteria to the Predictor that was evaluated.
  • DataSourceId - Sets the search criteria to the DataSource used in evaluation.
  • DataUri - Sets the search criteria to the data file(s) used in evaluation. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

Instances

Instances details
FromJSON EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

FromJSONKey EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToJSON EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToJSONKey EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToByteString EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToHeader EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToLog EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToQuery EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

FromText EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToText EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

FromXML EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

ToXML EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Generic EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Associated Types

type Rep EvaluationFilterVariable :: Type -> Type #

Read EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Show EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

NFData EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Eq EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Ord EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

Hashable EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

type Rep EvaluationFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.EvaluationFilterVariable

type Rep EvaluationFilterVariable = D1 ('MetaData "EvaluationFilterVariable" "Amazonka.MachineLearning.Types.EvaluationFilterVariable" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "EvaluationFilterVariable'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromEvaluationFilterVariable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MLModelFilterVariable

newtype MLModelFilterVariable Source #

Instances

Instances details
FromJSON MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

FromJSONKey MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToJSON MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToJSONKey MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToByteString MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToHeader MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToLog MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToQuery MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

FromText MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToText MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

FromXML MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

ToXML MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Generic MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Associated Types

type Rep MLModelFilterVariable :: Type -> Type #

Read MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Show MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

NFData MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Methods

rnf :: MLModelFilterVariable -> () #

Eq MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Ord MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

Hashable MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

type Rep MLModelFilterVariable Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelFilterVariable

type Rep MLModelFilterVariable = D1 ('MetaData "MLModelFilterVariable" "Amazonka.MachineLearning.Types.MLModelFilterVariable" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "MLModelFilterVariable'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMLModelFilterVariable") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

MLModelType

newtype MLModelType Source #

Constructors

MLModelType' 

Instances

Instances details
FromJSON MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

FromJSONKey MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToJSON MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToJSONKey MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToByteString MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToHeader MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToLog MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToQuery MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

FromText MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToText MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Methods

toText :: MLModelType -> Text #

FromXML MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

ToXML MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Methods

toXML :: MLModelType -> XML #

Generic MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Associated Types

type Rep MLModelType :: Type -> Type #

Read MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Show MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

NFData MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Methods

rnf :: MLModelType -> () #

Eq MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Ord MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

Hashable MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

type Rep MLModelType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModelType

type Rep MLModelType = D1 ('MetaData "MLModelType" "Amazonka.MachineLearning.Types.MLModelType" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "MLModelType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromMLModelType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

RealtimeEndpointStatus

newtype RealtimeEndpointStatus Source #

Instances

Instances details
FromJSON RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

FromJSONKey RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToJSON RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToJSONKey RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToByteString RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToHeader RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToLog RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToQuery RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

FromText RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToText RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

FromXML RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

ToXML RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Generic RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Associated Types

type Rep RealtimeEndpointStatus :: Type -> Type #

Read RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Show RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

NFData RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Methods

rnf :: RealtimeEndpointStatus -> () #

Eq RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Ord RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

Hashable RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

type Rep RealtimeEndpointStatus Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointStatus

type Rep RealtimeEndpointStatus = D1 ('MetaData "RealtimeEndpointStatus" "Amazonka.MachineLearning.Types.RealtimeEndpointStatus" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "RealtimeEndpointStatus'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromRealtimeEndpointStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

SortOrder

newtype SortOrder Source #

The sort order specified in a listing condition. Possible values include the following:

  • asc - Present the information in ascending order (from A-Z).
  • dsc - Present the information in descending order (from Z-A).

Constructors

SortOrder' 

Fields

Bundled Patterns

pattern SortOrder_Asc :: SortOrder 
pattern SortOrder_Dsc :: SortOrder 
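
A sketch of supplying a sort order to a listing request, assuming the generated names newDescribeMLModels and describeMLModels_sortOrder, which do not appear on this page:

import Control.Lens ((&), (?~))
import Amazonka.MachineLearning
import Amazonka.MachineLearning.Types (SortOrder (..))

-- List MLModels in ascending order of the chosen filter variable.
ascendingModels :: DescribeMLModels
ascendingModels = newDescribeMLModels & describeMLModels_sortOrder ?~ SortOrder_Asc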

Instances

Instances details
FromJSON SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

FromJSONKey SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToJSON SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToJSONKey SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToByteString SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Methods

toBS :: SortOrder -> ByteString #

ToHeader SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToLog SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToQuery SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

FromText SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToText SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Methods

toText :: SortOrder -> Text #

FromXML SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

ToXML SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Methods

toXML :: SortOrder -> XML #

Generic SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Associated Types

type Rep SortOrder :: Type -> Type #

Read SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Show SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

NFData SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Methods

rnf :: SortOrder -> () #

Eq SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Ord SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

Hashable SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

type Rep SortOrder Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.SortOrder

type Rep SortOrder = D1 ('MetaData "SortOrder" "Amazonka.MachineLearning.Types.SortOrder" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "SortOrder'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromSortOrder") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

TaggableResourceType

newtype TaggableResourceType Source #

Instances

Instances details
FromJSON TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

FromJSONKey TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToJSON TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToJSONKey TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToByteString TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToHeader TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToLog TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToQuery TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

FromText TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToText TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

FromXML TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

ToXML TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Generic TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Associated Types

type Rep TaggableResourceType :: Type -> Type #

Read TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Show TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

NFData TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Methods

rnf :: TaggableResourceType -> () #

Eq TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Ord TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

Hashable TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

type Rep TaggableResourceType Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.TaggableResourceType

type Rep TaggableResourceType = D1 ('MetaData "TaggableResourceType" "Amazonka.MachineLearning.Types.TaggableResourceType" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'True) (C1 ('MetaCons "TaggableResourceType'" 'PrefixI 'True) (S1 ('MetaSel ('Just "fromTaggableResourceType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 Text)))

BatchPrediction

data BatchPrediction Source #

Represents the output of a GetBatchPrediction operation.

The content consists of the detailed metadata, the status, and the data file information of a Batch Prediction.

See: newBatchPrediction smart constructor.

Constructors

BatchPrediction' 

Fields

  • batchPredictionDataSourceId :: Maybe Text

    The ID of the DataSource that points to the group of observations to predict.

  • batchPredictionId :: Maybe Text

    The ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.

  • computeTime :: Maybe Integer
     
  • createdAt :: Maybe POSIX

    The time that the BatchPrediction was created. The time is expressed in epoch time.

  • createdByIamUser :: Maybe Text

    The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

  • finishedAt :: Maybe POSIX
     
  • inputDataLocationS3 :: Maybe Text

    The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

  • invalidRecordCount :: Maybe Integer
     
  • lastUpdatedAt :: Maybe POSIX

    The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

  • mLModelId :: Maybe Text

    The ID of the MLModel that generated predictions for the BatchPrediction request.

  • message :: Maybe Text

    A description of the most recent details about processing the batch prediction request.

  • name :: Maybe Text

    A user-supplied name or description of the BatchPrediction.

  • outputUri :: Maybe Text

    The location of an Amazon S3 bucket or directory to receive the operation results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'.

  • startedAt :: Maybe POSIX
     
  • status :: Maybe EntityStatus

    The status of the BatchPrediction. This element can have one of the following values:

    • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate predictions for a batch of observations.
    • INPROGRESS - The process is underway.
    • FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
    • COMPLETED - The batch prediction process completed successfully.
    • DELETED - The BatchPrediction is marked as deleted. It is not usable.
  • totalRecordCount :: Maybe Integer
     

Instances

Instances details
FromJSON BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

Generic BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

Associated Types

type Rep BatchPrediction :: Type -> Type #

Read BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

Show BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

NFData BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

Methods

rnf :: BatchPrediction -> () #

Eq BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

Hashable BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

type Rep BatchPrediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.BatchPrediction

type Rep BatchPrediction = D1 ('MetaData "BatchPrediction" "Amazonka.MachineLearning.Types.BatchPrediction" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "BatchPrediction'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "batchPredictionDataSourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "batchPredictionId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "computeTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)) :*: S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))) :*: ((S1 ('MetaSel ('Just "createdByIamUser") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "finishedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "inputDataLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "invalidRecordCount") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer))))) :*: (((S1 ('MetaSel ('Just "lastUpdatedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "mLModelId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "message") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "outputUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "startedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EntityStatus)) :*: S1 ('MetaSel ('Just "totalRecordCount") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)))))))

newBatchPrediction :: BatchPrediction Source #

Create a value of BatchPrediction with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:batchPredictionDataSourceId:BatchPrediction', batchPrediction_batchPredictionDataSourceId - The ID of the DataSource that points to the group of observations to predict.

$sel:batchPredictionId:BatchPrediction', batchPrediction_batchPredictionId - The ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.

$sel:computeTime:BatchPrediction', batchPrediction_computeTime - Undocumented member.

$sel:createdAt:BatchPrediction', batchPrediction_createdAt - The time that the BatchPrediction was created. The time is expressed in epoch time.

$sel:createdByIamUser:BatchPrediction', batchPrediction_createdByIamUser - The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

$sel:finishedAt:BatchPrediction', batchPrediction_finishedAt - Undocumented member.

$sel:inputDataLocationS3:BatchPrediction', batchPrediction_inputDataLocationS3 - The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

$sel:invalidRecordCount:BatchPrediction', batchPrediction_invalidRecordCount - Undocumented member.

$sel:lastUpdatedAt:BatchPrediction', batchPrediction_lastUpdatedAt - The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

$sel:mLModelId:BatchPrediction', batchPrediction_mLModelId - The ID of the MLModel that generated predictions for the BatchPrediction request.

$sel:message:BatchPrediction', batchPrediction_message - A description of the most recent details about processing the batch prediction request.

$sel:name:BatchPrediction', batchPrediction_name - A user-supplied name or description of the BatchPrediction.

$sel:outputUri:BatchPrediction', batchPrediction_outputUri - The location of an Amazon S3 bucket or directory to receive the operation results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'.

$sel:startedAt:BatchPrediction', batchPrediction_startedAt - Undocumented member.

$sel:status:BatchPrediction', batchPrediction_status - The status of the BatchPrediction. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate predictions for a batch of observations.
  • INPROGRESS - The process is underway.
  • FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
  • COMPLETED - The batch prediction process completed successfully.
  • DELETED - The BatchPrediction is marked as deleted. It is not usable.

$sel:totalRecordCount:BatchPrediction', batchPrediction_totalRecordCount - Undocumented member.
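
Putting the smart constructor and the lenses below together, e.g. to stub a value in tests; the ID is hypothetical:

{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~))
import Amazonka.MachineLearning.Types

-- All fields start as Nothing; set only what the test needs.
stubPrediction :: BatchPrediction
stubPrediction =
  newBatchPrediction
    & batchPrediction_batchPredictionId ?~ "bp-EXAMPLE" -- hypothetical ID
    & batchPrediction_status ?~ EntityStatus_PENDING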

batchPrediction_batchPredictionDataSourceId :: Lens' BatchPrediction (Maybe Text) Source #

The ID of the DataSource that points to the group of observations to predict.

batchPrediction_batchPredictionId :: Lens' BatchPrediction (Maybe Text) Source #

The ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.

batchPrediction_createdAt :: Lens' BatchPrediction (Maybe UTCTime) Source #

The time that the BatchPrediction was created. The time is expressed in epoch time.

batchPrediction_createdByIamUser :: Lens' BatchPrediction (Maybe Text) Source #

The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

batchPrediction_inputDataLocationS3 :: Lens' BatchPrediction (Maybe Text) Source #

The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

batchPrediction_lastUpdatedAt :: Lens' BatchPrediction (Maybe UTCTime) Source #

The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

batchPrediction_mLModelId :: Lens' BatchPrediction (Maybe Text) Source #

The ID of the MLModel that generated predictions for the BatchPrediction request.

batchPrediction_message :: Lens' BatchPrediction (Maybe Text) Source #

A description of the most recent details about processing the batch prediction request.

batchPrediction_name :: Lens' BatchPrediction (Maybe Text) Source #

A user-supplied name or description of the BatchPrediction.

batchPrediction_outputUri :: Lens' BatchPrediction (Maybe Text) Source #

The location of an Amazon S3 bucket or directory to receive the operation results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'.

batchPrediction_status :: Lens' BatchPrediction (Maybe EntityStatus) Source #

The status of the BatchPrediction. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate predictions for a batch of observations.
  • INPROGRESS - The process is underway.
  • FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
  • COMPLETED - The batch prediction process completed successfully.
  • DELETED - The BatchPrediction is marked as deleted. It is not usable.

DataSource

data DataSource Source #

Represents the output of the GetDataSource operation.

The content consists of the detailed metadata and data file information and the current status of the DataSource.

See: newDataSource smart constructor.

Constructors

DataSource' 

Fields

Instances

Instances details
FromJSON DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

Generic DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

Associated Types

type Rep DataSource :: Type -> Type #

Read DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

Show DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

NFData DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

Methods

rnf :: DataSource -> () #

Eq DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

Hashable DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

type Rep DataSource Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.DataSource

type Rep DataSource = D1 ('MetaData "DataSource" "Amazonka.MachineLearning.Types.DataSource" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "DataSource'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "computeStatistics") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Bool)) :*: S1 ('MetaSel ('Just "computeTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer))) :*: (S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "createdByIamUser") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "dataLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "dataRearrangement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "dataSizeInBytes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)) :*: (S1 ('MetaSel ('Just "dataSourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "finishedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)))))) :*: (((S1 ('MetaSel ('Just "lastUpdatedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "message") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "numberOfFiles") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)))) :*: ((S1 ('MetaSel ('Just "rDSMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RDSMetadata)) :*: S1 ('MetaSel ('Just "redshiftMetadata") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RedshiftMetadata))) :*: (S1 ('MetaSel ('Just "roleARN") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "startedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EntityStatus))))))))

newDataSource :: DataSource Source #

Create a value of DataSource with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:computeStatistics:DataSource', dataSource_computeStatistics - The parameter is true if statistics need to be generated from the observation data.

$sel:computeTime:DataSource', dataSource_computeTime - Undocumented member.

$sel:createdAt:DataSource', dataSource_createdAt - The time that the DataSource was created. The time is expressed in epoch time.

$sel:createdByIamUser:DataSource', dataSource_createdByIamUser - The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

$sel:dataLocationS3:DataSource', dataSource_dataLocationS3 - The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used by a DataSource.

$sel:dataRearrangement:DataSource', dataSource_dataRearrangement - A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

$sel:dataSizeInBytes:DataSource', dataSource_dataSizeInBytes - The total size, in bytes, of the data files that the DataSource references.

$sel:dataSourceId:DataSource', dataSource_dataSourceId - The ID that is assigned to the DataSource during creation.

$sel:finishedAt:DataSource', dataSource_finishedAt - Undocumented member.

$sel:lastUpdatedAt:DataSource', dataSource_lastUpdatedAt - The time of the most recent edit to the DataSource. The time is expressed in epoch time.

$sel:message:DataSource', dataSource_message - A description of the most recent details about creating the DataSource.

$sel:name:DataSource', dataSource_name - A user-supplied name or description of the DataSource.

$sel:numberOfFiles:DataSource', dataSource_numberOfFiles - The number of data files referenced by the DataSource.

$sel:rDSMetadata:DataSource', dataSource_rDSMetadata - Undocumented member.

$sel:redshiftMetadata:DataSource', dataSource_redshiftMetadata - Undocumented member.

$sel:roleARN:DataSource', dataSource_roleARN - Undocumented member.

$sel:startedAt:DataSource', dataSource_startedAt - Undocumented member.

$sel:status:DataSource', dataSource_status - The current status of the DataSource. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create a DataSource.
  • INPROGRESS - The creation process is underway.
  • FAILED - The request to create a DataSource did not run to completion. It is not usable.
  • COMPLETED - The creation process completed successfully.
  • DELETED - The DataSource is marked as deleted. It is not usable.
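
Reading fields back out is the mirror image of construction, using the lenses below with (^.) from Control.Lens:

import Control.Lens ((^.))
import Data.Text (Text)
import Amazonka.MachineLearning.Types

-- Project the human-readable name and current status out of a
-- DataSource record, e.g. one element of a DescribeDataSources page.
summarise :: DataSource -> (Maybe Text, Maybe EntityStatus)
summarise ds = (ds ^. dataSource_name, ds ^. dataSource_status)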

dataSource_computeStatistics :: Lens' DataSource (Maybe Bool) Source #

The parameter is true if statistics need to be generated from the observation data.

dataSource_createdAt :: Lens' DataSource (Maybe UTCTime) Source #

The time that the DataSource was created. The time is expressed in epoch time.

dataSource_createdByIamUser :: Lens' DataSource (Maybe Text) Source #

The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

dataSource_dataLocationS3 :: Lens' DataSource (Maybe Text) Source #

The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used by a DataSource.

dataSource_dataRearrangement :: Lens' DataSource (Maybe Text) Source #

A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

dataSource_dataSizeInBytes :: Lens' DataSource (Maybe Integer) Source #

The total size, in bytes, of the data files that the DataSource references.

dataSource_dataSourceId :: Lens' DataSource (Maybe Text) Source #

The ID that is assigned to the DataSource during creation.

dataSource_lastUpdatedAt :: Lens' DataSource (Maybe UTCTime) Source #

The time of the most recent edit to the DataSource. The time is expressed in epoch time.

dataSource_message :: Lens' DataSource (Maybe Text) Source #

A description of the most recent details about creating the DataSource.

dataSource_name :: Lens' DataSource (Maybe Text) Source #

A user-supplied name or description of the DataSource.

dataSource_numberOfFiles :: Lens' DataSource (Maybe Integer) Source #

The number of data files referenced by the DataSource.

dataSource_status :: Lens' DataSource (Maybe EntityStatus) Source #

The current status of the DataSource. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create a DataSource.
  • INPROGRESS - The creation process is underway.
  • FAILED - The request to create a DataSource did not run to completion. It is not usable.
  • COMPLETED - The creation process completed successfully.
  • DELETED - The DataSource is marked as deleted. It is not usable.

Evaluation

data Evaluation Source #

Represents the output of the GetEvaluation operation.

The content consists of the detailed metadata and data file information and the current status of the Evaluation.

See: newEvaluation smart constructor.

Constructors

Evaluation' 

Fields

  • computeTime :: Maybe Integer
     
  • createdAt :: Maybe POSIX

    The time that the Evaluation was created. The time is expressed in epoch time.

  • createdByIamUser :: Maybe Text

    The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

  • evaluationDataSourceId :: Maybe Text

    The ID of the DataSource that is used to evaluate the MLModel.

  • evaluationId :: Maybe Text

    The ID that is assigned to the Evaluation at creation.

  • finishedAt :: Maybe POSIX
     
  • inputDataLocationS3 :: Maybe Text

    The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.

  • lastUpdatedAt :: Maybe POSIX

    The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

  • mLModelId :: Maybe Text

    The ID of the MLModel that is the focus of the evaluation.

  • message :: Maybe Text

    A description of the most recent details about evaluating the MLModel.

  • name :: Maybe Text

    A user-supplied name or description of the Evaluation.

  • performanceMetrics :: Maybe PerformanceMetrics

    Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

    • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
    • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
    • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

    For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

  • startedAt :: Maybe POSIX
     
  • status :: Maybe EntityStatus

    The status of the evaluation. This element can have one of the following values:

    • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
    • INPROGRESS - The evaluation is underway.
    • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
    • COMPLETED - The evaluation process completed successfully.
    • DELETED - The Evaluation is marked as deleted. It is not usable.

Instances

Instances details
FromJSON Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

Generic Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

Associated Types

type Rep Evaluation :: Type -> Type #

Read Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

Show Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

NFData Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

Methods

rnf :: Evaluation -> () #

Eq Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

Hashable Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

type Rep Evaluation Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Evaluation

type Rep Evaluation = D1 ('MetaData "Evaluation" "Amazonka.MachineLearning.Types.Evaluation" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "Evaluation'" 'PrefixI 'True) (((S1 ('MetaSel ('Just "computeTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)) :*: (S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "createdByIamUser") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "evaluationDataSourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "evaluationId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "finishedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "inputDataLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))) :*: ((S1 ('MetaSel ('Just "lastUpdatedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: (S1 ('MetaSel ('Just "mLModelId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "message") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "performanceMetrics") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe PerformanceMetrics))) :*: (S1 ('MetaSel ('Just "startedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EntityStatus)))))))

newEvaluation :: Evaluation Source #

Create a value of Evaluation with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility (a construction sketch follows the list):

$sel:computeTime:Evaluation', evaluation_computeTime - Undocumented member.

$sel:createdAt:Evaluation', evaluation_createdAt - The time that the Evaluation was created. The time is expressed in epoch time.

$sel:createdByIamUser:Evaluation', evaluation_createdByIamUser - The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

$sel:evaluationDataSourceId:Evaluation', evaluation_evaluationDataSourceId - The ID of the DataSource that is used to evaluate the MLModel.

$sel:evaluationId:Evaluation', evaluation_evaluationId - The ID that is assigned to the Evaluation at creation.

$sel:finishedAt:Evaluation', evaluation_finishedAt - Undocumented member.

$sel:inputDataLocationS3:Evaluation', evaluation_inputDataLocationS3 - The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.

$sel:lastUpdatedAt:Evaluation', evaluation_lastUpdatedAt - The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

$sel:mLModelId:Evaluation', evaluation_mLModelId - The ID of the MLModel that is the focus of the evaluation.

$sel:message:Evaluation', evaluation_message - A description of the most recent details about evaluating the MLModel.

$sel:name:Evaluation', evaluation_name - A user-supplied name or description of the Evaluation.

$sel:performanceMetrics:Evaluation', evaluation_performanceMetrics - Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

  • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
  • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
  • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

$sel:startedAt:Evaluation', evaluation_startedAt - Undocumented member.

$sel:status:Evaluation', evaluation_status - The status of the evaluation. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
  • INPROGRESS - The evaluation is underway.
  • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
  • COMPLETED - The evaluation process completed successfully.
  • DELETED - The Evaluation is marked as deleted. It is not usable.
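
To make the record/lens pairing above concrete, here is a minimal construction sketch (not from the original docs; the IDs are hypothetical placeholders). It assumes the (&) and (?~) operators from a lens-compatible library such as lens or microlens:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((&), (?~))

-- Sketch: start from newEvaluation (all fields Nothing) and set a few
-- optional fields through the lenses documented above.
sampleEvaluation :: Evaluation
sampleEvaluation =
  newEvaluation
    & evaluation_evaluationId ?~ "ev-example-id"  -- hypothetical ID
    & evaluation_mLModelId    ?~ "ml-example-id"  -- hypothetical ID
    & evaluation_name         ?~ "Nightly evaluation"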

evaluation_createdAt :: Lens' Evaluation (Maybe UTCTime) Source #

The time that the Evaluation was created. The time is expressed in epoch time.

evaluation_createdByIamUser :: Lens' Evaluation (Maybe Text) Source #

The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

evaluation_evaluationDataSourceId :: Lens' Evaluation (Maybe Text) Source #

The ID of the DataSource that is used to evaluate the MLModel.

evaluation_evaluationId :: Lens' Evaluation (Maybe Text) Source #

The ID that is assigned to the Evaluation at creation.

evaluation_inputDataLocationS3 :: Lens' Evaluation (Maybe Text) Source #

The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.

evaluation_lastUpdatedAt :: Lens' Evaluation (Maybe UTCTime) Source #

The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

evaluation_mLModelId :: Lens' Evaluation (Maybe Text) Source #

The ID of the MLModel that is the focus of the evaluation.

evaluation_message :: Lens' Evaluation (Maybe Text) Source #

A description of the most recent details about evaluating the MLModel.

evaluation_name :: Lens' Evaluation (Maybe Text) Source #

A user-supplied name or description of the Evaluation.

evaluation_performanceMetrics :: Lens' Evaluation (Maybe PerformanceMetrics) Source #

Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

  • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
  • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
  • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

evaluation_status :: Lens' Evaluation (Maybe EntityStatus) Source #

The status of the evaluation. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
  • INPROGRESS - The evaluation is underway.
  • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
  • COMPLETED - The evaluation process completed successfully.
  • DELETED - The Evaluation is marked as deleted. It is not usable.

MLModel

data MLModel Source #

Represents the output of a GetMLModel operation.

The content consists of the detailed metadata and the current status of the MLModel.

See: newMLModel smart constructor.

Constructors

MLModel' 

Fields

  • algorithm :: Maybe Algorithm

    The algorithm used to train the MLModel. The following algorithm is supported:

    • SGD -- Stochastic gradient descent. The goal of SGD is to minimize the loss function by iteratively following its gradient.

  • computeTime :: Maybe Integer
     
  • createdAt :: Maybe POSIX

    The time that the MLModel was created. The time is expressed in epoch time.

  • createdByIamUser :: Maybe Text

    The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

  • endpointInfo :: Maybe RealtimeEndpointInfo

    The current endpoint of the MLModel.

  • finishedAt :: Maybe POSIX
     
  • inputDataLocationS3 :: Maybe Text

    The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

  • lastUpdatedAt :: Maybe POSIX

    The time of the most recent edit to the MLModel. The time is expressed in epoch time.

  • mLModelId :: Maybe Text

    The ID assigned to the MLModel at creation.

  • mLModelType :: Maybe MLModelType

    Identifies the MLModel category. The following are the available types:

    • REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?"
    • BINARY - Produces one of two possible results. For example, "Is this a child-friendly web site?"
    • MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH-, LOW-, or MEDIUM-risk trade?"

  • message :: Maybe Text

    A description of the most recent details about accessing the MLModel.

  • name :: Maybe Text

    A user-supplied name or description of the MLModel.

  • scoreThreshold :: Maybe Double
     
  • scoreThresholdLastUpdatedAt :: Maybe POSIX

    The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

  • sizeInBytes :: Maybe Integer
     
  • startedAt :: Maybe POSIX
     
  • status :: Maybe EntityStatus

    The current status of an MLModel. This element can have one of the following values:

    • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create an MLModel.
    • INPROGRESS - The creation process is underway.
    • FAILED - The request to create an MLModel didn't run to completion. The model isn't usable.
    • COMPLETED - The creation process completed successfully.
    • DELETED - The MLModel is marked as deleted. It isn't usable.

  • trainingDataSourceId :: Maybe Text

    The ID of the training DataSource. The CreateMLModel operation uses the TrainingDataSourceId.

  • trainingParameters :: Maybe (HashMap Text Text)

    A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

    The following is the current set of training parameters:

    • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

      The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

    • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
    • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none.
    • sgd.l1RegularizationAmount - The coefficient regularization L1 norm, which controls overfitting the data by penalizing large coefficients. This parameter tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

      The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 regularization. This parameter can't be used when L2 is specified. Use this parameter sparingly.

    • sgd.l2RegularizationAmount - The coefficient regularization L2 norm, which controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

      The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 regularization. This parameter can't be used when L1 is specified. Use this parameter sparingly.

Instances

Instances details
FromJSON MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Generic MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Associated Types

type Rep MLModel :: Type -> Type #

Methods

from :: MLModel -> Rep MLModel x #

to :: Rep MLModel x -> MLModel #

Read MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Show MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

NFData MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Methods

rnf :: MLModel -> () #

Eq MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Methods

(==) :: MLModel -> MLModel -> Bool #

(/=) :: MLModel -> MLModel -> Bool #

Hashable MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

Methods

hashWithSalt :: Int -> MLModel -> Int #

hash :: MLModel -> Int #

type Rep MLModel Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.MLModel

type Rep MLModel = D1 ('MetaData "MLModel" "Amazonka.MachineLearning.Types.MLModel" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "MLModel'" 'PrefixI 'True) ((((S1 ('MetaSel ('Just "algorithm") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Algorithm)) :*: S1 ('MetaSel ('Just "computeTime") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer))) :*: (S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "createdByIamUser") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "endpointInfo") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RealtimeEndpointInfo)) :*: S1 ('MetaSel ('Just "finishedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "inputDataLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "lastUpdatedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "mLModelId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))) :*: (((S1 ('MetaSel ('Just "mLModelType") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe MLModelType)) :*: S1 ('MetaSel ('Just "message") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "name") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "scoreThreshold") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)) :*: S1 ('MetaSel ('Just "scoreThresholdLastUpdatedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))))) :*: ((S1 ('MetaSel ('Just "sizeInBytes") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Integer)) :*: S1 ('MetaSel ('Just "startedAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX))) :*: (S1 ('MetaSel ('Just "status") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe EntityStatus)) :*: (S1 ('MetaSel ('Just "trainingDataSourceId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "trainingParameters") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text Text)))))))))

newMLModel :: MLModel Source #

Create a value of MLModel with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:algorithm:MLModel', mLModel_algorithm - The algorithm used to train the MLModel. The following algorithm is supported:

  • SGD -- Stochastic gradient descent. The goal of SGD is to minimize the loss function by iteratively following its gradient.

$sel:computeTime:MLModel', mLModel_computeTime - Undocumented member.

$sel:createdAt:MLModel', mLModel_createdAt - The time that the MLModel was created. The time is expressed in epoch time.

$sel:createdByIamUser:MLModel', mLModel_createdByIamUser - The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

$sel:endpointInfo:MLModel', mLModel_endpointInfo - The current endpoint of the MLModel.

$sel:finishedAt:MLModel', mLModel_finishedAt - Undocumented member.

$sel:inputDataLocationS3:MLModel', mLModel_inputDataLocationS3 - The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

$sel:lastUpdatedAt:MLModel', mLModel_lastUpdatedAt - The time of the most recent edit to the MLModel. The time is expressed in epoch time.

$sel:mLModelId:MLModel', mLModel_mLModelId - The ID assigned to the MLModel at creation.

$sel:mLModelType:MLModel', mLModel_mLModelType - Identifies the MLModel category. The following are the available types:

  • REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?"
  • BINARY - Produces one of two possible results. For example, "Is this a child-friendly web site?"
  • MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH-, LOW-, or MEDIUM-risk trade?"

$sel:message:MLModel', mLModel_message - A description of the most recent details about accessing the MLModel.

$sel:name:MLModel', mLModel_name - A user-supplied name or description of the MLModel.

$sel:scoreThreshold:MLModel', mLModel_scoreThreshold - Undocumented member.

$sel:scoreThresholdLastUpdatedAt:MLModel', mLModel_scoreThresholdLastUpdatedAt - The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

$sel:sizeInBytes:MLModel', mLModel_sizeInBytes - Undocumented member.

$sel:startedAt:MLModel', mLModel_startedAt - Undocumented member.

$sel:status:MLModel', mLModel_status - The current status of an MLModel. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create an MLModel.
  • INPROGRESS - The creation process is underway.
  • FAILED - The request to create an MLModel didn't run to completion. The model isn't usable.
  • COMPLETED - The creation process completed successfully.
  • DELETED - The MLModel is marked as deleted. It isn't usable.

$sel:trainingDataSourceId:MLModel', mLModel_trainingDataSourceId - The ID of the training DataSource. The CreateMLModel operation uses the TrainingDataSourceId.

$sel:trainingParameters:MLModel', mLModel_trainingParameters - A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

The following is the current set of training parameters (a sketch that builds such a map follows the list):

  • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

    The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

  • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
  • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none.
  • sgd.l1RegularizationAmount - The coefficient regularization L1 norm, which controls overfitting the data by penalizing large coefficients. This parameter tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

    The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 regularization. This parameter can't be used when L2 is specified. Use this parameter sparingly.

  • sgd.l2RegularizationAmount - The coefficient regularization L2 norm, which controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

    The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 regularization. This parameter can't be used when L1 is specified. Use this parameter sparingly.
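
As a concrete illustration of the key-value shape described above, here is a hedged sketch of such a map, using only the documented sgd.* keys; the values other than the stated defaults are arbitrary examples:

{-# LANGUAGE OverloadedStrings #-}

import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

-- Sketch: training parameters as the documented HashMap Text Text.
sgdTrainingParameters :: HashMap.HashMap Text Text
sgdTrainingParameters =
  HashMap.fromList
    [ ("sgd.maxMLModelSizeInBytes", "33554432") -- documented default
    , ("sgd.maxPasses", "20")                   -- within the documented 1..10000 range
    , ("sgd.shuffleType", "auto")               -- valid values: auto | none
    , ("sgd.l2RegularizationAmount", "1.0E-08") -- small starting value, per the docs
    ]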

mLModel_algorithm :: Lens' MLModel (Maybe Algorithm) Source #

The algorithm used to train the MLModel. The following algorithm is supported:

  • SGD -- Stochastic gradient descent. The goal of SGD is to minimize the loss function by iteratively following its gradient.

mLModel_createdAt :: Lens' MLModel (Maybe UTCTime) Source #

The time that the MLModel was created. The time is expressed in epoch time.

mLModel_createdByIamUser :: Lens' MLModel (Maybe Text) Source #

The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

mLModel_endpointInfo :: Lens' MLModel (Maybe RealtimeEndpointInfo) Source #

The current endpoint of the MLModel.

mLModel_inputDataLocationS3 :: Lens' MLModel (Maybe Text) Source #

The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

mLModel_lastUpdatedAt :: Lens' MLModel (Maybe UTCTime) Source #

The time of the most recent edit to the MLModel. The time is expressed in epoch time.

mLModel_mLModelId :: Lens' MLModel (Maybe Text) Source #

The ID assigned to the MLModel at creation.

mLModel_mLModelType :: Lens' MLModel (Maybe MLModelType) Source #

Identifies the MLModel category. The following are the available types (see the sketch after this list):

  • REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?"
  • BINARY - Produces one of two possible results. For example, "Is this a child-friendly web site?"
  • MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH-, LOW-, or MEDIUM-risk trade?"
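
A small sketch (not part of the generated API) of branching on these categories. It assumes bundled patterns named MLModelType_REGRESSION, MLModelType_BINARY, and MLModelType_MULTICLASS, following the Algorithm_Sgd naming convention:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Data.Text (Text)

-- Sketch: describe what kind of result each model category produces.
describeModelType :: MLModelType -> Text
describeModelType t =
  case t of
    MLModelType_REGRESSION -> "numeric result"
    MLModelType_BINARY     -> "one of two possible results"
    MLModelType_MULTICLASS -> "one of several possible results"
    _                      -> "unrecognized model type"  -- the newtype admits other values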

mLModel_message :: Lens' MLModel (Maybe Text) Source #

A description of the most recent details about accessing the MLModel.

mLModel_name :: Lens' MLModel (Maybe Text) Source #

A user-supplied name or description of the MLModel.

mLModel_scoreThresholdLastUpdatedAt :: Lens' MLModel (Maybe UTCTime) Source #

The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

mLModel_status :: Lens' MLModel (Maybe EntityStatus) Source #

The current status of an MLModel. This element can have one of the following values:

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to create an MLModel.
  • INPROGRESS - The creation process is underway.
  • FAILED - The request to create an MLModel didn't run to completion. The model isn't usable.
  • COMPLETED - The creation process completed successfully.
  • DELETED - The MLModel is marked as deleted. It isn't usable.

mLModel_trainingDataSourceId :: Lens' MLModel (Maybe Text) Source #

The ID of the training DataSource. The CreateMLModel operation uses the TrainingDataSourceId.

mLModel_trainingParameters :: Lens' MLModel (Maybe (HashMap Text Text)) Source #

A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

The following is the current set of training parameters:

  • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

    The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

  • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
  • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none.
  • sgd.l1RegularizationAmount - The coefficient regularization L1 norm, which controls overfitting the data by penalizing large coefficients. This parameter tends to drive coefficients to zero, resulting in sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

    The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.

  • sgd.l2RegularizationAmount - The coefficient regularization L2 norm, which controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

    The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.

PerformanceMetrics

data PerformanceMetrics Source #

Measurements of how well the MLModel performed on known observations. One of the following metrics is returned, based on the type of the MLModel:

  • BinaryAUC: The binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
  • RegressionRMSE: The regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
  • MulticlassAvgFScore: The multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

See: newPerformanceMetrics smart constructor.

Instances

Instances details
FromJSON PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

Generic PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

Associated Types

type Rep PerformanceMetrics :: Type -> Type #

Read PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

Show PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

NFData PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

Methods

rnf :: PerformanceMetrics -> () #

Eq PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

Hashable PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

type Rep PerformanceMetrics Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.PerformanceMetrics

type Rep PerformanceMetrics = D1 ('MetaData "PerformanceMetrics" "Amazonka.MachineLearning.Types.PerformanceMetrics" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "PerformanceMetrics'" 'PrefixI 'True) (S1 ('MetaSel ('Just "properties") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text Text)))))

newPerformanceMetrics :: PerformanceMetrics Source #

Create a value of PerformanceMetrics with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:properties:PerformanceMetrics', performanceMetrics_properties - Undocumented member.
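
Since the metrics arrive as an untyped properties map, here is a hedged lookup sketch. It assumes the performanceMetrics_properties lens mentioned above has type Lens' PerformanceMetrics (Maybe (HashMap Text Text)), as the generic representation suggests:

import Amazonka.MachineLearning.Types
import Control.Lens ((^.))
import qualified Data.HashMap.Strict as HashMap
import Data.Text (Text)

-- Sketch: fetch one of the documented metric keys, e.g. "BinaryAUC",
-- "RegressionRMSE", or "MulticlassAvgFScore".
lookupMetric :: Text -> PerformanceMetrics -> Maybe Text
lookupMetric key pm = (pm ^. performanceMetrics_properties) >>= HashMap.lookup key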

Prediction

data Prediction Source #

The output from a Predict operation:

  • Details - Contains the following attributes: DetailsAttributes.PREDICTIVE_MODEL_TYPE - REGRESSION | BINARY | MULTICLASS; DetailsAttributes.ALGORITHM - SGD
  • PredictedLabel - Present for either a BINARY or MULTICLASS MLModel request.
  • PredictedScores - Contains the raw classification score corresponding to each label.
  • PredictedValue - Present for a REGRESSION MLModel request.

See: newPrediction smart constructor.

Constructors

Prediction' 

Fields

Instances

Instances details
FromJSON Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

Generic Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

Associated Types

type Rep Prediction :: Type -> Type #

Read Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

Show Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

NFData Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

Methods

rnf :: Prediction -> () #

Eq Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

Hashable Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

type Rep Prediction Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Prediction

type Rep Prediction = D1 ('MetaData "Prediction" "Amazonka.MachineLearning.Types.Prediction" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "Prediction'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "details") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap DetailsAttributes Text))) :*: S1 ('MetaSel ('Just "predictedLabel") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "predictedScores") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe (HashMap Text Double))) :*: S1 ('MetaSel ('Just "predictedValue") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Double)))))

newPrediction :: Prediction Source #

Create a value of Prediction with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:details:Prediction', prediction_details - Undocumented member.

$sel:predictedLabel:Prediction', prediction_predictedLabel - The prediction label for either a BINARY or MULTICLASS MLModel.

$sel:predictedScores:Prediction', prediction_predictedScores - Undocumented member.

$sel:predictedValue:Prediction', prediction_predictedValue - The prediction value for a REGRESSION MLModel.

prediction_predictedLabel :: Lens' Prediction (Maybe Text) Source #

The prediction label for either a BINARY or MULTICLASS MLModel.

prediction_predictedValue :: Lens' Prediction (Maybe Double) Source #

The prediction value for a REGRESSION MLModel.
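
Tying the two lenses above together, a hedged sketch for turning a Predict response into text, preferring the label (BINARY or MULTICLASS models) and falling back to the numeric value (REGRESSION):

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((^.))
import Data.Text (Text)
import qualified Data.Text as Text

-- Sketch: one field or the other is populated depending on model type.
renderPrediction :: Prediction -> Text
renderPrediction p =
  case (p ^. prediction_predictedLabel, p ^. prediction_predictedValue) of
    (Just label, _)    -> label               -- BINARY or MULTICLASS
    (Nothing, Just v)  -> Text.pack (show v)  -- REGRESSION
    (Nothing, Nothing) -> "no prediction returned"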

RDSDataSpec

data RDSDataSpec Source #

The data specification of an Amazon Relational Database Service (Amazon RDS) DataSource.

See: newRDSDataSpec smart constructor.

Constructors

RDSDataSpec' 

Fields

  • dataRearrangement :: Maybe Text

    A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

    There are multiple parameters that control what data is used to create a datasource:

    • percentBegin

      Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • percentEnd

      Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • complement

      The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

      For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

      Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

      Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

    • strategy

      To change how Amazon ML splits the data for a datasource, use the strategy parameter.

      The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

      The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

      To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

      The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

  • dataSchema :: Maybe Text

    A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

    A DataSchema is not required if you specify a DataSchemaUri.

    Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

    { "version": "1.0",

    "recordAnnotationFieldName": "F1",

    "recordWeightFieldName": "F2",

    "targetFieldName": "F3",

    "dataFormat": "CSV",

    "dataFileContainsHeader": true,

    "attributes": [

    { "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],

    "excludedVariableNames": [ "F6" ] }

  • dataSchemaUri :: Maybe Text

    The Amazon S3 location of the DataSchema.

  • databaseInformation :: RDSDatabase

    Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.

  • selectSqlQuery :: Text

    The query that is used to retrieve the observation data for the DataSource.

  • databaseCredentials :: RDSDatabaseCredentials

    The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

  • s3StagingLocation :: Text

    The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

  • resourceRole :: Text

    The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

  • serviceRole :: Text

    The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

  • subnetId :: Text

    The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.

  • securityGroupIds :: [Text]

    The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to Amazon S3.

Instances

Instances details
ToJSON RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

Generic RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

Associated Types

type Rep RDSDataSpec :: Type -> Type #

Read RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

Show RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

NFData RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

Methods

rnf :: RDSDataSpec -> () #

Eq RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

Hashable RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

type Rep RDSDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDataSpec

newRDSDataSpec Source #

Create a value of RDSDataSpec with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:dataRearrangement:RDSDataSpec', rDSDataSpec_dataRearrangement - A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource:

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

$sel:dataSchema:RDSDataSpec', rDSDataSpec_dataSchema - A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

{ "version": "1.0",

"recordAnnotationFieldName": "F1",

"recordWeightFieldName": "F2",

"targetFieldName": "F3",

"dataFormat": "CSV",

"dataFileContainsHeader": true,

"attributes": [

{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],

"excludedVariableNames": [ "F6" ] }

$sel:dataSchemaUri:RDSDataSpec', rDSDataSpec_dataSchemaUri - The Amazon S3 location of the DataSchema.

$sel:databaseInformation:RDSDataSpec', rDSDataSpec_databaseInformation - Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.

$sel:selectSqlQuery:RDSDataSpec', rDSDataSpec_selectSqlQuery - The query that is used to retrieve the observation data for the DataSource.

$sel:databaseCredentials:RDSDataSpec', rDSDataSpec_databaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

$sel:s3StagingLocation:RDSDataSpec', rDSDataSpec_s3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

$sel:resourceRole:RDSDataSpec', rDSDataSpec_resourceRole - The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

$sel:serviceRole:RDSDataSpec', rDSDataSpec_serviceRole - The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

$sel:subnetId:RDSDataSpec', rDSDataSpec_subnetId - The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.

$sel:securityGroupIds:RDSDataSpec', rDSDataSpec_securityGroupIds - The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to Amazon S3.

rDSDataSpec_dataRearrangement :: Lens' RDSDataSpec (Maybe Text) Source #

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource (example strings follow the list):

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}
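
For convenience, here are the documented sequential-split examples expressed as Haskell Text values suitable for this lens (a sketch; the 70/100 bounds are taken directly from the examples above):

{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- Sketch: complementary training/evaluation DataRearrangement strings,
-- as expected by rDSDataSpec_dataRearrangement.
evaluationRearrangement, trainingRearrangement :: Text
evaluationRearrangement =
  "{\"splitting\":{\"percentBegin\":70, \"percentEnd\":100, \"strategy\":\"sequential\"}}"
trainingRearrangement =
  "{\"splitting\":{\"percentBegin\":70, \"percentEnd\":100, \"strategy\":\"sequential\", \"complement\":\"true\"}}"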

rDSDataSpec_dataSchema :: Lens' RDSDataSpec (Maybe Text) Source #

A JSON string that represents the schema for an Amazon RDS DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. attributes and excludedVariableNames have an array of key-value pairs for their value. Use the following format to define your DataSchema.

{ "version": "1.0",

"recordAnnotationFieldName": "F1",

"recordWeightFieldName": "F2",

"targetFieldName": "F3",

"dataFormat": "CSV",

"dataFileContainsHeader": true,

"attributes": [

{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],

"excludedVariableNames": [ "F6" ] }

rDSDataSpec_dataSchemaUri :: Lens' RDSDataSpec (Maybe Text) Source #

The Amazon S3 location of the DataSchema.

rDSDataSpec_databaseInformation :: Lens' RDSDataSpec RDSDatabase Source #

Describes the DatabaseName and InstanceIdentifier of an Amazon RDS database.

rDSDataSpec_selectSqlQuery :: Lens' RDSDataSpec Text Source #

The query that is used to retrieve the observation data for the DataSource.

rDSDataSpec_databaseCredentials :: Lens' RDSDataSpec RDSDatabaseCredentials Source #

The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

rDSDataSpec_s3StagingLocation :: Lens' RDSDataSpec Text Source #

The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

rDSDataSpec_resourceRole :: Lens' RDSDataSpec Text Source #

The role (DataPipelineDefaultResourceRole) assumed by an Amazon Elastic Compute Cloud (Amazon EC2) instance to carry out the copy operation from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

rDSDataSpec_serviceRole :: Lens' RDSDataSpec Text Source #

The role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

rDSDataSpec_subnetId :: Lens' RDSDataSpec Text Source #

The subnet ID to be used to access a VPC-based RDS DB instance. This attribute is used by Data Pipeline to carry out the copy task from Amazon RDS to Amazon S3.

rDSDataSpec_securityGroupIds :: Lens' RDSDataSpec [Text] Source #

The security group IDs to be used to access a VPC-based RDS DB instance. Ensure that there are appropriate ingress rules set up to allow access to the RDS DB instance. This attribute is used by Data Pipeline to carry out the copy operation from Amazon RDS to Amazon S3.
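
Pulling the required fields together, a hedged construction sketch. It assumes newRDSDataSpec takes the required fields in the order they are listed above, with the securityGroupIds list defaulting to empty and set afterwards via its lens; every identifier, role, and location below is a hypothetical placeholder:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((&), (.~))

-- Sketch: a complete RDSDataSpec. All literals are placeholders.
sampleRDSDataSpec :: RDSDataSpec
sampleRDSDataSpec =
  newRDSDataSpec
    (newRDSDatabase "my-rds-instance" "mydb")          -- instanceIdentifier, databaseName
    "SELECT * FROM observations"                       -- selectSqlQuery
    (newRDSDatabaseCredentials "dbuser" "dbpassword")  -- username, password
    "s3://example-bucket/staging/"                     -- s3StagingLocation
    "DataPipelineDefaultResourceRole"                  -- resourceRole
    "DataPipelineDefaultRole"                          -- serviceRole
    "subnet-0123456789abcdef0"                         -- subnetId
    & rDSDataSpec_securityGroupIds .~ ["sg-0123456789abcdef0"]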

RDSDatabase

data RDSDatabase Source #

The database details of an Amazon RDS database.

See: newRDSDatabase smart constructor.

Constructors

RDSDatabase' 

Fields

Instances

Instances details
FromJSON RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

ToJSON RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

Generic RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

Associated Types

type Rep RDSDatabase :: Type -> Type #

Read RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

Show RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

NFData RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

Methods

rnf :: RDSDatabase -> () #

Eq RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

Hashable RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

type Rep RDSDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabase

type Rep RDSDatabase = D1 ('MetaData "RDSDatabase" "Amazonka.MachineLearning.Types.RDSDatabase" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RDSDatabase'" 'PrefixI 'True) (S1 ('MetaSel ('Just "instanceIdentifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "databaseName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))

newRDSDatabase Source #

Create a value of RDSDatabase with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:instanceIdentifier:RDSDatabase', rDSDatabase_instanceIdentifier - The ID of an RDS DB instance.

$sel:databaseName:RDSDatabase', rDSDatabase_databaseName - Undocumented member.
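A construction sketch, assuming newRDSDatabase takes the two required fields in the order listed above; both values are placeholders:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types (RDSDatabase, newRDSDatabase)

-- Identify the RDS DB instance and the database that lives on it.
myDatabase :: RDSDatabase
myDatabase = newRDSDatabase "my-rds-instance" "mydb"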

RDSDatabaseCredentials

data RDSDatabaseCredentials Source #

The database credentials to connect to a database on an RDS DB instance.

See: newRDSDatabaseCredentials smart constructor.

Constructors

RDSDatabaseCredentials' 

Fields

Instances

Instances details
ToJSON RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

Generic RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

Associated Types

type Rep RDSDatabaseCredentials :: Type -> Type #

Read RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

Show RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

NFData RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

Methods

rnf :: RDSDatabaseCredentials -> () #

Eq RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

Hashable RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

type Rep RDSDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSDatabaseCredentials

type Rep RDSDatabaseCredentials = D1 ('MetaData "RDSDatabaseCredentials" "Amazonka.MachineLearning.Types.RDSDatabaseCredentials" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RDSDatabaseCredentials'" 'PrefixI 'True) (S1 ('MetaSel ('Just "username") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "password") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))

newRDSDatabaseCredentials Source #

Create a value of RDSDatabaseCredentials with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:username:RDSDatabaseCredentials', rDSDatabaseCredentials_username - Undocumented member.

$sel:password:RDSDatabaseCredentials', rDSDatabaseCredentials_password - Undocumented member.
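A construction sketch, assuming newRDSDatabaseCredentials takes the username followed by the password; the values are placeholders, and in practice credentials should come from a secret store rather than source code:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types (RDSDatabaseCredentials, newRDSDatabaseCredentials)

-- Placeholder credentials for the RDS database user.
myCredentials :: RDSDatabaseCredentials
myCredentials = newRDSDatabaseCredentials "ml_user" "correct-horse-battery"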

RDSMetadata

data RDSMetadata Source #

The datasource details that are specific to Amazon RDS.

See: newRDSMetadata smart constructor.

Constructors

RDSMetadata' 

Fields

  • dataPipelineId :: Maybe Text

    The ID of the Data Pipeline instance that is used to copy data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.

  • database :: Maybe RDSDatabase

    The database details required to connect to an Amazon RDS database.

  • databaseUserName :: Maybe Text
     
  • resourceRole :: Maybe Text

    The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

  • selectSqlQuery :: Maybe Text

    The SQL query that is supplied during CreateDataSourceFromRDS. Returned only if Verbose is true in GetDataSourceInput.

  • serviceRole :: Maybe Text

    The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

Instances

Instances details
FromJSON RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

Generic RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

Associated Types

type Rep RDSMetadata :: Type -> Type #

Read RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

Show RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

NFData RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

Methods

rnf :: RDSMetadata -> () #

Eq RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

Hashable RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

type Rep RDSMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RDSMetadata

type Rep RDSMetadata = D1 ('MetaData "RDSMetadata" "Amazonka.MachineLearning.Types.RDSMetadata" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RDSMetadata'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "dataPipelineId") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "database") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RDSDatabase)) :*: S1 ('MetaSel ('Just "databaseUserName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: (S1 ('MetaSel ('Just "resourceRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "selectSqlQuery") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "serviceRole") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))))

newRDSMetadata :: RDSMetadata Source #

Create a value of RDSMetadata with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:dataPipelineId:RDSMetadata', rDSMetadata_dataPipelineId - The ID of the Data Pipeline instance that is used to copy data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.

$sel:database:RDSMetadata', rDSMetadata_database - The database details required to connect to an Amazon RDS database.

$sel:databaseUserName:RDSMetadata', rDSMetadata_databaseUserName - Undocumented member.

$sel:resourceRole:RDSMetadata', rDSMetadata_resourceRole - The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

$sel:selectSqlQuery:RDSMetadata', rDSMetadata_selectSqlQuery - The SQL query that is supplied during CreateDataSourceFromRDS. Returned only if Verbose is true in GetDataSourceInput.

$sel:serviceRole:RDSMetadata', rDSMetadata_serviceRole - The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

rDSMetadata_dataPipelineId :: Lens' RDSMetadata (Maybe Text) Source #

The ID of the Data Pipeline instance that is used to copy data from Amazon RDS to Amazon S3. You can use the ID to find details about the instance in the Data Pipeline console.

rDSMetadata_database :: Lens' RDSMetadata (Maybe RDSDatabase) Source #

The database details required to connect to an Amazon RDS database.

rDSMetadata_resourceRole :: Lens' RDSMetadata (Maybe Text) Source #

The role (DataPipelineDefaultResourceRole) assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

rDSMetadata_selectSqlQuery :: Lens' RDSMetadata (Maybe Text) Source #

The SQL query that is supplied during CreateDataSourceFromRDS. Returned only if Verbose is true in GetDataSourceInput.

rDSMetadata_serviceRole :: Lens' RDSMetadata (Maybe Text) Source #

The role (DataPipelineDefaultRole) assumed by the Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
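Because every RDSMetadata field is optional, values returned by the service are usually inspected through these lenses; a minimal sketch, assuming the lens package:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((^.))
import Data.Maybe (fromMaybe)
import Data.Text (Text)

-- Report the Data Pipeline ID recorded in metadata returned by the
-- service, falling back to a note when none is present.
pipelineIdOrNote :: RDSMetadata -> Text
pipelineIdOrNote md =
  fromMaybe "no pipeline recorded" (md ^. rDSMetadata_dataPipelineId)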

RealtimeEndpointInfo

data RealtimeEndpointInfo Source #

Describes the real-time endpoint information for an MLModel.

See: newRealtimeEndpointInfo smart constructor.

Constructors

RealtimeEndpointInfo' 

Fields

  • createdAt :: Maybe POSIX

    The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.

  • endpointStatus :: Maybe RealtimeEndpointStatus

    The current status of the real-time endpoint for the MLModel. This element can have one of the following values:

    • NONE - Endpoint does not exist or was previously deleted.
    • READY - Endpoint is ready to be used for real-time predictions.
    • UPDATING - Updating/creating the endpoint.
  • endpointUrl :: Maybe Text

    The URI that specifies where to send real-time prediction requests for the MLModel.

    Note: The application must wait until the real-time endpoint is ready before using this URI.

  • peakRequestsPerSecond :: Maybe Int

    The maximum processing rate for the real-time endpoint for MLModel, measured in incoming requests per second.

Instances

Instances details
FromJSON RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

Generic RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

Associated Types

type Rep RealtimeEndpointInfo :: Type -> Type #

Read RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

Show RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

NFData RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

Methods

rnf :: RealtimeEndpointInfo -> () #

Eq RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

Hashable RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

type Rep RealtimeEndpointInfo Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RealtimeEndpointInfo

type Rep RealtimeEndpointInfo = D1 ('MetaData "RealtimeEndpointInfo" "Amazonka.MachineLearning.Types.RealtimeEndpointInfo" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RealtimeEndpointInfo'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "createdAt") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe POSIX)) :*: S1 ('MetaSel ('Just "endpointStatus") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RealtimeEndpointStatus))) :*: (S1 ('MetaSel ('Just "endpointUrl") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "peakRequestsPerSecond") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Int)))))

newRealtimeEndpointInfo :: RealtimeEndpointInfo Source #

Create a value of RealtimeEndpointInfo with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:createdAt:RealtimeEndpointInfo', realtimeEndpointInfo_createdAt - The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.

$sel:endpointStatus:RealtimeEndpointInfo', realtimeEndpointInfo_endpointStatus - The current status of the real-time endpoint for the MLModel. This element can have one of the following values:

  • NONE - Endpoint does not exist or was previously deleted.
  • READY - Endpoint is ready to be used for real-time predictions.
  • UPDATING - Updating/creating the endpoint.

$sel:endpointUrl:RealtimeEndpointInfo', realtimeEndpointInfo_endpointUrl - The URI that specifies where to send real-time prediction requests for the MLModel.

Note: The application must wait until the real-time endpoint is ready before using this URI.

$sel:peakRequestsPerSecond:RealtimeEndpointInfo', realtimeEndpointInfo_peakRequestsPerSecond - The maximum processing rate for the real-time endpoint for MLModel, measured in incoming requests per second.

realtimeEndpointInfo_createdAt :: Lens' RealtimeEndpointInfo (Maybe UTCTime) Source #

The time that the request to create the real-time endpoint for the MLModel was received. The time is expressed in epoch time.

realtimeEndpointInfo_endpointStatus :: Lens' RealtimeEndpointInfo (Maybe RealtimeEndpointStatus) Source #

The current status of the real-time endpoint for the MLModel. This element can have one of the following values:

  • NONE - Endpoint does not exist or was previously deleted.
  • READY - Endpoint is ready to be used for real-time predictions.
  • UPDATING - Updating/creating the endpoint.

realtimeEndpointInfo_endpointUrl :: Lens' RealtimeEndpointInfo (Maybe Text) Source #

The URI that specifies where to send real-time prediction requests for the MLModel.

Note: The application must wait until the real-time endpoint is ready before using this URI.

realtimeEndpointInfo_peakRequestsPerSecond :: Lens' RealtimeEndpointInfo (Maybe Int) Source #

The maximum processing rate for the real-time endpoint for MLModel, measured in incoming requests per second.
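A sketch of the readiness guard implied by the note above, assuming the READY value has a bundled pattern named RealtimeEndpointStatus_READY, in line with the other enums in this module:

import Amazonka.MachineLearning.Types
import Control.Lens ((^.))

-- The endpoint URL should only be used once the endpoint reports READY.
endpointReady :: RealtimeEndpointInfo -> Bool
endpointReady info =
  info ^. realtimeEndpointInfo_endpointStatus
    == Just RealtimeEndpointStatus_READY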

RedshiftDataSpec

data RedshiftDataSpec Source #

Describes the data specification of an Amazon Redshift DataSource.

See: newRedshiftDataSpec smart constructor.

Constructors

RedshiftDataSpec' 

Fields

  • dataRearrangement :: Maybe Text

    A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

    There are multiple parameters that control what data is used to create a datasource:

    • percentBegin

      Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • percentEnd

      Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • complement

      The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

      For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

      Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

      Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

    • strategy

      To change how Amazon ML splits the data for a datasource, use the strategy parameter.

      The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

      The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

      To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

      The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

  • dataSchema :: Maybe Text

    A JSON string that represents the schema for an Amazon Redshift DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

    A DataSchema is not required if you specify a DataSchemaUri.

    Define your DataSchema as a series of key-value pairs. The attributes and excludedVariableNames keys each take an array as their value. Use the following format to define your DataSchema:

    { "version": "1.0",
      "recordAnnotationFieldName": "F1",
      "recordWeightFieldName": "F2",
      "targetFieldName": "F3",
      "dataFormat": "CSV",
      "dataFileContainsHeader": true,
      "attributes": [
        { "fieldName": "F1", "fieldType": "TEXT" },
        { "fieldName": "F2", "fieldType": "NUMERIC" },
        { "fieldName": "F3", "fieldType": "CATEGORICAL" },
        { "fieldName": "F4", "fieldType": "NUMERIC" },
        { "fieldName": "F5", "fieldType": "CATEGORICAL" },
        { "fieldName": "F6", "fieldType": "TEXT" },
        { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
        { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" }
      ],
      "excludedVariableNames": [ "F6" ] }

  • dataSchemaUri :: Maybe Text

    Describes the schema location for an Amazon Redshift DataSource.

  • databaseInformation :: RedshiftDatabase

    Describes the DatabaseName and ClusterIdentifier for an Amazon Redshift DataSource.

  • selectSqlQuery :: Text

    Describes the SQL Query to execute on an Amazon Redshift database for an Amazon Redshift DataSource.

  • databaseCredentials :: RedshiftDatabaseCredentials

    Describes the AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

  • s3StagingLocation :: Text

    Describes an Amazon S3 location to store the result set of the SelectSqlQuery query.

Instances

Instances details
ToJSON RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

Generic RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

Associated Types

type Rep RedshiftDataSpec :: Type -> Type #

Read RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

Show RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

NFData RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

Methods

rnf :: RedshiftDataSpec -> () #

Eq RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

Hashable RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

type Rep RedshiftDataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDataSpec

type Rep RedshiftDataSpec = D1 ('MetaData "RedshiftDataSpec" "Amazonka.MachineLearning.Types.RedshiftDataSpec" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RedshiftDataSpec'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "dataRearrangement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "dataSchema") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "dataSchemaUri") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))) :*: ((S1 ('MetaSel ('Just "databaseInformation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 RedshiftDatabase) :*: S1 ('MetaSel ('Just "selectSqlQuery") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)) :*: (S1 ('MetaSel ('Just "databaseCredentials") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 RedshiftDatabaseCredentials) :*: S1 ('MetaSel ('Just "s3StagingLocation") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))))

newRedshiftDataSpec Source #

Create a value of RedshiftDataSpec with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:dataRearrangement:RedshiftDataSpec', redshiftDataSpec_dataRearrangement - A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource:

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

$sel:dataSchema:RedshiftDataSpec', redshiftDataSpec_dataSchema - A JSON string that represents the schema for an Amazon Redshift DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. The attributes and excludedVariableNames keys each take an array as their value. Use the following format to define your DataSchema:

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" }
  ],
  "excludedVariableNames": [ "F6" ] }

$sel:dataSchemaUri:RedshiftDataSpec', redshiftDataSpec_dataSchemaUri - Describes the schema location for an Amazon Redshift DataSource.

$sel:databaseInformation:RedshiftDataSpec', redshiftDataSpec_databaseInformation - Describes the DatabaseName and ClusterIdentifier for an Amazon Redshift DataSource.

$sel:selectSqlQuery:RedshiftDataSpec', redshiftDataSpec_selectSqlQuery - Describes the SQL Query to execute on an Amazon Redshift database for an Amazon Redshift DataSource.

$sel:databaseCredentials:RedshiftDataSpec', redshiftDataSpec_databaseCredentials - Describes the AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

$sel:s3StagingLocation:RedshiftDataSpec', redshiftDataSpec_s3StagingLocation - Describes an Amazon S3 location to store the result set of the SelectSqlQuery query.

redshiftDataSpec_dataRearrangement :: Lens' RedshiftDataSpec (Maybe Text) Source #

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource:

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

redshiftDataSpec_dataSchema :: Lens' RedshiftDataSpec (Maybe Text) Source #

A JSON string that represents the schema for an Amazon Redshift DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

A DataSchema is not required if you specify a DataSchemaUri.

Define your DataSchema as a series of key-value pairs. The attributes and excludedVariableNames keys each take an array as their value. Use the following format to define your DataSchema:

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" }
  ],
  "excludedVariableNames": [ "F6" ] }

redshiftDataSpec_dataSchemaUri :: Lens' RedshiftDataSpec (Maybe Text) Source #

Describes the schema location for an Amazon Redshift DataSource.

redshiftDataSpec_databaseInformation :: Lens' RedshiftDataSpec RedshiftDatabase Source #

Describes the DatabaseName and ClusterIdentifier for an Amazon Redshift DataSource.

redshiftDataSpec_selectSqlQuery :: Lens' RedshiftDataSpec Text Source #

Describes the SQL Query to execute on an Amazon Redshift database for an Amazon Redshift DataSource.

redshiftDataSpec_databaseCredentials :: Lens' RedshiftDataSpec RedshiftDatabaseCredentials Source #

Describes the AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

redshiftDataSpec_s3StagingLocation :: Lens' RedshiftDataSpec Text Source #

Describes an Amazon S3 location to store the result set of the SelectSqlQuery query.
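Putting the pieces together: a construction sketch that also exercises the RedshiftDatabase and RedshiftDatabaseCredentials constructors documented below, assuming the required arguments follow the field order listed above; every name, query, and location is a placeholder:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((&), (?~))

-- Required fields in order: database information, SQL query,
-- credentials, and the S3 staging location. The schema is attached by
-- reference afterwards.
mySpec :: RedshiftDataSpec
mySpec =
  newRedshiftDataSpec
    (newRedshiftDatabase "analytics" "my-redshift-cluster")
    "SELECT * FROM observations"
    (newRedshiftDatabaseCredentials "ml_user" "correct-horse-battery")
    "s3://my-bucket/staging/"
    & redshiftDataSpec_dataSchemaUri ?~ "s3://my-bucket/schema.json"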

RedshiftDatabase

data RedshiftDatabase Source #

Describes the database details required to connect to an Amazon Redshift database.

See: newRedshiftDatabase smart constructor.

Instances

Instances details
FromJSON RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

ToJSON RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

Generic RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

Associated Types

type Rep RedshiftDatabase :: Type -> Type #

Read RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

Show RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

NFData RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

Methods

rnf :: RedshiftDatabase -> () #

Eq RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

Hashable RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

type Rep RedshiftDatabase Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabase

type Rep RedshiftDatabase = D1 ('MetaData "RedshiftDatabase" "Amazonka.MachineLearning.Types.RedshiftDatabase" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RedshiftDatabase'" 'PrefixI 'True) (S1 ('MetaSel ('Just "databaseName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "clusterIdentifier") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))

newRedshiftDatabase Source #

Create a value of RedshiftDatabase with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:databaseName:RedshiftDatabase', redshiftDatabase_databaseName - Undocumented member.

$sel:clusterIdentifier:RedshiftDatabase', redshiftDatabase_clusterIdentifier - Undocumented member.

RedshiftDatabaseCredentials

data RedshiftDatabaseCredentials Source #

Describes the database credentials for connecting to a database on an Amazon Redshift cluster.

See: newRedshiftDatabaseCredentials smart constructor.

Instances

Instances details
ToJSON RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

Generic RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

Associated Types

type Rep RedshiftDatabaseCredentials :: Type -> Type #

Read RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

Show RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

NFData RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

Eq RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

Hashable RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

type Rep RedshiftDatabaseCredentials Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials

type Rep RedshiftDatabaseCredentials = D1 ('MetaData "RedshiftDatabaseCredentials" "Amazonka.MachineLearning.Types.RedshiftDatabaseCredentials" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RedshiftDatabaseCredentials'" 'PrefixI 'True) (S1 ('MetaSel ('Just "username") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text) :*: S1 ('MetaSel ('Just "password") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text)))

newRedshiftDatabaseCredentials Source #

Create a value of RedshiftDatabaseCredentials with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:username:RedshiftDatabaseCredentials', redshiftDatabaseCredentials_username - Undocumented member.

$sel:password:RedshiftDatabaseCredentials', redshiftDatabaseCredentials_password - Undocumented member.

RedshiftMetadata

data RedshiftMetadata Source #

Describes the DataSource details specific to Amazon Redshift.

See: newRedshiftMetadata smart constructor.

Constructors

RedshiftMetadata' 

Fields

Instances

Instances details
FromJSON RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

Generic RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

Associated Types

type Rep RedshiftMetadata :: Type -> Type #

Read RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

Show RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

NFData RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

Methods

rnf :: RedshiftMetadata -> () #

Eq RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

Hashable RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

type Rep RedshiftMetadata Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.RedshiftMetadata

type Rep RedshiftMetadata = D1 ('MetaData "RedshiftMetadata" "Amazonka.MachineLearning.Types.RedshiftMetadata" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "RedshiftMetadata'" 'PrefixI 'True) (S1 ('MetaSel ('Just "databaseUserName") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "redshiftDatabase") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe RedshiftDatabase)) :*: S1 ('MetaSel ('Just "selectSqlQuery") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)))))

newRedshiftMetadata :: RedshiftMetadata Source #

Create a value of RedshiftMetadata with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:databaseUserName:RedshiftMetadata', redshiftMetadata_databaseUserName - Undocumented member.

$sel:redshiftDatabase:RedshiftMetadata', redshiftMetadata_redshiftDatabase - Undocumented member.

$sel:selectSqlQuery:RedshiftMetadata', redshiftMetadata_selectSqlQuery - The SQL query that is specified during CreateDataSourceFromRedshift. Returned only if Verbose is true in GetDataSourceInput.

redshiftMetadata_selectSqlQuery :: Lens' RedshiftMetadata (Maybe Text) Source #

The SQL query that is specified during CreateDataSourceFromRedshift. Returned only if Verbose is true in GetDataSourceInput.

S3DataSpec

data S3DataSpec Source #

Describes the data specification of a DataSource.

See: newS3DataSpec smart constructor.

Constructors

S3DataSpec' 

Fields

  • dataRearrangement :: Maybe Text

    A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

    There are multiple parameters that control what data is used to create a datasource:

    • percentBegin

      Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • percentEnd

      Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

    • complement

      The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

      For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

      Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

      Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

    • strategy

      To change how Amazon ML splits the data for a datasource, use the strategy parameter.

      The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

      The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

      To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

      The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

      Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

      Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

  • dataSchema :: Maybe Text

    A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

    You must provide either the DataSchema or the DataSchemaLocationS3.

    Define your DataSchema as a series of key-value pairs. The attributes and excludedVariableNames keys each take an array as their value. Use the following format to define your DataSchema:

    { "version": "1.0",
      "recordAnnotationFieldName": "F1",
      "recordWeightFieldName": "F2",
      "targetFieldName": "F3",
      "dataFormat": "CSV",
      "dataFileContainsHeader": true,
      "attributes": [
        { "fieldName": "F1", "fieldType": "TEXT" },
        { "fieldName": "F2", "fieldType": "NUMERIC" },
        { "fieldName": "F3", "fieldType": "CATEGORICAL" },
        { "fieldName": "F4", "fieldType": "NUMERIC" },
        { "fieldName": "F5", "fieldType": "CATEGORICAL" },
        { "fieldName": "F6", "fieldType": "TEXT" },
        { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
        { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" }
      ],
      "excludedVariableNames": [ "F6" ] }

  • dataSchemaLocationS3 :: Maybe Text

    Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.

  • dataLocationS3 :: Text

    The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.

Instances

Instances details
ToJSON S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

Generic S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

Associated Types

type Rep S3DataSpec :: Type -> Type #

Read S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

Show S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

NFData S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

Methods

rnf :: S3DataSpec -> () #

Eq S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

Hashable S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

type Rep S3DataSpec Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.S3DataSpec

type Rep S3DataSpec = D1 ('MetaData "S3DataSpec" "Amazonka.MachineLearning.Types.S3DataSpec" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "S3DataSpec'" 'PrefixI 'True) ((S1 ('MetaSel ('Just "dataRearrangement") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "dataSchema") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))) :*: (S1 ('MetaSel ('Just "dataSchemaLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "dataLocationS3") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))

newS3DataSpec Source #

Create a value of S3DataSpec with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:dataRearrangement:S3DataSpec', s3DataSpec_dataRearrangement - A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource:

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

$sel:dataSchema:S3DataSpec', s3DataSpec_dataSchema - A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

You must provide either the DataSchema or the DataSchemaLocationS3.

Define your DataSchema as a series of key-value pairs. The attributes and excludedVariableNames keys each take an array as their value. Use the following format to define your DataSchema:

{ "version": "1.0",
  "recordAnnotationFieldName": "F1",
  "recordWeightFieldName": "F2",
  "targetFieldName": "F3",
  "dataFormat": "CSV",
  "dataFileContainsHeader": true,
  "attributes": [
    { "fieldName": "F1", "fieldType": "TEXT" },
    { "fieldName": "F2", "fieldType": "NUMERIC" },
    { "fieldName": "F3", "fieldType": "CATEGORICAL" },
    { "fieldName": "F4", "fieldType": "NUMERIC" },
    { "fieldName": "F5", "fieldType": "CATEGORICAL" },
    { "fieldName": "F6", "fieldType": "TEXT" },
    { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" },
    { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" }
  ],
  "excludedVariableNames": [ "F6" ] }

$sel:dataSchemaLocationS3:S3DataSpec', s3DataSpec_dataSchemaLocationS3 - Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.

$sel:dataLocationS3:S3DataSpec', s3DataSpec_dataLocationS3 - The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.

s3DataSpec_dataRearrangement :: Lens' S3DataSpec (Maybe Text) Source #

A JSON string that represents the splitting and rearrangement processing to be applied to a DataSource. If the DataRearrangement parameter is not provided, all of the input data is used to create the Datasource.

There are multiple parameters that control what data is used to create a datasource:

  • percentBegin

    Use percentBegin to indicate the beginning of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • percentEnd

    Use percentEnd to indicate the end of the range of the data used to create the Datasource. If you do not include percentBegin and percentEnd, Amazon ML includes all of the data when creating the datasource.

  • complement

    The complement parameter instructs Amazon ML to use the data that is not included in the range of percentBegin to percentEnd to create a datasource. The complement parameter is useful if you need to create complementary datasources for training and evaluation. To create a complementary datasource, use the same values for percentBegin and percentEnd, along with the complement parameter.

    For example, the following two datasources do not share any data, and can be used to train and evaluate a model. The first datasource has 25 percent of the data, and the second one has 75 percent of the data.

    Datasource for evaluation: {"splitting":{"percentBegin":0, "percentEnd":25}}

    Datasource for training: {"splitting":{"percentBegin":0, "percentEnd":25, "complement":"true"}}

  • strategy

    To change how Amazon ML splits the data for a datasource, use the strategy parameter.

    The default value for the strategy parameter is sequential, meaning that Amazon ML takes all of the data records between the percentBegin and percentEnd parameters for the datasource, in the order that the records appear in the input data.

    The following two DataRearrangement lines are examples of sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"sequential", "complement":"true"}}

    To randomly split the input data into the proportions indicated by the percentBegin and percentEnd parameters, set the strategy parameter to random and provide a string that is used as the seed value for the random data splitting (for example, you can use the S3 path to your data as the random seed string). If you choose the random split strategy, Amazon ML assigns each row of data a pseudo-random number between 0 and 100, and then selects the rows that have an assigned number between percentBegin and percentEnd. Pseudo-random numbers are assigned using both the input seed string value and the byte offset as a seed, so changing the data results in a different split. Any existing ordering is preserved. The random splitting strategy ensures that variables in the training and evaluation data are distributed similarly. It is useful in the cases where the input data may have an implicit sort order, which would otherwise result in training and evaluation datasources containing non-similar data records.

    The following two DataRearrangement lines are examples of non-sequentially ordered training and evaluation datasources:

    Datasource for evaluation: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv"}}

    Datasource for training: {"splitting":{"percentBegin":70, "percentEnd":100, "strategy":"random", "randomSeed":"s3://my_s3_path/bucket/file.csv", "complement":"true"}}

    A Haskell sketch of constructing such a complementary pair follows this list.
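The following is a minimal Haskell sketch (not part of the generated reference) showing how the DataRearrangement strings above might be attached to complementary S3DataSpec values. It assumes the lens package for the (&) and (?~) operators; newS3DataSpec and the s3DataSpec_* lenses are the constructor and lenses documented in this module, and the S3 URI is a hypothetical example.

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((&), (?~))
import Data.Text (Text)

-- Hypothetical input data location; also reused as the random seed so
-- both specs assign the same pseudo-random number to each row.
dataUri :: Text
dataUri = "s3://my_s3_path/bucket/file.csv"

-- Evaluation datasource: the random 70-100 slice (30% of the rows).
evaluationSpec :: S3DataSpec
evaluationSpec =
  newS3DataSpec dataUri
    & s3DataSpec_dataRearrangement
      ?~ "{\"splitting\":{\"percentBegin\":70, \"percentEnd\":100, \
         \\"strategy\":\"random\", \"randomSeed\":\"s3://my_s3_path/bucket/file.csv\"}}"

-- Training datasource: the complement of the same slice (70% of the rows).
trainingSpec :: S3DataSpec
trainingSpec =
  newS3DataSpec dataUri
    & s3DataSpec_dataRearrangement
      ?~ "{\"splitting\":{\"percentBegin\":70, \"percentEnd\":100, \
         \\"strategy\":\"random\", \"randomSeed\":\"s3://my_s3_path/bucket/file.csv\", \
         \\"complement\":\"true\"}}"

Because both specs use the same percentBegin, percentEnd, and randomSeed, the complement flag guarantees the two datasources partition the input data without overlap.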

s3DataSpec_dataSchema :: Lens' S3DataSpec (Maybe Text) Source #

A JSON string that represents the schema for an Amazon S3 DataSource. The DataSchema defines the structure of the observation data in the data file(s) referenced in the DataSource.

You must provide either the DataSchema or the DataSchemaLocationS3.

Define your DataSchema as a series of key-value pairs. The attributes key takes an array of fieldName/fieldType pairs, and excludedVariableNames takes an array of field names. Use the following format to define your DataSchema.

{ "version": "1.0",

"recordAnnotationFieldName": "F1",

"recordWeightFieldName": "F2",

"targetFieldName": "F3",

"dataFormat": "CSV",

"dataFileContainsHeader": true,

"attributes": [

{ "fieldName": "F1", "fieldType": "TEXT" }, { "fieldName": "F2", "fieldType": "NUMERIC" }, { "fieldName": "F3", "fieldType": "CATEGORICAL" }, { "fieldName": "F4", "fieldType": "NUMERIC" }, { "fieldName": "F5", "fieldType": "CATEGORICAL" }, { "fieldName": "F6", "fieldType": "TEXT" }, { "fieldName": "F7", "fieldType": "WEIGHTED_INT_SEQUENCE" }, { "fieldName": "F8", "fieldType": "WEIGHTED_STRING_SEQUENCE" } ],

"excludedVariableNames": [ "F6" ] }

s3DataSpec_dataSchemaLocationS3 :: Lens' S3DataSpec (Maybe Text) Source #

Describes the schema location in Amazon S3. You must provide either the DataSchema or the DataSchemaLocationS3.

s3DataSpec_dataLocationS3 :: Lens' S3DataSpec Text Source #

The location of the data file(s) used by a DataSource. The URI specifies a data file or an Amazon Simple Storage Service (Amazon S3) directory or bucket containing data files.

Tag

data Tag Source #

A custom key-value pair associated with an ML object, such as an ML model.

See: newTag smart constructor.

Constructors

Tag' 

Fields

  • key :: Maybe Text

    A unique identifier for the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

  • value :: Maybe Text

    An optional string, typically used to describe or define the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

Instances

Instances details
FromJSON Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

ToJSON Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Generic Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Associated Types

type Rep Tag :: Type -> Type #

Methods

from :: Tag -> Rep Tag x #

to :: Rep Tag x -> Tag #

Read Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Show Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Methods

showsPrec :: Int -> Tag -> ShowS #

show :: Tag -> String #

showList :: [Tag] -> ShowS #

NFData Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Methods

rnf :: Tag -> () #

Eq Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Methods

(==) :: Tag -> Tag -> Bool #

(/=) :: Tag -> Tag -> Bool #

Hashable Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

Methods

hashWithSalt :: Int -> Tag -> Int #

hash :: Tag -> Int #

type Rep Tag Source # 
Instance details

Defined in Amazonka.MachineLearning.Types.Tag

type Rep Tag = D1 ('MetaData "Tag" "Amazonka.MachineLearning.Types.Tag" "amazonka-ml-2.0-A3JLJ63WvmfHxGBBIqhdRA" 'False) (C1 ('MetaCons "Tag'" 'PrefixI 'True) (S1 ('MetaSel ('Just "key") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: S1 ('MetaSel ('Just "value") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text))))

newTag :: Tag Source #

Create a value of Tag with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:key:Tag', tag_key - A unique identifier for the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

$sel:value:Tag', tag_value - An optional string, typically used to describe or define the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

tag_key :: Lens' Tag (Maybe Text) Source #

A unique identifier for the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.

tag_value :: Lens' Tag (Maybe Text) Source #

An optional string, typically used to describe or define the tag. Valid characters include Unicode letters, digits, white space, _, ., /, =, +, -, %, and @.
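A minimal sketch of putting these together (not part of the generated reference, and assuming the lens package for (&) and (?~)); the key and value strings are hypothetical:

{-# LANGUAGE OverloadedStrings #-}

import Amazonka.MachineLearning.Types
import Control.Lens ((&), (?~))

-- newTag leaves both fields as Nothing; fill them via the lenses above.
costCenterTag :: Tag
costCenterTag =
  newTag
    & tag_key ?~ "cost-center"
    & tag_value ?~ "research"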