amazonka-kinesis-2.0: Amazon Kinesis SDK.
Copyright: (c) 2013-2023 Brendan Hay
License: Mozilla Public License, v. 2.0
Maintainer: Brendan Hay
Stability: auto-generated
Portability: non-portable (GHC extensions)
Safe Haskell: Safe-Inferred
Language: Haskell2010

Amazonka.Kinesis.Types.PutRecordsRequestEntry


Documentation

data PutRecordsRequestEntry Source #

Represents a single record in the input to PutRecords.

See: newPutRecordsRequestEntry smart constructor.

Constructors

PutRecordsRequestEntry' 

Fields

  • explicitHashKey :: Maybe Text

    The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

  • data' :: Base64

    The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB).

  • partitionKey :: Text

    Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
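Putting the three fields together, here is a minimal construction sketch (assuming the amazonka-kinesis package is on the path and the lens package supplies the operators; the payload and key values are made up):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Kinesis.Types.PutRecordsRequestEntry
import Control.Lens ((^.))

-- Only the data blob and the partition key are required;
-- explicitHashKey defaults to Nothing.
entry :: PutRecordsRequestEntry
entry = newPutRecordsRequestEntry "payload-bytes" "user-42"

-- Fields can be read back through the generated lenses.
checkKey :: Bool
checkKey = entry ^. putRecordsRequestEntry_partitionKey == "user-42"
```

Because `explicitHashKey` is optional, the smart constructor leaves it as `Nothing`; see the lens section below for setting it.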

Instances

All of the following instances are defined in Amazonka.Kinesis.Types.PutRecordsRequestEntry:

  • ToJSON PutRecordsRequestEntry

  • Generic PutRecordsRequestEntry

    Associated type: type Rep PutRecordsRequestEntry :: Type -> Type

  • Read PutRecordsRequestEntry

  • Show PutRecordsRequestEntry

  • NFData PutRecordsRequestEntry

    Method: rnf :: PutRecordsRequestEntry -> ()

  • Eq PutRecordsRequestEntry

  • Hashable PutRecordsRequestEntry

type Rep PutRecordsRequestEntry = D1 ('MetaData "PutRecordsRequestEntry" "Amazonka.Kinesis.Types.PutRecordsRequestEntry" "amazonka-kinesis-2.0-AW1spVPMUGf3LUqQQ22bci" 'False) (C1 ('MetaCons "PutRecordsRequestEntry'" 'PrefixI 'True) (S1 ('MetaSel ('Just "explicitHashKey") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 (Maybe Text)) :*: (S1 ('MetaSel ('Just "data'") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Base64) :*: S1 ('MetaSel ('Just "partitionKey") 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedStrict) (Rec0 Text))))

newPutRecordsRequestEntry :: ByteString -> Text -> PutRecordsRequestEntry Source #

Create a value of PutRecordsRequestEntry with all optional fields omitted.

Use generic-lens or optics to modify other optional fields.

The following record fields are available, with the corresponding lenses provided for backwards compatibility:

$sel:explicitHashKey:PutRecordsRequestEntry', putRecordsRequestEntry_explicitHashKey - The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

$sel:data':PutRecordsRequestEntry', putRecordsRequestEntry_data - The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB). Note: this lens automatically encodes and decodes Base64 data. The underlying isomorphism encodes to the Base64 representation during serialisation and decodes from it during deserialisation, so the lens accepts and returns only raw, unencoded data.

$sel:partitionKey:PutRecordsRequestEntry', putRecordsRequestEntry_partitionKey - Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
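The lens-based updates described above can be sketched as follows (again assuming amazonka-kinesis plus the lens operators; the payload strings and hash value are illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Amazonka.Kinesis.Types.PutRecordsRequestEntry
import Control.Lens ((&), (.~), (?~), (^.))

-- putRecordsRequestEntry_data takes raw bytes; Base64 encoding happens
-- only when the request is serialised. Setting explicitHashKey
-- overrides the partition-key hash for shard assignment.
pinned :: PutRecordsRequestEntry
pinned =
  newPutRecordsRequestEntry "original payload" "session-7"
    & putRecordsRequestEntry_data .~ "replacement payload"
    & putRecordsRequestEntry_explicitHashKey ?~ "0"
```

The `?~` operator wraps the value in `Just`, which is convenient for optional fields such as `explicitHashKey`.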

putRecordsRequestEntry_explicitHashKey :: Lens' PutRecordsRequestEntry (Maybe Text) Source #

The hash value used to determine explicitly the shard that the data record is assigned to by overriding the partition key hash.

putRecordsRequestEntry_data :: Lens' PutRecordsRequestEntry ByteString Source #

The data blob to put into the record, which is base64-encoded when the blob is serialized. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MiB). Note: this lens automatically encodes and decodes Base64 data. The underlying isomorphism encodes to the Base64 representation during serialisation and decodes from it during deserialisation, so the lens accepts and returns only raw, unencoded data.

putRecordsRequestEntry_partitionKey :: Lens' PutRecordsRequestEntry Text Source #

Determines which shard in the stream the data record is assigned to. Partition keys are Unicode strings with a maximum length limit of 256 characters for each key. Amazon Kinesis Data Streams uses the partition key as input to a hash function that maps the partition key and associated data to a specific shard. Specifically, an MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream.
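The routing rule above can be sketched in plain Haskell. This is an illustrative model only: each shard owns a contiguous, inclusive range of the 128-bit hash space, and a record goes to the shard whose range contains the hash of its partition key. Real Kinesis derives that hash with MD5; the two-shard layout and shard IDs here are hypothetical:

```haskell
import Data.List (find)

-- A 128-bit hash value in [0, 2^128 - 1], as produced by hashing the
-- partition key (or supplied directly via explicitHashKey).
type HashKey = Integer

data Shard = Shard { shardId :: String, lowKey, highKey :: HashKey }

-- A hypothetical stream with two shards splitting the space evenly.
shards :: [Shard]
shards =
  [ Shard "shardId-000" 0    (half - 1)
  , Shard "shardId-001" half (2 ^ 128 - 1)
  ]
  where half = 2 ^ (127 :: Int)

-- Route a hash to the shard whose inclusive range contains it.
routeTo :: HashKey -> Maybe Shard
routeTo h = find (\s -> lowKey s <= h && h <= highKey s) shards
```

Because all records sharing a partition key share one hash, they always route to the same shard, which is exactly the ordering guarantee the paragraph above describes.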