amazonka-rekognition-1.6.1: Amazon Rekognition SDK.

Copyright    (c) 2013-2018 Brendan Hay
License      Mozilla Public License, v. 2.0.
Maintainer   Brendan Hay <brendan.g.hay+amazonka@gmail.com>
Stability    auto-generated
Portability  non-portable (GHC extensions)
Safe Haskell None
Language     Haskell2010

Network.AWS.Rekognition.IndexFaces

Contents

Description

Detects faces in the input image and adds them to the specified collection.

Amazon Rekognition does not save the actual faces detected. Instead, the underlying detection algorithm first detects the faces in the input image, extracts facial features into a feature vector for each face, and stores the vectors in the back-end database. Amazon Rekognition uses these feature vectors when performing face match and search operations using the SearchFaces and SearchFacesByImage operations.

If you are using version 1.0 of the face detection model, IndexFaces indexes the 15 largest faces in the input image. Later versions of the face detection model index the 100 largest faces in the input image. To determine which version of the model you are using, check the value of FaceModelVersion in the response from IndexFaces. For more information, see 'face-detection-model'.

If you provide the optional ExternalImageId for the input image, Amazon Rekognition associates this ID with all faces that it detects. When you call the ListFaces operation, the response returns the external ID. You can use this external image ID to create a client-side index to associate the faces with each image. You can then use the index to find all faces in an image.

In response, the operation returns an array of metadata for all detected faces. This includes the bounding box of the detected face, a confidence value (indicating that the bounding box contains a face), a face ID assigned by the service for each face that is detected and stored, and an image ID assigned by the service for the input image. If you request all facial attributes (using the detectionAttributes parameter), Amazon Rekognition returns detailed facial attributes, such as facial landmarks (for example, the location of an eye and mouth) and other facial attributes, such as gender. If you provide the same image, specify the same collection, and use the same external ID in the IndexFaces operation, Amazon Rekognition doesn't save duplicate face metadata.

The input image is passed either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

This operation requires permissions to perform the rekognition:IndexFaces action.
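As a sketch of how this operation might be invoked with amazonka (the region, collection ID, bucket, object key, and external image ID below are placeholder assumptions, not values from this document):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (?~), (^.))
import Network.AWS
import Network.AWS.Rekognition

main :: IO ()
main = do
  -- Discover credentials from the environment (env vars, profile, etc.).
  env <- newEnv Discover
  -- Reference an image already stored in S3 (bucket and key are placeholders).
  let img = image & iS3Object ?~ (s3Object & soBucket ?~ "my-bucket"
                                           & soName   ?~ "photo.jpg")
      -- Required fields: collection ID and image; the external ID is optional.
      req = indexFaces "my-collection" img
              & ifExternalImageId ?~ "photo-1"
  rs <- runResourceT . runAWS env . within NorthVirginia $ send req
  print (rs ^. ifrsFaceModelVersion)
```

Running this requires valid AWS credentials and an existing collection, since `send` performs the network call.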

Synopsis

Creating a Request

indexFaces :: Text -> Image -> IndexFaces Source #

Creates a value of IndexFaces with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • ifExternalImageId - ID you want to assign to all the faces detected in the image.
  • ifDetectionAttributes - An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify [DEFAULT], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide [ALL], all facial attributes are returned, but the operation takes longer to complete. If you provide both, [ALL, DEFAULT], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
  • ifCollectionId - The ID of an existing collection to which you want to add the faces that are detected in the input images.
  • ifImage - The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

data IndexFaces Source #

See: indexFaces smart constructor.

Instances
Eq IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Data IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> IndexFaces -> c IndexFaces #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c IndexFaces #

toConstr :: IndexFaces -> Constr #

dataTypeOf :: IndexFaces -> DataType #

dataCast1 :: Typeable t => (forall d. Data d => c (t d)) -> Maybe (c IndexFaces) #

dataCast2 :: Typeable t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c IndexFaces) #

gmapT :: (forall b. Data b => b -> b) -> IndexFaces -> IndexFaces #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> IndexFaces -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> IndexFaces -> r #

gmapQ :: (forall d. Data d => d -> u) -> IndexFaces -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> IndexFaces -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> IndexFaces -> m IndexFaces #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> IndexFaces -> m IndexFaces #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> IndexFaces -> m IndexFaces #

Read IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Show IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Generic IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Associated Types

type Rep IndexFaces :: Type -> Type #

Hashable IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

ToJSON IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

AWSRequest IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Associated Types

type Rs IndexFaces :: Type #

ToHeaders IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Methods

toHeaders :: IndexFaces -> [Header] #

ToPath IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

ToQuery IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

NFData IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Methods

rnf :: IndexFaces -> () #

type Rep IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

type Rep IndexFaces = D1 (MetaData "IndexFaces" "Network.AWS.Rekognition.IndexFaces" "amazonka-rekognition-1.6.1-ELwOF0LSGWFp8S3Ef6QMe" False) (C1 (MetaCons "IndexFaces'" PrefixI True) ((S1 (MetaSel (Just "_ifExternalImageId") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe Text)) :*: S1 (MetaSel (Just "_ifDetectionAttributes") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe [Attribute]))) :*: (S1 (MetaSel (Just "_ifCollectionId") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 Text) :*: S1 (MetaSel (Just "_ifImage") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 Image))))
type Rs IndexFaces Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Request Lenses

ifExternalImageId :: Lens' IndexFaces (Maybe Text) Source #

ID you want to assign to all the faces detected in the image.

ifDetectionAttributes :: Lens' IndexFaces [Attribute] Source #

An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify [DEFAULT], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide [ALL], all facial attributes are returned, but the operation takes longer to complete. If you provide both, [ALL, DEFAULT], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

ifCollectionId :: Lens' IndexFaces Text Source #

The ID of an existing collection to which you want to add the faces that are detected in the input images.

ifImage :: Lens' IndexFaces Image Source #

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
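For example, the full attribute set can be requested by setting ifDetectionAttributes on the request (a sketch; the collection ID is a placeholder and the image is assumed to be supplied by the caller):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens ((&), (.~))
import Network.AWS.Rekognition

-- Ask Rekognition to return all facial attributes rather than the
-- default subset ("my-collection" is a placeholder collection ID).
reqAll :: Image -> IndexFaces
reqAll img = indexFaces "my-collection" img & ifDetectionAttributes .~ [All]
```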

Destructuring the Response

indexFacesResponse :: Int -> IndexFacesResponse Source #

Creates a value of IndexFacesResponse with the minimum fields required to make a request.

Use one of the following lenses to modify other fields as desired:

  • ifrsFaceModelVersion - Version number of the face detection model associated with the input collection (CollectionId).
  • ifrsFaceRecords - An array of faces detected and added to the collection. For more information, see 'collections-index-faces'.
  • ifrsOrientationCorrection - The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct image orientation. The bounding box coordinates returned in FaceRecords represent face locations before the image orientation is corrected.
  • ifrsResponseStatus - The response status code.
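Once a response has been obtained, the indexed faces can be walked with the response lenses. A minimal sketch (printFaces is a hypothetical helper; the response is assumed to come from a prior send):

```haskell
import Control.Lens ((^.), (^?), _Just)
import Control.Monad (forM_)
import Network.AWS.Rekognition

-- Print the face ID and bounding box of each face that was
-- detected and added to the collection.
printFaces :: IndexFacesResponse -> IO ()
printFaces rs =
  forM_ (rs ^. ifrsFaceRecords) $ \r -> do
    print (r ^? frFace . _Just . fFaceId . _Just)
    print (r ^? frFace . _Just . fBoundingBox . _Just)
```

The `^?` traversals are used because frFace, fFaceId, and fBoundingBox are all Maybe-valued lenses.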

data IndexFacesResponse Source #

See: indexFacesResponse smart constructor.

Instances
Eq IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Data IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Methods

gfoldl :: (forall d b. Data d => c (d -> b) -> d -> c b) -> (forall g. g -> c g) -> IndexFacesResponse -> c IndexFacesResponse #

gunfold :: (forall b r. Data b => c (b -> r) -> c r) -> (forall r. r -> c r) -> Constr -> c IndexFacesResponse #

toConstr :: IndexFacesResponse -> Constr #

dataTypeOf :: IndexFacesResponse -> DataType #

dataCast1 :: Typeable t => (forall d. Data d => c (t d)) -> Maybe (c IndexFacesResponse) #

dataCast2 :: Typeable t => (forall d e. (Data d, Data e) => c (t d e)) -> Maybe (c IndexFacesResponse) #

gmapT :: (forall b. Data b => b -> b) -> IndexFacesResponse -> IndexFacesResponse #

gmapQl :: (r -> r' -> r) -> r -> (forall d. Data d => d -> r') -> IndexFacesResponse -> r #

gmapQr :: (r' -> r -> r) -> r -> (forall d. Data d => d -> r') -> IndexFacesResponse -> r #

gmapQ :: (forall d. Data d => d -> u) -> IndexFacesResponse -> [u] #

gmapQi :: Int -> (forall d. Data d => d -> u) -> IndexFacesResponse -> u #

gmapM :: Monad m => (forall d. Data d => d -> m d) -> IndexFacesResponse -> m IndexFacesResponse #

gmapMp :: MonadPlus m => (forall d. Data d => d -> m d) -> IndexFacesResponse -> m IndexFacesResponse #

gmapMo :: MonadPlus m => (forall d. Data d => d -> m d) -> IndexFacesResponse -> m IndexFacesResponse #

Read IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Show IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Generic IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Associated Types

type Rep IndexFacesResponse :: Type -> Type #

NFData IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

Methods

rnf :: IndexFacesResponse -> () #

type Rep IndexFacesResponse Source # 
Instance details

Defined in Network.AWS.Rekognition.IndexFaces

type Rep IndexFacesResponse = D1 (MetaData "IndexFacesResponse" "Network.AWS.Rekognition.IndexFaces" "amazonka-rekognition-1.6.1-ELwOF0LSGWFp8S3Ef6QMe" False) (C1 (MetaCons "IndexFacesResponse'" PrefixI True) ((S1 (MetaSel (Just "_ifrsFaceModelVersion") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe Text)) :*: S1 (MetaSel (Just "_ifrsFaceRecords") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe [FaceRecord]))) :*: (S1 (MetaSel (Just "_ifrsOrientationCorrection") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 (Maybe OrientationCorrection)) :*: S1 (MetaSel (Just "_ifrsResponseStatus") NoSourceUnpackedness SourceStrict DecidedStrict) (Rec0 Int))))

Response Lenses

ifrsFaceModelVersion :: Lens' IndexFacesResponse (Maybe Text) Source #

Version number of the face detection model associated with the input collection (CollectionId).

ifrsFaceRecords :: Lens' IndexFacesResponse [FaceRecord] Source #

An array of faces detected and added to the collection. For more information, see 'collections-index-faces'.

ifrsOrientationCorrection :: Lens' IndexFacesResponse (Maybe OrientationCorrection) Source #

The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct image orientation. The bounding box coordinates returned in FaceRecords represent face locations before the image orientation is corrected.

ifrsResponseStatus :: Lens' IndexFacesResponse Int Source #

The response status code.