Amazon Rekognition

2018/08/23 - Amazon Rekognition - 1 new and 12 updated API methods

Changes: This release introduces a new API called DescribeCollection to Amazon Rekognition. You can use DescribeCollection to get information about an existing face collection. Given the ID for a face collection, DescribeCollection returns the following information: the number of faces indexed into the collection, the version of the face detection model used by the collection, the Amazon Resource Name (ARN) of the collection, and the creation date/time of the collection.

DescribeCollection (new)

See also: AWS API Documentation

Request Syntax

client.describe_collection(
    CollectionId='string'
)
type CollectionId

string

param CollectionId

[REQUIRED]

The ID of the face collection to describe.

rtype

dict

returns

Response Syntax

{
    'FaceCount': 123,
    'FaceModelVersion': 'string',
    'CollectionARN': 'string',
    'CreationTimestamp': datetime(2015, 1, 1)
}

Response Structure

  • (dict) --

    • FaceCount (integer) --

      The number of faces that are indexed into the collection.

    • FaceModelVersion (string) --

      The version of the face detection model used by the collection.

    • CollectionARN (string) --

      The Amazon Resource Name (ARN) of the collection.

    • CreationTimestamp (datetime) --

      The date and time the collection was created.
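
For example, a minimal usage sketch (assuming AWS credentials are configured and that a collection with the placeholder ID 'my-collection' already exists):

import boto3

client = boto3.client('rekognition')

# 'my-collection' is a placeholder; substitute the ID of an existing collection.
response = client.describe_collection(CollectionId='my-collection')

print('Faces indexed: ', response['FaceCount'])
print('Model version: ', response['FaceModelVersion'])
print('Collection ARN:', response['CollectionARN'])
print('Created:       ', response['CreationTimestamp'])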

GetCelebrityRecognition (updated)
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets the celebrity recognition results for an Amazon Rekognition Video analysis started by StartCelebrityRecognition .

Celebrity recognition in a video is an asynchronous operation. Analysis is started by a call to StartCelebrityRecognition , which returns a job identifier ( JobId ). When the celebrity recognition operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartCelebrityRecognition . To get the results of the celebrity recognition analysis, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetCelebrityRecognition and pass the job identifier ( JobId ) from the initial call to StartCelebrityRecognition .

For more information, see Working With Stored Videos in the Amazon Rekognition Developer Guide.

GetCelebrityRecognition returns detected celebrities and the time(s) they are detected in an array ( Celebrities ) of CelebrityRecognition objects. Each CelebrityRecognition contains information about the celebrity in a CelebrityDetail object and the time, Timestamp , the celebrity was detected.

Note

GetCelebrityRecognition only returns the default facial attributes ( BoundingBox , Confidence , Landmarks , Pose , and Quality ). The other facial attributes listed in the Face object of the following response syntax are not returned. For more information, see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Celebrities array is sorted by time (milliseconds from the start of the video). You can also sort the array by celebrity by specifying the value ID in the SortBy input parameter.

The CelebrityDetail object includes the celebrity identifier and additional information URLs. If you don't store the additional information URLs, you can get them later by calling GetCelebrityInfo with the celebrity identifier.

No information is returned for faces not recognized as celebrities.

Use the MaxResults parameter to limit the number of celebrities returned. If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetCelebrityRecognition and populate the NextToken request parameter with the token value returned from the previous call to GetCelebrityRecognition .
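
The pagination pattern described above might look like the following sketch; the job identifier shown is a placeholder for a value returned by StartCelebrityRecognition :

import boto3

client = boto3.client('rekognition')

# Collect every page of results by passing NextToken back until it is absent.
celebrities = []
kwargs = {'JobId': 'your-job-id', 'MaxResults': 1000, 'SortBy': 'TIMESTAMP'}
while True:
    response = client.get_celebrity_recognition(**kwargs)
    celebrities.extend(response['Celebrities'])
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']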

See also: AWS API Documentation

Request Syntax

client.get_celebrity_recognition(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='ID'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

Job identifier for the required celebrity recognition analysis. You can get the job identifier from a call to StartCelebrityRecognition .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more recognized celebrities to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of celebrities.

type SortBy

string

param SortBy

Sort to use for celebrities returned in the Celebrities field. Specify ID to sort by the celebrity identifier; specify TIMESTAMP to sort by the time the celebrity was recognized.

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'NextToken': 'string',
    'Celebrities': [
        {
            'Timestamp': 123,
            'Celebrity': {
                'Urls': [
                    'string',
                ],
                'Name': 'string',
                'Id': 'string',
                'Confidence': ...,
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'Face': {
                    'BoundingBox': {
                        'Width': ...,
                        'Height': ...,
                        'Left': ...,
                        'Top': ...
                    },
                    'AgeRange': {
                        'Low': 123,
                        'High': 123
                    },
                    'Smile': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Eyeglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Sunglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Gender': {
                        'Value': 'Male'|'Female',
                        'Confidence': ...
                    },
                    'Beard': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Mustache': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'EyesOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'MouthOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Emotions': [
                        {
                            'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN',
                            'Confidence': ...
                        },
                    ],
                    'Landmarks': [
                        {
                            'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil',
                            'X': ...,
                            'Y': ...
                        },
                    ],
                    'Pose': {
                        'Roll': ...,
                        'Yaw': ...,
                        'Pitch': ...
                    },
                    'Quality': {
                        'Brightness': ...,
                        'Sharpness': ...
                    },
                    'Confidence': ...
                }
            }
        },
    ],
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the celebrity recognition job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of celebrities.

    • Celebrities (list) --

      Array of celebrities recognized in the video.

      • (dict) --

        Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

        • Timestamp (integer) --

          The time, in milliseconds from the start of the video, that the celebrity was recognized.

        • Celebrity (dict) --

          Information about a recognized celebrity.

          • Urls (list) --

            An array of URLs pointing to additional celebrity information.

            • (string) --

          • Name (string) --

            The name of the celebrity.

          • Id (string) --

            The unique identifier for the celebrity.

          • Confidence (float) --

            The confidence, in percentage, that Amazon Rekognition has that the recognized face is the celebrity.

          • BoundingBox (dict) --

            Bounding box around the body of a celebrity.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • Face (dict) --

            Face details for the recognized celebrity.

            • BoundingBox (dict) --

              Bounding box of the face. Default attribute.

              • Width (float) --

                Width of the bounding box as a ratio of the overall image width.

              • Height (float) --

                Height of the bounding box as a ratio of the overall image height.

              • Left (float) --

                Left coordinate of the bounding box as a ratio of overall image width.

              • Top (float) --

                Top coordinate of the bounding box as a ratio of overall image height.

            • AgeRange (dict) --

              The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.

              • Low (integer) --

                The lowest estimated age.

              • High (integer) --

                The highest estimated age.

            • Smile (dict) --

              Indicates whether or not the face is smiling, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is smiling or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Eyeglasses (dict) --

              Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing eye glasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Sunglasses (dict) --

              Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing sunglasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Gender (dict) --

              Gender of the face and the confidence level in the determination.

              • Value (string) --

                Gender of the face.

              • Confidence (float) --

                Level of confidence in the determination.

            • Beard (dict) --

              Indicates whether or not the face has a beard, and the confidence level in the determination.

              • Value (boolean) --

Boolean value that indicates whether the face has a beard or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Mustache (dict) --

              Indicates whether or not the face has a mustache, and the confidence level in the determination.

              • Value (boolean) --

Boolean value that indicates whether the face has a mustache or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • EyesOpen (dict) --

              Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the eyes on the face are open.

              • Confidence (float) --

                Level of confidence in the determination.

            • MouthOpen (dict) --

              Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the mouth on the face is open or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Emotions (list) --

              The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

              • (dict) --

                The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

                • Type (string) --

                  Type of emotion detected.

                • Confidence (float) --

                  Level of confidence in the determination.

            • Landmarks (list) --

              Indicates the location of landmarks on the face. Default attribute.

              • (dict) --

                Indicates the location of the landmark on the face.

                • Type (string) --

                  Type of the landmark.

                • X (float) --

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

                • Y (float) --

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

            • Pose (dict) --

              Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.

              • Roll (float) --

                Value representing the face rotation on the roll axis.

              • Yaw (float) --

                Value representing the face rotation on the yaw axis.

              • Pitch (float) --

                Value representing the face rotation on the pitch axis.

            • Quality (dict) --

              Identifies image brightness and sharpness. Default attribute.

              • Brightness (float) --

                Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.

              • Sharpness (float) --

                Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.

            • Confidence (float) --

              Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

GetContentModeration (updated)
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets the content moderation analysis results for an Amazon Rekognition Video analysis started by StartContentModeration .

Content moderation analysis of a video is an asynchronous operation. You start analysis by calling StartContentModeration , which returns a job identifier ( JobId ). When analysis finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartContentModeration . To get the results of the content moderation analysis, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetContentModeration and pass the job identifier ( JobId ) from the initial call to StartContentModeration .

For more information, see Working with Stored Videos in the Amazon Rekognition Developer Guide.

GetContentModeration returns detected content moderation labels, and the time they are detected, in the ModerationLabels array.

By default, the moderated labels are returned sorted by time, in milliseconds from the start of the video. You can also sort them by moderated label by specifying NAME for the SortBy input parameter.

Since video analysis can return a large number of results, use the MaxResults parameter to limit the number of labels returned in a single call to GetContentModeration . If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetContentModeration and populate the NextToken request parameter with the value of NextToken returned from the previous call to GetContentModeration .

For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
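
A minimal sketch of the call described above, grouping results by label name; the job identifier is a placeholder for a value returned by StartContentModeration :

import boto3

client = boto3.client('rekognition')

# Placeholder JobId from a prior start_content_moderation call.
response = client.get_content_moderation(JobId='your-job-id', SortBy='NAME')

for detection in response['ModerationLabels']:
    label = detection['ModerationLabel']
    print(detection['Timestamp'], label['Name'], label['Confidence'])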

See also: AWS API Documentation

Request Syntax

client.get_content_moderation(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

The identifier for the content moderation job. Use JobId to identify the job in a subsequent call to GetContentModeration .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of content moderation labels.

type SortBy

string

param SortBy

Sort to use for elements in the ModerationLabelDetections array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP .

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'ModerationLabels': [
        {
            'Timestamp': 123,
            'ModerationLabel': {
                'Confidence': ...,
                'Name': 'string',
                'ParentName': 'string'
            }
        },
    ],
    'NextToken': 'string',
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the content moderation job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

      Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration .

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • ModerationLabels (list) --

      The detected moderation labels and the time(s) they were detected.

      • (dict) --

        Information about a moderation label detection in a stored video.

        • Timestamp (integer) --

          Time, in milliseconds from the beginning of the video, that the moderation label was detected.

        • ModerationLabel (dict) --

The moderation label detected by Amazon Rekognition in the stored video.

          • Confidence (float) --

            Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.

            If you don't specify the MinConfidence parameter in the call to DetectModerationLabels , the operation returns labels with a confidence value greater than or equal to 50 percent.

          • Name (string) --

            The label name for the type of content detected in the image.

          • ParentName (string) --

            The name for the parent label. Labels at the top-level of the hierarchy have the parent label "" .

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of moderation labels.

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

GetFaceDetection (updated)
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection .

Face detection with Amazon Rekognition Video is an asynchronous operation. You start face detection by calling StartFaceDetection , which returns a job identifier ( JobId ). When the face detection operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartFaceDetection . To get the results of the face detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetFaceDetection and pass the job identifier ( JobId ) from the initial call to StartFaceDetection .

GetFaceDetection returns an array of detected faces ( Faces ) sorted by the time the faces were detected.

Use the MaxResults parameter to limit the number of faces returned. If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetFaceDetection and populate the NextToken request parameter with the token value returned from the previous call to GetFaceDetection .
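
As an illustration, a simplified sketch that polls JobStatus rather than subscribing to the Amazon SNS topic (the bucket and object names are placeholders; production code should use the SNS notification as described above):

import time

import boto3

client = boto3.client('rekognition')

# Placeholders: substitute your own S3 bucket and video key.
start = client.start_face_detection(
    Video={'S3Object': {'Bucket': 'your-bucket', 'Name': 'your-video.mp4'}}
)
job_id = start['JobId']

# Polling is a simplification of the SNS-based flow described above.
while True:
    response = client.get_face_detection(JobId=job_id)
    if response['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(5)

for face in response['Faces']:
    print(face['Timestamp'], face['Face']['BoundingBox'])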

See also: AWS API Documentation

Request Syntax

client.get_face_detection(
    JobId='string',
    MaxResults=123,
    NextToken='string'
)
type JobId

string

param JobId

[REQUIRED]

Unique identifier for the face detection job. The JobId is returned from StartFaceDetection .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more faces to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'NextToken': 'string',
    'Faces': [
        {
            'Timestamp': 123,
            'Face': {
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'AgeRange': {
                    'Low': 123,
                    'High': 123
                },
                'Smile': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Eyeglasses': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Sunglasses': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Gender': {
                    'Value': 'Male'|'Female',
                    'Confidence': ...
                },
                'Beard': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Mustache': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'EyesOpen': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'MouthOpen': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Emotions': [
                    {
                        'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN',
                        'Confidence': ...
                    },
                ],
                'Landmarks': [
                    {
                        'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil',
                        'X': ...,
                        'Y': ...
                    },
                ],
                'Pose': {
                    'Roll': ...,
                    'Yaw': ...,
                    'Pitch': ...
                },
                'Quality': {
                    'Brightness': ...,
                    'Sharpness': ...
                },
                'Confidence': ...
            }
        },
    ],
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the face detection job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.

    • Faces (list) --

      An array of faces detected in the video. Each element contains a detected face's details and the time, in milliseconds from the start of the video, the face was detected.

      • (dict) --

        Information about a face detected in a video analysis request and the time the face was detected in the video.

        • Timestamp (integer) --

          Time, in milliseconds from the start of the video, that the face was detected.

        • Face (dict) --

          The face properties for the detected face.

          • BoundingBox (dict) --

            Bounding box of the face. Default attribute.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • AgeRange (dict) --

            The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.

            • Low (integer) --

              The lowest estimated age.

            • High (integer) --

              The highest estimated age.

          • Smile (dict) --

            Indicates whether or not the face is smiling, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is smiling or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Eyeglasses (dict) --

            Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is wearing eye glasses or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Sunglasses (dict) --

            Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is wearing sunglasses or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Gender (dict) --

            Gender of the face and the confidence level in the determination.

            • Value (string) --

              Gender of the face.

            • Confidence (float) --

              Level of confidence in the determination.

          • Beard (dict) --

            Indicates whether or not the face has a beard, and the confidence level in the determination.

            • Value (boolean) --

Boolean value that indicates whether the face has a beard or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Mustache (dict) --

            Indicates whether or not the face has a mustache, and the confidence level in the determination.

            • Value (boolean) --

Boolean value that indicates whether the face has a mustache or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • EyesOpen (dict) --

            Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the eyes on the face are open.

            • Confidence (float) --

              Level of confidence in the determination.

          • MouthOpen (dict) --

            Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the mouth on the face is open or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Emotions (list) --

            The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

            • (dict) --

              The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

              • Type (string) --

                Type of emotion detected.

              • Confidence (float) --

                Level of confidence in the determination.

          • Landmarks (list) --

            Indicates the location of landmarks on the face. Default attribute.

            • (dict) --

              Indicates the location of the landmark on the face.

              • Type (string) --

                Type of the landmark.

              • X (float) --

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

              • Y (float) --

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

          • Pose (dict) --

            Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.

            • Roll (float) --

              Value representing the face rotation on the roll axis.

            • Yaw (float) --

              Value representing the face rotation on the yaw axis.

            • Pitch (float) --

              Value representing the face rotation on the pitch axis.

          • Quality (dict) --

            Identifies image brightness and sharpness. Default attribute.

            • Brightness (float) --

              Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.

            • Sharpness (float) --

              Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.

          • Confidence (float) --

            Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

GetFaceSearch (updated)
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'Persons': {'FaceMatches': {'Face': {'AssociationScore': 'float'}}},
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets the face search results for an Amazon Rekognition Video face search started by StartFaceSearch . The search returns faces in a collection that match the faces of persons detected in a video. It also includes the time(s) that faces are matched in the video.

Face search in a video is an asynchronous operation. You start face search by calling StartFaceSearch , which returns a job identifier ( JobId ). When the search operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartFaceSearch . To get the search results, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetFaceSearch and pass the job identifier ( JobId ) from the initial call to StartFaceSearch .

For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

The search results are returned in an array, Persons , of PersonMatch objects. Each PersonMatch element contains details about the matching faces in the input collection, person information (facial attributes, bounding boxes, and person identifier) for the matched person, and the time the person was matched in the video.

Note

GetFaceSearch only returns the default facial attributes ( BoundingBox , Confidence , Landmarks , Pose , and Quality ). The other facial attributes listed in the Face object of the following response syntax are not returned. For more information, see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Persons array is sorted by the time, in milliseconds from the start of the video, persons are matched. You can also sort by person by specifying INDEX for the SortBy input parameter.
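
For instance, a sketch that groups matches by person index; the job identifier is a placeholder for a value returned by StartFaceSearch :

import boto3

client = boto3.client('rekognition')

# Placeholder JobId from a prior start_face_search call.
response = client.get_face_search(JobId='your-job-id', SortBy='INDEX')

for match in response['Persons']:
    person = match['Person']
    for face_match in match.get('FaceMatches', []):
        print(person['Index'], face_match['Face']['FaceId'], face_match['Similarity'])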

See also: AWS API Documentation

Request Syntax

client.get_face_search(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='INDEX'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.

type SortBy

string

param SortBy

Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'NextToken': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'Persons': [
        {
            'Timestamp': 123,
            'Person': {
                'Index': 123,
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'Face': {
                    'BoundingBox': {
                        'Width': ...,
                        'Height': ...,
                        'Left': ...,
                        'Top': ...
                    },
                    'AgeRange': {
                        'Low': 123,
                        'High': 123
                    },
                    'Smile': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Eyeglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Sunglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Gender': {
                        'Value': 'Male'|'Female',
                        'Confidence': ...
                    },
                    'Beard': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Mustache': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'EyesOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'MouthOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Emotions': [
                        {
                            'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN',
                            'Confidence': ...
                        },
                    ],
                    'Landmarks': [
                        {
                            'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil',
                            'X': ...,
                            'Y': ...
                        },
                    ],
                    'Pose': {
                        'Roll': ...,
                        'Yaw': ...,
                        'Pitch': ...
                    },
                    'Quality': {
                        'Brightness': ...,
                        'Sharpness': ...
                    },
                    'Confidence': ...
                }
            },
            'FaceMatches': [
                {
                    'Similarity': ...,
                    'Face': {
                        'FaceId': 'string',
                        'BoundingBox': {
                            'Width': ...,
                            'Height': ...,
                            'Left': ...,
                            'Top': ...
                        },
                        'ImageId': 'string',
                        'ExternalImageId': 'string',
                        'Confidence': ...,
                        'AssociationScore': ...
                    }
                },
            ]
        },
    ],
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the face search job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

    • VideoMetadata (dict) --

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • Persons (list) --

An array of persons, PersonMatch , in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch . Each Persons element includes a time the person was matched, face match details ( FaceMatches ) for matching faces in the collection, and person information ( Person ) for the matched person.

      • (dict) --

Information about a person whose face matches a face(s) in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection ( FaceMatch ), information about the person ( PersonDetail ), and the timestamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch .

        • Timestamp (integer) --

          The time, in milliseconds from the beginning of the video, that the person was matched in the video.

        • Person (dict) --

          Information about the matched person.

          • Index (integer) --

Identifier for the person detected within a video. Use the index to keep track of the person throughout the video. The identifier is not stored by Amazon Rekognition.

          • BoundingBox (dict) --

            Bounding box around the detected person.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • Face (dict) --

            Face details for the detected person.

            • BoundingBox (dict) --

              Bounding box of the face. Default attribute.

              • Width (float) --

                Width of the bounding box as a ratio of the overall image width.

              • Height (float) --

                Height of the bounding box as a ratio of the overall image height.

              • Left (float) --

                Left coordinate of the bounding box as a ratio of overall image width.

              • Top (float) --

                Top coordinate of the bounding box as a ratio of overall image height.

            • AgeRange (dict) --

              The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.

              • Low (integer) --

                The lowest estimated age.

              • High (integer) --

                The highest estimated age.

            • Smile (dict) --

              Indicates whether or not the face is smiling, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is smiling or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Eyeglasses (dict) --

              Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing eye glasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Sunglasses (dict) --

              Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing sunglasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Gender (dict) --

              Gender of the face and the confidence level in the determination.

              • Value (string) --

                Gender of the face.

              • Confidence (float) --

                Level of confidence in the determination.

            • Beard (dict) --

              Indicates whether or not the face has a beard, and the confidence level in the determination.

              • Value (boolean) --

Boolean value that indicates whether the face has a beard or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Mustache (dict) --

              Indicates whether or not the face has a mustache, and the confidence level in the determination.

              • Value (boolean) --

Boolean value that indicates whether the face has a mustache or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • EyesOpen (dict) --

              Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the eyes on the face are open.

              • Confidence (float) --

                Level of confidence in the determination.

            • MouthOpen (dict) --

              Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the mouth on the face is open or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Emotions (list) --

              The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

              • (dict) --

                The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

                • Type (string) --

                  Type of emotion detected.

                • Confidence (float) --

                  Level of confidence in the determination.

            • Landmarks (list) --

              Indicates the location of landmarks on the face. Default attribute.

              • (dict) --

                Indicates the location of the landmark on the face.

                • Type (string) --

                  Type of the landmark.

                • X (float) --

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

                • Y (float) --

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

            • Pose (dict) --

              Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.

              • Roll (float) --

                Value representing the face rotation on the roll axis.

              • Yaw (float) --

                Value representing the face rotation on the yaw axis.

              • Pitch (float) --

                Value representing the face rotation on the pitch axis.

            • Quality (dict) --

              Identifies image brightness and sharpness. Default attribute.

              • Brightness (float) --

                Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.

              • Sharpness (float) --

                Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.

            • Confidence (float) --

              Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.

        • FaceMatches (list) --

          Information about the faces in the input collection that match the face of a person in the video.

          • (dict) --

Provides face metadata. In addition, it provides the confidence in the match of this face with the input face.

            • Similarity (float) --

              Confidence in the match of this face with the input face.

            • Face (dict) --

              Describes the face properties such as the bounding box, face ID, image ID of the source image, and external image ID that you assigned.

              • FaceId (string) --

                Unique identifier that Amazon Rekognition assigns to the face.

              • BoundingBox (dict) --

                Bounding box of the face.

                • Width (float) --

                  Width of the bounding box as a ratio of the overall image width.

                • Height (float) --

                  Height of the bounding box as a ratio of the overall image height.

                • Left (float) --

                  Left coordinate of the bounding box as a ratio of overall image width.

                • Top (float) --

                  Top coordinate of the bounding box as a ratio of overall image height.

              • ImageId (string) --

                Unique identifier that Amazon Rekognition assigns to the input image.

              • ExternalImageId (string) --

                Identifier that you assign to all the faces in the input image.

              • Confidence (float) --

                Confidence level that the bounding box contains a face (and not a different object such as a tree).

              • AssociationScore (float) --

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

GetLabelDetection (updated) Link ¶
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection .

The label detection operation is started by a call to StartLabelDetection , which returns a job identifier ( JobId ). When the label detection operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartLabelDetection . To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetLabelDetection and pass the job identifier ( JobId ) from the initial call to StartLabelDetection .

GetLabelDetection returns an array of detected labels ( Labels ) sorted by the time the labels were detected. You can also sort by the label name by specifying NAME for the SortBy input parameter.

The labels returned include the label name, the percentage confidence in the accuracy of the detected label, and the time the label was detected in the video.

Use the MaxResults parameter to limit the number of labels returned. If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetLabelDetection and populate the NextToken request parameter with the token value returned from the previous call to GetLabelDetection .
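
As a sketch of this pagination flow, the following loop collects every label across pages. It assumes a hypothetical JobId from a completed StartLabelDetection job whose status published to the Amazon SNS topic was SUCCEEDED :

import boto3

client = boto3.client('rekognition')

labels = []
kwargs = {'JobId': 'job-id', 'MaxResults': 1000, 'SortBy': 'TIMESTAMP'}  # 'job-id' is a placeholder
while True:
    response = client.get_label_detection(**kwargs)
    labels.extend(response['Labels'])
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']

# Each element pairs a label with the time (milliseconds from the start
# of the video) at which it was detected.
for detection in labels:
    print(detection['Timestamp'], detection['Label']['Name'])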

See also: AWS API Documentation

Request Syntax

client.get_label_detection(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.

type SortBy

string

param SortBy

Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP .

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'NextToken': 'string',
    'Labels': [
        {
            'Timestamp': 123,
            'Label': {
                'Name': 'string',
                'Confidence': ...
            }
        },
    ],
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the label detection job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.

    • Labels (list) --

      An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.

      • (dict) --

        Information about a label detected in a video analysis request and the time the label was detected in the video.

        • Timestamp (integer) --

          Time, in milliseconds from the start of the video, that the label was detected.

        • Label (dict) --

          Details about the detected label.

          • Name (string) --

            The name (label) of the object.

          • Confidence (float) --

            Level of confidence.

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

GetPersonTracking (updated) Link ¶
Changes (response)
{'BillableDurationSeconds': 'integer',
 'ErrorCode': 'string',
 'VideoMetadata': {'Rotation': 'integer'},
 'Warnings': [{'ErrorCode': 'string',
               'Message': 'string',
               'Sections': [{'EndTimestamp': 'long',
                             'StartTimestamp': 'long'}]}]}

Gets the person tracking results of an Amazon Rekognition Video analysis started by StartPersonTracking .

The person detection operation is started by a call to StartPersonTracking , which returns a job identifier ( JobId ). When the person detection operation finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartPersonTracking .

To get the results of the person tracking operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetPersonTracking and pass the job identifier ( JobId ) from the initial call to StartPersonTracking .

GetPersonTracking returns an array, Persons , of tracked persons and the time(s) they were tracked in the video.

Note

GetPersonTracking only returns the default facial attributes ( BoundingBox , Confidence , Landmarks , Pose , and Quality ). The other facial attributes listed in the Face object of the following response syntax are not returned.

For more information, see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the array is sorted by the time(s) a person is tracked in the video. You can sort by tracked persons by specifying INDEX for the SortBy input parameter.

Use the MaxResults parameter to limit the number of items returned. If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetPersonTracking and populate the NextToken request parameter with the token value returned from the previous call to GetPersonTracking .
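
As a sketch under the same assumptions (a hypothetical JobId from a completed StartPersonTracking job), this loop pages through all tracked persons, grouped per person via SortBy='INDEX' :

import boto3

client = boto3.client('rekognition')

persons = []
kwargs = {'JobId': 'job-id', 'MaxResults': 1000, 'SortBy': 'INDEX'}  # 'job-id' is a placeholder
while True:
    response = client.get_person_tracking(**kwargs)
    persons.extend(response['Persons'])
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']

# Each element pairs a person Index with a Timestamp at which that person was tracked.
for detection in persons:
    print(detection['Person']['Index'], detection['Timestamp'])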

See also: AWS API Documentation

Request Syntax

client.get_person_tracking(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='INDEX'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

The identifier for a job that tracks persons in a video. You get the JobId from a call to StartPersonTracking .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more persons to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of persons.

type SortBy

string

param SortBy

Sort to use for elements in the Persons array. Use TIMESTAMP to sort array elements by the time persons are detected. Use INDEX to sort by the tracked persons. If you sort by INDEX , the array elements for each person are sorted by detection confidence. The default sort is by TIMESTAMP .

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'Rotation': 123
    },
    'NextToken': 'string',
    'Persons': [
        {
            'Timestamp': 123,
            'Person': {
                'Index': 123,
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'Face': {
                    'BoundingBox': {
                        'Width': ...,
                        'Height': ...,
                        'Left': ...,
                        'Top': ...
                    },
                    'AgeRange': {
                        'Low': 123,
                        'High': 123
                    },
                    'Smile': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Eyeglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Sunglasses': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Gender': {
                        'Value': 'Male'|'Female',
                        'Confidence': ...
                    },
                    'Beard': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Mustache': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'EyesOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'MouthOpen': {
                        'Value': True|False,
                        'Confidence': ...
                    },
                    'Emotions': [
                        {
                            'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN',
                            'Confidence': ...
                        },
                    ],
                    'Landmarks': [
                        {
                            'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil',
                            'X': ...,
                            'Y': ...
                        },
                    ],
                    'Pose': {
                        'Roll': ...,
                        'Yaw': ...,
                        'Pitch': ...
                    },
                    'Quality': {
                        'Brightness': ...,
                        'Sharpness': ...
                    },
                    'Confidence': ...
                }
            }
        },
    ],
    'BillableDurationSeconds': 123,
    'ErrorCode': 'string',
    'Warnings': [
        {
            'ErrorCode': 'string',
            'Message': 'string',
            'Sections': [
                {
                    'StartTimestamp': 123,
                    'EndTimestamp': 123
                },
            ]
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the person tracking job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • Rotation (integer) --

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of persons.

    • Persons (list) --

      An array of the persons detected in the video and the times they are tracked throughout the video. An array element will exist for each time the person is tracked.

      • (dict) --

        Details and tracking information for a single time a person is tracked in a video. Amazon Rekognition operations that track persons return an array of PersonDetection objects with elements for each time a person is tracked in a video.

        For more information, see API_GetPersonTracking in the Amazon Rekognition Developer Guide.

        • Timestamp (integer) --

          The time, in milliseconds from the start of the video, that the person was tracked.

        • Person (dict) --

          Details about a person tracked in a video.

          • Index (integer) --

Identifier for the person detected within a video. Use this value to keep track of the person throughout the video. The identifier is not stored by Amazon Rekognition.

          • BoundingBox (dict) --

            Bounding box around the detected person.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • Face (dict) --

            Face details for the detected person.

            • BoundingBox (dict) --

              Bounding box of the face. Default attribute.

              • Width (float) --

                Width of the bounding box as a ratio of the overall image width.

              • Height (float) --

                Height of the bounding box as a ratio of the overall image height.

              • Left (float) --

                Left coordinate of the bounding box as a ratio of overall image width.

              • Top (float) --

                Top coordinate of the bounding box as a ratio of overall image height.

            • AgeRange (dict) --

              The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.

              • Low (integer) --

                The lowest estimated age.

              • High (integer) --

                The highest estimated age.

            • Smile (dict) --

              Indicates whether or not the face is smiling, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is smiling or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Eyeglasses (dict) --

              Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing eye glasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Sunglasses (dict) --

              Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face is wearing sunglasses or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Gender (dict) --

              Gender of the face and the confidence level in the determination.

              • Value (string) --

                Gender of the face.

              • Confidence (float) --

                Level of confidence in the determination.

            • Beard (dict) --

              Indicates whether or not the face has a beard, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face has beard or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Mustache (dict) --

              Indicates whether or not the face has a mustache, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the face has mustache or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • EyesOpen (dict) --

              Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the eyes on the face are open.

              • Confidence (float) --

                Level of confidence in the determination.

            • MouthOpen (dict) --

              Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

              • Value (boolean) --

                Boolean value that indicates whether the mouth on the face is open or not.

              • Confidence (float) --

                Level of confidence in the determination.

            • Emotions (list) --

              The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

              • (dict) --

                The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

                • Type (string) --

                  Type of emotion detected.

                • Confidence (float) --

                  Level of confidence in the determination.

            • Landmarks (list) --

              Indicates the location of landmarks on the face. Default attribute.

              • (dict) --

                Indicates the location of the landmark on the face.

                • Type (string) --

                  Type of the landmark.

                • X (float) --

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

                • Y (float) --

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

            • Pose (dict) --

              Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.

              • Roll (float) --

                Value representing the face rotation on the roll axis.

              • Yaw (float) --

                Value representing the face rotation on the yaw axis.

              • Pitch (float) --

                Value representing the face rotation on the pitch axis.

            • Quality (dict) --

              Identifies image brightness and sharpness. Default attribute.

              • Brightness (float) --

                Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.

              • Sharpness (float) --

                Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.

            • Confidence (float) --

              Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.

    • BillableDurationSeconds (integer) --

    • ErrorCode (string) --

    • Warnings (list) --

      • (dict) --

        • ErrorCode (string) --

        • Message (string) --

        • Sections (list) --

          • (dict) --

            • StartTimestamp (integer) --

            • EndTimestamp (integer) --

IndexFaces (updated) Link ¶
Changes (response)
{'FaceRecords': {'Face': {'AssociationScore': 'float'}}}

Detects faces in the input image and adds them to the specified collection.

Amazon Rekognition does not save the actual faces detected. Instead, the underlying detection algorithm first detects the faces in the input image, extracts facial features into a feature vector for each face, and stores the vector in the back-end database. Amazon Rekognition uses these feature vectors when performing face match and search operations with the SearchFaces and SearchFacesByImage operations.

If you are using version 1.0 of the face detection model, IndexFaces indexes the 15 largest faces in the input image. Later versions of the face detection model index the 100 largest faces in the input image. To determine which version of the model you are using, check the value of FaceModelVersion in the response from IndexFaces .

For more information, see Model Versioning in the Amazon Rekognition Developer Guide.

If you provide the optional ExternalImageId for the input image, Amazon Rekognition associates this ID with all faces that it detects. When you call the ListFaces operation, the response returns the external ID. You can use this external image ID to create a client-side index to associate the faces with each image. You can then use the index to find all faces in an image.

In response, the operation returns an array of metadata for all detected faces. This includes the bounding box of the detected face, a confidence value (indicating that the bounding box contains a face), a face ID assigned by the service for each face that is detected and stored, and an image ID assigned by the service for the input image. If you request all facial attributes (using the detectionAttributes parameter), Amazon Rekognition returns detailed facial attributes such as facial landmarks (for example, location of eye and mouth) and other facial attributes such as gender. If you provide the same image, specify the same collection, and use the same external ID in the IndexFaces operation, Amazon Rekognition doesn't save duplicate face metadata.

For more information, see Adding Faces to a Collection in the Amazon Rekognition Developer Guide.

The input image is passed either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

This operation requires permissions to perform the rekognition:IndexFaces action.
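
A minimal sketch of an IndexFaces call against an S3-hosted image; the collection ID, bucket, and object key are placeholders:

import boto3

client = boto3.client('rekognition')

response = client.index_faces(
    CollectionId='my-collection',
    Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'photo.jpg'}},
    ExternalImageId='photo.jpg',      # optional ID associated with all detected faces
    DetectionAttributes=['DEFAULT']   # return only the default facial attributes
)

for record in response['FaceRecords']:
    face = record['Face']
    print(face['FaceId'], face['Confidence'])
print(response['FaceModelVersion'])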

See also: AWS API Documentation

Request Syntax

client.index_faces(
    CollectionId='string',
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    ExternalImageId='string',
    DetectionAttributes=[
        'DEFAULT'|'ALL',
    ]
)
type CollectionId

string

param CollectionId

[REQUIRED]

The ID of an existing collection to which you want to add the faces that are detected in the input images.

type Image

dict

param Image

[REQUIRED]

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

  • Bytes (bytes) --

    Blob of image bytes up to 5 MBs.

  • S3Object (dict) --

    Identifies an S3 object as the image source.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket is versioning enabled, you can specify the object version.

type ExternalImageId

string

param ExternalImageId

ID you want to assign to all the faces detected in the image.

type DetectionAttributes

list

param DetectionAttributes

An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"] , the API returns the following subset of facial attributes: BoundingBox , Confidence , Pose , Quality and Landmarks . If you provide ["ALL"] , all facial attributes are returned but the operation will take longer to complete.

If you provide both ["ALL", "DEFAULT"] , the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

  • (string) --

rtype

dict

returns

Response Syntax

{
    'FaceRecords': [
        {
            'Face': {
                'FaceId': 'string',
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'ImageId': 'string',
                'ExternalImageId': 'string',
                'Confidence': ...,
                'AssociationScore': ...
            },
            'FaceDetail': {
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'AgeRange': {
                    'Low': 123,
                    'High': 123
                },
                'Smile': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Eyeglasses': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Sunglasses': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Gender': {
                    'Value': 'Male'|'Female',
                    'Confidence': ...
                },
                'Beard': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Mustache': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'EyesOpen': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'MouthOpen': {
                    'Value': True|False,
                    'Confidence': ...
                },
                'Emotions': [
                    {
                        'Type': 'HAPPY'|'SAD'|'ANGRY'|'CONFUSED'|'DISGUSTED'|'SURPRISED'|'CALM'|'UNKNOWN',
                        'Confidence': ...
                    },
                ],
                'Landmarks': [
                    {
                        'Type': 'eyeLeft'|'eyeRight'|'nose'|'mouthLeft'|'mouthRight'|'leftEyeBrowLeft'|'leftEyeBrowRight'|'leftEyeBrowUp'|'rightEyeBrowLeft'|'rightEyeBrowRight'|'rightEyeBrowUp'|'leftEyeLeft'|'leftEyeRight'|'leftEyeUp'|'leftEyeDown'|'rightEyeLeft'|'rightEyeRight'|'rightEyeUp'|'rightEyeDown'|'noseLeft'|'noseRight'|'mouthUp'|'mouthDown'|'leftPupil'|'rightPupil',
                        'X': ...,
                        'Y': ...
                    },
                ],
                'Pose': {
                    'Roll': ...,
                    'Yaw': ...,
                    'Pitch': ...
                },
                'Quality': {
                    'Brightness': ...,
                    'Sharpness': ...
                },
                'Confidence': ...
            }
        },
    ],
    'OrientationCorrection': 'ROTATE_0'|'ROTATE_90'|'ROTATE_180'|'ROTATE_270',
    'FaceModelVersion': 'string'
}

Response Structure

  • (dict) --

    • FaceRecords (list) --

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

      • (dict) --

        Object containing both the face metadata (stored in the back-end database) and facial attributes that are detected but aren't stored in the database.

        • Face (dict) --

          Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

          • FaceId (string) --

            Unique identifier that Amazon Rekognition assigns to the face.

          • BoundingBox (dict) --

            Bounding box of the face.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • ImageId (string) --

            Unique identifier that Amazon Rekognition assigns to the input image.

          • ExternalImageId (string) --

            Identifier that you assign to all the faces in the input image.

          • Confidence (float) --

            Confidence level that the bounding box contains a face (and not a different object such as a tree).

          • AssociationScore (float) --

        • FaceDetail (dict) --

          Structure containing attributes of the face that the algorithm detected.

          • BoundingBox (dict) --

            Bounding box of the face. Default attribute.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • AgeRange (dict) --

            The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.

            • Low (integer) --

              The lowest estimated age.

            • High (integer) --

              The highest estimated age.

          • Smile (dict) --

            Indicates whether or not the face is smiling, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is smiling or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Eyeglasses (dict) --

            Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is wearing eye glasses or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Sunglasses (dict) --

            Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face is wearing sunglasses or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Gender (dict) --

            Gender of the face and the confidence level in the determination.

            • Value (string) --

              Gender of the face.

            • Confidence (float) --

              Level of confidence in the determination.

          • Beard (dict) --

            Indicates whether or not the face has a beard, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face has beard or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Mustache (dict) --

            Indicates whether or not the face has a mustache, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the face has mustache or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • EyesOpen (dict) --

            Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the eyes on the face are open.

            • Confidence (float) --

              Level of confidence in the determination.

          • MouthOpen (dict) --

            Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

            • Value (boolean) --

              Boolean value that indicates whether the mouth on the face is open or not.

            • Confidence (float) --

              Level of confidence in the determination.

          • Emotions (list) --

            The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

            • (dict) --

              The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.

              • Type (string) --

                Type of emotion detected.

              • Confidence (float) --

                Level of confidence in the determination.

          • Landmarks (list) --

            Indicates the location of landmarks on the face. Default attribute.

            • (dict) --

              Indicates the location of the landmark on the face.

              • Type (string) --

                Type of the landmark.

              • X (float) --

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

              • Y (float) --

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

          • Pose (dict) --

            Indicates the pose of the face as determined by its pitch, roll, and yaw. Default attribute.

            • Roll (float) --

              Value representing the face rotation on the roll axis.

            • Yaw (float) --

              Value representing the face rotation on the yaw axis.

            • Pitch (float) --

              Value representing the face rotation on the pitch axis.

          • Quality (dict) --

            Identifies image brightness and sharpness. Default attribute.

            • Brightness (float) --

              Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.

            • Sharpness (float) --

              Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.

          • Confidence (float) --

            Confidence level that the bounding box contains a face (and not a different object such as a tree). Default attribute.

    • OrientationCorrection (string) --

      The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct image orientation. The bounding box coordinates returned in FaceRecords represent face locations before the image orientation is corrected.

      Note

      If the input image is in jpeg format, it might contain exchangeable image (Exif) metadata. If so, and the Exif metadata populates the orientation field, the value of OrientationCorrection is null and the bounding box coordinates in FaceRecords represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.

    • FaceModelVersion (string) --

      Version number of the face detection model associated with the input collection ( CollectionId ).

ListFaces (updated) Link ¶
Changes (response)
{'Faces': {'AssociationScore': 'float'}}

Returns metadata for faces in the specified collection. This metadata includes information such as the bounding box coordinates, the confidence (that the bounding box contains a face), and face ID. For an example, see Listing Faces in a Collection in the Amazon Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:ListFaces action.
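
A minimal sketch of paging through every face in a collection; 'my-collection' is a placeholder ID:

import boto3

client = boto3.client('rekognition')

kwargs = {'CollectionId': 'my-collection', 'MaxResults': 100}
while True:
    response = client.list_faces(**kwargs)
    for face in response['Faces']:
        print(face['FaceId'], face.get('ExternalImageId'))
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']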

See also: AWS API Documentation

Request Syntax

client.list_faces(
    CollectionId='string',
    NextToken='string',
    MaxResults=123
)
type CollectionId

string

param CollectionId

[REQUIRED]

ID of the collection from which to list the faces.

type NextToken

string

param NextToken

If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.

type MaxResults

integer

param MaxResults

Maximum number of faces to return.

rtype

dict

returns

Response Syntax

{
    'Faces': [
        {
            'FaceId': 'string',
            'BoundingBox': {
                'Width': ...,
                'Height': ...,
                'Left': ...,
                'Top': ...
            },
            'ImageId': 'string',
            'ExternalImageId': 'string',
            'Confidence': ...,
            'AssociationScore': ...
        },
    ],
    'NextToken': 'string',
    'FaceModelVersion': 'string'
}

Response Structure

  • (dict) --

    • Faces (list) --

      An array of Face objects.

      • (dict) --

        Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

        • FaceId (string) --

          Unique identifier that Amazon Rekognition assigns to the face.

        • BoundingBox (dict) --

          Bounding box of the face.

          • Width (float) --

            Width of the bounding box as a ratio of the overall image width.

          • Height (float) --

            Height of the bounding box as a ratio of the overall image height.

          • Left (float) --

            Left coordinate of the bounding box as a ratio of overall image width.

          • Top (float) --

            Top coordinate of the bounding box as a ratio of overall image height.

        • ImageId (string) --

          Unique identifier that Amazon Rekognition assigns to the input image.

        • ExternalImageId (string) --

          Identifier that you assign to all the faces in the input image.

        • Confidence (float) --

          Confidence level that the bounding box contains a face (and not a different object such as a tree).

        • AssociationScore (float) --

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.

    • FaceModelVersion (string) --

      Version number of the face detection model associated with the input collection ( CollectionId ).

SearchFaces (updated) Link ¶
Changes (response)
{'FaceMatches': {'Face': {'AssociationScore': 'float'}}}

For a given input face ID, searches for matching faces in the collection the face belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with faces in the specified collection.

Note

You can also search faces without indexing faces by using the SearchFacesByImage operation.

The operation response returns an array of faces that match, ordered by similarity score with the highest similarity first. More specifically, it is an array of metadata for each face match that is found. Along with the metadata, the response also includes a confidence value for each face match, indicating the confidence that the specific face matches the input face.

For an example, see Searching for a Face Using Its Face ID in the Amazon Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:SearchFaces action.
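
A minimal sketch of searching a collection by face ID; the collection ID and face ID are placeholders (use a FaceId returned by IndexFaces or ListFaces ):

import boto3

client = boto3.client('rekognition')

response = client.search_faces(
    CollectionId='my-collection',
    FaceId='11111111-2222-3333-4444-555555555555',
    MaxFaces=10,
    FaceMatchThreshold=70.0   # drop matches with similarity confidence below 70%
)

# Matches are ordered by similarity, highest first.
for match in response['FaceMatches']:
    print(match['Face']['FaceId'], match['Similarity'])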

See also: AWS API Documentation

Request Syntax

client.search_faces(
    CollectionId='string',
    FaceId='string',
    MaxFaces=123,
    FaceMatchThreshold=...
)
type CollectionId

string

param CollectionId

[REQUIRED]

ID of the collection the face belongs to.

type FaceId

string

param FaceId

[REQUIRED]

ID of a face to find matches for in the collection.

type MaxFaces

integer

param MaxFaces

Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.

type FaceMatchThreshold

float

param FaceMatchThreshold

Optional value specifying the minimum confidence in the face match to return. For example, don't return any matches where confidence in matches is less than 70%.

rtype

dict

returns

Response Syntax

{
    'SearchedFaceId': 'string',
    'FaceMatches': [
        {
            'Similarity': ...,
            'Face': {
                'FaceId': 'string',
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'ImageId': 'string',
                'ExternalImageId': 'string',
                'Confidence': ...,
                'AssociationScore': ...
            }
        },
    ],
    'FaceModelVersion': 'string'
}

Response Structure

  • (dict) --

    • SearchedFaceId (string) --

      ID of the face that was searched for matches in a collection.

    • FaceMatches (list) --

      An array of faces that matched the input face, along with the confidence in the match.

      • (dict) --

        Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

        • Similarity (float) --

          Confidence in the match of this face with the input face.

        • Face (dict) --

          Describes the face properties such as the bounding box, face ID, image ID of the source image, and external image ID that you assigned.

          • FaceId (string) --

            Unique identifier that Amazon Rekognition assigns to the face.

          • BoundingBox (dict) --

            Bounding box of the face.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • ImageId (string) --

            Unique identifier that Amazon Rekognition assigns to the input image.

          • ExternalImageId (string) --

            Identifier that you assign to all the faces in the input image.

          • Confidence (float) --

            Confidence level that the bounding box contains a face (and not a different object such as a tree).

          • AssociationScore (float) --

    • FaceModelVersion (string) --

      Version number of the face detection model associated with the input collection ( CollectionId ).

SearchFacesByImage (updated) Link ¶
Changes (response)
{'FaceMatches': {'Face': {'AssociationScore': 'float'}}}

For a given input image, first detects the largest face in the image, and then searches the specified collection for matching faces. The operation compares the features of the input face with faces in the specified collection.

Note

To search for all faces in an input image, you might first call the IndexFaces operation, and then use the face IDs returned in subsequent calls to the SearchFaces operation.

You can also call the DetectFaces operation and use the bounding boxes in the response to make face crops, which you can then pass in to the SearchFacesByImage operation.

You pass the input image either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

The response returns an array of faces that match, ordered by similarity score with the highest similarity first. More specifically, it is an array of metadata for each face match found. Along with the metadata, the response also includes a similarity indicating how similar the face is to the input face. In the response, the operation also returns the bounding box (and a confidence level that the bounding box contains a face) of the face that Amazon Rekognition used for the input image.

For an example, see Searching for a Face Using an Image in the Amazon Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:SearchFacesByImage action.
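
A minimal sketch of searching a collection with an S3-hosted query image; the collection ID, bucket, and object key are placeholders:

import boto3

client = boto3.client('rekognition')

response = client.search_faces_by_image(
    CollectionId='my-collection',
    Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'query.jpg'}},
    MaxFaces=5,
    FaceMatchThreshold=80.0
)

# Confidence that the largest detected face in the query image is a face.
print(response['SearchedFaceConfidence'])
for match in response['FaceMatches']:
    print(match['Face']['FaceId'], match['Similarity'])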

See also: AWS API Documentation

Request Syntax

client.search_faces_by_image(
    CollectionId='string',
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    MaxFaces=123,
    FaceMatchThreshold=...
)
type CollectionId

string

param CollectionId

[REQUIRED]

ID of the collection to search.

type Image

dict

param Image

[REQUIRED]

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

  • Bytes (bytes) --

    Blob of image bytes up to 5 MBs.

  • S3Object (dict) --

    Identifies an S3 object as the image source.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket is versioning enabled, you can specify the object version.

type MaxFaces

integer

param MaxFaces

Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.

type FaceMatchThreshold

float

param FaceMatchThreshold

(Optional) Specifies the minimum confidence in the face match to return. For example, don't return any matches where confidence in matches is less than 70%.

rtype

dict

returns

Response Syntax

{
    'SearchedFaceBoundingBox': {
        'Width': ...,
        'Height': ...,
        'Left': ...,
        'Top': ...
    },
    'SearchedFaceConfidence': ...,
    'FaceMatches': [
        {
            'Similarity': ...,
            'Face': {
                'FaceId': 'string',
                'BoundingBox': {
                    'Width': ...,
                    'Height': ...,
                    'Left': ...,
                    'Top': ...
                },
                'ImageId': 'string',
                'ExternalImageId': 'string',
                'Confidence': ...,
                'AssociationScore': ...
            }
        },
    ],
    'FaceModelVersion': 'string'
}

Response Structure

  • (dict) --

    • SearchedFaceBoundingBox (dict) --

      The bounding box around the face in the input image that Amazon Rekognition used for the search.

      • Width (float) --

        Width of the bounding box as a ratio of the overall image width.

      • Height (float) --

        Height of the bounding box as a ratio of the overall image height.

      • Left (float) --

        Left coordinate of the bounding box as a ratio of overall image width.

      • Top (float) --

        Top coordinate of the bounding box as a ratio of overall image height.

    • SearchedFaceConfidence (float) --

The level of confidence that the searchedFaceBoundingBox contains a face.

    • FaceMatches (list) --

      An array of faces that match the input face, along with the confidence in the match.

      • (dict) --

        Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

        • Similarity (float) --

          Confidence in the match of this face with the input face.

        • Face (dict) --

          Describes the face properties such as the bounding box, face ID, image ID of the source image, and external image ID that you assigned.

          • FaceId (string) --

            Unique identifier that Amazon Rekognition assigns to the face.

          • BoundingBox (dict) --

            Bounding box of the face.

            • Width (float) --

              Width of the bounding box as a ratio of the overall image width.

            • Height (float) --

              Height of the bounding box as a ratio of the overall image height.

            • Left (float) --

              Left coordinate of the bounding box as a ratio of overall image width.

            • Top (float) --

              Top coordinate of the bounding box as a ratio of overall image height.

          • ImageId (string) --

            Unique identifier that Amazon Rekognition assigns to the input image.

          • ExternalImageId (string) --

            Identifier that you assign to all the faces in the input image.

          • Confidence (float) --

            Confidence level that the bounding box contains a face (and not a different object such as a tree).

          • AssociationScore (float) --

    • FaceModelVersion (string) --

      Version number of the face detection model associated with the input collection ( CollectionId ).

StartCelebrityRecognition (updated) Link ¶
Changes (request)
{'EnablePersonTracking': 'boolean'}

Starts asynchronous recognition of celebrities in a stored video.

Amazon Rekognition Video can detect celebrities in a video. The video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name and the filename of the video. StartCelebrityRecognition returns a job identifier ( JobId ) which you use to get the results of the analysis. When celebrity recognition analysis is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic that you specify in NotificationChannel . To get the results of the celebrity recognition analysis, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetCelebrityRecognition and pass the job identifier ( JobId ) from the initial call to StartCelebrityRecognition .

For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide.
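
A minimal sketch of starting a job with the new EnablePersonTracking request parameter; the bucket, object key, topic ARN, and role ARN are placeholders (the role must allow Amazon Rekognition to publish to the topic):

import boto3

client = boto3.client('rekognition')

response = client.start_celebrity_recognition(
    Video={'S3Object': {'Bucket': 'my-bucket', 'Name': 'video.mp4'}},
    NotificationChannel={
        'SNSTopicArn': 'arn:aws:sns:us-east-1:111122223333:rekognition-status',
        'RoleArn': 'arn:aws:iam::111122223333:role/rekognition-sns'
    },
    EnablePersonTracking=True,   # request parameter added in this release
    JobTag='celebrity-demo'
)

print(response['JobId'])   # pass to GetCelebrityRecognition once the SNS status is SUCCEEDED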

See also: AWS API Documentation

Request Syntax

client.start_celebrity_recognition(
    Video={
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    ClientRequestToken='string',
    NotificationChannel={
        'SNSTopicArn': 'string',
        'RoleArn': 'string'
    },
    EnablePersonTracking=True|False,
    JobTag='string'
)
type Video

dict

param Video

[REQUIRED]

The video in which you want to recognize celebrities. The video must be stored in an Amazon S3 bucket.

  • S3Object (dict) --

    The Amazon S3 bucket name and file name for the video.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket is versioning enabled, you can specify the object version.

type ClientRequestToken

string

param ClientRequestToken

Idempotent token used to identify the start request. If you use the same token with multiple StartCelebrityRecognition requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.

type NotificationChannel

dict

param NotificationChannel

The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the celebrity recognition analysis to.

  • SNSTopicArn (string) -- [REQUIRED]

The Amazon SNS topic to which Amazon Rekognition posts the completion status.

  • RoleArn (string) -- [REQUIRED]

    The ARN of an IAM role that gives Amazon Rekognition publishing permissions to the Amazon SNS topic.

type EnablePersonTracking

boolean

param EnablePersonTracking

type JobTag

string

param JobTag

Unique identifier you specify to identify the job in the completion status published to the Amazon Simple Notification Service topic.

rtype

dict

returns

Response Syntax

{
    'JobId': 'string'
}

Response Structure

  • (dict) --

    • JobId (string) --

      The identifier for the celebrity recognition analysis job. Use JobId to identify the job in a subsequent call to GetCelebrityRecognition .

StartFaceSearch (updated) Link ¶
Changes (request)
{'EnablePersonTracking': 'boolean'}

Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video.

The video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name and the filename of the video. StartFaceSearch returns a job identifier ( JobId ) which you use to get the search results once the search has completed. When searching is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic that you specify in NotificationChannel . To get the search results, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetFaceSearch and pass the job identifier ( JobId ) from the initial call to StartFaceSearch . For more information, see procedure-person-search-videos.
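
A minimal sketch of starting a face search with the new EnablePersonTracking request parameter; the bucket, object key, collection ID, and ARNs are placeholders:

import boto3

client = boto3.client('rekognition')

response = client.start_face_search(
    Video={'S3Object': {'Bucket': 'my-bucket', 'Name': 'video.mp4'}},
    CollectionId='my-collection',
    FaceMatchThreshold=80.0,
    EnablePersonTracking=True,   # request parameter added in this release
    NotificationChannel={
        'SNSTopicArn': 'arn:aws:sns:us-east-1:111122223333:rekognition-status',
        'RoleArn': 'arn:aws:iam::111122223333:role/rekognition-sns'
    }
)

print(response['JobId'])   # pass to GetFaceSearch once the SNS status is SUCCEEDED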

See also: AWS API Documentation

Request Syntax

client.start_face_search(
    Video={
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    ClientRequestToken='string',
    FaceMatchThreshold=...,
    CollectionId='string',
    EnablePersonTracking=True|False,
    NotificationChannel={
        'SNSTopicArn': 'string',
        'RoleArn': 'string'
    },
    JobTag='string'
)
type Video

dict

param Video

[REQUIRED]

The video you want to search. The video must be stored in an Amazon S3 bucket.

  • S3Object (dict) --

    The Amazon S3 bucket name and file name for the video.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket is versioning enabled, you can specify the object version.

type ClientRequestToken

string

param ClientRequestToken

Idempotent token used to identify the start request. If you use the same token with multiple StartFaceSearch requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.

type FaceMatchThreshold

float

param FaceMatchThreshold

The minimum confidence in the person match to return. For example, don't return any matches where confidence in matches is less than 70%.

type CollectionId

string

param CollectionId

[REQUIRED]

ID of the collection that contains the faces you want to search for.

type EnablePersonTracking

boolean

param EnablePersonTracking

type NotificationChannel

dict

param NotificationChannel

The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the search.

  • SNSTopicArn (string) -- [REQUIRED]

The Amazon SNS topic to which Amazon Rekognition posts the completion status.

  • RoleArn (string) -- [REQUIRED]

    The ARN of an IAM role that gives Amazon Rekognition publishing permissions to the Amazon SNS topic.

type JobTag

string

param JobTag

Unique identifier you specify to identify the job in the completion status published to the Amazon Simple Notification Service topic.

rtype

dict

returns

Response Syntax

{
    'JobId': 'string'
}

Response Structure

  • (dict) --

    • JobId (string) --

      The identifier for the search job. Use JobId to identify the job in a subsequent call to GetFaceSearch .