Amazon Rekognition

2022/12/12 - Amazon Rekognition - 2 updated api methods

Changes  Adds support for "aliases" and "categories", inclusion and exclusion filters for labels and label categories, and aggregating labels by video segment timestamps for Stored Video Label Detection APIs.

GetLabelDetection (updated) Link ¶
Changes (request, response)
Request
{'AggregateBy': 'TIMESTAMPS | SEGMENTS'}
Response
{'Labels': {'DurationMillis': 'long',
            'EndTimestampMillis': 'long',
            'StartTimestampMillis': 'long'}}

Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection.

The label detection operation is started by a call to StartLabelDetection which returns a job identifier ( JobId ). When the label detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartLabelDetection .

To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetLabelDetection and pass the job identifier ( JobId ) from the initial call to StartLabelDetection .

GetLabelDetection returns an array of detected labels ( Labels ) sorted by the time the labels were detected. You can also sort by the label name by specifying NAME for the SortBy input parameter. If there is no NAME specified, the default sort is by timestamp.

You can select how results are aggregated by using the AggregateBy input parameter. The default aggregation method is TIMESTAMPS . You can also aggregate by SEGMENTS , which aggregates all instances of labels detected in a given segment.
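For example, the following sketch requests results aggregated by segment. It assumes boto3 credentials are configured and that job_id holds the identifier returned by an earlier call to StartLabelDetection; the variable names are illustrative only.

import boto3

rekognition = boto3.client('rekognition')

response = rekognition.get_label_detection(
    JobId=job_id,
    AggregateBy='SEGMENTS',   # or 'TIMESTAMPS' (the default)
    SortBy='NAME'             # optional; the default sort is by timestamp
)
print(response['JobStatus'], response.get('LabelModelVersion'))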

The returned Labels array may include the following attributes:

  • Name - The name of the detected label.

  • Confidence - The level of confidence in the label assigned to a detected object.

  • Parents - The ancestor labels for a detected label. GetLabelDetection returns a hierarchical taxonomy of detected labels. For example, a detected car might be assigned the label car. The label car has two parent labels: Vehicle (its parent) and Transportation (its grandparent). The response includes all ancestors for a label, where every ancestor is a unique label. In the previous example, Car, Vehicle, and Transportation are returned as unique labels in the response.

  • Aliases - Possible Aliases for the label.

  • Categories - The label categories that the detected label belongs to.

  • BoundingBox - Bounding boxes are described for all instances of detected common object labels, returned in an array of Instance objects. An Instance object contains a BoundingBox object, describing the location of the label on the input image. It also includes the confidence for the accuracy of the detected bounding box.

  • Timestamp - Time, in milliseconds from the start of the video, that the label was detected. For aggregation by SEGMENTS , the StartTimestampMillis , EndTimestampMillis , and DurationMillis structures are what define a segment. Although the “Timestamp” structure is still returned with each label, its value is set to be the same as StartTimestampMillis .

Timestamp and bounding box information is returned for detected instances only if aggregation is done by TIMESTAMPS . If you aggregate by SEGMENTS , information about detected instances isn't returned.
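As a rough illustration of that difference, the loop below reuses the response from the earlier sketch and prints segment bounds when aggregating by SEGMENTS, or per-timestamp instance counts otherwise. The presence check on DurationMillis is an assumption made for illustration; in practice, rely on the AggregateBy value you passed in the request.

for entry in response['Labels']:
    label = entry['Label']
    if 'DurationMillis' in entry:
        # SEGMENTS aggregation: Timestamp mirrors StartTimestampMillis
        print(f"{label['Name']}: {entry['StartTimestampMillis']} ms to "
              f"{entry['EndTimestampMillis']} ms "
              f"(duration {entry['DurationMillis']} ms)")
    else:
        # TIMESTAMPS aggregation: Instances may carry bounding boxes
        print(f"{label['Name']} at {entry['Timestamp']} ms, "
              f"{len(label.get('Instances', []))} instance(s)")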

The version of the label model used for the detection is also returned.

Note: DominantColors isn't returned for Instances , although it is shown as part of the response syntax below.

Use the MaxResults parameter to limit the number of labels returned. If there are more results than specified in MaxResults , the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetLabelDetection and populate the NextToken request parameter with the token value returned from the previous call to GetLabelDetection .
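A minimal pagination sketch, reusing the rekognition client and job_id from the example above (the names are illustrative):

labels = []
kwargs = {'JobId': job_id, 'MaxResults': 1000, 'AggregateBy': 'TIMESTAMPS'}
while True:
    page = rekognition.get_label_detection(**kwargs)
    labels.extend(page['Labels'])
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token
print(f"Collected {len(labels)} label detections")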

See also: AWS API Documentation

Request Syntax

client.get_label_detection(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP',
    AggregateBy='TIMESTAMPS'|'SEGMENTS'
)
type JobId

string

param JobId

[REQUIRED]

Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection .

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.

type SortBy

string

param SortBy

Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP .

type AggregateBy

string

param AggregateBy

Defines how to aggregate the returned results. Results can be aggregated by timestamps or segments.

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123,
        'ColorRange': 'FULL'|'LIMITED'
    },
    'NextToken': 'string',
    'Labels': [
        {
            'Timestamp': 123,
            'Label': {
                'Name': 'string',
                'Confidence': ...,
                'Instances': [
                    {
                        'BoundingBox': {
                            'Width': ...,
                            'Height': ...,
                            'Left': ...,
                            'Top': ...
                        },
                        'Confidence': ...,
                        'DominantColors': [
                            {
                                'Red': 123,
                                'Blue': 123,
                                'Green': 123,
                                'HexCode': 'string',
                                'CSSColor': 'string',
                                'SimplifiedColor': 'string',
                                'PixelPercent': ...
                            },
                        ]
                    },
                ],
                'Parents': [
                    {
                        'Name': 'string'
                    },
                ],
                'Aliases': [
                    {
                        'Name': 'string'
                    },
                ],
                'Categories': [
                    {
                        'Name': 'string'
                    },
                ]
            },
            'StartTimestampMillis': 123,
            'EndTimestampMillis': 123,
            'DurationMillis': 123
        },
    ],
    'LabelModelVersion': 'string'
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the label detection job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

      Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

      • ColorRange (string) --

        A description of the range of luminance values in a video, either LIMITED (16 to 235) or FULL (0 to 255).

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.

    • Labels (list) --

      An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.

      • (dict) --

        Information about a label detected in a video analysis request and the time the label was detected in the video.

        • Timestamp (integer) --

          Time, in milliseconds from the start of the video, that the label was detected. Note that Timestamp is not guaranteed to be accurate to the individual frame where the label first appears.

        • Label (dict) --

          Details about the detected label.

          • Name (string) --

            The name (label) of the object or scene.

          • Confidence (float) --

            Level of confidence.

          • Instances (list) --

            If Label represents an object, Instances contains the bounding boxes for each instance of the detected object. Bounding boxes are returned for common object labels such as people, cars, furniture, apparel or pets.

            • (dict) --

              An instance of a label returned by Amazon Rekognition Image ( DetectLabels ) or by Amazon Rekognition Video ( GetLabelDetection ).

              • BoundingBox (dict) --

                The position of the label instance on the image.

                • Width (float) --

                  Width of the bounding box as a ratio of the overall image width.

                • Height (float) --

                  Height of the bounding box as a ratio of the overall image height.

                • Left (float) --

                  Left coordinate of the bounding box as a ratio of overall image width.

                • Top (float) --

                  Top coordinate of the bounding box as a ratio of overall image height.

              • Confidence (float) --

                The confidence that Amazon Rekognition has in the accuracy of the bounding box.

              • DominantColors (list) --

                The dominant colors found in an individual instance of a label.

                • (dict) --

                  A description of the dominant colors in an image.

                  • Red (integer) --

                    The Red RGB value for a dominant color.

                  • Blue (integer) --

                    The Blue RGB value for a dominant color.

                  • Green (integer) --

                    The Green RGB value for a dominant color.

                  • HexCode (string) --

                    The Hex code equivalent of the RGB values for a dominant color.

                  • CSSColor (string) --

                    The CSS color name of a dominant color.

                  • SimplifiedColor (string) --

                    One of 12 simplified color names applied to a dominant color.

                  • PixelPercent (float) --

                    The percentage of image pixels that have a given dominant color.

          • Parents (list) --

            The parent labels for a label. The response includes all ancestor labels.

            • (dict) --

              A parent label for a label. A label can have 0, 1, or more parents.

              • Name (string) --

                The name of the parent label.

          • Aliases (list) --

            A list of potential aliases for a given label.

            • (dict) --

              A potential alias for a given label.

              • Name (string) --

                The name of an alias for a given label.

          • Categories (list) --

            A list of the categories associated with a given label.

            • (dict) --

              The category that applies to a given label.

              • Name (string) --

                The name of a category that applies to a given label.

        • StartTimestampMillis (integer) --

          The time in milliseconds defining the start of the timeline segment containing a continuously detected label.

        • EndTimestampMillis (integer) --

          The time in milliseconds defining the end of the timeline segment containing a continuously detected label.

        • DurationMillis (integer) --

          The duration of a segment in milliseconds, i.e., the time elapsed from StartTimestampMillis to EndTimestampMillis.

    • LabelModelVersion (string) --

      Version number of the label detection model that was used to detect labels.

StartLabelDetection (updated) Link ¶
Changes (request)
{'Features': ['GENERAL_LABELS'],
 'Settings': {'GeneralLabels': {'LabelCategoryExclusionFilters': ['string'],
                                'LabelCategoryInclusionFilters': ['string'],
                                'LabelExclusionFilters': ['string'],
                                'LabelInclusionFilters': ['string']}}}

Starts asynchronous detection of labels in a stored video.

Amazon Rekognition Video can detect labels in a video. Labels are instances of real-world entities. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; concepts like landscape, evening, and nature; and activities like a person getting out of a car or a person skiing.

The video must be stored in an Amazon S3 bucket. Use Video to specify the bucket name and the filename of the video. StartLabelDetection returns a job identifier ( JobId ) which you use to get the results of the operation. When label detection is finished, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic that you specify in NotificationChannel .

To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED . If so, call GetLabelDetection and pass the job identifier ( JobId ) from the initial call to StartLabelDetection .

Optional Parameters

StartLabelDetection has the GENERAL_LABELS Feature applied by default. This feature allows you to provide filtering criteria to the Settings parameter. You can filter with sets of individual labels or with label categories. You can specify inclusive filters, exclusive filters, or a combination of inclusive and exclusive filters. For more information on filtering, see Detecting labels in a video.

You can specify MinConfidence to control the confidence threshold for the labels returned. The default is 50.
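For example, the following sketch starts a job with a category inclusion filter, a label exclusion filter, and a raised confidence threshold. The bucket, object key, SNS topic ARN, IAM role ARN, and filter values are placeholders chosen for illustration, not values from this document.

import boto3

rekognition = boto3.client('rekognition')

start = rekognition.start_label_detection(
    Video={'S3Object': {'Bucket': 'my-example-bucket', 'Name': 'videos/sample.mp4'}},
    MinConfidence=70,                     # default is 50 if omitted
    Features=['GENERAL_LABELS'],          # applied by default; shown for clarity
    Settings={
        'GeneralLabels': {
            'LabelCategoryInclusionFilters': ['Animals and Pets'],
            'LabelExclusionFilters': ['Dog']
        }
    },
    NotificationChannel={
        'SNSTopicArn': 'arn:aws:sns:us-east-1:111122223333:AmazonRekognitionExample',
        'RoleArn': 'arn:aws:iam::111122223333:role/RekognitionSNSRole'
    }
)
job_id = start['JobId']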

See also: AWS API Documentation

Request Syntax

client.start_label_detection(
    Video={
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    ClientRequestToken='string',
    MinConfidence=...,
    NotificationChannel={
        'SNSTopicArn': 'string',
        'RoleArn': 'string'
    },
    JobTag='string',
    Features=[
        'GENERAL_LABELS',
    ],
    Settings={
        'GeneralLabels': {
            'LabelInclusionFilters': [
                'string',
            ],
            'LabelExclusionFilters': [
                'string',
            ],
            'LabelCategoryInclusionFilters': [
                'string',
            ],
            'LabelCategoryExclusionFilters': [
                'string',
            ]
        }
    }
)
type Video

dict

param Video

[REQUIRED]

The video in which you want to detect labels. The video must be stored in an Amazon S3 bucket.

  • S3Object (dict) --

    The Amazon S3 bucket name and file name for the video.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket is versioning enabled, you can specify the object version.

type ClientRequestToken

string

param ClientRequestToken

Idempotent token used to identify the start request. If you use the same token with multiple StartLabelDetection requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.

type MinConfidence

float

param MinConfidence

Specifies the minimum confidence that Amazon Rekognition Video must have in order to return a detected label. Confidence represents how certain Amazon Rekognition is that a label is correctly identified. 0 is the lowest confidence; 100 is the highest confidence. Amazon Rekognition Video doesn't return any labels with a confidence level lower than this specified value.

If you don't specify MinConfidence , the operation returns labels and bounding boxes (if detected) with confidence values greater than or equal to 50 percent.

type NotificationChannel

dict

param NotificationChannel

The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the label detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.

  • SNSTopicArn (string) -- [REQUIRED]

    The Amazon SNS topic to which Amazon Rekognition posts the completion status.

  • RoleArn (string) -- [REQUIRED]

    The ARN of an IAM role that gives Amazon Rekognition publishing permissions to the Amazon SNS topic.

type JobTag

string

param JobTag

An identifier you specify that's returned in the completion notification that's published to your Amazon Simple Notification Service topic. For example, you can use JobTag to group related jobs and identify them in the completion notification.

type Features

list

param Features

The features to return after video analysis. You can specify that GENERAL_LABELS are returned.

  • (string) --

type Settings

dict

param Settings

The settings for a StartLabelDetection request. Contains the specified parameters for the label detection request of an asynchronous label analysis operation. Settings can include filters for GENERAL_LABELS.

  • GeneralLabels (dict) --

    Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories.

    • LabelInclusionFilters (list) --

      The labels that should be included in the return from DetectLabels.

      • (string) --

    • LabelExclusionFilters (list) --

      The labels that should be excluded from the return from DetectLabels.

      • (string) --

    • LabelCategoryInclusionFilters (list) --

      The label categories that should be included in the return from DetectLabels.

      • (string) --

    • LabelCategoryExclusionFilters (list) --

      The label categories that should be excluded from the return from DetectLabels.

      • (string) --

rtype

dict

returns

Response Syntax

{
    'JobId': 'string'
}

Response Structure

  • (dict) --

    • JobId (string) --

      The identifier for the label detection job. Use JobId to identify the job in a subsequent call to GetLabelDetection .