Amazon Rekognition

2018/11/02 - Amazon Rekognition - 2 updated api methods

Changes  This release updates the DetectLabels operation. Bounding boxes are now returned for certain objects, a hierarchical taxonomy is now available for labels, and you can now get the version of the detection model used for detection.

DetectLabels (updated) Link ¶
Changes (response)
{'LabelModelVersion': 'string',
 'Labels': {'Instances': [{'BoundingBox': {'Height': 'float',
                                           'Left': 'float',
                                           'Top': 'float',
                                           'Width': 'float'},
                           'Confidence': 'float'}],
            'Parents': [{'Name': 'string'}]}}

Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.

For an example, see Analyzing Images Stored in an Amazon S3 Bucket in the Amazon Rekognition Developer Guide.

Note

DetectLabels does not support the detection of activities. However, activity detection is supported for label detection in videos. For more information, see StartLabelDetection in the Amazon Rekognition Developer Guide.

You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file.

For each object, scene, and concept, the API returns one or more labels. Each label provides the object name and the level of confidence that the image contains the object. For example, suppose the input image has a lighthouse, the sea, and a rock. The response includes all three labels, one for each object.

{Name: lighthouse, Confidence: 98.4629}

{Name: rock, Confidence: 79.2097}

{Name: sea, Confidence: 75.061}

In the preceding example, the operation returns one label for each of the three objects. The operation can also return multiple labels for the same object in the image. For example, if the input image shows a flower (for example, a tulip), the operation might return the following three labels.

{Name: flower, Confidence: 99.0562}

{Name: plant, Confidence: 99.0562}

{Name: tulip, Confidence: 99.0562}

In this example, the detection algorithm more precisely identifies the flower as a tulip.

In response, the API returns an array of labels. The response also includes the orientation correction. Optionally, you can specify MinConfidence to control the confidence threshold for the labels returned; the default is 50%. You can also add the MaxLabels parameter to limit the number of labels returned.
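
For example, a minimal boto3 sketch of such a call (the bucket and object key are placeholders, and AWS credentials are assumed to be configured):

import boto3

# Minimal sketch: detect labels for an image stored in S3 (placeholder names).
rekognition = boto3.client('rekognition')

response = rekognition.detect_labels(
    Image={'S3Object': {'Bucket': 'my-bucket', 'Name': 'photo.jpg'}},
    MaxLabels=10,        # return at most the 10 highest-confidence labels
    MinConfidence=75.0,  # drop labels below 75 percent confidence (default is 50)
)

for label in response['Labels']:
    print(f"{label['Name']}: {label['Confidence']:.2f}")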

Note

If the object detected is a person, the operation doesn't provide the same facial details that the DetectFaces operation provides.

DetectLabels returns bounding boxes for instances of common object labels in an array of Instance objects. An Instance object contains a BoundingBox object that gives the location of the label on the image. It also includes the confidence that Amazon Rekognition has in the accuracy of the bounding box.

DetectLabels also returns a hierarchical taxonomy of detected labels. For example, a detected car might be assigned the label car. The label car has two parent labels: Vehicle (its parent) and Transportation (its grandparent). The response returns the entire list of ancestors for a label. Each ancestor is a unique label in the response. In the previous example, Car, Vehicle, and Transportation are returned as unique labels in the response.
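
As a short sketch, the following walks the new Parents and Instances fields of a DetectLabels response such as the one returned above (response), converting the ratio-based bounding boxes to pixel coordinates for assumed image dimensions:

# Sketch: walk the Parents and Instances fields of a DetectLabels response.
# `response` is the DetectLabels response from the sketch above; the image
# dimensions below are placeholders and would normally come from the image file.
image_width, image_height = 1920, 1080

for label in response['Labels']:
    parents = [parent['Name'] for parent in label['Parents']]
    print(f"{label['Name']} ({label['Confidence']:.1f}%), ancestors: {parents}")
    for instance in label['Instances']:
        box = instance['BoundingBox']
        left = int(box['Left'] * image_width)
        top = int(box['Top'] * image_height)
        width = int(box['Width'] * image_width)
        height = int(box['Height'] * image_height)
        print(f"  instance at ({left}, {top}), size {width}x{height}, "
              f"confidence {instance['Confidence']:.1f}%")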

This is a stateless API operation. That is, the operation does not persist any data.

This operation requires permissions to perform the rekognition:DetectLabels action.

See also: AWS API Documentation

Request Syntax

client.detect_labels(
    Image={
        'Bytes': b'bytes',
        'S3Object': {
            'Bucket': 'string',
            'Name': 'string',
            'Version': 'string'
        }
    },
    MaxLabels=123,
    MinConfidence=...
)
type Image

dict

param Image

[REQUIRED]

The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

  • Bytes (bytes) --

    Blob of image bytes up to 5 MB.

  • S3Object (dict) --

    Identifies an S3 object as the image source.

    • Bucket (string) --

      Name of the S3 bucket.

    • Name (string) --

      S3 object key name.

    • Version (string) --

      If the bucket has versioning enabled, you can specify the object version.

type MaxLabels

integer

param MaxLabels

Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.

type MinConfidence

float

param MinConfidence

Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.

If MinConfidence is not specified, the operation returns labels with a confidence value greater than or equal to 50 percent.

rtype

dict

returns

Response Syntax

{
    'Labels': [
        {
            'Name': 'string',
            'Confidence': ...,
            'Instances': [
                {
                    'BoundingBox': {
                        'Width': ...,
                        'Height': ...,
                        'Left': ...,
                        'Top': ...
                    },
                    'Confidence': ...
                },
            ],
            'Parents': [
                {
                    'Name': 'string'
                },
            ]
        },
    ],
    'OrientationCorrection': 'ROTATE_0'|'ROTATE_90'|'ROTATE_180'|'ROTATE_270',
    'LabelModelVersion': 'string'
}

Response Structure

  • (dict) --

    • Labels (list) --

      An array of labels for the real-world objects detected.

      • (dict) --

        Structure containing details about the detected label, including the name and the level of confidence.

        The Amazon Rekognition Image DetectLabels operation returns a hierarchical taxonomy (Parents) for detected labels and also bounding box information (Instances) for detected labels. Amazon Rekognition Video doesn't return this information and returns null for the Parents and Instances attributes.

        • Name (string) --

          The name (label) of the object or scene.

        • Confidence (float) --

          Level of confidence.

        • Instances (list) --

          If Label represents an object, Instances contains the bounding boxes for each instance of the detected object. Bounding boxes are returned for common object labels such as people, cars, furniture, apparel or pets.

          Note

          Amazon Rekognition Video does not support bounding box information for detected labels. The value of Instances is returned as null by GetLabelDetection.

          • (dict) --

            An instance of a label detected by DetectLabels.

            • BoundingBox (dict) --

              The position of the label instance on the image.

              • Width (float) --

                Width of the bounding box as a ratio of the overall image width.

              • Height (float) --

                Height of the bounding box as a ratio of the overall image height.

              • Left (float) --

                Left coordinate of the bounding box as a ratio of overall image width.

              • Top (float) --

                Top coordinate of the bounding box as a ratio of overall image height.

            • Confidence (float) --

              The confidence that Amazon Rekognition Image has in the accuracy of the bounding box.

        • Parents (list) --

          The parent labels for a label. The response includes all ancestor labels.

          Note

          Amazon Rekognition Video does not support a hierarchical taxonomy of detected labels. The value of Parents is returned as null by GetLabelDetection.

          • (dict) --

            A parent label for a label. A label can have 0, 1, or more parents.

            • Name (string) --

              The name of the parent label.

    • OrientationCorrection (string) --

      The value of OrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction: the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.

      Amazon Rekognition doesn't perform image correction for images in .png format or for .jpeg images without orientation information in the image Exif metadata. In that case, the bounding box coordinates are not translated and represent the object locations before the image is rotated. A sketch of displaying bounding boxes on an Exif-corrected copy of the image follows this response structure.

    • LabelModelVersion (string) --

      Version number of the label detection model that was used to detect labels.
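
As noted above, here is a sketch of displaying DetectLabels bounding boxes on an Exif-corrected copy of a local image. Pillow is an assumed dependency and the file name is a placeholder; exif_transpose() applies any Exif orientation so that the image matches the orientation that the translated bounding box coordinates refer to.

import boto3
from PIL import Image, ImageDraw, ImageOps  # Pillow is an assumed dependency

rekognition = boto3.client('rekognition')

# Read the image once for analysis (placeholder file name).
with open('photo.jpg', 'rb') as f:
    response = rekognition.detect_labels(Image={'Bytes': f.read()}, MinConfidence=80.0)

# exif_transpose() applies any Exif orientation, so the image matches the
# orientation that translated bounding box coordinates are relative to.
# PNG images and JPEGs without Exif orientation are left as-is.
image = ImageOps.exif_transpose(Image.open('photo.jpg'))
draw = ImageDraw.Draw(image)

for label in response['Labels']:
    for instance in label['Instances']:
        box = instance['BoundingBox']
        left = box['Left'] * image.width
        top = box['Top'] * image.height
        draw.rectangle(
            [left, top,
             left + box['Width'] * image.width,
             top + box['Height'] * image.height],
            outline='red',
        )

image.save('photo_with_boxes.jpg')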

GetLabelDetection (updated) Link ¶
Changes (response)
{'Labels': {'Label': {'Instances': [{'BoundingBox': {'Height': 'float',
                                                     'Left': 'float',
                                                     'Top': 'float',
                                                     'Width': 'float'},
                                     'Confidence': 'float'}],
                      'Parents': [{'Name': 'string'}]}}}

Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection.

The label detection operation is started by a call to StartLabelDetection, which returns a job identifier (JobId). When the label detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartLabelDetection. To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetLabelDetection and pass the job identifier (JobId) from the initial call to StartLabelDetection.
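
A minimal sketch of that flow, polling the job status directly rather than subscribing to the Amazon SNS topic (the bucket and key names are placeholders):

import time

import boto3

rekognition = boto3.client('rekognition')

# Start the analysis of a video stored in S3 (placeholder bucket and key).
start = rekognition.start_label_detection(
    Video={'S3Object': {'Bucket': 'my-bucket', 'Name': 'video.mp4'}},
)
job_id = start['JobId']

# Poll until the job leaves IN_PROGRESS. Production code would normally wait
# for the completion notification on the registered Amazon SNS topic instead.
while True:
    result = rekognition.get_label_detection(JobId=job_id)
    if result['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(10)

if result['JobStatus'] == 'SUCCEEDED':
    for detection in result['Labels']:
        print(detection['Timestamp'], detection['Label']['Name'])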

GetLabelDetection returns an array of detected labels (Labels) sorted by the time the labels were detected. You can also sort by the label name by specifying NAME for the SortBy input parameter.

The labels returned include the label name, the percentage confidence in the accuracy of the detected label, and the time the label was detected in the video.

Use the MaxResults parameter to limit the number of labels returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetLabelDetection and populate the NextToken request parameter with the token value returned from the previous call to GetLabelDetection.
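
Continuing the sketch above, one way to page through the results with NextToken:

# Sketch: collect every label detection for the completed job from the sketch
# above by following NextToken until no more pages are returned.
labels = []
pagination_token = None

while True:
    kwargs = {'JobId': job_id, 'MaxResults': 1000, 'SortBy': 'TIMESTAMP'}
    if pagination_token:
        kwargs['NextToken'] = pagination_token
    page = rekognition.get_label_detection(**kwargs)
    labels.extend(page['Labels'])
    pagination_token = page.get('NextToken')
    if not pagination_token:
        break

print(f"Collected {len(labels)} label detections")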

Note

GetLabelDetection doesn't return a hierarchical taxonomy or bounding box information for detected labels. GetLabelDetection returns null for the Parents and Instances attributes of the Label objects returned in the Labels array.

See also: AWS API Documentation

Request Syntax

client.get_label_detection(
    JobId='string',
    MaxResults=123,
    NextToken='string',
    SortBy='NAME'|'TIMESTAMP'
)
type JobId

string

param JobId

[REQUIRED]

Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.

type MaxResults

integer

param MaxResults

Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.

type NextToken

string

param NextToken

If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.

type SortBy

string

param SortBy

Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.

rtype

dict

returns

Response Syntax

{
    'JobStatus': 'IN_PROGRESS'|'SUCCEEDED'|'FAILED',
    'StatusMessage': 'string',
    'VideoMetadata': {
        'Codec': 'string',
        'DurationMillis': 123,
        'Format': 'string',
        'FrameRate': ...,
        'FrameHeight': 123,
        'FrameWidth': 123
    },
    'NextToken': 'string',
    'Labels': [
        {
            'Timestamp': 123,
            'Label': {
                'Name': 'string',
                'Confidence': ...,
                'Instances': [
                    {
                        'BoundingBox': {
                            'Width': ...,
                            'Height': ...,
                            'Left': ...,
                            'Top': ...
                        },
                        'Confidence': ...
                    },
                ],
                'Parents': [
                    {
                        'Name': 'string'
                    },
                ]
            }
        },
    ]
}

Response Structure

  • (dict) --

    • JobStatus (string) --

      The current status of the label detection job.

    • StatusMessage (string) --

      If the job fails, StatusMessage provides a descriptive error message.

    • VideoMetadata (dict) --

      Information about a video that Amazon Rekognition Video analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

      • Codec (string) --

        Type of compression used in the analyzed video.

      • DurationMillis (integer) --

        Length of the video in milliseconds.

      • Format (string) --

        Format of the analyzed video. Possible values are MP4, MOV and AVI.

      • FrameRate (float) --

        Number of frames per second in the video.

      • FrameHeight (integer) --

        Vertical pixel dimension of the video.

      • FrameWidth (integer) --

        Horizontal pixel dimension of the video.

    • NextToken (string) --

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.

    • Labels (list) --

      An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.

      • (dict) --

        Information about a label detected in a video analysis request and the time the label was detected in the video.

        • Timestamp (integer) --

          Time, in milliseconds from the start of the video, that the label was detected.

        • Label (dict) --

          Details about the detected label.

          • Name (string) --

            The name (label) of the object or scene.

          • Confidence (float) --

            Level of confidence.

          • Instances (list) --

            If Label represents an object, Instances contains the bounding boxes for each instance of the detected object. Bounding boxes are returned for common object labels such as people, cars, furniture, apparel or pets.

            Note

            Amazon Rekognition Video does not support bounding box information for detected labels. The value of Instances is returned as null by GetLabelDetection.

            • (dict) --

              An instance of a detected label.

              • BoundingBox (dict) --

                The position of the label instance on the image.

                • Width (float) --

                  Width of the bounding box as a ratio of the overall image width.

                • Height (float) --

                  Height of the bounding box as a ratio of the overall image height.

                • Left (float) --

                  Left coordinate of the bounding box as a ratio of overall image width.

                • Top (float) --

                  Top coordinate of the bounding box as a ratio of overall image height.

              • Confidence (float) --

                The confidence that Amazon Rekognition Image has in the accuracy of the bounding box.

          • Parents (list) --

            The parent labels for a label. The response includes all ancestor labels.

            Note

            Amazon Rekognition Video does not support a hierarchical taxonomy of detected labels. The value of Parents is returned as null by GetLabelDetection.

            • (dict) --

              A parent label for a label. A label can have 0, 1, or more parents.

              • Name (string) --

                The name of the parent label.