Amazon SageMaker Runtime

2021/08/18 - Amazon SageMaker Runtime - 1 new API method

Changes: Amazon SageMaker now supports Asynchronous Inference endpoints. Adds PlatformIdentifier field that allows Notebook Instance creation with different platform selections. Increases the maximum number of containers in multi-container endpoints to 15. Adds more instance types to InstanceType field.

InvokeEndpointAsync (new)

After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint in an asynchronous manner.

Inference requests sent to this API are enqueued for asynchronous processing. The processing of the inference request may or may not complete before you receive a response from this API. The response from this API does not contain the inference result; it contains information about where you can locate it.

Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.

Calls to InvokeEndpointAsync are authenticated by using AWS Signature Version 4. For information, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference.

See also: AWS API Documentation

Request Syntax

client.invoke_endpoint_async(
    EndpointName='string',
    ContentType='string',
    Accept='string',
    CustomAttributes='string',
    InferenceId='string',
    InputLocation='string',
    RequestTTLSeconds=123
)
type EndpointName:

string

param EndpointName:

[REQUIRED]

The name of the endpoint that you specified when you created the endpoint using the CreateEndpoint API.

type ContentType:

string

param ContentType:

The MIME type of the input data in the request body.

type Accept:

string

param Accept:

The desired MIME type of the inference in the response.

type CustomAttributes:

string

param CustomAttributes:

Provides additional information about a request for an inference submitted to a model hosted at an Amazon SageMaker endpoint. The information is an opaque value that is forwarded verbatim. You could use this value, for example, to provide an ID that you can use to track a request or to provide other metadata that a service endpoint was programmed to process. The value must consist of no more than 1024 visible US-ASCII characters as specified in Section 3.3.6. Field Value Components of the Hypertext Transfer Protocol (HTTP/1.1).

The code in your model is responsible for setting or updating any custom attributes in the response. If your code does not set this value in the response, an empty value is returned. For example, if a custom attribute represents the trace ID, your model can prepend the custom attribute with Trace ID: in your post-processing function.

This feature is currently supported in the AWS SDKs but not in the Amazon SageMaker Python SDK.

type InferenceId:

string

param InferenceId:

The identifier for the inference request. Amazon SageMaker will generate an identifier for you if none is specified.

type InputLocation:

string

param InputLocation:

[REQUIRED]

The Amazon S3 URI where the inference request payload is stored.

type RequestTTLSeconds:

integer

param RequestTTLSeconds:

Maximum age in seconds a request can be in the queue before it is marked as expired.

rtype:

dict

returns:

Response Syntax

{
    'InferenceId': 'string',
    'OutputLocation': 'string'
}

Response Structure

  • (dict) --

    • InferenceId (string) --

      Identifier for an inference request. This will be the same as the InferenceId specified in the input. Amazon SageMaker will generate an identifier for you if you do not specify one.

    • OutputLocation (string) --

      The Amazon S3 URI where the inference response payload is stored.