Amazon SageMaker Service

2019/06/11 - Amazon SageMaker Service - 5 updated api methods

Changes  Update sagemaker client to latest version

CreateCompilationJob (updated) Link ¶
Changes (request)
{'OutputConfig': {'TargetDevice': {'sbe_c'}}}

Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.

If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with AWS IoT Greengrass. In that case, deploy them as an ML resource.

In the request body, you provide the following:

  • A name for the compilation job

  • Information about the input model artifacts

  • The output location for the compiled model and the device (target) that the model runs on

  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job

You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compilation job.

To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.

See also: AWS API Documentation

Request Syntax

client.create_compilation_job(
    CompilationJobName='string',
    RoleArn='string',
    InputConfig={
        'S3Uri': 'string',
        'DataInputConfig': 'string',
        'Framework': 'TENSORFLOW'|'MXNET'|'ONNX'|'PYTORCH'|'XGBOOST'
    },
    OutputConfig={
        'S3OutputLocation': 'string',
        'TargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'rasp3b'|'deeplens'|'rk3399'|'rk3288'|'sbe_c'
    },
    StoppingCondition={
        'MaxRuntimeInSeconds': 123
    }
)
type CompilationJobName

string

param CompilationJobName

[REQUIRED]

A name for the model compilation job. The name must be unique within the AWS Region and within your AWS account.

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.

During model compilation, Amazon SageMaker needs your permission to:

  • Read input data from an S3 bucket

  • Write model artifacts to an S3 bucket

  • Write logs to Amazon CloudWatch Logs

  • Publish metrics to Amazon CloudWatch

You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.

type InputConfig

dict

param InputConfig

[REQUIRED]

Provides information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

  • S3Uri (string) -- [REQUIRED]

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

  • DataInputConfig (string) -- [REQUIRED]

    Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific; a sketch of building this value in Python follows this list.

    • TensorFlow : You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

      • Examples for one input:

        • If using the console, {"input":[1,1024,1024,3]}

        • If using the CLI, {\"input\":[1,1024,1024,3]}

      • Examples for two inputs:

        • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}

        • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}

    • MXNET/ONNX : You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

      • Examples for one input:

        • If using the console, {"data":[1,3,1024,1024]}

        • If using the CLI, {\"data\":[1,3,1024,1024]}

      • Examples for two inputs:

        • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}

        • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}

    • PyTorch : You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

      • Examples for one input in dictionary format:

        • If using the console, {"input0":[1,3,224,224]}

        • If using the CLI, {\"input0\":[1,3,224,224]}

      • Example for one input in list format: [[1,3,224,224]]

      • Examples for two inputs in dictionary format:

        • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}

        • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

      • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]

    • XGBOOST : input data name and shape are not needed.

  • Framework (string) -- [REQUIRED]

    Identifies the framework in which the model was trained. For example: TENSORFLOW.
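
As an illustration, here is a minimal Python sketch of building DataInputConfig with the json module. The bucket and input shape are hypothetical; serializing with json.dumps yields a valid JSON string and avoids hand-escaping quotes:

import json

# Hypothetical NHWC input shape for a TensorFlow image model.
data_input_config = json.dumps({"input": [1, 1024, 1024, 3]})

input_config = {
    'S3Uri': 's3://example-bucket/model/model.tar.gz',  # hypothetical artifact path
    'DataInputConfig': data_input_config,
    'Framework': 'TENSORFLOW',
}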

type OutputConfig

dict

param OutputConfig

[REQUIRED]

Provides information about the output location for the compiled model and the target device the model runs on.

  • S3OutputLocation (string) -- [REQUIRED]

    Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

  • TargetDevice (string) -- [REQUIRED]

    Identifies the device that you want to run your model on after it has been compiled. For example: ml_c5.

type StoppingCondition

dict

param StoppingCondition

[REQUIRED]

Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model compilation costs.

  • MaxRuntimeInSeconds (integer) --

    The maximum length of time, in seconds, that the training or compilation job can run. If the job does not complete during this time, Amazon SageMaker ends the job. If no value is specified, the default is 1 day. The maximum value is 28 days.
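
A minimal sketch of a complete request follows; the job name, role ARN, bucket, and input shape are hypothetical placeholders:

import boto3

sagemaker = boto3.client('sagemaker')

response = sagemaker.create_compilation_job(
    CompilationJobName='example-compilation-job',  # hypothetical name
    RoleArn='arn:aws:iam::111122223333:role/ExampleSageMakerRole',
    InputConfig={
        'S3Uri': 's3://example-bucket/model/model.tar.gz',
        'DataInputConfig': '{"input": [1, 1024, 1024, 3]}',
        'Framework': 'TENSORFLOW',
    },
    OutputConfig={
        'S3OutputLocation': 's3://example-bucket/compiled/',
        'TargetDevice': 'ml_c5',
    },
    StoppingCondition={'MaxRuntimeInSeconds': 900},
)
print(response['CompilationJobArn'])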

rtype

dict

returns

Response Syntax

{
    'CompilationJobArn': 'string'
}

Response Structure

  • (dict) --

    • CompilationJobArn (string) --

      If the action is successful, the service sends back an HTTP 200 response. Amazon SageMaker returns the following data in JSON format:

      • CompilationJobArn : The Amazon Resource Name (ARN) of the model compilation job.

CreateTransformJob (updated) Link ¶
Changes (request)
{'DataProcessing': {'InputFilter': 'string',
                    'JoinSource': 'Input | None',
                    'OutputFilter': 'string'}}

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.

To perform batch transformations, you create a transform job and use the data that you have readily available.

In the request body, you provide the following:

  • TransformJobName - Identifies the transform job. The name must be unique within an AWS Region in an AWS account.

  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see CreateModel.

  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.

  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

  • TransformResources - Identifies the ML compute instances for the transform job.

For more information about how batch transformation works in Amazon SageMaker, see How It Works.

See also: AWS API Documentation

Request Syntax

client.create_transform_job(
    TransformJobName='string',
    ModelName='string',
    MaxConcurrentTransforms=123,
    MaxPayloadInMB=123,
    BatchStrategy='MultiRecord'|'SingleRecord',
    Environment={
        'string': 'string'
    },
    TransformInput={
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                'S3Uri': 'string'
            }
        },
        'ContentType': 'string',
        'CompressionType': 'None'|'Gzip',
        'SplitType': 'None'|'Line'|'RecordIO'|'TFRecord'
    },
    TransformOutput={
        'S3OutputPath': 'string',
        'Accept': 'string',
        'AssembleWith': 'None'|'Line',
        'KmsKeyId': 'string'
    },
    TransformResources={
        'InstanceType': 'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge',
        'InstanceCount': 123,
        'VolumeKmsKeyId': 'string'
    },
    DataProcessing={
        'InputFilter': 'string',
        'OutputFilter': 'string',
        'JoinSource': 'Input'|'None'
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type TransformJobName

string

param TransformJobName

[REQUIRED]

The name of the transform job. The name must be unique within an AWS Region in an AWS account.

type ModelName

string

param ModelName

[REQUIRED]

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

type MaxConcurrentTransforms

integer

param MaxConcurrentTransforms

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the optimal settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1 . For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms .

type MaxPayloadInMB

integer

param MaxPayloadInMB

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0 . This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.
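
To make the sizing guidance concrete, a back-of-the-envelope sketch with made-up numbers:

# Hypothetical dataset: 1,200 MB across 400,000 records.
dataset_size_mb = 1200.0
num_records = 400_000

# Estimated record size: dataset size divided by record count.
record_size_mb = dataset_size_mb / num_records  # 0.003 MB per record

# Choose a payload limit comfortably above a single record;
# the 6 MB default is more than enough here.
max_payload_in_mb = 6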

type BatchStrategy

string

param BatchStrategy

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

To enable the batch strategy, you must set SplitType to Line , RecordIO , or TFRecord .

To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line .

To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line .

type Environment

dict

param Environment

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

  • (string) --

    • (string) --

type TransformInput

dict

param TransformInput

[REQUIRED]

Describes the input source and the way the transform job consumes it.

  • DataSource (dict) -- [REQUIRED]

    Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

    • S3DataSource (dict) -- [REQUIRED]

      The S3 location of the data source that is associated with a channel.

      • S3DataType (string) -- [REQUIRED]

        If you choose S3Prefix , S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.

        If you choose ManifestFile , S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.

        The following values are compatible: ManifestFile , S3Prefix

        The following value is not compatible: AugmentedManifestFile

      • S3Uri (string) -- [REQUIRED]

        Depending on the value specified for the S3DataType , identifies either a key name prefix or a manifest. For example:

        • A key name prefix might look like this: s3://bucketname/exampleprefix .

        • A manifest might look like this: s3://bucketname/example.manifest The manifest is an S3 object that is a JSON file with the following format: [ {"prefix": "s3://customer_bucket/some/prefix/"}, "relative/path/to/custdata-1", "relative/path/custdata-2", ... ] The preceding JSON matches the following S3Uris : s3://customer_bucket/some/prefix/relative/path/to/custdata-1 and s3://customer_bucket/some/prefix/relative/path/custdata-2 . The complete set of S3Uris in this manifest constitutes the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.

  • ContentType (string) --

    The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

  • CompressionType (string) --

    If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None .

  • SplitType (string) --

    The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None , which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats.

    When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord , Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord , Amazon SageMaker sends individual records in each request.

    Note

    Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord . Padding is not removed if the value of BatchStrategy is set to MultiRecord .

    For more information about the RecordIO, see Data Format in the MXNet documentation. For more information about the TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

type TransformOutput

dict

param TransformOutput

[REQUIRED]

Describes the results of the transform job.

  • S3OutputPath (string) -- [REQUIRED]

    The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix .

    For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv , batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out . Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job, batch transform marks the job as failed to prompt investigation.

  • Accept (string) --

    The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.

  • AssembleWith (string) --

    Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None . To add a newline character at the end of every transformed record, specify Line .

  • KmsKeyId (string) --

    The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:

    • // KMS Key ID "1234abcd-12ab-34cd-56ef-1234567890ab"

    • // Amazon Resource Name (ARN) of a KMS Key "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

    • // KMS Key Alias "alias/ExampleAlias"

    • // Amazon Resource Name (ARN) of a KMS Key Alias "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"

    If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

    The KMS key policy must grant permission to the IAM role that you specify in your CreateTransformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide .

type TransformResources

dict

param TransformResources

[REQUIRED]

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

  • InstanceType (string) -- [REQUIRED]

    The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.

  • InstanceCount (integer) -- [REQUIRED]

    The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1 .

  • VolumeKmsKeyId (string) --

    The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId can be any of the following formats:

    • // KMS Key ID "1234abcd-12ab-34cd-56ef-1234567890ab"

    • // Amazon Resource Name (ARN) of a KMS Key "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

type DataProcessing

dict

param DataProcessing

The data structure used to combine the input data with the inference results in the output file. For more information, see Batch Transform I/O Join.

  • InputFilter (string) --

    A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want Amazon SageMaker to pass the entire input dataset to the algorithm, accept the default value $ .

    Examples: "$" , "$[1:]" , "$.features"

  • OutputFilter (string) --

    A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want Amazon SageMaker to store the entire input dataset in the output file, leave the default value, $ . If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.

    Examples: "$" , "$[0,5:]" , "$.['id','SageMakerOutput']"

  • JoinSource (string) --

    Specifies the source of the data to join with the transformed data. The valid values are None and Input . The default value is None , which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input . To join input and output, the batch transform job must satisfy the Requirements for Using Batch Transform I/O Join.

    For JSON or JSONLines objects, such as a JSON array, Amazon SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput . The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, Amazon SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput .

    For CSV files, Amazon SageMaker appends the transformed data to the end of each input record and stores the result in the output file. The joined data has the original input data followed by the transformed data, and the output is a CSV file.

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide .

  • (dict) --

    Describes a tag.

    • Key (string) -- [REQUIRED]

      The tag key.

    • Value (string) -- [REQUIRED]

      The tag value.
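
Putting the pieces together, here is a minimal sketch of a CSV transform job that joins predictions back onto the input rows; the job name, model name, and bucket are hypothetical:

import boto3

sagemaker = boto3.client('sagemaker')

response = sagemaker.create_transform_job(
    TransformJobName='example-transform-job',  # hypothetical name
    ModelName='example-model',                 # assumes this model already exists
    TransformInput={
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'S3Prefix',
                'S3Uri': 's3://example-bucket/input/',
            }
        },
        'ContentType': 'text/csv',
        'SplitType': 'Line',           # one record per line
    },
    TransformOutput={
        'S3OutputPath': 's3://example-bucket/output/',
        'AssembleWith': 'Line',        # newline-delimited results
    },
    TransformResources={
        'InstanceType': 'ml.m4.xlarge',
        'InstanceCount': 1,
    },
    DataProcessing={
        'InputFilter': '$[1:]',   # drop the first column (for example, an ID)
        'JoinSource': 'Input',    # join predictions onto the original rows
        'OutputFilter': '$',      # keep the full joined record
    },
)
print(response['TransformJobArn'])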

rtype

dict

returns

Response Syntax

{
    'TransformJobArn': 'string'
}

Response Structure

  • (dict) --

    • TransformJobArn (string) --

      The Amazon Resource Name (ARN) of the transform job.

DescribeCompilationJob (updated) Link ¶
Changes (response)
{'OutputConfig': {'TargetDevice': {'sbe_c'}}}

Returns information about a model compilation job.

To create a model compilation job, use CreateCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.

See also: AWS API Documentation

Request Syntax

client.describe_compilation_job(
    CompilationJobName='string'
)
type CompilationJobName

string

param CompilationJobName

[REQUIRED]

The name of the model compilation job that you want information about.

rtype

dict

returns

Response Syntax

{
    'CompilationJobName': 'string',
    'CompilationJobArn': 'string',
    'CompilationJobStatus': 'INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED',
    'CompilationStartTime': datetime(2015, 1, 1),
    'CompilationEndTime': datetime(2015, 1, 1),
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123
    },
    'CreationTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'FailureReason': 'string',
    'ModelArtifacts': {
        'S3ModelArtifacts': 'string'
    },
    'RoleArn': 'string',
    'InputConfig': {
        'S3Uri': 'string',
        'DataInputConfig': 'string',
        'Framework': 'TENSORFLOW'|'MXNET'|'ONNX'|'PYTORCH'|'XGBOOST'
    },
    'OutputConfig': {
        'S3OutputLocation': 'string',
        'TargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'rasp3b'|'deeplens'|'rk3399'|'rk3288'|'sbe_c'
    }
}

Response Structure

  • (dict) --

    • CompilationJobName (string) --

      The name of the model compilation job.

    • CompilationJobArn (string) --

      The Amazon Resource Name (ARN) of the model compilation job.

    • CompilationJobStatus (string) --

      The status of the model compilation job.

    • CompilationStartTime (datetime) --

      The time when the model compilation job started on the compilation job instances.

      You are billed for the time between this timestamp and the timestamp in the DescribeCompilationJobResponse$CompilationEndTime field. In Amazon CloudWatch Logs, the start time might be later than this time. That's because it takes time to download the compilation job container, and the download time depends on the container's size.

    • CompilationEndTime (datetime) --

      The time when the model compilation job on a compilation job instance ended. For a successful or stopped job, this is when the job's model artifacts have finished uploading. For a failed job, this is when Amazon SageMaker detected that the job failed.

    • StoppingCondition (dict) --

      Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model compilation costs.

      • MaxRuntimeInSeconds (integer) --

        The maximum length of time, in seconds, that the training or compilation job can run. If the job does not complete during this time, Amazon SageMaker ends the job. If no value is specified, the default is 1 day. The maximum value is 28 days.

    • CreationTime (datetime) --

      The time that the model compilation job was created.

    • LastModifiedTime (datetime) --

      The time that the status of the model compilation job was last modified.

    • FailureReason (string) --

      If a model compilation job failed, the reason it failed.

    • ModelArtifacts (dict) --

      Information about the location in Amazon S3 that has been configured for storing the model artifacts used in the compilation job.

      • S3ModelArtifacts (string) --

        The path of the S3 object that contains the model artifacts. For example, s3://bucket-name/keynameprefix/model.tar.gz .

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.

    • InputConfig (dict) --

      Information about the location in Amazon S3 of the input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

      • S3Uri (string) --

        The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

      • DataInputConfig (string) --

        Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific.

        • TensorFlow : You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

          • Examples for one input:

            • If using the console, {"input":[1,1024,1024,3]}

            • If using the CLI, {\"input\":[1,1024,1024,3]}

          • Examples for two inputs:

            • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}

            • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}

        • MXNET/ONNX : You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

          • Examples for one input:

            • If using the console, {"data":[1,3,1024,1024]}

            • If using the CLI, {\"data\":[1,3,1024,1024]}

          • Examples for two inputs:

            • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}

            • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}

        • PyTorch : You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

          • Examples for one input in dictionary format:

            • If using the console, {"input0":[1,3,224,224]}

            • If using the CLI, {\"input0\":[1,3,224,224]}

          • Example for one input in list format: [[1,3,224,224]]

          • Examples for two inputs in dictionary format:

            • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}

            • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

          • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]

        • XGBOOST : input data name and shape are not needed.

      • Framework (string) --

        Identifies the framework in which the model was trained. For example: TENSORFLOW.

    • OutputConfig (dict) --

      Information about the output location for the compiled model and the target device that the model runs on.

      • S3OutputLocation (string) --

        Identifies the S3 path where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix.

      • TargetDevice (string) --

        Identifies the device that you want to run your model on after it has been compiled. For example: ml_c5.
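
Because compilation runs asynchronously, callers typically poll this operation until a terminal status is reached. A minimal polling sketch, with a hypothetical job name:

import time

import boto3

sagemaker = boto3.client('sagemaker')

job_name = 'example-compilation-job'  # hypothetical name
while True:
    job = sagemaker.describe_compilation_job(CompilationJobName=job_name)
    status = job['CompilationJobStatus']
    if status in ('COMPLETED', 'FAILED', 'STOPPED'):
        break
    time.sleep(30)  # avoid hammering the API

if status == 'COMPLETED':
    print('Compiled artifacts:', job['ModelArtifacts']['S3ModelArtifacts'])
else:
    print('Job ended with status', status, '-', job.get('FailureReason', ''))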

DescribeTransformJob (updated) Link ¶
Changes (response)
{'DataProcessing': {'InputFilter': 'string',
                    'JoinSource': 'Input | None',
                    'OutputFilter': 'string'}}

Returns information about a transform job.

See also: AWS API Documentation

Request Syntax

client.describe_transform_job(
    TransformJobName='string'
)
type TransformJobName

string

param TransformJobName

[REQUIRED]

The name of the transform job that you want to view details of.

rtype

dict

returns

Response Syntax

{
    'TransformJobName': 'string',
    'TransformJobArn': 'string',
    'TransformJobStatus': 'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
    'FailureReason': 'string',
    'ModelName': 'string',
    'MaxConcurrentTransforms': 123,
    'MaxPayloadInMB': 123,
    'BatchStrategy': 'MultiRecord'|'SingleRecord',
    'Environment': {
        'string': 'string'
    },
    'TransformInput': {
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                'S3Uri': 'string'
            }
        },
        'ContentType': 'string',
        'CompressionType': 'None'|'Gzip',
        'SplitType': 'None'|'Line'|'RecordIO'|'TFRecord'
    },
    'TransformOutput': {
        'S3OutputPath': 'string',
        'Accept': 'string',
        'AssembleWith': 'None'|'Line',
        'KmsKeyId': 'string'
    },
    'TransformResources': {
        'InstanceType': 'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge',
        'InstanceCount': 123,
        'VolumeKmsKeyId': 'string'
    },
    'CreationTime': datetime(2015, 1, 1),
    'TransformStartTime': datetime(2015, 1, 1),
    'TransformEndTime': datetime(2015, 1, 1),
    'LabelingJobArn': 'string',
    'DataProcessing': {
        'InputFilter': 'string',
        'OutputFilter': 'string',
        'JoinSource': 'Input'|'None'
    }
}

Response Structure

  • (dict) --

    • TransformJobName (string) --

      The name of the transform job.

    • TransformJobArn (string) --

      The Amazon Resource Name (ARN) of the transform job.

    • TransformJobStatus (string) --

      The status of the transform job. If the transform job failed, the reason is returned in the FailureReason field.

    • FailureReason (string) --

      If the transform job failed, FailureReason describes why it failed. A transform job creates a log file, which includes error messages, and stores it as an Amazon S3 object. For more information, see Log Amazon SageMaker Events with Amazon CloudWatch.

    • ModelName (string) --

      The name of the model used in the transform job.

    • MaxConcurrentTransforms (integer) --

      The maximum number of parallel requests that can be sent to each instance in a transform job. The default value is 1.

    • MaxPayloadInMB (integer) --

      The maximum payload size, in MB, used in the transform job.

    • BatchStrategy (string) --

      Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

      To enable the batch strategy, you must set SplitType to Line , RecordIO , or TFRecord .

    • Environment (dict) --

      The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

      • (string) --

        • (string) --

    • TransformInput (dict) --

      Describes the dataset to be transformed and the Amazon S3 location where it is stored.

      • DataSource (dict) --

        Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

        • S3DataSource (dict) --

          The S3 location of the data source that is associated with a channel.

          • S3DataType (string) --

            If you choose S3Prefix , S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.

            If you choose ManifestFile , S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.

            The following values are compatible: ManifestFile , S3Prefix

            The following value is not compatible: AugmentedManifestFile

          • S3Uri (string) --

            Depending on the value specified for the S3DataType , identifies either a key name prefix or a manifest. For example:

            • A key name prefix might look like this: s3://bucketname/exampleprefix .

            • A manifest might look like this: s3://bucketname/example.manifest The manifest is an S3 object that is a JSON file with the following format: [ {"prefix": "s3://customer_bucket/some/prefix/"}, "relative/path/to/custdata-1", "relative/path/custdata-2", ... ] The preceding JSON matches the following S3Uris : s3://customer_bucket/some/prefix/relative/path/to/custdata-1 and s3://customer_bucket/some/prefix/relative/path/custdata-2 . The complete set of S3Uris in this manifest constitutes the input data for the channel for this data source. The object that each S3Uri points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.

      • ContentType (string) --

        The Multipurpose Internet Mail Extensions (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

      • CompressionType (string) --

        If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None .

      • SplitType (string) --

        The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None , which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats.

        When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord , Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord , Amazon SageMaker sends individual records in each request.

        Note

        Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord . Padding is not removed if the value of BatchStrategy is set to MultiRecord .

        For more information about the RecordIO, see Data Format in the MXNet documentation. For more information about the TFRecord, see Consuming TFRecord data in the TensorFlow documentation.

    • TransformOutput (dict) --

      Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

      • S3OutputPath (string) --

        The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix .

        For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv , batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out . Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job, batch transform marks the job as failed to prompt investigation.

      • Accept (string) --

        The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.

      • AssembleWith (string) --

        Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None . To add a newline character at the end of every transformed record, specify Line .

      • KmsKeyId (string) --

        The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:

        • // KMS Key ID "1234abcd-12ab-34cd-56ef-1234567890ab"

        • // Amazon Resource Name (ARN) of a KMS Key "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

        • // KMS Key Alias "alias/ExampleAlias"

        • // Amazon Resource Name (ARN) of a KMS Key Alias "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"

        If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

        The KMS key policy must grant permission to the IAM role that you specify in your CreateTransformJob request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide .

    • TransformResources (dict) --

      Describes the resources, including ML instance types and ML instance count, to use for the transform job.

      • InstanceType (string) --

        The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.

      • InstanceCount (integer) --

        The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1 .

      • VolumeKmsKeyId (string) --

        The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId can be any of the following formats:

        • // KMS Key ID "1234abcd-12ab-34cd-56ef-1234567890ab"

        • // Amazon Resource Name (ARN) of a KMS Key "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

    • CreationTime (datetime) --

      A timestamp that shows when the transform job was created.

    • TransformStartTime (datetime) --

      Indicates when the transform job starts on ML instances. You are billed for the time interval between this time and the value of TransformEndTime .

    • TransformEndTime (datetime) --

      Indicates when the transform job has been completed, or has stopped or failed. You are billed for the time interval between this time and the value of TransformStartTime .

    • LabelingJobArn (string) --

      The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the transform or training job.

    • DataProcessing (dict) --

      The data structure used to combine the input data and transformed data from the batch transform output into a joined dataset and to store it in an output file. It also contains information on how to filter the input data and the joined dataset. For more information, see Batch Transform I/O Join.

      • InputFilter (string) --

        A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want Amazon SageMaker to pass the entire input dataset to the algorithm, accept the default value $ .

        Examples: "$" , "$[1:]" , "$.features"

      • OutputFilter (string) --

        A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want Amazon SageMaker to store the entire input dataset in the output file, leave the default value, $ . If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.

        Examples: "$" , "$[0,5:]" , "$.['id','SageMakerOutput']"

      • JoinSource (string) --

        Specifies the source of the data to join with the transformed data. The valid values are None and Input . The default value is None , which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input . To join input and output, the batch transform job must satisfy the Requirements for Using Batch Transform I/O Join.

        For JSON or JSONLines objects, such as a JSON array, Amazon SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput . The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, Amazon SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput .

        For CSV files, Amazon SageMaker appends the transformed data to the end of each input record and stores the result in the output file. The joined data has the original input data followed by the transformed data, and the output is a CSV file.
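
Transform jobs are likewise asynchronous. A minimal sketch that blocks until completion and then inspects the result; the job name is hypothetical, and the waiter assumes your boto3 version ships it:

import boto3

sagemaker = boto3.client('sagemaker')

job_name = 'example-transform-job'  # hypothetical name

# Assumes the transform_job_completed_or_stopped waiter is available in
# your boto3 version; otherwise poll describe_transform_job directly.
waiter = sagemaker.get_waiter('transform_job_completed_or_stopped')
waiter.wait(TransformJobName=job_name)

job = sagemaker.describe_transform_job(TransformJobName=job_name)
if job['TransformJobStatus'] == 'Completed':
    print('Output written to', job['TransformOutput']['S3OutputPath'])
else:
    print('Job ended with status', job['TransformJobStatus'], '-',
          job.get('FailureReason', ''))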

ListCompilationJobs (updated) Link ¶
Changes (response)
{'CompilationJobSummaries': {'CompilationTargetDevice': {'sbe_c'}}}

Lists model compilation jobs that satisfy various filters.

To create a model compilation job, use CreateCompilationJob. To get information about a particular model compilation job you have created, use DescribeCompilationJob.

See also: AWS API Documentation

Request Syntax

client.list_compilation_jobs(
    NextToken='string',
    MaxResults=123,
    CreationTimeAfter=datetime(2015, 1, 1),
    CreationTimeBefore=datetime(2015, 1, 1),
    LastModifiedTimeAfter=datetime(2015, 1, 1),
    LastModifiedTimeBefore=datetime(2015, 1, 1),
    NameContains='string',
    StatusEquals='INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED',
    SortBy='Name'|'CreationTime'|'Status',
    SortOrder='Ascending'|'Descending'
)
type NextToken

string

param NextToken

If the result of the previous ListCompilationJobs request was truncated, the response includes a NextToken . To retrieve the next set of model compilation jobs, use the token in the next request.

type MaxResults

integer

param MaxResults

The maximum number of model compilation jobs to return in the response.

type CreationTimeAfter

datetime

param CreationTimeAfter

A filter that returns the model compilation jobs that were created after a specified time.

type CreationTimeBefore

datetime

param CreationTimeBefore

A filter that returns the model compilation jobs that were created before a specified time.

type LastModifiedTimeAfter

datetime

param LastModifiedTimeAfter

A filter that returns the model compilation jobs that were modified after a specified time.

type LastModifiedTimeBefore

datetime

param LastModifiedTimeBefore

A filter that returns the model compilation jobs that were modified before a specified time.

type NameContains

string

param NameContains

A filter that returns the model compilation jobs whose name contains a specified string.

type StatusEquals

string

param StatusEquals

A filter that retrieves model compilation jobs with a specific DescribeCompilationJobResponse$CompilationJobStatus status.

type SortBy

string

param SortBy

The field by which to sort results. The default is CreationTime .

type SortOrder

string

param SortOrder

The sort order for results. The default is Ascending .

rtype

dict

returns

Response Syntax

{
    'CompilationJobSummaries': [
        {
            'CompilationJobName': 'string',
            'CompilationJobArn': 'string',
            'CreationTime': datetime(2015, 1, 1),
            'CompilationStartTime': datetime(2015, 1, 1),
            'CompilationEndTime': datetime(2015, 1, 1),
            'CompilationTargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'rasp3b'|'deeplens'|'rk3399'|'rk3288'|'sbe_c',
            'LastModifiedTime': datetime(2015, 1, 1),
            'CompilationJobStatus': 'INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • CompilationJobSummaries (list) --

      An array of CompilationJobSummary objects, each describing a model compilation job.

      • (dict) --

        A summary of a model compilation job.

        • CompilationJobName (string) --

          The name of the model compilation job that you want a summary for.

        • CompilationJobArn (string) --

          The Amazon Resource Name (ARN) of the model compilation job.

        • CreationTime (datetime) --

          The time when the model compilation job was created.

        • CompilationStartTime (datetime) --

          The time when the model compilation job started.

        • CompilationEndTime (datetime) --

          The time when the model compilation job completed.

        • CompilationTargetDevice (string) --

          The type of device that the model will run on after compilation has completed.

        • LastModifiedTime (datetime) --

          The time when the model compilation job was last modified.

        • CompilationJobStatus (string) --

          The status of the model compilation job.

    • NextToken (string) --

      If the response is truncated, Amazon SageMaker returns this NextToken . To retrieve the next set of model compilation jobs, use this token in the next request.
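
Because listings are truncated with NextToken, callers typically loop until the token is exhausted. A minimal pagination sketch that surfaces jobs compiled for the new sbe_c target:

import boto3

sagemaker = boto3.client('sagemaker')

kwargs = {'MaxResults': 50, 'SortBy': 'CreationTime', 'SortOrder': 'Descending'}
while True:
    page = sagemaker.list_compilation_jobs(**kwargs)
    for summary in page['CompilationJobSummaries']:
        if summary['CompilationTargetDevice'] == 'sbe_c':
            print(summary['CompilationJobName'], summary['CompilationJobStatus'])
    token = page.get('NextToken')
    if not token:
        break
    kwargs['NextToken'] = token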