Amazon SageMaker Service

2021/12/08 - Amazon SageMaker Service - 4 updated api methods

Changes  This release adds compilation support for a new Ambarella device (amba_cv2) to SageMaker Neo.

CreateCompilationJob (updated)
Changes (request)
{'OutputConfig': {'TargetDevice': {'amba_cv2'}}}

Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.

If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with Amazon Web Services IoT Greengrass. In that case, deploy them as an ML resource.

In the request body, you provide the following:

  • A name for the compilation job

  • Information about the input model artifacts

  • The output location for the compiled model and the device (target) that the model runs on

  • The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.

You can also provide a Tag to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn for the compilation job.

To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
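
For example, a minimal boto3 sketch that compiles a TensorFlow model for the new amba_cv2 device might look like the following; the job name, role ARN, S3 paths, and input shape are placeholders:

import boto3

sm = boto3.client('sagemaker')

# Minimal sketch: compile a TensorFlow model for the new amba_cv2 target.
# The job name, role ARN, S3 paths, and input shape are placeholders.
response = sm.create_compilation_job(
    CompilationJobName='my-amba-cv2-job',
    RoleArn='arn:aws:iam::111122223333:role/SageMakerNeoRole',
    InputConfig={
        'S3Uri': 's3://my-bucket/model/model.tar.gz',
        'DataInputConfig': '{"input": [1, 224, 224, 3]}',
        'Framework': 'TENSORFLOW',
    },
    OutputConfig={
        'S3OutputLocation': 's3://my-bucket/compiled/',
        'TargetDevice': 'amba_cv2',
    },
    StoppingCondition={'MaxRuntimeInSeconds': 900},
)
print(response['CompilationJobArn'])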

See also: AWS API Documentation

Request Syntax

client.create_compilation_job(
    CompilationJobName='string',
    RoleArn='string',
    ModelPackageVersionArn='string',
    InputConfig={
        'S3Uri': 'string',
        'DataInputConfig': 'string',
        'Framework': 'TENSORFLOW'|'KERAS'|'MXNET'|'ONNX'|'PYTORCH'|'XGBOOST'|'TFLITE'|'DARKNET'|'SKLEARN',
        'FrameworkVersion': 'string'
    },
    OutputConfig={
        'S3OutputLocation': 'string',
        'TargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'ml_g4dn'|'ml_inf1'|'ml_eia2'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'jetson_xavier'|'rasp3b'|'imx8qm'|'deeplens'|'rk3399'|'rk3288'|'aisage'|'sbe_c'|'qcs605'|'qcs603'|'sitara_am57x'|'amba_cv2'|'amba_cv22'|'amba_cv25'|'x86_win32'|'x86_win64'|'coreml'|'jacinto_tda4vm'|'imx8mplus',
        'TargetPlatform': {
            'Os': 'ANDROID'|'LINUX',
            'Arch': 'X86_64'|'X86'|'ARM64'|'ARM_EABI'|'ARM_EABIHF',
            'Accelerator': 'INTEL_GRAPHICS'|'MALI'|'NVIDIA'
        },
        'CompilerOptions': 'string',
        'KmsKeyId': 'string'
    },
    VpcConfig={
        'SecurityGroupIds': [
            'string',
        ],
        'Subnets': [
            'string',
        ]
    },
    StoppingCondition={
        'MaxRuntimeInSeconds': 123,
        'MaxWaitTimeInSeconds': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type CompilationJobName

string

param CompilationJobName

[REQUIRED]

A name for the model compilation job. The name must be unique within the Amazon Web Services Region and within your Amazon Web Services account.

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.

During model compilation, Amazon SageMaker needs your permission to:

  • Read input data from an S3 bucket

  • Write model artifacts to an S3 bucket

  • Write logs to Amazon CloudWatch Logs

  • Publish metrics to Amazon CloudWatch

You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole permission. For more information, see Amazon SageMaker Roles.

type ModelPackageVersionArn

string

param ModelPackageVersionArn

The Amazon Resource Name (ARN) of a versioned model package. Provide either a ModelPackageVersionArn or an InputConfig object in the request, but not both; specifying both in the CreateCompilationJob request returns an exception.

type InputConfig

dict

param InputConfig

Provides information about the location of input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

  • S3Uri (string) -- [REQUIRED]

    The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

  • DataInputConfig (string) -- [REQUIRED]

    Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific; see the serialization sketch after this parameter list.

    • TensorFlow : You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

      • Examples for one input:

        • If using the console, {"input":[1,1024,1024,3]}

        • If using the CLI, {\"input\":[1,1024,1024,3]}

      • Examples for two inputs:

        • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}

        • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}

    • KERAS : You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.

      • Examples for one input:

        • If using the console, {"input_1":[1,3,224,224]}

        • If using the CLI, {\"input_1\":[1,3,224,224]}

      • Examples for two inputs:

        • If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}

        • If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}

    • MXNET/ONNX/DARKNET : You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

      • Examples for one input:

        • If using the console, {"data":[1,3,1024,1024]}

        • If using the CLI, {\"data\":[1,3,1024,1024]}

      • Examples for two inputs:

        • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}

        • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}

    • PyTorch : You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

      • Examples for one input in dictionary format:

        • If using the console, {"input0":[1,3,224,224]}

        • If using the CLI, {\"input0\":[1,3,224,224]}

      • Example for one input in list format: [[1,3,224,224]]

      • Examples for two inputs in dictionary format:

        • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}

        • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

      • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]

    • XGBOOST : input data name and shape are not needed.

    DataInputConfig supports the following parameters for CoreML OutputConfig$TargetDevice (ML Model format):

    • shape : Input shape, for example {"input_1": {"shape": [1,224,224,3]}} . In addition to static input shapes, the CoreML converter supports flexible input shapes:

      • Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}

      • Enumerated shapes. Sometimes, the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}

    • default_shape : Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}

    • type : Input type. Allowed values: Image and Tensor . By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale .

    • bias : If the input type is an Image, you need to provide the bias vector.

    • scale : If the input type is an Image, you need to provide a scale factor.

    CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

    • Tensor type input:

      • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}

    • Tensor type input without input name (PyTorch):

      • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]

    • Image type input:

      • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}

      • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

    • Image type input without input name (PyTorch):

      • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]

      • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

    Depending on the model format, DataInputConfig requires the following parameters for the ml_eia2 OutputConfig$TargetDevice.

    • For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig . Specify the signature_def_key in OutputConfig$CompilerOptions if the model does not use TensorFlow's default signature def key. For example:

      • "DataInputConfig": {"inputs": [1, 224, 224, 3]}

      • "CompilerOptions": {"signature_def_key": "serving_custom"}

    • For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig$CompilerOptions. For example:

      • "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}

      • "CompilerOptions": {"output_names": ["output_tensor:0"]}

  • Framework (string) -- [REQUIRED]

    Identifies the framework in which the model was trained. For example: TENSORFLOW.

  • FrameworkVersion (string) --

    Specifies the framework version to use.

    This API field is only supported for PyTorch framework versions 1.4 , 1.5 , and 1.6 for cloud instance target devices: ml_c4 , ml_c5 , ml_m4 , ml_m5 , ml_p2 , ml_p3 , and ml_g4dn .
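
Because DataInputConfig is passed as a JSON string, one way to avoid the console/CLI escaping differences shown above is to build the input dictionary in Python and serialize it. A minimal sketch; the input name and shape are placeholders:

import json

# Sketch: serialize the expected input shapes into the JSON string that
# DataInputConfig expects. The input name and shape are placeholders.
data_input_config = json.dumps({'input0': [1, 3, 224, 224]})
# -> '{"input0": [1, 3, 224, 224]}'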

type OutputConfig

dict

param OutputConfig

[REQUIRED]

Provides information about the output location for the compiled model and the target device the model runs on.

  • S3OutputLocation (string) -- [REQUIRED]

    Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix .

  • TargetDevice (string) --

    Identifies the target device or the machine learning instance that you want to run your model on after the compilation has completed. You can use TargetDevice instead of specifying the OS, architecture, and accelerator with the TargetPlatform fields.

  • TargetPlatform (dict) --

    Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice ; see the sketch after this parameter list.

    The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

    • Raspberry Pi 3 Model B+ "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"}, "CompilerOptions": {'mattr': ['+neon']}

    • Jetson TX2 "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"}, "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

    • EC2 m5.2xlarge instance OS "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"}, "CompilerOptions": {'mcpu': 'skylake-avx512'}

    • RK3399 "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

    • ARMv7 phone (CPU) "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"}, "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

    • ARMv8 phone (CPU) "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"}, "CompilerOptions": {'ANDROID_PLATFORM': 29}

    • Os (string) -- [REQUIRED]

      Specifies a target platform OS.

      • LINUX : Linux-based operating systems.

      • ANDROID : Android operating systems. Android API level can be specified using the ANDROID_PLATFORM compiler option. For example, "CompilerOptions": {'ANDROID_PLATFORM': 28}

    • Arch (string) -- [REQUIRED]

      Specifies a target platform architecture.

      • X86_64 : 64-bit version of the x86 instruction set.

      • X86 : 32-bit version of the x86 instruction set.

      • ARM64 : ARMv8 64-bit CPU.

      • ARM_EABIHF : ARMv7 32-bit, Hard Float.

      • ARM_EABI : ARMv7 32-bit, Soft Float. Used by Android 32-bit ARM platform.

    • Accelerator (string) --

      Specifies a target platform accelerator (optional).

      • NVIDIA : Nvidia graphics processing unit. It also requires the gpu-code , trt-ver , and cuda-ver compiler options.

      • MALI : ARM Mali graphics processor

      • INTEL_GRAPHICS : Integrated Intel graphics

  • CompilerOptions (string) --

    Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. CompilerOptions is required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases it is optional.

    • DTYPE : Specifies the data type for the input. When compiling for ml_* (except for ml_inf ) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for the data type are:

      • float32: Use either "float" or "float32" .

      • int64: Use either "int64" or "long" .

    For example, {"dtype" : "float32"} .

    • CPU : Compilation for CPU supports the following compiler options.

      • mcpu : CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

      • mattr : CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

    • ARM : Details of ARM CPU compilations.

      • NEON : NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.

    • NVIDIA : Compilation for NVIDIA GPU supports the following compiler options.

      • gpu-code : Specifies the targeted architecture.

      • trt-ver : Specifies the TensorRT version, in x.y.z format.

      • cuda-ver : Specifies the CUDA version in x.y format.

    For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

    • ANDROID : Compilation for the Android OS supports the following compiler options:

      • ANDROID_PLATFORM : Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28} .

      • mattr : Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

    • INFERENTIA : Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"" . For information about supported compiler options, see Neuron Compiler CLI.

    • CoreML : Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

      • class_labels : Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"} . Labels inside the txt file should be separated by newlines.

    • EIA : Compilation for the Elastic Inference Accelerator supports the following compiler options:

      • precision_mode : Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32" . Default is "FP32" .

      • signature_def_key : Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

      • output_names : Specifies a list of output tensor names for models in FrozenGraph format. Set at most one of these two fields: signature_def_key or output_names .

    For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

    The KmsKeyId can be any of the following formats:

    • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

    • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    • Alias name: alias/ExampleAlias

    • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
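
As the TargetPlatform examples above suggest, CompilerOptions is passed as a JSON-formatted string rather than a dict. A minimal sketch of an OutputConfig for an ARMv8 Android phone; the bucket name is a placeholder:

import json

# Sketch: target an ARMv8 Android phone via TargetPlatform instead of a
# named TargetDevice. The bucket name is a placeholder.
output_config = {
    'S3OutputLocation': 's3://my-bucket/compiled/',
    'TargetPlatform': {
        'Os': 'ANDROID',
        'Arch': 'ARM64',
    },
    # CompilerOptions must be a JSON-formatted string, not a dict.
    'CompilerOptions': json.dumps({'ANDROID_PLATFORM': 29}),
}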

type VpcConfig

dict

param VpcConfig

A VpcConfig object that specifies the VPC that you want your compilation job to connect to. Control access to your models by configuring the VPC. For more information, see Protect Compilation Jobs by Using an Amazon Virtual Private Cloud.

  • SecurityGroupIds (list) -- [REQUIRED]

    The VPC security group IDs. IDs have the form sg-xxxxxxxx . Specify the security groups for the VPC that is specified in the Subnets field.

    • (string) --

  • Subnets (list) -- [REQUIRED]

    The IDs of the subnets in the VPC that you want to connect the compilation job to for accessing the model in Amazon S3.

    • (string) --

type StoppingCondition

dict

param StoppingCondition

[REQUIRED]

Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model compilation costs.

  • MaxRuntimeInSeconds (integer) --

    The maximum length of time, in seconds, that a training or compilation job can run.

    For compilation jobs, if the job does not complete during this time, you will receive a TimeOut error. We recommend starting with 900 seconds and increasing as necessary based on your model.

    For all other jobs, if the job does not complete during this time, Amazon SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.

  • MaxWaitTimeInSeconds (integer) --

    The maximum length of time, in seconds, that a managed Spot training job has to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the job can run. It must be equal to or greater than MaxRuntimeInSeconds . If the job does not complete during this time, Amazon SageMaker ends the job.

    When RetryStrategy is specified in the job request, MaxWaitTimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt.

type Tags

list

param Tags

An array of key-value pairs. You can use tags to categorize your Amazon Web Services resources in different ways, for example, by purpose, owner, or environment. For more information, see Tagging Amazon Web Services Resources.

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'CompilationJobArn': 'string'
}

Response Structure

  • (dict) --

    • CompilationJobArn (string) --

      If the action is successful, the service sends back an HTTP 200 response. Amazon SageMaker returns the following data in JSON format:

      • CompilationJobArn : The Amazon Resource Name (ARN) of the model compilation job.

DescribeCompilationJob (updated)
Changes (response)
{'OutputConfig': {'TargetDevice': {'amba_cv2'}}}

Returns information about a model compilation job.

To create a model compilation job, use CreateCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
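
At the time of writing, boto3 does not appear to provide a built-in waiter for compilation jobs, so a simple polling loop is a common pattern. A minimal sketch; the job name is a placeholder:

import time

import boto3

sm = boto3.client('sagemaker')

# Sketch: poll until the compilation job leaves its in-flight states.
# The job name is a placeholder.
while True:
    job = sm.describe_compilation_job(CompilationJobName='my-amba-cv2-job')
    status = job['CompilationJobStatus']
    if status in ('COMPLETED', 'FAILED', 'STOPPED'):
        break
    time.sleep(30)

if status == 'COMPLETED':
    print(job['ModelArtifacts']['S3ModelArtifacts'])
else:
    print(status, job.get('FailureReason', ''))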

See also: AWS API Documentation

Request Syntax

client.describe_compilation_job(
    CompilationJobName='string'
)
type CompilationJobName

string

param CompilationJobName

[REQUIRED]

The name of the model compilation job that you want information about.

rtype

dict

returns

Response Syntax

{
    'CompilationJobName': 'string',
    'CompilationJobArn': 'string',
    'CompilationJobStatus': 'INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED',
    'CompilationStartTime': datetime(2015, 1, 1),
    'CompilationEndTime': datetime(2015, 1, 1),
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123,
        'MaxWaitTimeInSeconds': 123
    },
    'InferenceImage': 'string',
    'ModelPackageVersionArn': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'FailureReason': 'string',
    'ModelArtifacts': {
        'S3ModelArtifacts': 'string'
    },
    'ModelDigests': {
        'ArtifactDigest': 'string'
    },
    'RoleArn': 'string',
    'InputConfig': {
        'S3Uri': 'string',
        'DataInputConfig': 'string',
        'Framework': 'TENSORFLOW'|'KERAS'|'MXNET'|'ONNX'|'PYTORCH'|'XGBOOST'|'TFLITE'|'DARKNET'|'SKLEARN',
        'FrameworkVersion': 'string'
    },
    'OutputConfig': {
        'S3OutputLocation': 'string',
        'TargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'ml_g4dn'|'ml_inf1'|'ml_eia2'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'jetson_xavier'|'rasp3b'|'imx8qm'|'deeplens'|'rk3399'|'rk3288'|'aisage'|'sbe_c'|'qcs605'|'qcs603'|'sitara_am57x'|'amba_cv2'|'amba_cv22'|'amba_cv25'|'x86_win32'|'x86_win64'|'coreml'|'jacinto_tda4vm'|'imx8mplus',
        'TargetPlatform': {
            'Os': 'ANDROID'|'LINUX',
            'Arch': 'X86_64'|'X86'|'ARM64'|'ARM_EABI'|'ARM_EABIHF',
            'Accelerator': 'INTEL_GRAPHICS'|'MALI'|'NVIDIA'
        },
        'CompilerOptions': 'string',
        'KmsKeyId': 'string'
    },
    'VpcConfig': {
        'SecurityGroupIds': [
            'string',
        ],
        'Subnets': [
            'string',
        ]
    }
}

Response Structure

  • (dict) --

    • CompilationJobName (string) --

      The name of the model compilation job.

    • CompilationJobArn (string) --

      The Amazon Resource Name (ARN) of the model compilation job.

    • CompilationJobStatus (string) --

      The status of the model compilation job.

    • CompilationStartTime (datetime) --

      The time when the model compilation job started on the compilation instances.

      You are billed for the time between this timestamp and the timestamp in the DescribeCompilationJobResponse$CompilationEndTime field. In Amazon CloudWatch Logs, the start time might be later than this time. That's because it takes time to download the compilation job container, and the download time depends on the size of the container.

    • CompilationEndTime (datetime) --

      The time when the model compilation job on a compilation job instance ended. For a successful or stopped job, this is when the job's model artifacts have finished uploading. For a failed job, this is when Amazon SageMaker detected that the job failed.

    • StoppingCondition (dict) --

      Specifies a limit to how long a model compilation job can run. When the job reaches the time limit, Amazon SageMaker ends the compilation job. Use this API to cap model compilation costs.

      • MaxRuntimeInSeconds (integer) --

        The maximum length of time, in seconds, that a training or compilation job can run.

        For compilation jobs, if the job does not complete during this time, you will receive a TimeOut error. We recommend starting with 900 seconds and increasing as necessary based on your model.

        For all other jobs, if the job does not complete during this time, Amazon SageMaker ends the job. When RetryStrategy is specified in the job request, MaxRuntimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt. The default value is 1 day. The maximum value is 28 days.

      • MaxWaitTimeInSeconds (integer) --

        The maximum length of time, in seconds, that a managed Spot training job has to complete. It is the amount of time spent waiting for Spot capacity plus the amount of time the job can run. It must be equal to or greater than MaxRuntimeInSeconds . If the job does not complete during this time, Amazon SageMaker ends the job.

        When RetryStrategy is specified in the job request, MaxWaitTimeInSeconds specifies the maximum time for all of the attempts in total, not each individual attempt.

    • InferenceImage (string) --

      The inference image to use when compiling a model. Specify an image only if the target device is a cloud instance.

    • ModelPackageVersionArn (string) --

      The Amazon Resource Name (ARN) of the versioned model package that was provided to SageMaker Neo when you initiated a compilation job.

    • CreationTime (datetime) --

      The time that the model compilation job was created.

    • LastModifiedTime (datetime) --

      The time that the status of the model compilation job was last modified.

    • FailureReason (string) --

      If a model compilation job failed, the reason it failed.

    • ModelArtifacts (dict) --

      Information about the location in Amazon S3 that has been configured for storing the model artifacts used in the compilation job.

      • S3ModelArtifacts (string) --

        The path of the S3 object that contains the model artifacts. For example, s3://bucket-name/keynameprefix/model.tar.gz .

    • ModelDigests (dict) --

      Provides a BLAKE2 hash value that identifies the compiled model artifacts in Amazon S3.

      • ArtifactDigest (string) --

        Provides a hash value that uniquely identifies the stored model artifacts.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker assumes to perform the model compilation job.

    • InputConfig (dict) --

      Information about the location in Amazon S3 of the input model artifacts, the name and shape of the expected data inputs, and the framework in which the model was trained.

      • S3Uri (string) --

        The S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

      • DataInputConfig (string) --

        Specifies the name and shape of the expected data inputs for your trained model, in JSON dictionary form. The data inputs are InputConfig$Framework specific.

        • TensorFlow : You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

          • Examples for one input:

            • If using the console, {"input":[1,1024,1024,3]}

            • If using the CLI, {\"input\":[1,1024,1024,3]}

          • Examples for two inputs:

            • If using the console, {"data1": [1,28,28,1], "data2":[1,28,28,1]}

            • If using the CLI, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}

        • KERAS : You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.

          • Examples for one input:

            • If using the console, {"input_1":[1,3,224,224]}

            • If using the CLI, {\"input_1\":[1,3,224,224]}

          • Examples for two inputs:

            • If using the console, {"input_1": [1,3,224,224], "input_2":[1,3,224,224]}

            • If using the CLI, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}

        • MXNET/ONNX/DARKNET : You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.

          • Examples for one input:

            • If using the console, {"data":[1,3,1024,1024]}

            • If using the CLI, {\"data\":[1,3,1024,1024]}

          • Examples for two inputs:

            • If using the console, {"var1": [1,1,28,28], "var2":[1,1,28,28]}

            • If using the CLI, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}

        • PyTorch : You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.

          • Examples for one input in dictionary format:

            • If using the console, {"input0":[1,3,224,224]}

            • If using the CLI, {\"input0\":[1,3,224,224]}

          • Example for one input in list format: [[1,3,224,224]]

          • Examples for two inputs in dictionary format:

            • If using the console, {"input0":[1,3,224,224], "input1":[1,3,224,224]}

            • If using the CLI, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}

          • Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]

        • XGBOOST : input data name and shape are not needed.

        DataInputConfig supports the following parameters for CoreML OutputConfig$TargetDevice (ML Model format):

        • shape : Input shape, for example {"input_1": {"shape": [1,224,224,3]}} . In addition to static input shapes, the CoreML converter supports flexible input shapes:

          • Range Dimension. You can use the Range Dimension feature if you know the input shape will be within some specific interval in that dimension, for example: {"input_1": {"shape": ["1..10", 224, 224, 3]}}

          • Enumerated shapes. Sometimes, the models are trained to work only on a select set of inputs. You can enumerate all supported input shapes, for example: {"input_1": {"shape": [[1, 224, 224, 3], [1, 160, 160, 3]]}}

        • default_shape : Default input shape. You can set a default shape during conversion for both Range Dimension and Enumerated Shapes. For example {"input_1": {"shape": ["1..10", 224, 224, 3], "default_shape": [1, 224, 224, 3]}}

        • type : Input type. Allowed values: Image and Tensor . By default, the converter generates an ML Model with inputs of type Tensor (MultiArray). You can set the input type to Image. The Image input type requires additional input parameters such as bias and scale .

        • bias : If the input type is an Image, you need to provide the bias vector.

        • scale : If the input type is an Image, you need to provide a scale factor.

        CoreML ClassifierConfig parameters can be specified using OutputConfig$CompilerOptions. The CoreML converter supports TensorFlow and PyTorch models. CoreML conversion examples:

        • Tensor type input:

          • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3]}}

        • Tensor type input without input name (PyTorch):

          • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224]}]

        • Image type input:

          • "DataInputConfig": {"input_1": {"shape": [[1,224,224,3], [1,160,160,3]], "default_shape": [1,224,224,3], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}}

          • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

        • Image type input without input name (PyTorch):

          • "DataInputConfig": [{"shape": [[1,3,224,224], [1,3,160,160]], "default_shape": [1,3,224,224], "type": "Image", "bias": [-1,-1,-1], "scale": 0.007843137255}]

          • "CompilerOptions": {"class_labels": "imagenet_labels_1000.txt"}

        Depending on the model format, DataInputConfig requires the following parameters for the ml_eia2 OutputConfig$TargetDevice.

        • For TensorFlow models saved in the SavedModel format, specify the input names from signature_def_key and the input model shapes for DataInputConfig . Specify the signature_def_key in OutputConfig$CompilerOptions if the model does not use TensorFlow's default signature def key. For example:

          • "DataInputConfig": {"inputs": [1, 224, 224, 3]}

          • "CompilerOptions": {"signature_def_key": "serving_custom"}

        • For TensorFlow models saved as a frozen graph, specify the input tensor names and shapes in DataInputConfig and the output tensor names for output_names in OutputConfig$CompilerOptions. For example:

          • "DataInputConfig": {"input_tensor:0": [1, 224, 224, 3]}

          • "CompilerOptions": {"output_names": ["output_tensor:0"]}

      • Framework (string) --

        Identifies the framework in which the model was trained. For example: TENSORFLOW.

      • FrameworkVersion (string) --

        Specifies the framework version to use.

        This API field is only supported for PyTorch framework versions 1.4 , 1.5 , and 1.6 for cloud instance target devices: ml_c4 , ml_c5 , ml_m4 , ml_m5 , ml_p2 , ml_p3 , and ml_g4dn .

    • OutputConfig (dict) --

      Information about the output location for the compiled model and the target device that the model runs on.

      • S3OutputLocation (string) --

        Identifies the S3 bucket where you want Amazon SageMaker to store the model artifacts. For example, s3://bucket-name/key-name-prefix .

      • TargetDevice (string) --

        Identifies the target device or the machine learning instance that you want to run your model on after the compilation has completed. You can use TargetDevice instead of specifying the OS, architecture, and accelerator with the TargetPlatform fields.

      • TargetPlatform (dict) --

        Contains information about a target platform that you want your model to run on, such as OS, architecture, and accelerators. It is an alternative to TargetDevice .

        The following examples show how to configure the TargetPlatform and CompilerOptions JSON strings for popular target platforms:

        • Raspberry Pi 3 Model B+ "TargetPlatform": {"Os": "LINUX", "Arch": "ARM_EABIHF"}, "CompilerOptions": {'mattr': ['+neon']}

        • Jetson TX2 "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "NVIDIA"}, "CompilerOptions": {'gpu-code': 'sm_62', 'trt-ver': '6.0.1', 'cuda-ver': '10.0'}

        • EC2 m5.2xlarge instance OS "TargetPlatform": {"Os": "LINUX", "Arch": "X86_64", "Accelerator": "NVIDIA"}, "CompilerOptions": {'mcpu': 'skylake-avx512'}

        • RK3399 "TargetPlatform": {"Os": "LINUX", "Arch": "ARM64", "Accelerator": "MALI"}

        • ARMv7 phone (CPU) "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM_EABI"}, "CompilerOptions": {'ANDROID_PLATFORM': 25, 'mattr': ['+neon']}

        • ARMv8 phone (CPU) "TargetPlatform": {"Os": "ANDROID", "Arch": "ARM64"}, "CompilerOptions": {'ANDROID_PLATFORM': 29}

        • Os (string) --

          Specifies a target platform OS.

          • LINUX : Linux-based operating systems.

          • ANDROID : Android operating systems. Android API level can be specified using the ANDROID_PLATFORM compiler option. For example, "CompilerOptions": {'ANDROID_PLATFORM': 28}

        • Arch (string) --

          Specifies a target platform architecture.

          • X86_64 : 64-bit version of the x86 instruction set.

          • X86 : 32-bit version of the x86 instruction set.

          • ARM64 : ARMv8 64-bit CPU.

          • ARM_EABIHF : ARMv7 32-bit, Hard Float.

          • ARM_EABI : ARMv7 32-bit, Soft Float. Used by Android 32-bit ARM platform.

        • Accelerator (string) --

          Specifies a target platform accelerator (optional).

          • NVIDIA : Nvidia graphics processing unit. It also requires the gpu-code , trt-ver , and cuda-ver compiler options.

          • MALI : ARM Mali graphics processor

          • INTEL_GRAPHICS : Integrated Intel graphics

      • CompilerOptions (string) --

        Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform specific. CompilerOptions is required for NVIDIA accelerators and highly recommended for CPU compilations; in all other cases it is optional.

        • DTYPE : Specifies the data type for the input. When compiling for ml_* (except for ml_inf ) instances using the PyTorch framework, provide the data type (dtype) of the model's input. "float32" is used if "DTYPE" is not specified. Options for the data type are:

          • float32: Use either "float" or "float32" .

          • int64: Use either "int64" or "long" .

        For example, {"dtype" : "float32"} .

        • CPU : Compilation for CPU supports the following compiler options.

          • mcpu : CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}

          • mattr : CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}

        • ARM : Details of ARM CPU compilations.

          • NEON : NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors. For example, add {'mattr': ['+neon']} to the compiler options if compiling for an ARM 32-bit platform with NEON support.

        • NVIDIA : Compilation for NVIDIA GPU supports the following compiler options.

          • gpu-code : Specifies the targeted architecture.

          • trt-ver : Specifies the TensorRT version, in x.y.z format.

          • cuda-ver : Specifies the CUDA version in x.y format.

        For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}

        • ANDROID : Compilation for the Android OS supports the following compiler options:

          • ANDROID_PLATFORM : Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28} .

          • mattr : Add {'mattr': ['+neon']} to compiler options if compiling for ARM 32-bit platform with NEON support.

        • INFERENTIA : Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, "CompilerOptions": "\"--verbose 1 --num-neuroncores 2 -O2\"" . For information about supported compiler options, see Neuron Compiler CLI.

        • CoreML : Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:

          • class_labels : Specifies the classification labels file name inside the input tar.gz file. For example, {"class_labels": "imagenet_labels_1000.txt"} . Labels inside the txt file should be separated by newlines.

        • EIA : Compilation for the Elastic Inference Accelerator supports the following compiler options:

          • precision_mode : Specifies the precision of compiled artifacts. Supported values are "FP16" and "FP32" . Default is "FP32" .

          • signature_def_key : Specifies the signature to use for models in SavedModel format. The default is TensorFlow's default signature def key.

          • output_names : Specifies a list of output tensor names for models in FrozenGraph format. Set at most one of these two fields: signature_def_key or output_names .

        For example: {"precision_mode": "FP32", "output_names": ["output:0"]}

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt your output models with Amazon S3 server-side encryption after the compilation job. If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

        The KmsKeyId can be any of the following formats:

        • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

        • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

        • Alias name: alias/ExampleAlias

        • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

    • VpcConfig (dict) --

      A VpcConfig object that specifies the VPC that you want your compilation job to connect to. Control access to your models by configuring the VPC. For more information, see Protect Compilation Jobs by Using an Amazon Virtual Private Cloud.

      • SecurityGroupIds (list) --

        The VPC security group IDs. IDs have the form sg-xxxxxxxx . Specify the security groups for the VPC that is specified in the Subnets field.

        • (string) --

      • Subnets (list) --

        The IDs of the subnets in the VPC that you want to connect the compilation job to for accessing the model in Amazon S3.

        • (string) --

ListCompilationJobs (updated)
Changes (response)
{'CompilationJobSummaries': {'CompilationTargetDevice': {'amba_cv2'}}}

Lists model compilation jobs that satisfy various filters.

To create a model compilation job, use CreateCompilationJob. To get information about a particular model compilation job you have created, use DescribeCompilationJob.
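
A minimal sketch that pages through results, assuming your botocore version registers a paginator for ListCompilationJobs; the filter values are illustrative:

from datetime import datetime

import boto3

sm = boto3.client('sagemaker')

# Sketch: page through failed compilation jobs created after a given date.
# Assumes botocore registers a paginator for ListCompilationJobs.
paginator = sm.get_paginator('list_compilation_jobs')
for page in paginator.paginate(
    CreationTimeAfter=datetime(2021, 1, 1),
    StatusEquals='FAILED',
    SortBy='CreationTime',
    SortOrder='Descending',
):
    for summary in page['CompilationJobSummaries']:
        # CompilationTargetDevice is absent when TargetPlatform was used.
        print(summary['CompilationJobName'], summary.get('CompilationTargetDevice'))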

See also: AWS API Documentation

Request Syntax

client.list_compilation_jobs(
    NextToken='string',
    MaxResults=123,
    CreationTimeAfter=datetime(2015, 1, 1),
    CreationTimeBefore=datetime(2015, 1, 1),
    LastModifiedTimeAfter=datetime(2015, 1, 1),
    LastModifiedTimeBefore=datetime(2015, 1, 1),
    NameContains='string',
    StatusEquals='INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED',
    SortBy='Name'|'CreationTime'|'Status',
    SortOrder='Ascending'|'Descending'
)
type NextToken

string

param NextToken

If the result of the previous ListCompilationJobs request was truncated, the response includes a NextToken . To retrieve the next set of model compilation jobs, use the token in the next request.

type MaxResults

integer

param MaxResults

The maximum number of model compilation jobs to return in the response.

type CreationTimeAfter

datetime

param CreationTimeAfter

A filter that returns the model compilation jobs that were created after a specified time.

type CreationTimeBefore

datetime

param CreationTimeBefore

A filter that returns the model compilation jobs that were created before a specified time.

type LastModifiedTimeAfter

datetime

param LastModifiedTimeAfter

A filter that returns the model compilation jobs that were modified after a specified time.

type LastModifiedTimeBefore

datetime

param LastModifiedTimeBefore

A filter that returns the model compilation jobs that were modified before a specified time.

type NameContains

string

param NameContains

A filter that returns the model compilation jobs whose name contains a specified string.

type StatusEquals

string

param StatusEquals

A filter that retrieves model compilation jobs with a specific DescribeCompilationJobResponse$CompilationJobStatus status.

type SortBy

string

param SortBy

The field by which to sort results. The default is CreationTime .

type SortOrder

string

param SortOrder

The sort order for results. The default is Ascending .

rtype

dict

returns

Response Syntax

{
    'CompilationJobSummaries': [
        {
            'CompilationJobName': 'string',
            'CompilationJobArn': 'string',
            'CreationTime': datetime(2015, 1, 1),
            'CompilationStartTime': datetime(2015, 1, 1),
            'CompilationEndTime': datetime(2015, 1, 1),
            'CompilationTargetDevice': 'lambda'|'ml_m4'|'ml_m5'|'ml_c4'|'ml_c5'|'ml_p2'|'ml_p3'|'ml_g4dn'|'ml_inf1'|'ml_eia2'|'jetson_tx1'|'jetson_tx2'|'jetson_nano'|'jetson_xavier'|'rasp3b'|'imx8qm'|'deeplens'|'rk3399'|'rk3288'|'aisage'|'sbe_c'|'qcs605'|'qcs603'|'sitara_am57x'|'amba_cv2'|'amba_cv22'|'amba_cv25'|'x86_win32'|'x86_win64'|'coreml'|'jacinto_tda4vm'|'imx8mplus',
            'CompilationTargetPlatformOs': 'ANDROID'|'LINUX',
            'CompilationTargetPlatformArch': 'X86_64'|'X86'|'ARM64'|'ARM_EABI'|'ARM_EABIHF',
            'CompilationTargetPlatformAccelerator': 'INTEL_GRAPHICS'|'MALI'|'NVIDIA',
            'LastModifiedTime': datetime(2015, 1, 1),
            'CompilationJobStatus': 'INPROGRESS'|'COMPLETED'|'FAILED'|'STARTING'|'STOPPING'|'STOPPED'
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • CompilationJobSummaries (list) --

      An array of CompilationJobSummary objects, each describing a model compilation job.

      • (dict) --

        A summary of a model compilation job.

        • CompilationJobName (string) --

          The name of the model compilation job that you want a summary for.

        • CompilationJobArn (string) --

          The Amazon Resource Name (ARN) of the model compilation job.

        • CreationTime (datetime) --

          The time when the model compilation job was created.

        • CompilationStartTime (datetime) --

          The time when the model compilation job started.

        • CompilationEndTime (datetime) --

          The time when the model compilation job completed.

        • CompilationTargetDevice (string) --

          The type of device that the model will run on after the compilation job has completed.

        • CompilationTargetPlatformOs (string) --

          The type of OS that the model will run on after the compilation job has completed.

        • CompilationTargetPlatformArch (string) --

          The type of architecture that the model will run on after the compilation job has completed.

        • CompilationTargetPlatformAccelerator (string) --

          The type of accelerator that the model will run on after the compilation job has completed.

        • LastModifiedTime (datetime) --

          The time when the model compilation job was last modified.

        • CompilationJobStatus (string) --

          The status of the model compilation job.

    • NextToken (string) --

      If the response is truncated, Amazon SageMaker returns this NextToken . To retrieve the next set of model compilation jobs, use this token in the next request.

ListPipelineExecutionSteps (updated)
Changes (response)
{'PipelineExecutionSteps': {'AttemptCount': 'integer'}}

Gets a list of PipelineExecutionStep objects.
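
A minimal sketch that reads the new AttemptCount field from each step summary; the pipeline execution ARN is a placeholder, and AttemptCount is read defensively because older responses may omit it:

import boto3

sm = boto3.client('sagemaker')

# Sketch: list the steps of one pipeline execution and print the new
# AttemptCount field. The pipeline execution ARN is a placeholder.
resp = sm.list_pipeline_execution_steps(
    PipelineExecutionArn='arn:aws:sagemaker:us-west-2:111122223333:'
                         'pipeline/my-pipeline/execution/abc123',
    SortOrder='Descending',
)
for step in resp['PipelineExecutionSteps']:
    print(step['StepName'], step['StepStatus'], step.get('AttemptCount'))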

See also: AWS API Documentation

Request Syntax

client.list_pipeline_execution_steps(
    PipelineExecutionArn='string',
    NextToken='string',
    MaxResults=123,
    SortOrder='Ascending'|'Descending'
)
type PipelineExecutionArn

string

param PipelineExecutionArn

The Amazon Resource Name (ARN) of the pipeline execution.

type NextToken

string

param NextToken

If the result of the previous ListPipelineExecutionSteps request was truncated, the response includes a NextToken . To retrieve the next set of pipeline execution steps, use the token in the next request.

type MaxResults

integer

param MaxResults

The maximum number of pipeline execution steps to return in the response.

type SortOrder

string

param SortOrder

The sort order for results. The default sort field is CreatedTime .

rtype

dict

returns

Response Syntax

{
    'PipelineExecutionSteps': [
        {
            'StepName': 'string',
            'StartTime': datetime(2015, 1, 1),
            'EndTime': datetime(2015, 1, 1),
            'StepStatus': 'Starting'|'Executing'|'Stopping'|'Stopped'|'Failed'|'Succeeded',
            'CacheHitResult': {
                'SourcePipelineExecutionArn': 'string'
            },
            'AttemptCount': 123,
            'FailureReason': 'string',
            'Metadata': {
                'TrainingJob': {
                    'Arn': 'string'
                },
                'ProcessingJob': {
                    'Arn': 'string'
                },
                'TransformJob': {
                    'Arn': 'string'
                },
                'TuningJob': {
                    'Arn': 'string'
                },
                'Model': {
                    'Arn': 'string'
                },
                'RegisterModel': {
                    'Arn': 'string'
                },
                'Condition': {
                    'Outcome': 'True'|'False'
                },
                'Callback': {
                    'CallbackToken': 'string',
                    'SqsQueueUrl': 'string',
                    'OutputParameters': [
                        {
                            'Name': 'string',
                            'Value': 'string'
                        },
                    ]
                },
                'Lambda': {
                    'Arn': 'string',
                    'OutputParameters': [
                        {
                            'Name': 'string',
                            'Value': 'string'
                        },
                    ]
                },
                'QualityCheck': {
                    'CheckType': 'string',
                    'BaselineUsedForDriftCheckStatistics': 'string',
                    'BaselineUsedForDriftCheckConstraints': 'string',
                    'CalculatedBaselineStatistics': 'string',
                    'CalculatedBaselineConstraints': 'string',
                    'ModelPackageGroupName': 'string',
                    'ViolationReport': 'string',
                    'CheckJobArn': 'string',
                    'SkipCheck': True|False,
                    'RegisterNewBaseline': True|False
                },
                'ClarifyCheck': {
                    'CheckType': 'string',
                    'BaselineUsedForDriftCheckConstraints': 'string',
                    'CalculatedBaselineConstraints': 'string',
                    'ModelPackageGroupName': 'string',
                    'ViolationReport': 'string',
                    'CheckJobArn': 'string',
                    'SkipCheck': True|False,
                    'RegisterNewBaseline': True|False
                }
            }
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • PipelineExecutionSteps (list) --

      A list of PipelineExecutionStep objects. Each PipelineExecutionStep consists of StepName, StartTime, EndTime, StepStatus, and Metadata. Metadata is an object with properties for each job that contains relevant information about the job created by the step.

      • (dict) --

        An execution of a step in a pipeline.

        • StepName (string) --

          The name of the step that is executed.

        • StartTime (datetime) --

          The time that the step started executing.

        • EndTime (datetime) --

          The time that the step stopped executing.

        • StepStatus (string) --

          The status of the step execution.

        • CacheHitResult (dict) --

          If this pipeline execution step was cached, details on the cache hit.

          • SourcePipelineExecutionArn (string) --

            The Amazon Resource Name (ARN) of the pipeline execution.

        • AttemptCount (integer) --

          The current attempt count of this step execution.

        • FailureReason (string) --

          The reason why the step failed execution. This is only returned if the step failed its execution.

        • Metadata (dict) --

          Metadata for the step execution.

          • TrainingJob (dict) --

            The Amazon Resource Name (ARN) of the training job that was run by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the training job that was run by this step execution.

          • ProcessingJob (dict) --

            The Amazon Resource Name (ARN) of the processing job that was run by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the processing job.

          • TransformJob (dict) --

            The Amazon Resource Name (ARN) of the transform job that was run by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the transform job that was run by this step execution.

          • TuningJob (dict) --

            The Amazon Resource Name (ARN) of the tuning job that was run by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the tuning job that was run by this step execution.

          • Model (dict) --

            The Amazon Resource Name (ARN) of the model that was created by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the created model.

          • RegisterModel (dict) --

            The Amazon Resource Name (ARN) of the model package the model was registered to by this step execution.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the model package.

          • Condition (dict) --

            The outcome of the condition evaluation that was run by this step execution.

            • Outcome (string) --

              The outcome of the Condition step evaluation.

          • Callback (dict) --

            The URL of the Amazon SQS queue used by this step execution, the pipeline-generated token, and a list of output parameters.

            • CallbackToken (string) --

              The pipeline-generated token from the Amazon SQS queue.

            • SqsQueueUrl (string) --

              The URL of the Amazon Simple Queue Service (Amazon SQS) queue used by the callback step.

            • OutputParameters (list) --

              A list of the output parameters of the callback step.

              • (dict) --

                An output parameter of a pipeline step.

                • Name (string) --

                  The name of the output parameter.

                • Value (string) --

                  The value of the output parameter.

          • Lambda (dict) --

            The Amazon Resource Name (ARN) of the Lambda function that was run by this step execution and a list of output parameters.

            • Arn (string) --

              The Amazon Resource Name (ARN) of the Lambda function that was run by this step execution.

            • OutputParameters (list) --

              A list of the output parameters of the Lambda step.

              • (dict) --

                An output parameter of a pipeline step.

                • Name (string) --

                  The name of the output parameter.

                • Value (string) --

                  The value of the output parameter.

          • QualityCheck (dict) --

            The configurations and outcomes of the check step execution. This includes:

            • The type of the check conducted.

            • The Amazon S3 URIs of baseline constraints and statistics files to be used for the drift check.

            • The Amazon S3 URIs of newly calculated baseline constraints and statistics.

            • The model package group name provided.

            • The Amazon S3 URI of the violation report if violations are detected.

            • The Amazon Resource Name (ARN) of the check processing job initiated by the step execution.

            • The Boolean flags indicating if the drift check is skipped.

            • Whether the step property BaselineUsedForDriftCheck is set to the same value as CalculatedBaseline .

            • CheckType (string) --

              The type of the Quality check step.

            • BaselineUsedForDriftCheckStatistics (string) --

              The Amazon S3 URI of the baseline statistics file used for the drift check.

            • BaselineUsedForDriftCheckConstraints (string) --

              The Amazon S3 URI of the baseline constraints file used for the drift check.

            • CalculatedBaselineStatistics (string) --

              The Amazon S3 URI of the newly calculated baseline statistics file.

            • CalculatedBaselineConstraints (string) --

              The Amazon S3 URI of the newly calculated baseline constraints file.

            • ModelPackageGroupName (string) --

              The model package group name.

            • ViolationReport (string) --

              The Amazon S3 URI of the violation report if violations are detected.

            • CheckJobArn (string) --

              The Amazon Resource Name (ARN) of the Quality check processing job that was run by this step execution.

            • SkipCheck (boolean) --

              This flag indicates if the drift check against the previous baseline will be skipped or not. If it is set to False , the previous baseline of the configured check type must be available.

            • RegisterNewBaseline (boolean) --

              This flag indicates if a newly calculated baseline can be accessed through step properties BaselineUsedForDriftCheckConstraints and BaselineUsedForDriftCheckStatistics . If it is set to False , the previous baseline of the configured check type must also be available. These can be accessed through the BaselineUsedForDriftCheckConstraints and BaselineUsedForDriftCheckStatistics properties.

          • ClarifyCheck (dict) --

            Container for the metadata for a Clarify check step. This includes the configurations and outcomes of the check step execution:

            • The type of the check conducted.

            • The Amazon S3 URIs of baseline constraints and statistics files to be used for the drift check.

            • The Amazon S3 URIs of newly calculated baseline constraints and statistics.

            • The model package group name provided.

            • The Amazon S3 URI of the violation report if violations are detected.

            • The Amazon Resource Name (ARN) of the check processing job initiated by the step execution.

            • The Boolean flags indicating if the drift check is skipped.

            • Whether the step property BaselineUsedForDriftCheck is set to the same value as CalculatedBaseline .

            • CheckType (string) --

              The type of the Clarify check step.

            • BaselineUsedForDriftCheckConstraints (string) --

              The Amazon S3 URI of the baseline constraints file to be used for the drift check.

            • CalculatedBaselineConstraints (string) --

              The Amazon S3 URI of the newly calculated baseline constraints file.

            • ModelPackageGroupName (string) --

              The model package group name.

            • ViolationReport (string) --

              The Amazon S3 URI of the violation report if violations are detected.

            • CheckJobArn (string) --

              The Amazon Resource Name (ARN) of the check processing job that was run by this step's execution.

            • SkipCheck (boolean) --

              This flag indicates if the drift check against the previous baseline will be skipped or not. If it is set to False , the previous baseline of the configured check type must be available.

            • RegisterNewBaseline (boolean) --

              This flag indicates if a newly calculated baseline can be accessed through step properties BaselineUsedForDriftCheckConstraints and BaselineUsedForDriftCheckStatistics . If it is set to False , the previous baseline of the configured check type must also be available. These can be accessed through the BaselineUsedForDriftCheckConstraints property.

    • NextToken (string) --

      If the result of the previous ListPipelineExecutionSteps request was truncated, the response includes a NextToken . To retrieve the next set of pipeline execution steps, use the token in the next request.