Amazon SageMaker Service

2022/10/18 - Amazon SageMaker Service - 14 updated api methods

Changes  This change allows customers to enable data capture while running a batch transform job, and to configure a monitoring schedule to monitor the captured data.

CreateDataQualityJobDefinition (updated) Link ¶
Changes (request)
{'DataQualityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                 'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                   'Json': {'Line': 'boolean'},
                                                                   'Parquet': {}},
                                                 'EndTimeOffset': 'string',
                                                 'FeaturesAttribute': 'string',
                                                 'InferenceAttribute': 'string',
                                                 'LocalPath': 'string',
                                                 'ProbabilityAttribute': 'string',
                                                 'ProbabilityThresholdAttribute': 'double',
                                                 'S3DataDistributionType': 'FullyReplicated '
                                                                           '| '
                                                                           'ShardedByS3Key',
                                                 'S3InputMode': 'Pipe | File',
                                                 'StartTimeOffset': 'string'}}}

Creates a definition for a job that monitors data quality and drift. For information about model monitoring, see Amazon SageMaker Model Monitor.

See also: AWS API Documentation

Request Syntax

client.create_data_quality_job_definition(
    JobDefinitionName='string',
    DataQualityBaselineConfig={
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        },
        'StatisticsResource': {
            'S3Uri': 'string'
        }
    },
    DataQualityAppSpecification={
        'ImageUri': 'string',
        'ContainerEntrypoint': [
            'string',
        ],
        'ContainerArguments': [
            'string',
        ],
        'RecordPreprocessorSourceUri': 'string',
        'PostAnalyticsProcessorSourceUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    DataQualityJobInput={
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        }
    },
    DataQualityJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    NetworkConfig={
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    RoleArn='string',
    StoppingCondition={
        'MaxRuntimeInSeconds': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name for the monitoring job definition.

type DataQualityBaselineConfig

dict

param DataQualityBaselineConfig

Configures the constraints and baselines for the monitoring job.

  • BaseliningJobName (string) --

    The name of the job that performs baselining for the data quality monitoring job.

  • ConstraintsResource (dict) --

    The constraints resource for a monitoring job.

    • S3Uri (string) --

      The Amazon S3 URI for the constraints resource.

  • StatisticsResource (dict) --

    The statistics resource for a monitoring job.

    • S3Uri (string) --

      The Amazon S3 URI for the statistics resource.

type DataQualityAppSpecification

dict

param DataQualityAppSpecification

[REQUIRED]

Specifies the container that runs the monitoring job.

  • ImageUri (string) -- [REQUIRED]

    The container image that the data quality monitoring job runs.

  • ContainerEntrypoint (list) --

    The entrypoint for a container used to run a monitoring job.

    • (string) --

  • ContainerArguments (list) --

    The arguments to send to the container that the monitoring job runs.

    • (string) --

  • RecordPreprocessorSourceUri (string) --

    An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flatted json so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

  • PostAnalyticsProcessorSourceUri (string) --

    An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

  • Environment (dict) --

    Sets the environment variables in the container that the monitoring job runs.

    • (string) --

      • (string) --

type DataQualityJobInput

dict

param DataQualityJobInput

[REQUIRED]

The inputs for the monitoring job. Currently, endpoints and batch transform jobs are supported as monitoring inputs.

  • EndpointInput (dict) --

    Input object for the endpoint

    • EndpointName (string) -- [REQUIRED]

An endpoint in the customer's account that has DataCaptureConfig enabled.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the endpoint data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • BatchTransformInput (dict) --

    Input object for the batch transform job.

    • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

      The Amazon S3 location being used to capture the data.

    • DatasetFormat (dict) -- [REQUIRED]

      The dataset format for your batch transform job.

      • Csv (dict) --

        The CSV dataset used in the monitoring job.

        • Header (boolean) --

          Indicates if the CSV data has a header.

      • Json (dict) --

        The JSON dataset used in the monitoring job.

        • Line (boolean) --

          Indicates if the file should be read as a JSON object per line.

      • Parquet (dict) --

        The Parquet dataset used in the monitoring job.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the batch transform data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
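The DatasetFormat member of BatchTransformInput accepts one of three shapes. A minimal sketch of each variant as a plain request fragment, assuming exactly one is supplied per job definition:

```python
# Sketch: the three DatasetFormat variants for BatchTransformInput.
# These are plain request fragments matching the shapes above, not API calls.

csv_format = {'Csv': {'Header': True}}     # CSV whose first row is a header
jsonl_format = {'Json': {'Line': True}}    # one JSON object per line (JSON Lines)
parquet_format = {'Parquet': {}}           # Parquet needs no further options
```

Each of these would be embedded as the DatasetFormat value inside BatchTransformInput.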

type DataQualityJobOutputConfig

dict

param DataQualityJobOutputConfig

[REQUIRED]

The output configuration for monitoring jobs.

  • MonitoringOutputs (list) -- [REQUIRED]

    Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

    • (dict) --

      The output object for a monitoring job.

      • S3Output (dict) -- [REQUIRED]

        The Amazon S3 storage location where the results of a monitoring job are saved.

        • S3Uri (string) -- [REQUIRED]

          A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

        • LocalPath (string) -- [REQUIRED]

          The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

        • S3UploadMode (string) --

          Whether to upload the results of the monitoring job continuously or after the job completes.

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

type JobResources

dict

param JobResources

[REQUIRED]

Identifies the resources to deploy for a monitoring job.

  • ClusterConfig (dict) -- [REQUIRED]

    The configuration for the cluster resources used to run the processing job.

    • InstanceCount (integer) -- [REQUIRED]

      The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

    • InstanceType (string) -- [REQUIRED]

      The ML compute instance type for the processing job.

    • VolumeSizeInGB (integer) -- [REQUIRED]

      The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

    • VolumeKmsKeyId (string) --

      The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

type NetworkConfig

dict

param NetworkConfig

Specifies networking configuration for the monitoring job.

  • EnableInterContainerTrafficEncryption (boolean) --

    Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

  • EnableNetworkIsolation (boolean) --

    Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

  • VpcConfig (dict) --

    Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

    • SecurityGroupIds (list) -- [REQUIRED]

      The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

      • (string) --

    • Subnets (list) -- [REQUIRED]

The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

      • (string) --

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

type StoppingCondition

dict

param StoppingCondition

A time limit for how long the monitoring job is allowed to run before stopping.

  • MaxRuntimeInSeconds (integer) -- [REQUIRED]

    The maximum runtime allowed in seconds.

    Note

    The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string'
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the job definition.
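Putting the new input shape together, here is a minimal sketch of a create_data_quality_job_definition request that monitors captured batch transform data. Every concrete name below (bucket, image URI, role ARN) is a hypothetical placeholder, and the client call itself is left commented out:

```python
# Minimal sketch: data quality monitoring over captured batch transform data.
# All concrete names (bucket, image URI, role ARN) are hypothetical placeholders.
request = {
    'JobDefinitionName': 'transform-data-quality-example',
    'DataQualityAppSpecification': {
        # The Model Monitor analyzer image URI varies by Region; placeholder here.
        'ImageUri': '123456789012.dkr.ecr.us-east-1.amazonaws.com/monitor-analyzer:latest',
    },
    'DataQualityJobInput': {
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 's3://example-bucket/transform-capture',
            'DatasetFormat': {'Csv': {'Header': True}},
            'LocalPath': '/opt/ml/processing/input',
        }
    },
    'DataQualityJobOutputConfig': {
        'MonitoringOutputs': [{
            'S3Output': {
                'S3Uri': 's3://example-bucket/monitoring-results',
                'LocalPath': '/opt/ml/processing/output',
                'S3UploadMode': 'EndOfJob',
            }
        }]
    },
    'JobResources': {
        'ClusterConfig': {
            'InstanceCount': 1,
            'InstanceType': 'ml.m5.xlarge',
            'VolumeSizeInGB': 20,
        }
    },
    'RoleArn': 'arn:aws:iam::123456789012:role/ExampleSageMakerRole',
}

# With boto3 installed and credentials configured:
# import boto3
# response = boto3.client('sagemaker').create_data_quality_job_definition(**request)
# response['JobDefinitionArn'] then holds the ARN of the new definition.
```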

CreateModelBiasJobDefinition (updated) Link ¶
Changes (request)
{'ModelBiasJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                               'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                 'Json': {'Line': 'boolean'},
                                                                 'Parquet': {}},
                                               'EndTimeOffset': 'string',
                                               'FeaturesAttribute': 'string',
                                               'InferenceAttribute': 'string',
                                               'LocalPath': 'string',
                                               'ProbabilityAttribute': 'string',
                                               'ProbabilityThresholdAttribute': 'double',
                                               'S3DataDistributionType': 'FullyReplicated '
                                                                         '| '
                                                                         'ShardedByS3Key',
                                               'S3InputMode': 'Pipe | File',
                                               'StartTimeOffset': 'string'}}}

Creates the definition for a model bias job.

See also: AWS API Documentation

Request Syntax

client.create_model_bias_job_definition(
    JobDefinitionName='string',
    ModelBiasBaselineConfig={
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    ModelBiasAppSpecification={
        'ImageUri': 'string',
        'ConfigUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    ModelBiasJobInput={
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'GroundTruthS3Input': {
            'S3Uri': 'string'
        }
    },
    ModelBiasJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    NetworkConfig={
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    RoleArn='string',
    StoppingCondition={
        'MaxRuntimeInSeconds': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the bias job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

type ModelBiasBaselineConfig

dict

param ModelBiasBaselineConfig

The baseline configuration for a model bias job.

  • BaseliningJobName (string) --

    The name of the baseline model bias job.

  • ConstraintsResource (dict) --

    The constraints resource for a monitoring job.

    • S3Uri (string) --

      The Amazon S3 URI for the constraints resource.

type ModelBiasAppSpecification

dict

param ModelBiasAppSpecification

[REQUIRED]

Configures the model bias job to run a specified Docker container image.

  • ImageUri (string) -- [REQUIRED]

    The container image to be run by the model bias job.

  • ConfigUri (string) -- [REQUIRED]

    JSON formatted S3 file that defines bias parameters. For more information on this JSON configuration file, see Configure bias parameters.

  • Environment (dict) --

    Sets the environment variables in the Docker container.

    • (string) --

      • (string) --

type ModelBiasJobInput

dict

param ModelBiasJobInput

[REQUIRED]

Inputs for the model bias job.

  • EndpointInput (dict) --

    Input object for the endpoint

    • EndpointName (string) -- [REQUIRED]

An endpoint in the customer's account that has DataCaptureConfig enabled.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the endpoint data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • BatchTransformInput (dict) --

    Input object for the batch transform job.

    • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

      The Amazon S3 location being used to capture the data.

    • DatasetFormat (dict) -- [REQUIRED]

      The dataset format for your batch transform job.

      • Csv (dict) --

        The CSV dataset used in the monitoring job.

        • Header (boolean) --

          Indicates if the CSV data has a header.

      • Json (dict) --

        The JSON dataset used in the monitoring job.

        • Line (boolean) --

          Indicates if the file should be read as a JSON object per line.

      • Parquet (dict) --

        The Parquet dataset used in the monitoring job.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the batch transform data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • GroundTruthS3Input (dict) -- [REQUIRED]

The location of the ground truth labels to use in the model bias job.

    • S3Uri (string) --

      The address of the Amazon S3 location of the ground truth labels.

type ModelBiasJobOutputConfig

dict

param ModelBiasJobOutputConfig

[REQUIRED]

The output configuration for monitoring jobs.

  • MonitoringOutputs (list) -- [REQUIRED]

    Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

    • (dict) --

      The output object for a monitoring job.

      • S3Output (dict) -- [REQUIRED]

        The Amazon S3 storage location where the results of a monitoring job are saved.

        • S3Uri (string) -- [REQUIRED]

          A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

        • LocalPath (string) -- [REQUIRED]

          The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

        • S3UploadMode (string) --

          Whether to upload the results of the monitoring job continuously or after the job completes.

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

type JobResources

dict

param JobResources

[REQUIRED]

Identifies the resources to deploy for a monitoring job.

  • ClusterConfig (dict) -- [REQUIRED]

    The configuration for the cluster resources used to run the processing job.

    • InstanceCount (integer) -- [REQUIRED]

      The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

    • InstanceType (string) -- [REQUIRED]

      The ML compute instance type for the processing job.

    • VolumeSizeInGB (integer) -- [REQUIRED]

      The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

    • VolumeKmsKeyId (string) --

      The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

type NetworkConfig

dict

param NetworkConfig

Networking options for a model bias job.

  • EnableInterContainerTrafficEncryption (boolean) --

    Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

  • EnableNetworkIsolation (boolean) --

    Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

  • VpcConfig (dict) --

    Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

    • SecurityGroupIds (list) -- [REQUIRED]

      The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

      • (string) --

    • Subnets (list) -- [REQUIRED]

The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

      • (string) --

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

type StoppingCondition

dict

param StoppingCondition

A time limit for how long the monitoring job is allowed to run before stopping.

  • MaxRuntimeInSeconds (integer) -- [REQUIRED]

    The maximum runtime allowed in seconds.

    Note

    The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string'
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the model bias job.

CreateModelExplainabilityJobDefinition (updated) Link ¶
Changes (request)
{'ModelExplainabilityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                         'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                           'Json': {'Line': 'boolean'},
                                                                           'Parquet': {}},
                                                         'EndTimeOffset': 'string',
                                                         'FeaturesAttribute': 'string',
                                                         'InferenceAttribute': 'string',
                                                         'LocalPath': 'string',
                                                         'ProbabilityAttribute': 'string',
                                                         'ProbabilityThresholdAttribute': 'double',
                                                         'S3DataDistributionType': 'FullyReplicated '
                                                                                   '| '
                                                                                   'ShardedByS3Key',
                                                         'S3InputMode': 'Pipe '
                                                                        '| '
                                                                        'File',
                                                         'StartTimeOffset': 'string'}}}

Creates the definition for a model explainability job.

See also: AWS API Documentation

Request Syntax

client.create_model_explainability_job_definition(
    JobDefinitionName='string',
    ModelExplainabilityBaselineConfig={
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    ModelExplainabilityAppSpecification={
        'ImageUri': 'string',
        'ConfigUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    ModelExplainabilityJobInput={
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        }
    },
    ModelExplainabilityJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    NetworkConfig={
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    RoleArn='string',
    StoppingCondition={
        'MaxRuntimeInSeconds': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
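To make the request shape concrete, the sketch below assembles a minimal create_model_explainability_job_definition request that uses the new BatchTransformInput. All bucket names, image URIs, and the role ARN are hypothetical placeholders, and the boto3 call itself is left commented out.

```python
# Minimal request sketch for create_model_explainability_job_definition
# using BatchTransformInput. Every concrete value below is hypothetical.
request = {
    "JobDefinitionName": "explainability-batch-example",
    "ModelExplainabilityAppSpecification": {
        "ImageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/clarify:latest",
        "ConfigUri": "s3://example-bucket/analysis_config.json",
    },
    "ModelExplainabilityJobInput": {
        "BatchTransformInput": {
            "DataCapturedDestinationS3Uri": "s3://example-bucket/transform-capture/",
            "DatasetFormat": {"Csv": {"Header": True}},
            "LocalPath": "/opt/ml/processing/input",
        }
    },
    "ModelExplainabilityJobOutputConfig": {
        "MonitoringOutputs": [
            {
                "S3Output": {
                    "S3Uri": "s3://example-bucket/monitoring-results/",
                    "LocalPath": "/opt/ml/processing/output",
                }
            }
        ]
    },
    "JobResources": {
        "ClusterConfig": {
            "InstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
            "VolumeSizeInGB": 20,
        }
    },
    "RoleArn": "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
}

# To submit the request (requires boto3 and AWS credentials):
#   import boto3
#   sagemaker = boto3.client("sagemaker")
#   sagemaker.create_model_explainability_job_definition(**request)
```

Note that EndpointInput and BatchTransformInput are alternative input sources; this sketch uses only the batch transform path.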
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the model explainability job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

type ModelExplainabilityBaselineConfig

dict

param ModelExplainabilityBaselineConfig

The baseline configuration for a model explainability job.

  • BaseliningJobName (string) --

    The name of the baseline model explainability job.

  • ConstraintsResource (dict) --

    The constraints resource for a monitoring job.

    • S3Uri (string) --

      The Amazon S3 URI for the constraints resource.

type ModelExplainabilityAppSpecification

dict

param ModelExplainabilityAppSpecification

[REQUIRED]

Configures the model explainability job to run a specified Docker container image.

  • ImageUri (string) -- [REQUIRED]

    The container image to be run by the model explainability job.

  • ConfigUri (string) -- [REQUIRED]

    A JSON-formatted Amazon S3 file that defines explainability parameters. For more information on this JSON configuration file, see Configure model explainability parameters.

  • Environment (dict) --

    Sets the environment variables in the Docker container.

    • (string) --

      • (string) --

type ModelExplainabilityJobInput

dict

param ModelExplainabilityJobInput

[REQUIRED]

Inputs for the model explainability job.

  • EndpointInput (dict) --

    Input object for the endpoint.

    • EndpointName (string) -- [REQUIRED]

      An endpoint in the customer's account that has DataCaptureConfig enabled.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the endpoint data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • BatchTransformInput (dict) --

    Input object for the batch transform job.

    • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

      The Amazon S3 location being used to capture the data.

    • DatasetFormat (dict) -- [REQUIRED]

      The dataset format for your batch transform job.

      • Csv (dict) --

        The CSV dataset used in the monitoring job.

        • Header (boolean) --

          Indicates if the CSV data has a header.

      • Json (dict) --

        The JSON dataset used in the monitoring job.

        • Line (boolean) --

          Indicates if the file should be read as a JSON object per line.

      • Parquet (dict) --

        The Parquet dataset used in the monitoring job.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the batch transform data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.
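For comparison with the batch transform input described above, the sketch below shows an EndpointInput that restricts analysis to a recent window using the offset fields. The endpoint name is hypothetical; the offsets are negative ISO 8601 durations (e.g. -PT1H for one hour), as described in the Schedule Model Quality Monitoring Jobs guide.

```python
# Sketch of an EndpointInput that analyzes only the most recent hour of
# captured data. The endpoint name is hypothetical; offsets are negative
# ISO 8601 durations subtracted from the job's start and end times.
endpoint_input = {
    "EndpointName": "example-endpoint",        # hypothetical
    "LocalPath": "/opt/ml/processing/input",
    "S3InputMode": "File",
    "S3DataDistributionType": "FullyReplicated",
    "StartTimeOffset": "-PT1H",  # look back one hour from the start time
    "EndTimeOffset": "-PT0H",    # no offset from the end time
}
```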

type ModelExplainabilityJobOutputConfig

dict

param ModelExplainabilityJobOutputConfig

[REQUIRED]

The output configuration for monitoring jobs.

  • MonitoringOutputs (list) -- [REQUIRED]

    Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

    • (dict) --

      The output object for a monitoring job.

      • S3Output (dict) -- [REQUIRED]

        The Amazon S3 storage location where the results of a monitoring job are saved.

        • S3Uri (string) -- [REQUIRED]

          A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

        • LocalPath (string) -- [REQUIRED]

          The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

        • S3UploadMode (string) --

          Whether to upload the results of the monitoring job continuously or after the job completes.

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

type JobResources

dict

param JobResources

[REQUIRED]

Identifies the resources to deploy for a monitoring job.

  • ClusterConfig (dict) -- [REQUIRED]

    The configuration for the cluster resources used to run the processing job.

    • InstanceCount (integer) -- [REQUIRED]

      The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

    • InstanceType (string) -- [REQUIRED]

      The ML compute instance type for the processing job.

    • VolumeSizeInGB (integer) -- [REQUIRED]

      The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

    • VolumeKmsKeyId (string) --

      The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

type NetworkConfig

dict

param NetworkConfig

Networking options for a model explainability job.

  • EnableInterContainerTrafficEncryption (boolean) --

    Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

  • EnableNetworkIsolation (boolean) --

    Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

  • VpcConfig (dict) --

    Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

    • SecurityGroupIds (list) -- [REQUIRED]

      The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

      • (string) --

    • Subnets (list) -- [REQUIRED]

      The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

      • (string) --

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

type StoppingCondition

dict

param StoppingCondition

A time limit for how long the monitoring job is allowed to run before stopping.

  • MaxRuntimeInSeconds (integer) -- [REQUIRED]

    The maximum runtime allowed in seconds.

    Note

    The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string'
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the model explainability job.

CreateModelQualityJobDefinition (updated) Link ¶
Changes (request)
{'ModelQualityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                  'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                    'Json': {'Line': 'boolean'},
                                                                    'Parquet': {}},
                                                  'EndTimeOffset': 'string',
                                                  'FeaturesAttribute': 'string',
                                                  'InferenceAttribute': 'string',
                                                  'LocalPath': 'string',
                                                  'ProbabilityAttribute': 'string',
                                                  'ProbabilityThresholdAttribute': 'double',
                                                  'S3DataDistributionType': 'FullyReplicated '
                                                                            '| '
                                                                            'ShardedByS3Key',
                                                  'S3InputMode': 'Pipe | File',
                                                  'StartTimeOffset': 'string'}}}

Creates a definition for a job that monitors model quality and drift. For information about model monitor, see Amazon SageMaker Model Monitor.

See also: AWS API Documentation

Request Syntax

client.create_model_quality_job_definition(
    JobDefinitionName='string',
    ModelQualityBaselineConfig={
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    ModelQualityAppSpecification={
        'ImageUri': 'string',
        'ContainerEntrypoint': [
            'string',
        ],
        'ContainerArguments': [
            'string',
        ],
        'RecordPreprocessorSourceUri': 'string',
        'PostAnalyticsProcessorSourceUri': 'string',
        'ProblemType': 'BinaryClassification'|'MulticlassClassification'|'Regression',
        'Environment': {
            'string': 'string'
        }
    },
    ModelQualityJobInput={
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'GroundTruthS3Input': {
            'S3Uri': 'string'
        }
    },
    ModelQualityJobOutputConfig={
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    JobResources={
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    NetworkConfig={
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    RoleArn='string',
    StoppingCondition={
        'MaxRuntimeInSeconds': 123
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
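Model quality differs from the other job types in that it also requires GroundTruthS3Input. The fragment below sketches just the ModelQualityJobInput portion for a batch transform source; bucket names and attribute names are hypothetical placeholders.

```python
# Sketch of ModelQualityJobInput with a batch transform source and the
# required ground truth labels. All concrete values are hypothetical.
model_quality_job_input = {
    "BatchTransformInput": {
        "DataCapturedDestinationS3Uri": "s3://example-bucket/transform-capture/",
        "DatasetFormat": {"Json": {"Line": True}},  # one JSON object per line
        "LocalPath": "/opt/ml/processing/input",
        "ProbabilityAttribute": "probability",      # hypothetical attribute name
        "ProbabilityThresholdAttribute": 0.5,
    },
    "GroundTruthS3Input": {
        "S3Uri": "s3://example-bucket/ground-truth/"
    },
}
```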
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the monitoring job definition.

type ModelQualityBaselineConfig

dict

param ModelQualityBaselineConfig

Specifies the constraints and baselines for the monitoring job.

  • BaseliningJobName (string) --

    The name of the job that performs baselining for the monitoring job.

  • ConstraintsResource (dict) --

    The constraints resource for a monitoring job.

    • S3Uri (string) --

      The Amazon S3 URI for the constraints resource.

type ModelQualityAppSpecification

dict

param ModelQualityAppSpecification

[REQUIRED]

The container that runs the monitoring job.

  • ImageUri (string) -- [REQUIRED]

    The address of the container image that the monitoring job runs.

  • ContainerEntrypoint (list) --

    Specifies the entrypoint for a container that the monitoring job runs.

    • (string) --

  • ContainerArguments (list) --

    An array of arguments for the container used to run the monitoring job.

    • (string) --

  • RecordPreprocessorSourceUri (string) --

    An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

  • PostAnalyticsProcessorSourceUri (string) --

    An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

  • ProblemType (string) --

    The machine learning problem type of the model that the monitoring job monitors.

  • Environment (dict) --

    Sets the environment variables in the container that the monitoring job runs.

    • (string) --

      • (string) --

type ModelQualityJobInput

dict

param ModelQualityJobInput

[REQUIRED]

A list of the inputs that are monitored. Currently, endpoints and batch transform jobs are supported.

  • EndpointInput (dict) --

    Input object for the endpoint.

    • EndpointName (string) -- [REQUIRED]

      An endpoint in the customer's account that has DataCaptureConfig enabled.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the endpoint data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • BatchTransformInput (dict) --

    Input object for the batch transform job.

    • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

      The Amazon S3 location being used to capture the data.

    • DatasetFormat (dict) -- [REQUIRED]

      The dataset format for your batch transform job.

      • Csv (dict) --

        The CSV dataset used in the monitoring job.

        • Header (boolean) --

          Indicates if the CSV data has a header.

      • Json (dict) --

        The JSON dataset used in the monitoring job.

        • Line (boolean) --

          Indicates if the file should be read as a JSON object per line.

      • Parquet (dict) --

        The Parquet dataset used in the monitoring job.

    • LocalPath (string) -- [REQUIRED]

      Path to the filesystem where the batch transform data is available to the container.

    • S3InputMode (string) --

      Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

    • S3DataDistributionType (string) --

      Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

    • FeaturesAttribute (string) --

      The attributes of the input data that are the input features.

    • InferenceAttribute (string) --

      The attribute of the input data that represents the ground truth label.

    • ProbabilityAttribute (string) --

      In a classification problem, the attribute that represents the class probability.

    • ProbabilityThresholdAttribute (float) --

      The threshold for the class probability to be evaluated as a positive result.

    • StartTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • EndTimeOffset (string) --

      If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

  • GroundTruthS3Input (dict) -- [REQUIRED]

    The ground truth label provided for the model.

    • S3Uri (string) --

      The address of the Amazon S3 location of the ground truth labels.

type ModelQualityJobOutputConfig

dict

param ModelQualityJobOutputConfig

[REQUIRED]

The output configuration for monitoring jobs.

  • MonitoringOutputs (list) -- [REQUIRED]

    Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

    • (dict) --

      The output object for a monitoring job.

      • S3Output (dict) -- [REQUIRED]

        The Amazon S3 storage location where the results of a monitoring job are saved.

        • S3Uri (string) -- [REQUIRED]

          A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

        • LocalPath (string) -- [REQUIRED]

          The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

        • S3UploadMode (string) --

          Whether to upload the results of the monitoring job continuously or after the job completes.

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

type JobResources

dict

param JobResources

[REQUIRED]

Identifies the resources to deploy for a monitoring job.

  • ClusterConfig (dict) -- [REQUIRED]

    The configuration for the cluster resources used to run the processing job.

    • InstanceCount (integer) -- [REQUIRED]

      The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

    • InstanceType (string) -- [REQUIRED]

      The ML compute instance type for the processing job.

    • VolumeSizeInGB (integer) -- [REQUIRED]

      The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

    • VolumeKmsKeyId (string) --

      The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

type NetworkConfig

dict

param NetworkConfig

Specifies the network configuration for the monitoring job.

  • EnableInterContainerTrafficEncryption (boolean) --

    Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

  • EnableNetworkIsolation (boolean) --

    Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

  • VpcConfig (dict) --

    Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

    • SecurityGroupIds (list) -- [REQUIRED]

      The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

      • (string) --

    • Subnets (list) -- [REQUIRED]

      The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

      • (string) --

type RoleArn

string

param RoleArn

[REQUIRED]

The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

type StoppingCondition

dict

param StoppingCondition

A time limit for how long the monitoring job is allowed to run before stopping.

  • MaxRuntimeInSeconds (integer) -- [REQUIRED]

    The maximum runtime allowed in seconds.

    Note

    The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string'
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the data quality monitoring job definition.

CreateMonitoringSchedule (updated) Link ¶
Changes (request)
{'MonitoringScheduleConfig': {'MonitoringJobDefinition': {'MonitoringInputs': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                                                                       'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                                                                         'Json': {'Line': 'boolean'},
                                                                                                                         'Parquet': {}},
                                                                                                       'EndTimeOffset': 'string',
                                                                                                       'FeaturesAttribute': 'string',
                                                                                                       'InferenceAttribute': 'string',
                                                                                                       'LocalPath': 'string',
                                                                                                       'ProbabilityAttribute': 'string',
                                                                                                       'ProbabilityThresholdAttribute': 'double',
                                                                                                       'S3DataDistributionType': 'FullyReplicated '
                                                                                                                                 '| '
                                                                                                                                 'ShardedByS3Key',
                                                                                                       'S3InputMode': 'Pipe '
                                                                                                                      '| '
                                                                                                                      'File',
                                                                                                       'StartTimeOffset': 'string'}}}}}

Creates a schedule that regularly starts Amazon SageMaker Processing Jobs to monitor the data captured for an Amazon SageMaker Endpoint.

See also: AWS API Documentation

Request Syntax

client.create_monitoring_schedule(
    MonitoringScheduleName='string',
    MonitoringScheduleConfig={
        'ScheduleConfig': {
            'ScheduleExpression': 'string'
        },
        'MonitoringJobDefinition': {
            'BaselineConfig': {
                'BaseliningJobName': 'string',
                'ConstraintsResource': {
                    'S3Uri': 'string'
                },
                'StatisticsResource': {
                    'S3Uri': 'string'
                }
            },
            'MonitoringInputs': [
                {
                    'EndpointInput': {
                        'EndpointName': 'string',
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    },
                    'BatchTransformInput': {
                        'DataCapturedDestinationS3Uri': 'string',
                        'DatasetFormat': {
                            'Csv': {
                                'Header': True|False
                            },
                            'Json': {
                                'Line': True|False
                            },
                            'Parquet': {}
                        },
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    }
                },
            ],
            'MonitoringOutputConfig': {
                'MonitoringOutputs': [
                    {
                        'S3Output': {
                            'S3Uri': 'string',
                            'LocalPath': 'string',
                            'S3UploadMode': 'Continuous'|'EndOfJob'
                        }
                    },
                ],
                'KmsKeyId': 'string'
            },
            'MonitoringResources': {
                'ClusterConfig': {
                    'InstanceCount': 123,
                    'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
                    'VolumeSizeInGB': 123,
                    'VolumeKmsKeyId': 'string'
                }
            },
            'MonitoringAppSpecification': {
                'ImageUri': 'string',
                'ContainerEntrypoint': [
                    'string',
                ],
                'ContainerArguments': [
                    'string',
                ],
                'RecordPreprocessorSourceUri': 'string',
                'PostAnalyticsProcessorSourceUri': 'string'
            },
            'StoppingCondition': {
                'MaxRuntimeInSeconds': 123
            },
            'Environment': {
                'string': 'string'
            },
            'NetworkConfig': {
                'EnableInterContainerTrafficEncryption': True|False,
                'EnableNetworkIsolation': True|False,
                'VpcConfig': {
                    'SecurityGroupIds': [
                        'string',
                    ],
                    'Subnets': [
                        'string',
                    ]
                }
            },
            'RoleArn': 'string'
        },
        'MonitoringJobDefinitionName': 'string',
        'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability'
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ]
)
type MonitoringScheduleName

string

param MonitoringScheduleName

[REQUIRED]

The name of the monitoring schedule. The name must be unique within an Amazon Web Services Region within an Amazon Web Services account.

type MonitoringScheduleConfig

dict

param MonitoringScheduleConfig

[REQUIRED]

The configuration object that specifies the monitoring schedule and defines the monitoring job.

  • ScheduleConfig (dict) --

    Configures the monitoring schedule.

    • ScheduleExpression (string) -- [REQUIRED]

      A cron expression that describes details about the monitoring schedule.

      Currently the only supported cron expressions are:

      • If you want the job to start every hour, use the following: Hourly: cron(0 * ? * * *)

      • If you want to start the job daily: cron(0 [00-23] ? * * *)

      For example, the following are valid cron expressions:

      • Daily at noon UTC: cron(0 12 ? * * *)

      • Daily at midnight UTC: cron(0 0 ? * * *)

      To support running the job every 6 or 12 hours, the following form is also supported:

      cron(0 [00-23]/[01-24] ? * * *)

      For example, the following are valid cron expressions:

      • Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *)

      • Every two hours starting at midnight: cron(0 0/2 ? * * *)

      Note

      • Even though the cron expression is set to start at 5PM UTC, note that there could be a delay of 0-20 minutes from the actual requested time to run the execution.

      • If you would like a daily schedule, we recommend that you do not provide this parameter. Amazon SageMaker picks a time to run the job every day.
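
      As an illustrative aside (not part of the API), the supported cron forms above can be sanity-checked with a small helper; the pattern below is an assumption derived from the examples in this section, not an official validator.

      ```python
      import re

      # Matches the supported forms: cron(0 * ? * * *), cron(0 [00-23] ? * * *),
      # and cron(0 [00-23]/[01-24] ? * * *). Illustrative only.
      _SUPPORTED = re.compile(
          r"^cron\(0 (\*|[01]?\d|2[0-3])(/([1-9]|1\d|2[0-4]))? \? \* \* \*\)$"
      )

      def is_supported_schedule(expr: str) -> bool:
          """Return True if expr matches one of the supported cron forms."""
          return bool(_SUPPORTED.match(expr))
      ```

      For example, is_supported_schedule("cron(0 17/12 ? * * *)") accepts the every-12-hours schedule shown above, while an EventBridge-style rate expression is rejected.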

  • MonitoringJobDefinition (dict) --

    Defines the monitoring job.

    • BaselineConfig (dict) --

      Baseline configuration used to validate that the data conforms to the specified constraints and statistics

      • BaseliningJobName (string) --

        The name of the job that performs baselining for the monitoring job.

      • ConstraintsResource (dict) --

        The baseline constraint file in Amazon S3 that the current monitoring job should be validated against.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

      • StatisticsResource (dict) --

        The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.

        • S3Uri (string) --

          The Amazon S3 URI for the statistics resource.

    • MonitoringInputs (list) -- [REQUIRED]

      The array of inputs for the monitoring job. Currently, endpoints and batch transform jobs are supported for monitoring.

      • (dict) --

        The inputs for a monitoring job.

        • EndpointInput (dict) --

          The endpoint for a monitoring job.

          • EndpointName (string) -- [REQUIRED]

            An endpoint in the customer's account that has DataCaptureConfig enabled.

          • LocalPath (string) -- [REQUIRED]

            Path to the filesystem where the endpoint data is available to the container.

          • S3InputMode (string) --

            Whether Pipe or File mode is used to transfer data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

          • S3DataDistributionType (string) --

            Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated

          • FeaturesAttribute (string) --

            The attributes of the input data that are the input features.

          • InferenceAttribute (string) --

            The attribute of the input data that represents the ground truth label.

          • ProbabilityAttribute (string) --

            In a classification problem, the attribute that represents the class probability.

          • ProbabilityThresholdAttribute (float) --

            The threshold for the class probability to be evaluated as a positive result.

          • StartTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

          • EndTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • BatchTransformInput (dict) --

          Input object for the batch transform job.

          • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

            The Amazon S3 location being used to capture the data.

          • DatasetFormat (dict) -- [REQUIRED]

            The dataset format for your batch transform job.

            • Csv (dict) --

              The CSV dataset used in the monitoring job.

              • Header (boolean) --

                Indicates if the CSV data has a header.

            • Json (dict) --

              The JSON dataset used in the monitoring job

              • Line (boolean) --

                Indicates if the file should be read as a JSON object per line.

            • Parquet (dict) --

              The Parquet dataset used in the monitoring job

          • LocalPath (string) -- [REQUIRED]

            Path to the filesystem where the batch transform data is available to the container.

          • S3InputMode (string) --

            Whether Pipe or File mode is used to transfer data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

          • S3DataDistributionType (string) --

            Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated

          • FeaturesAttribute (string) --

            The attributes of the input data that are the input features.

          • InferenceAttribute (string) --

            The attribute of the input data that represents the ground truth label.

          • ProbabilityAttribute (string) --

            In a classification problem, the attribute that represents the class probability.

          • ProbabilityThresholdAttribute (float) --

            The threshold for the class probability to be evaluated as a positive result.

          • StartTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

          • EndTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • MonitoringOutputConfig (dict) -- [REQUIRED]

      The array of outputs from the monitoring job to be uploaded to Amazon Simple Storage Service (Amazon S3).

      • MonitoringOutputs (list) -- [REQUIRED]

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) -- [REQUIRED]

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) -- [REQUIRED]

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) -- [REQUIRED]

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • MonitoringResources (dict) -- [REQUIRED]

      Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job. In distributed processing, you specify more than one instance.

      • ClusterConfig (dict) -- [REQUIRED]

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) -- [REQUIRED]

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) -- [REQUIRED]

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) -- [REQUIRED]

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • MonitoringAppSpecification (dict) -- [REQUIRED]

      Configures the monitoring job to run a specified Docker container image.

      • ImageUri (string) -- [REQUIRED]

        The container image to be run by the monitoring job.

      • ContainerEntrypoint (list) --

        Specifies the entrypoint for a container used to run the monitoring job.

        • (string) --

      • ContainerArguments (list) --

        An array of arguments for the container used to run the monitoring job.

        • (string) --

      • RecordPreprocessorSourceUri (string) --

        An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flattened JSON structure so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

      • PostAnalyticsProcessorSourceUri (string) --

        An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

    • StoppingCondition (dict) --

      Specifies a time limit for how long the monitoring job is allowed to run.

      • MaxRuntimeInSeconds (integer) -- [REQUIRED]

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

    • Environment (dict) --

      Sets the environment variables in the Docker container.

      • (string) --

        • (string) --

    • NetworkConfig (dict) --

      Specifies networking options for a monitoring job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the processing job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) -- [REQUIRED]

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) -- [REQUIRED]

          The ID of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) -- [REQUIRED]

      The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

  • MonitoringJobDefinitionName (string) --

    The name of the monitoring job definition to schedule.

  • MonitoringType (string) --

    The type of the monitoring job definition to schedule.
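
    Putting the pieces above together, a minimal sketch of a MonitoringScheduleConfig that monitors captured batch transform data (rather than an endpoint) might look as follows. The bucket names, role ARN, and image URI are placeholders, not real resources.

    ```python
    # Hedged sketch: a MonitoringScheduleConfig using the new BatchTransformInput.
    # All ARNs, URIs, and paths below are illustrative placeholders.
    monitoring_schedule_config = {
        "ScheduleConfig": {"ScheduleExpression": "cron(0 * ? * * *)"},  # hourly
        "MonitoringJobDefinition": {
            "MonitoringInputs": [
                {
                    "BatchTransformInput": {
                        # Where the batch transform job captured its data
                        "DataCapturedDestinationS3Uri": "s3://example-bucket/data-capture",
                        "DatasetFormat": {"Csv": {"Header": True}},
                        # Where the data is made available inside the container
                        "LocalPath": "/opt/ml/processing/input",
                    }
                }
            ],
            "MonitoringOutputConfig": {
                "MonitoringOutputs": [
                    {
                        "S3Output": {
                            "S3Uri": "s3://example-bucket/monitoring-output",
                            "LocalPath": "/opt/ml/processing/output",
                        }
                    }
                ]
            },
            "MonitoringResources": {
                "ClusterConfig": {
                    "InstanceCount": 1,
                    "InstanceType": "ml.m5.xlarge",
                    "VolumeSizeInGB": 20,
                }
            },
            "MonitoringAppSpecification": {"ImageUri": "<monitoring-image-uri>"},
            "RoleArn": "arn:aws:iam::111122223333:role/ExamplePlaceholderRole",
        },
        "MonitoringType": "DataQuality",
    }
    ```

    This dict would be passed as the MonitoringScheduleConfig argument of client.create_monitoring_schedule, alongside MonitoringScheduleName.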

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide .

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

rtype

dict

returns

Response Syntax

{
    'MonitoringScheduleArn': 'string'
}

Response Structure

  • (dict) --

    • MonitoringScheduleArn (string) --

      The Amazon Resource Name (ARN) of the monitoring schedule.

CreateTransformJob (updated) Link ¶
Changes (request)
{'DataCaptureConfig': {'DestinationS3Uri': 'string',
                       'GenerateInferenceId': 'boolean',
                       'KmsKeyId': 'string'}}

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.

To perform batch transformations, you create a transform job and use the data that you have readily available.

In the request body, you provide the following:

  • TransformJobName - Identifies the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same Amazon Web Services Region and Amazon Web Services account. For information on creating a model, see CreateModel.

  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.

  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

  • TransformResources - Identifies the ML compute instances for the transform job.

For more information about how batch transformation works, see Batch Transform.
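
The new DataCaptureConfig parameter enables data capture for the transform job. A minimal sketch of that block follows; the S3 URI and KMS key alias are placeholders.

```python
# Hedged sketch of the new DataCaptureConfig block for create_transform_job.
# DestinationS3Uri and KmsKeyId are illustrative placeholders.
data_capture_config = {
    "DestinationS3Uri": "s3://example-bucket/transform-data-capture",
    "KmsKeyId": "alias/example-key",  # optional
    "GenerateInferenceId": True,      # optional; appends an inference ID to captured records
}
```

This dict would be passed as the DataCaptureConfig argument of client.create_transform_job, and the same DestinationS3Uri would then be referenced as DataCapturedDestinationS3Uri when configuring monitoring.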

See also: AWS API Documentation

Request Syntax

client.create_transform_job(
    TransformJobName='string',
    ModelName='string',
    MaxConcurrentTransforms=123,
    ModelClientConfig={
        'InvocationsTimeoutInSeconds': 123,
        'InvocationsMaxRetries': 123
    },
    MaxPayloadInMB=123,
    BatchStrategy='MultiRecord'|'SingleRecord',
    Environment={
        'string': 'string'
    },
    TransformInput={
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                'S3Uri': 'string'
            }
        },
        'ContentType': 'string',
        'CompressionType': 'None'|'Gzip',
        'SplitType': 'None'|'Line'|'RecordIO'|'TFRecord'
    },
    TransformOutput={
        'S3OutputPath': 'string',
        'Accept': 'string',
        'AssembleWith': 'None'|'Line',
        'KmsKeyId': 'string'
    },
    DataCaptureConfig={
        'DestinationS3Uri': 'string',
        'KmsKeyId': 'string',
        'GenerateInferenceId': True|False
    },
    TransformResources={
        'InstanceType': 'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
        'InstanceCount': 123,
        'VolumeKmsKeyId': 'string'
    },
    DataProcessing={
        'InputFilter': 'string',
        'OutputFilter': 'string',
        'JoinSource': 'Input'|'None'
    },
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    ExperimentConfig={
        'ExperimentName': 'string',
        'TrialName': 'string',
        'TrialComponentDisplayName': 'string'
    }
)
type TransformJobName

string

param TransformJobName

[REQUIRED]

The name of the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

type ModelName

string

param ModelName

[REQUIRED]

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an Amazon Web Services Region in an Amazon Web Services account.

type MaxConcurrentTransforms

integer

param MaxConcurrentTransforms

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1 . For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms .

type ModelClientConfig

dict

param ModelClientConfig

Configures the timeout and maximum number of retries for processing a transform job invocation.

  • InvocationsTimeoutInSeconds (integer) --

    The timeout value in seconds for an invocation request. The default value is 600.

  • InvocationsMaxRetries (integer) --

    The maximum number of retries when invocation requests are failing. The default value is 3.

type MaxPayloadInMB

integer

param MaxPayloadInMB

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.

The value of MaxPayloadInMB cannot be greater than 100 MB. If you specify the MaxConcurrentTransforms parameter, the value of (MaxConcurrentTransforms * MaxPayloadInMB) also cannot exceed 100 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0 . This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.
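
The constraints above can be expressed as a small illustrative check (not part of the API; SageMaker performs its own validation):

```python
# Illustrative check of the payload limits described above: MaxPayloadInMB
# must not exceed 100, and MaxConcurrentTransforms * MaxPayloadInMB must
# also stay within 100 MB. A value of 0 means chunked encoding is used.
def payload_settings_valid(max_payload_mb: int, max_concurrent: int = 1) -> bool:
    if max_payload_mb == 0:  # chunked encoding: payload limit does not apply
        return True
    return max_payload_mb <= 100 and max_concurrent * max_payload_mb <= 100
```

For example, the default of 6 MB with 16 concurrent transforms (96 MB total) is valid, while 20 MB with 8 concurrent transforms (160 MB total) is not.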

type BatchStrategy

string

param BatchStrategy

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

To enable the batch strategy, you must set the SplitType property to Line , RecordIO , or TFRecord .

To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line .

To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line .

type Environment

dict

param Environment

The environment variables to set in the Docker container. Up to 16 key-value entries in the map are supported.

  • (string) --

    • (string) --

type TransformInput

dict

param TransformInput

[REQUIRED]

Describes the input source and the way the transform job consumes it.

  • DataSource (dict) -- [REQUIRED]

    Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

    • S3DataSource (dict) -- [REQUIRED]

      The S3 location of the data source that is associated with a channel.

      • S3DataType (string) -- [REQUIRED]

        If you choose S3Prefix , S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.

        If you choose ManifestFile , S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.

        The following values are compatible: ManifestFile , S3Prefix

        The following value is not compatible: AugmentedManifestFile

      • S3Uri (string) -- [REQUIRED]

        Depending on the value specified for the S3DataType , identifies either a key name prefix or a manifest. For example:

        • A key name prefix might look like this: s3://bucketname/exampleprefix .

        • A manifest might look like this: s3://bucketname/example.manifest The manifest is an S3 object which is a JSON file with the following format: [ {"prefix": "s3://customer_bucket/some/prefix/"}, "relative/path/to/custdata-1", "relative/path/custdata-2", ... "relative/path/custdata-N" ] The preceding JSON matches the following S3Uris : s3://customer_bucket/some/prefix/relative/path/to/custdata-1 s3://customer_bucket/some/prefix/relative/path/custdata-2 ... s3://customer_bucket/some/prefix/relative/path/custdata-N The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.

  • ContentType (string) --

    The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each http call to transfer data to the transform job.

  • CompressionType (string) --

    If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None .

  • SplitType (string) --

    The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None , which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

    • RecordIO

    • TFRecord

    When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord , Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord , Amazon SageMaker sends individual records in each request.

    Note

    Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord . Padding is not removed if the value of BatchStrategy is set to MultiRecord .

    For more information about RecordIO , see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord , see Consuming TFRecord data in the TensorFlow documentation.
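
The splitting and batching options above can be sketched as request parameters for create_transform_job . This is a hypothetical sketch, not a complete or required configuration; the job, model, and bucket names are placeholders:

```python
# Hypothetical request parameters for create_transform_job, combining
# line-based splitting with multi-record batching. All names are placeholders.
transform_request = {
    "TransformJobName": "example-transform-job",
    "ModelName": "example-model",
    # MultiRecord: pack as many records per request as fit in MaxPayloadInMB.
    "BatchStrategy": "MultiRecord",
    "MaxPayloadInMB": 6,
    "TransformInput": {
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://bucketname/exampleprefix",
            }
        },
        "ContentType": "text/csv",
        "CompressionType": "None",
        # Line: split each input object on newline boundaries.
        "SplitType": "Line",
    },
    "TransformOutput": {"S3OutputPath": "s3://bucketname/output"},
    "TransformResources": {"InstanceType": "ml.m5.large", "InstanceCount": 1},
}
# boto3.client("sagemaker").create_transform_job(**transform_request)  # to submit
```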

type TransformOutput

dict

param TransformOutput

[REQUIRED]

Describes the results of the transform job.

  • S3OutputPath (string) -- [REQUIRED]

    The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix .

    For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv , batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out . Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job, batch transform marks the job as failed to prompt investigation.

  • Accept (string) --

    The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.

  • AssembleWith (string) --

    Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None . To add a newline character at the end of every transformed record, specify Line .

  • KmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:

    • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

    • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    • Alias name: alias/ExampleAlias

    • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

    If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

    The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer Guide .

type DataCaptureConfig

dict

param DataCaptureConfig

Configuration to control how SageMaker captures inference data.

  • DestinationS3Uri (string) -- [REQUIRED]

    The Amazon S3 location being used to capture the data.

  • KmsKeyId (string) --

    The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the batch transform job.

    The KmsKeyId can be any of the following formats:

    • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

    • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    • Alias name: alias/ExampleAlias

    • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

  • GenerateInferenceId (boolean) --

    Flag that indicates whether to append the inference ID to the output.
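
Put together, the DataCaptureConfig described above might be supplied to create_transform_job as follows. This is a hedged sketch; the S3 URI and KMS alias are placeholders:

```python
# Hypothetical DataCaptureConfig for a batch transform job; the S3 URI and
# KMS alias are placeholders, not values from this document.
data_capture_config = {
    "DestinationS3Uri": "s3://bucketname/data-capture",
    "KmsKeyId": "alias/ExampleAlias",  # any of the four key formats listed above
    "GenerateInferenceId": True,       # append an inference ID to the output
}
# boto3.client("sagemaker").create_transform_job(
#     ..., DataCaptureConfig=data_capture_config)
```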

type TransformResources

dict

param TransformResources

[REQUIRED]

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

  • InstanceType (string) -- [REQUIRED]

    The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.

  • InstanceCount (integer) -- [REQUIRED]

    The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1 .

  • VolumeKmsKeyId (string) --

    The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job.

    Note

    Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage.

    For a list of instance types that support local instance storage, see Instance Store Volumes.

    For more information about local instance storage encryption, see SSD Instance Store Volumes.

    The VolumeKmsKeyId can be any of the following formats:

    • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

    • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

    • Alias name: alias/ExampleAlias

    • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

type DataProcessing

dict

param DataProcessing

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

  • InputFilter (string) --

    A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want SageMaker to pass the entire input dataset to the algorithm, accept the default value $ .

    Examples: "$" , "$[1:]" , "$.features"

  • OutputFilter (string) --

    A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want SageMaker to store the entire input dataset in the output file, leave the default value, $ . If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.

    Examples: "$" , "$[0,5:]" , "$['id','SageMakerOutput']"

  • JoinSource (string) --

    Specifies the source of the data to join with the transformed data. The valid values are None and Input . The default value is None , which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input . You can specify OutputFilter as an additional filter to select a portion of the joined dataset and store it in the output file.

    For JSON or JSONLines objects, such as a JSON array, SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput . The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput .

    For CSV data, SageMaker takes each row as a JSON array and joins the transformed data with the input by appending each transformed row to the end of the input. The joined data has the original input data followed by the transformed data and the output is a CSV file.

    For information on how joining is applied, see Workflow for Associating Inferences with Input Records.
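
The filter and join behavior above can be sketched as a DataProcessing value. The JSONPath expressions here are illustrative assumptions, not required settings:

```python
# Illustrative DataProcessing configuration: drop the first input column
# before inference, join each prediction back onto its original input record,
# and keep only the ID column and the prediction in the output.
data_processing = {
    "InputFilter": "$[1:]",
    "JoinSource": "Input",
    "OutputFilter": "$['id','SageMakerOutput']",
}
# boto3.client("sagemaker").create_transform_job(..., DataProcessing=data_processing)
```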

type Tags

list

param Tags

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide .

  • (dict) --

    A tag object that consists of a key and an optional value, used to manage metadata for SageMaker Amazon Web Services resources.

    You can add tags to notebook instances, training jobs, hyperparameter tuning jobs, batch transform jobs, models, labeling jobs, work teams, endpoint configurations, and endpoints. For more information on adding tags to SageMaker resources, see AddTags.

    For more information on adding metadata to your Amazon Web Services resources with tagging, see Tagging Amazon Web Services resources. For advice on best practices for managing Amazon Web Services resources with tagging, see Tagging Best Practices: Implement an Effective Amazon Web Services Resource Tagging Strategy.

    • Key (string) -- [REQUIRED]

      The tag key. Tag keys must be unique per resource.

    • Value (string) -- [REQUIRED]

      The tag value.

type ExperimentConfig

dict

param ExperimentConfig

Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:

  • CreateProcessingJob

  • CreateTrainingJob

  • CreateTransformJob

  • ExperimentName (string) --

    The name of an existing experiment to associate the trial component with.

  • TrialName (string) --

    The name of an existing trial to associate the trial component with. If not specified, a new trial is created.

  • TrialComponentDisplayName (string) --

    The display name for the trial component. If this key isn't specified, the display name is the trial component name.
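
As a minimal sketch, an ExperimentConfig might look like the following; the experiment name must refer to an existing experiment, and all names here are placeholders:

```python
# Hypothetical ExperimentConfig. ExperimentName must name an existing
# experiment; if TrialName is omitted, SageMaker creates a new trial.
experiment_config = {
    "ExperimentName": "example-experiment",
    "TrialName": "example-trial",
    "TrialComponentDisplayName": "example-transform-run",
}
# boto3.client("sagemaker").create_transform_job(..., ExperimentConfig=experiment_config)
```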

rtype

dict

returns

Response Syntax

{
    'TransformJobArn': 'string'
}

Response Structure

  • (dict) --

    • TransformJobArn (string) --

      The Amazon Resource Name (ARN) of the transform job.

DescribeDataQualityJobDefinition (updated) Link ¶
Changes (response)
{'DataQualityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                 'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                   'Json': {'Line': 'boolean'},
                                                                   'Parquet': {}},
                                                 'EndTimeOffset': 'string',
                                                 'FeaturesAttribute': 'string',
                                                 'InferenceAttribute': 'string',
                                                 'LocalPath': 'string',
                                                 'ProbabilityAttribute': 'string',
                                                 'ProbabilityThresholdAttribute': 'double',
                                                 'S3DataDistributionType': 'FullyReplicated '
                                                                           '| '
                                                                           'ShardedByS3Key',
                                                 'S3InputMode': 'Pipe | File',
                                                 'StartTimeOffset': 'string'}}}

Gets the details of a data quality monitoring job definition.

See also: AWS API Documentation

Request Syntax

client.describe_data_quality_job_definition(
    JobDefinitionName='string'
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the data quality monitoring job definition to describe.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string',
    'JobDefinitionName': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'DataQualityBaselineConfig': {
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        },
        'StatisticsResource': {
            'S3Uri': 'string'
        }
    },
    'DataQualityAppSpecification': {
        'ImageUri': 'string',
        'ContainerEntrypoint': [
            'string',
        ],
        'ContainerArguments': [
            'string',
        ],
        'RecordPreprocessorSourceUri': 'string',
        'PostAnalyticsProcessorSourceUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    'DataQualityJobInput': {
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        }
    },
    'DataQualityJobOutputConfig': {
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    'JobResources': {
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    'NetworkConfig': {
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    'RoleArn': 'string',
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123
    }
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the data quality monitoring job definition.

    • JobDefinitionName (string) --

      The name of the data quality monitoring job definition.

    • CreationTime (datetime) --

      The time that the data quality monitoring job definition was created.

    • DataQualityBaselineConfig (dict) --

      The constraints and baselines for the data quality monitoring job definition.

      • BaseliningJobName (string) --

        The name of the job that performs baselining for the data quality monitoring job.

      • ConstraintsResource (dict) --

        The constraints resource for a monitoring job.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

      • StatisticsResource (dict) --

        The statistics resource for a monitoring job.

        • S3Uri (string) --

          The Amazon S3 URI for the statistics resource.

    • DataQualityAppSpecification (dict) --

      Information about the container that runs the data quality monitoring job.

      • ImageUri (string) --

        The container image that the data quality monitoring job runs.

      • ContainerEntrypoint (list) --

        The entrypoint for a container used to run a monitoring job.

        • (string) --

      • ContainerArguments (list) --

        The arguments to send to the container that the monitoring job runs.

        • (string) --

      • RecordPreprocessorSourceUri (string) --

        An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into a flattened JSON structure so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

      • PostAnalyticsProcessorSourceUri (string) --

        An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

      • Environment (dict) --

        Sets the environment variables in the container that the monitoring job runs.

        • (string) --

          • (string) --

    • DataQualityJobInput (dict) --

      The list of inputs for the data quality monitoring job. Currently, endpoints and batch transform jobs are supported.

      • EndpointInput (dict) --

        Input object for the endpoint

        • EndpointName (string) --

          An endpoint in the customer's account that has DataCaptureConfig enabled.

        • LocalPath (string) --

          Path to the filesystem where the endpoint data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated .

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • BatchTransformInput (dict) --

        Input object for the batch transform job.

        • DataCapturedDestinationS3Uri (string) --

          The Amazon S3 location being used to capture the data.

        • DatasetFormat (dict) --

          The dataset format for your batch transform job.

          • Csv (dict) --

            The CSV dataset used in the monitoring job.

            • Header (boolean) --

              Indicates if the CSV data has a header.

          • Json (dict) --

            The JSON dataset used in the monitoring job.

            • Line (boolean) --

              Indicates if the file should be read as a JSON object per line.

          • Parquet (dict) --

            The Parquet dataset used in the monitoring job.

        • LocalPath (string) --

          Path to the filesystem where the batch transform data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated .

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • DataQualityJobOutputConfig (dict) --

      The output configuration for monitoring jobs.

      • MonitoringOutputs (list) --

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) --

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) --

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) --

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • JobResources (dict) --

      Identifies the resources to deploy for a monitoring job.

      • ClusterConfig (dict) --

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) --

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) --

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) --

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • NetworkConfig (dict) --

      The networking configuration for the data quality monitoring job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) --

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) --

          The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

    • StoppingCondition (dict) --

      A time limit for how long the monitoring job is allowed to run before stopping.

      • MaxRuntimeInSeconds (integer) --

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
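
The BatchTransformInput structure returned above has the same shape when supplied in DataQualityJobInput on create_data_quality_job_definition . A hedged sketch of such a request, with all URIs and paths as placeholders:

```python
# Hypothetical BatchTransformInput for a data quality job definition.
# All URIs and paths are placeholders, not values from this document.
batch_transform_input = {
    "DataCapturedDestinationS3Uri": "s3://bucketname/data-capture",
    "DatasetFormat": {"Csv": {"Header": True}},  # captured data is CSV with a header row
    "LocalPath": "/opt/ml/processing/input",     # where the container reads the data
    "S3InputMode": "File",
    "S3DataDistributionType": "FullyReplicated",
}
# boto3.client("sagemaker").create_data_quality_job_definition(
#     JobDefinitionName="example-definition",
#     DataQualityJobInput={"BatchTransformInput": batch_transform_input},
#     ...,  # remaining required parameters omitted
# )
```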

DescribeModelBiasJobDefinition (updated) Link ¶
Changes (response)
{'ModelBiasJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                               'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                 'Json': {'Line': 'boolean'},
                                                                 'Parquet': {}},
                                               'EndTimeOffset': 'string',
                                               'FeaturesAttribute': 'string',
                                               'InferenceAttribute': 'string',
                                               'LocalPath': 'string',
                                               'ProbabilityAttribute': 'string',
                                               'ProbabilityThresholdAttribute': 'double',
                                               'S3DataDistributionType': 'FullyReplicated '
                                                                         '| '
                                                                         'ShardedByS3Key',
                                               'S3InputMode': 'Pipe | File',
                                               'StartTimeOffset': 'string'}}}

Returns a description of a model bias job definition.

See also: AWS API Documentation

Request Syntax

client.describe_model_bias_job_definition(
    JobDefinitionName='string'
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the model bias job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string',
    'JobDefinitionName': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'ModelBiasBaselineConfig': {
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    'ModelBiasAppSpecification': {
        'ImageUri': 'string',
        'ConfigUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    'ModelBiasJobInput': {
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'GroundTruthS3Input': {
            'S3Uri': 'string'
        }
    },
    'ModelBiasJobOutputConfig': {
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    'JobResources': {
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    'NetworkConfig': {
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    'RoleArn': 'string',
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123
    }
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the model bias job.

    • JobDefinitionName (string) --

      The name of the bias job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

    • CreationTime (datetime) --

      The time at which the model bias job was created.

    • ModelBiasBaselineConfig (dict) --

      The baseline configuration for a model bias job.

      • BaseliningJobName (string) --

        The name of the baseline model bias job.

      • ConstraintsResource (dict) --

        The constraints resource for a monitoring job.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

    • ModelBiasAppSpecification (dict) --

      Configures the model bias job to run a specified Docker container image.

      • ImageUri (string) --

        The container image to be run by the model bias job.

      • ConfigUri (string) --

        JSON formatted S3 file that defines bias parameters. For more information on this JSON configuration file, see Configure bias parameters.

      • Environment (dict) --

        Sets the environment variables in the Docker container.

        • (string) --

          • (string) --

    • ModelBiasJobInput (dict) --

      Inputs for the model bias job.

      • EndpointInput (dict) --

        Input object for the endpoint.

        • EndpointName (string) --

          An endpoint in the customer's account that has DataCaptureConfig enabled.

        • LocalPath (string) --

          Path to the filesystem where the endpoint data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • BatchTransformInput (dict) --

        Input object for the batch transform job.

        • DataCapturedDestinationS3Uri (string) --

          The Amazon S3 location being used to capture the data.

        • DatasetFormat (dict) --

          The dataset format for your batch transform job.

          • Csv (dict) --

            The CSV dataset used in the monitoring job.

            • Header (boolean) --

              Indicates if the CSV data has a header.

          • Json (dict) --

            The JSON dataset used in the monitoring job.

            • Line (boolean) --

              Indicates if the file should be read as a JSON object per line.

          • Parquet (dict) --

            The Parquet dataset used in the monitoring job.

        • LocalPath (string) --

          Path to the filesystem where the batch transform data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • GroundTruthS3Input (dict) --

        Location of ground truth labels to use in model bias job.

        • S3Uri (string) --

          The address of the Amazon S3 location of the ground truth labels.

    • ModelBiasJobOutputConfig (dict) --

      The output configuration for monitoring jobs.

      • MonitoringOutputs (list) --

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) --

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) --

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) --

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • JobResources (dict) --

      Identifies the resources to deploy for a monitoring job.

      • ClusterConfig (dict) --

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) --

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) --

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) --

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • NetworkConfig (dict) --

      Networking options for a model bias job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) --

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) --

          The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that has read permission to the input data location and write permission to the output data location in Amazon S3.

    • StoppingCondition (dict) --

      A time limit for how long the monitoring job is allowed to run before stopping.

      • MaxRuntimeInSeconds (integer) --

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
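The new BatchTransformInput block can be assembled as a plain Python dict before it is passed to the create call. A minimal sketch, assuming CSV-formatted transform data; the bucket name, local path, and helper function below are illustrative placeholders, not values defined by this API:

```python
# Hypothetical helper that builds the BatchTransformInput dict introduced in
# this update; all names and values below are illustrative placeholders.
def build_batch_transform_input(capture_s3_uri,
                                local_path='/opt/ml/processing/input',
                                csv_header=True):
    return {
        'DataCapturedDestinationS3Uri': capture_s3_uri,
        'DatasetFormat': {'Csv': {'Header': csv_header}},
        'LocalPath': local_path,
        'S3InputMode': 'File',                       # 'Pipe' for large datasets
        'S3DataDistributionType': 'FullyReplicated',
    }

batch_input = build_batch_transform_input('s3://example-bucket/transform-capture')

# The dict is then supplied inside the job input, e.g. for a data quality job:
# client.create_data_quality_job_definition(
#     JobDefinitionName='example-dq-job',
#     DataQualityJobInput={'BatchTransformInput': batch_input},
#     ...,  # baseline config, app specification, output config, resources, role
# )
```

Exactly one of Csv, Json, or Parquet is set under DatasetFormat, matching the format of the captured transform data.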

DescribeModelExplainabilityJobDefinition (updated) Link ¶
Changes (response)
{'ModelExplainabilityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                         'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                           'Json': {'Line': 'boolean'},
                                                                           'Parquet': {}},
                                                         'EndTimeOffset': 'string',
                                                         'FeaturesAttribute': 'string',
                                                         'InferenceAttribute': 'string',
                                                         'LocalPath': 'string',
                                                         'ProbabilityAttribute': 'string',
                                                         'ProbabilityThresholdAttribute': 'double',
                                                         'S3DataDistributionType': 'FullyReplicated '
                                                                                   '| '
                                                                                   'ShardedByS3Key',
                                                         'S3InputMode': 'Pipe '
                                                                        '| '
                                                                        'File',
                                                         'StartTimeOffset': 'string'}}}

Returns a description of a model explainability job definition.

See also: AWS API Documentation

Request Syntax

client.describe_model_explainability_job_definition(
    JobDefinitionName='string'
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the model explainability job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string',
    'JobDefinitionName': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'ModelExplainabilityBaselineConfig': {
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    'ModelExplainabilityAppSpecification': {
        'ImageUri': 'string',
        'ConfigUri': 'string',
        'Environment': {
            'string': 'string'
        }
    },
    'ModelExplainabilityJobInput': {
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        }
    },
    'ModelExplainabilityJobOutputConfig': {
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    'JobResources': {
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    'NetworkConfig': {
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    'RoleArn': 'string',
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123
    }
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the model explainability job.

    • JobDefinitionName (string) --

      The name of the explainability job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

    • CreationTime (datetime) --

      The time at which the model explainability job was created.

    • ModelExplainabilityBaselineConfig (dict) --

      The baseline configuration for a model explainability job.

      • BaseliningJobName (string) --

        The name of the baseline model explainability job.

      • ConstraintsResource (dict) --

        The constraints resource for a monitoring job.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

    • ModelExplainabilityAppSpecification (dict) --

      Configures the model explainability job to run a specified Docker container image.

      • ImageUri (string) --

        The container image to be run by the model explainability job.

      • ConfigUri (string) --

        JSON formatted S3 file that defines explainability parameters. For more information on this JSON configuration file, see Configure model explainability parameters.

      • Environment (dict) --

        Sets the environment variables in the Docker container.

        • (string) --

          • (string) --

    • ModelExplainabilityJobInput (dict) --

      Inputs for the model explainability job.

      • EndpointInput (dict) --

        Input object for the endpoint.

        • EndpointName (string) --

          An endpoint in the customer's account that has DataCaptureConfig enabled.

        • LocalPath (string) --

          Path to the filesystem where the endpoint data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • BatchTransformInput (dict) --

        Input object for the batch transform job.

        • DataCapturedDestinationS3Uri (string) --

          The Amazon S3 location being used to capture the data.

        • DatasetFormat (dict) --

          The dataset format for your batch transform job.

          • Csv (dict) --

            The CSV dataset used in the monitoring job.

            • Header (boolean) --

              Indicates if the CSV data has a header.

          • Json (dict) --

            The JSON dataset used in the monitoring job.

            • Line (boolean) --

              Indicates if the file should be read as a JSON object per line.

          • Parquet (dict) --

            The Parquet dataset used in the monitoring job.

        • LocalPath (string) --

          Path to the filesystem where the batch transform data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • ModelExplainabilityJobOutputConfig (dict) --

      The output configuration for monitoring jobs.

      • MonitoringOutputs (list) --

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) --

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) --

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) --

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • JobResources (dict) --

      Identifies the resources to deploy for a monitoring job.

      • ClusterConfig (dict) --

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) --

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) --

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) --

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • NetworkConfig (dict) --

      Networking options for a model explainability job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) --

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) --

          The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of the Amazon Web Services Identity and Access Management (IAM) role that has read permission to the input data location and write permission to the output data location in Amazon S3.

    • StoppingCondition (dict) --

      A time limit for how long the monitoring job is allowed to run before stopping.

      • MaxRuntimeInSeconds (integer) --

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
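With this update, a description may carry either EndpointInput or BatchTransformInput under ModelExplainabilityJobInput. A minimal sketch of branching on the response shape; the sample response below is hand-built for illustration, not the result of a live API call:

```python
# Distinguish endpoint-based from batch-transform-based monitoring inputs in a
# DescribeModelExplainabilityJobDefinition-style response dict.
def monitored_input_source(response):
    job_input = response.get('ModelExplainabilityJobInput', {})
    if 'BatchTransformInput' in job_input:
        return ('batch', job_input['BatchTransformInput']['DataCapturedDestinationS3Uri'])
    if 'EndpointInput' in job_input:
        return ('endpoint', job_input['EndpointInput']['EndpointName'])
    return ('unknown', None)

# Hand-built sample shaped like the Response Syntax above.
sample = {
    'ModelExplainabilityJobInput': {
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 's3://example-bucket/capture',
            'DatasetFormat': {'Json': {'Line': True}},
        }
    }
}
print(monitored_input_source(sample))  # ('batch', 's3://example-bucket/capture')
```

The same pattern applies to the data quality, model bias, and model quality describe responses, substituting the corresponding job input key.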

DescribeModelQualityJobDefinition (updated) Link ¶
Changes (response)
{'ModelQualityJobInput': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                  'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                    'Json': {'Line': 'boolean'},
                                                                    'Parquet': {}},
                                                  'EndTimeOffset': 'string',
                                                  'FeaturesAttribute': 'string',
                                                  'InferenceAttribute': 'string',
                                                  'LocalPath': 'string',
                                                  'ProbabilityAttribute': 'string',
                                                  'ProbabilityThresholdAttribute': 'double',
                                                  'S3DataDistributionType': 'FullyReplicated '
                                                                            '| '
                                                                            'ShardedByS3Key',
                                                  'S3InputMode': 'Pipe | File',
                                                  'StartTimeOffset': 'string'}}}

Returns a description of a model quality job definition.

See also: AWS API Documentation

Request Syntax

client.describe_model_quality_job_definition(
    JobDefinitionName='string'
)
type JobDefinitionName

string

param JobDefinitionName

[REQUIRED]

The name of the model quality job. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

rtype

dict

returns

Response Syntax

{
    'JobDefinitionArn': 'string',
    'JobDefinitionName': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'ModelQualityBaselineConfig': {
        'BaseliningJobName': 'string',
        'ConstraintsResource': {
            'S3Uri': 'string'
        }
    },
    'ModelQualityAppSpecification': {
        'ImageUri': 'string',
        'ContainerEntrypoint': [
            'string',
        ],
        'ContainerArguments': [
            'string',
        ],
        'RecordPreprocessorSourceUri': 'string',
        'PostAnalyticsProcessorSourceUri': 'string',
        'ProblemType': 'BinaryClassification'|'MulticlassClassification'|'Regression',
        'Environment': {
            'string': 'string'
        }
    },
    'ModelQualityJobInput': {
        'EndpointInput': {
            'EndpointName': 'string',
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'BatchTransformInput': {
            'DataCapturedDestinationS3Uri': 'string',
            'DatasetFormat': {
                'Csv': {
                    'Header': True|False
                },
                'Json': {
                    'Line': True|False
                },
                'Parquet': {}
            },
            'LocalPath': 'string',
            'S3InputMode': 'Pipe'|'File',
            'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
            'FeaturesAttribute': 'string',
            'InferenceAttribute': 'string',
            'ProbabilityAttribute': 'string',
            'ProbabilityThresholdAttribute': 123.0,
            'StartTimeOffset': 'string',
            'EndTimeOffset': 'string'
        },
        'GroundTruthS3Input': {
            'S3Uri': 'string'
        }
    },
    'ModelQualityJobOutputConfig': {
        'MonitoringOutputs': [
            {
                'S3Output': {
                    'S3Uri': 'string',
                    'LocalPath': 'string',
                    'S3UploadMode': 'Continuous'|'EndOfJob'
                }
            },
        ],
        'KmsKeyId': 'string'
    },
    'JobResources': {
        'ClusterConfig': {
            'InstanceCount': 123,
            'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
            'VolumeSizeInGB': 123,
            'VolumeKmsKeyId': 'string'
        }
    },
    'NetworkConfig': {
        'EnableInterContainerTrafficEncryption': True|False,
        'EnableNetworkIsolation': True|False,
        'VpcConfig': {
            'SecurityGroupIds': [
                'string',
            ],
            'Subnets': [
                'string',
            ]
        }
    },
    'RoleArn': 'string',
    'StoppingCondition': {
        'MaxRuntimeInSeconds': 123
    }
}

Response Structure

  • (dict) --

    • JobDefinitionArn (string) --

      The Amazon Resource Name (ARN) of the model quality job.

    • JobDefinitionName (string) --

      The name of the quality job definition. The name must be unique within an Amazon Web Services Region in the Amazon Web Services account.

    • CreationTime (datetime) --

      The time at which the model quality job was created.

    • ModelQualityBaselineConfig (dict) --

      The baseline configuration for a model quality job.

      • BaseliningJobName (string) --

        The name of the job that performs baselining for the monitoring job.

      • ConstraintsResource (dict) --

        The constraints resource for a monitoring job.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

    • ModelQualityAppSpecification (dict) --

      Configures the model quality job to run a specified Docker container image.

      • ImageUri (string) --

        The address of the container image that the monitoring job runs.

      • ContainerEntrypoint (list) --

        Specifies the entrypoint for a container that the monitoring job runs.

        • (string) --

      • ContainerArguments (list) --

        An array of arguments for the container used to run the monitoring job.

        • (string) --

      • RecordPreprocessorSourceUri (string) --

        An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

      • PostAnalyticsProcessorSourceUri (string) --

        An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

      • ProblemType (string) --

        The machine learning problem type of the model that the monitoring job monitors.

      • Environment (dict) --

        Sets the environment variables in the container that the monitoring job runs.

        • (string) --

          • (string) --

    • ModelQualityJobInput (dict) --

      Inputs for the model quality job.

      • EndpointInput (dict) --

        Input object for the endpoint.

        • EndpointName (string) --

          An endpoint in the customer's account that has DataCaptureConfig enabled.

        • LocalPath (string) --

          Path to the filesystem where the endpoint data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • BatchTransformInput (dict) --

        Input object for the batch transform job.

        • DataCapturedDestinationS3Uri (string) --

          The Amazon S3 location being used to capture the data.

        • DatasetFormat (dict) --

          The dataset format for your batch transform job.

          • Csv (dict) --

            The CSV dataset used in the monitoring job.

            • Header (boolean) --

              Indicates if the CSV data has a header.

          • Json (dict) --

            The JSON dataset used in the monitoring job.

            • Line (boolean) --

              Indicates if the file should be read as a JSON object per line.

          • Parquet (dict) --

            The Parquet dataset used in the monitoring job.

        • LocalPath (string) --

          Path to the filesystem where the batch transform data is available to the container.

        • S3InputMode (string) --

          Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

        • S3DataDistributionType (string) --

          Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

        • FeaturesAttribute (string) --

          The attributes of the input data that are the input features.

        • InferenceAttribute (string) --

          The attribute of the input data that represents the ground truth label.

        • ProbabilityAttribute (string) --

          In a classification problem, the attribute that represents the class probability.

        • ProbabilityThresholdAttribute (float) --

          The threshold for the class probability to be evaluated as a positive result.

        • StartTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • EndTimeOffset (string) --

          If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

      • GroundTruthS3Input (dict) --

        The ground truth label provided for the model.

        • S3Uri (string) --

          The address of the Amazon S3 location of the ground truth labels.

    • ModelQualityJobOutputConfig (dict) --

      The output configuration for monitoring jobs.

      • MonitoringOutputs (list) --

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) --

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) --

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) --

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • JobResources (dict) --

      Identifies the resources to deploy for a monitoring job.

      • ClusterConfig (dict) --

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) --

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) --

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) --

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • NetworkConfig (dict) --

      Networking options for a model quality job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between the instances used for the monitoring jobs. Choose True to encrypt communications. Encryption provides greater security for distributed jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the monitoring job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) --

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) --

          The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

    • StoppingCondition (dict) --

      A time limit for how long the monitoring job is allowed to run before stopping.

      • MaxRuntimeInSeconds (integer) --

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.
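The runtime cap described in the note above can be checked client-side before a job definition is created. A minimal sketch, assuming the hourly limits quoted in the note (confirm the current values against the SageMaker documentation):

```python
# Illustrative client-side check of the MaxRuntimeInSeconds note above.
# The hourly caps per monitoring type are taken from that note; treat them
# as assumptions and verify against current SageMaker documentation.
HOURLY_RUNTIME_CAPS = {
    "DataQuality": 3600,
    "ModelExplainability": 3600,
    "ModelBias": 1800,
    "ModelQuality": 1800,
}

def max_runtime_ok(monitoring_type, max_runtime_in_seconds):
    """Return True if the requested runtime fits the hourly-schedule cap."""
    cap = HOURLY_RUNTIME_CAPS[monitoring_type]
    return 0 < max_runtime_in_seconds <= cap
```

Running this check before calling the Create*JobDefinition APIs avoids a server-side validation error for an out-of-range StoppingCondition.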

DescribeMonitoringSchedule (updated) Link ¶
Changes (response)
{'MonitoringScheduleConfig': {'MonitoringJobDefinition': {'MonitoringInputs': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                                                                       'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                                                                         'Json': {'Line': 'boolean'},
                                                                                                                         'Parquet': {}},
                                                                                                       'EndTimeOffset': 'string',
                                                                                                       'FeaturesAttribute': 'string',
                                                                                                       'InferenceAttribute': 'string',
                                                                                                       'LocalPath': 'string',
                                                                                                       'ProbabilityAttribute': 'string',
                                                                                                       'ProbabilityThresholdAttribute': 'double',
                                                                                                       'S3DataDistributionType': 'FullyReplicated '
                                                                                                                                 '| '
                                                                                                                                 'ShardedByS3Key',
                                                                                                       'S3InputMode': 'Pipe '
                                                                                                                      '| '
                                                                                                                      'File',
                                                                                                       'StartTimeOffset': 'string'}}}}}

Describes the schedule for a monitoring job.

See also: AWS API Documentation

Request Syntax

client.describe_monitoring_schedule(
    MonitoringScheduleName='string'
)
type MonitoringScheduleName

string

param MonitoringScheduleName

[REQUIRED]

Name of a previously created monitoring schedule.

rtype

dict

returns

Response Syntax

{
    'MonitoringScheduleArn': 'string',
    'MonitoringScheduleName': 'string',
    'MonitoringScheduleStatus': 'Pending'|'Failed'|'Scheduled'|'Stopped',
    'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability',
    'FailureReason': 'string',
    'CreationTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'MonitoringScheduleConfig': {
        'ScheduleConfig': {
            'ScheduleExpression': 'string'
        },
        'MonitoringJobDefinition': {
            'BaselineConfig': {
                'BaseliningJobName': 'string',
                'ConstraintsResource': {
                    'S3Uri': 'string'
                },
                'StatisticsResource': {
                    'S3Uri': 'string'
                }
            },
            'MonitoringInputs': [
                {
                    'EndpointInput': {
                        'EndpointName': 'string',
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    },
                    'BatchTransformInput': {
                        'DataCapturedDestinationS3Uri': 'string',
                        'DatasetFormat': {
                            'Csv': {
                                'Header': True|False
                            },
                            'Json': {
                                'Line': True|False
                            },
                            'Parquet': {}
                        },
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    }
                },
            ],
            'MonitoringOutputConfig': {
                'MonitoringOutputs': [
                    {
                        'S3Output': {
                            'S3Uri': 'string',
                            'LocalPath': 'string',
                            'S3UploadMode': 'Continuous'|'EndOfJob'
                        }
                    },
                ],
                'KmsKeyId': 'string'
            },
            'MonitoringResources': {
                'ClusterConfig': {
                    'InstanceCount': 123,
                    'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
                    'VolumeSizeInGB': 123,
                    'VolumeKmsKeyId': 'string'
                }
            },
            'MonitoringAppSpecification': {
                'ImageUri': 'string',
                'ContainerEntrypoint': [
                    'string',
                ],
                'ContainerArguments': [
                    'string',
                ],
                'RecordPreprocessorSourceUri': 'string',
                'PostAnalyticsProcessorSourceUri': 'string'
            },
            'StoppingCondition': {
                'MaxRuntimeInSeconds': 123
            },
            'Environment': {
                'string': 'string'
            },
            'NetworkConfig': {
                'EnableInterContainerTrafficEncryption': True|False,
                'EnableNetworkIsolation': True|False,
                'VpcConfig': {
                    'SecurityGroupIds': [
                        'string',
                    ],
                    'Subnets': [
                        'string',
                    ]
                }
            },
            'RoleArn': 'string'
        },
        'MonitoringJobDefinitionName': 'string',
        'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability'
    },
    'EndpointName': 'string',
    'LastMonitoringExecutionSummary': {
        'MonitoringScheduleName': 'string',
        'ScheduledTime': datetime(2015, 1, 1),
        'CreationTime': datetime(2015, 1, 1),
        'LastModifiedTime': datetime(2015, 1, 1),
        'MonitoringExecutionStatus': 'Pending'|'Completed'|'CompletedWithViolations'|'InProgress'|'Failed'|'Stopping'|'Stopped',
        'ProcessingJobArn': 'string',
        'EndpointName': 'string',
        'FailureReason': 'string',
        'MonitoringJobDefinitionName': 'string',
        'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability'
    }
}

Response Structure

  • (dict) --

    • MonitoringScheduleArn (string) --

      The Amazon Resource Name (ARN) of the monitoring schedule.

    • MonitoringScheduleName (string) --

      Name of the monitoring schedule.

    • MonitoringScheduleStatus (string) --

      The status of the monitoring job.

    • MonitoringType (string) --

      The type of the monitoring job that this schedule runs. This is one of the following values.

      • DATA_QUALITY - The schedule is for a data quality monitoring job.

      • MODEL_QUALITY - The schedule is for a model quality monitoring job.

      • MODEL_BIAS - The schedule is for a bias monitoring job.

      • MODEL_EXPLAINABILITY - The schedule is for an explainability monitoring job.

    • FailureReason (string) --

      A string, up to one KB in size, that contains the reason a monitoring job failed, if it failed.

    • CreationTime (datetime) --

      The time at which the monitoring job was created.

    • LastModifiedTime (datetime) --

      The time at which the monitoring job was last modified.

    • MonitoringScheduleConfig (dict) --

      The configuration object that specifies the monitoring schedule and defines the monitoring job.

      • ScheduleConfig (dict) --

        Configures the monitoring schedule.

        • ScheduleExpression (string) --

          A cron expression that describes details about the monitoring schedule.

          Currently the only supported cron expressions are:

          • To run the job hourly: cron(0 * ? * * *)

          • To run the job daily: cron(0 [00-23] ? * * *)

          For example, the following are valid cron expressions:

          • Daily at noon UTC: cron(0 12 ? * * *)

          • Daily at midnight UTC: cron(0 0 ? * * *)

          To run the job every 6 or 12 hours, expressions of the following form are also supported:

          cron(0 [00-23]/[01-24] ? * * *)

          For example, the following are valid cron expressions:

          • Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *)

          • Every two hours starting at midnight: cron(0 0/2 ? * * *)

          Note

          • Even if the cron expression is set to start at 5 PM UTC, there can be a delay of 0-20 minutes between the requested time and the time the execution actually starts.

          • For a daily schedule, we recommend omitting this parameter; Amazon SageMaker picks a run time each day.

      • MonitoringJobDefinition (dict) --

        Defines the monitoring job.

        • BaselineConfig (dict) --

          Baseline configuration used to validate that the data conforms to the specified constraints and statistics.

          • BaseliningJobName (string) --

            The name of the job that performs baselining for the monitoring job.

          • ConstraintsResource (dict) --

            The baseline constraint file in Amazon S3 that the current monitoring job should be validated against.

            • S3Uri (string) --

              The Amazon S3 URI for the constraints resource.

          • StatisticsResource (dict) --

            The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.

            • S3Uri (string) --

              The Amazon S3 URI for the statistics resource.

        • MonitoringInputs (list) --

          The array of inputs for the monitoring job. Amazon SageMaker Endpoints and batch transform jobs are supported as monitoring inputs.

          • (dict) --

            The inputs for a monitoring job.

            • EndpointInput (dict) --

              The endpoint for a monitoring job.

              • EndpointName (string) --

                An endpoint in the customer's account that has DataCaptureConfig enabled.

              • LocalPath (string) --

                Path to the filesystem where the endpoint data is available to the container.

              • S3InputMode (string) --

                Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

              • S3DataDistributionType (string) --

                Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

              • FeaturesAttribute (string) --

                The attributes of the input data that are the input features.

              • InferenceAttribute (string) --

                The attribute of the input data that represents the ground truth label.

              • ProbabilityAttribute (string) --

                In a classification problem, the attribute that represents the class probability.

              • ProbabilityThresholdAttribute (float) --

                The threshold for the class probability to be evaluated as a positive result.

              • StartTimeOffset (string) --

                If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

              • EndTimeOffset (string) --

                If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

            • BatchTransformInput (dict) --

              Input object for the batch transform job.

              • DataCapturedDestinationS3Uri (string) --

                The Amazon S3 location being used to capture the data.

              • DatasetFormat (dict) --

                The dataset format for your batch transform job.

                • Csv (dict) --

                  The CSV dataset used in the monitoring job.

                  • Header (boolean) --

                    Indicates if the CSV data has a header.

                • Json (dict) --

                  The JSON dataset used in the monitoring job.

                  • Line (boolean) --

                    Indicates if the file should be read as a JSON object per line.

                • Parquet (dict) --

                  The Parquet dataset used in the monitoring job.

              • LocalPath (string) --

                Path to the filesystem where the batch transform data is available to the container.

              • S3InputMode (string) --

                Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.

              • S3DataDistributionType (string) --

                Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.

              • FeaturesAttribute (string) --

                The attributes of the input data that are the input features.

              • InferenceAttribute (string) --

                The attribute of the input data that represents the ground truth label.

              • ProbabilityAttribute (string) --

                In a classification problem, the attribute that represents the class probability.

              • ProbabilityThresholdAttribute (float) --

                The threshold for the class probability to be evaluated as a positive result.

              • StartTimeOffset (string) --

                If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

              • EndTimeOffset (string) --

                If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • MonitoringOutputConfig (dict) --

          The array of outputs from the monitoring job to be uploaded to Amazon Simple Storage Service (Amazon S3).

          • MonitoringOutputs (list) --

            Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

            • (dict) --

              The output object for a monitoring job.

              • S3Output (dict) --

                The Amazon S3 storage location where the results of a monitoring job are saved.

                • S3Uri (string) --

                  A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

                • LocalPath (string) --

                  The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

                • S3UploadMode (string) --

                  Whether to upload the results of the monitoring job continuously or after the job completes.

          • KmsKeyId (string) --

            The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

        • MonitoringResources (dict) --

          Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job. In distributed processing, you specify more than one instance.

          • ClusterConfig (dict) --

            The configuration for the cluster resources used to run the processing job.

            • InstanceCount (integer) --

              The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

            • InstanceType (string) --

              The ML compute instance type for the processing job.

            • VolumeSizeInGB (integer) --

              The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

            • VolumeKmsKeyId (string) --

              The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

        • MonitoringAppSpecification (dict) --

          Configures the monitoring job to run a specified Docker container image.

          • ImageUri (string) --

            The container image to be run by the monitoring job.

          • ContainerEntrypoint (list) --

            Specifies the entrypoint for a container used to run the monitoring job.

            • (string) --

          • ContainerArguments (list) --

            An array of arguments for the container used to run the monitoring job.

            • (string) --

          • RecordPreprocessorSourceUri (string) --

            An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

          • PostAnalyticsProcessorSourceUri (string) --

            An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

        • StoppingCondition (dict) --

          Specifies a time limit for how long the monitoring job is allowed to run.

          • MaxRuntimeInSeconds (integer) --

            The maximum runtime allowed in seconds.

            Note

            The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

        • Environment (dict) --

          Sets the environment variables in the Docker container.

          • (string) --

            • (string) --

        • NetworkConfig (dict) --

          Specifies networking options for a monitoring job.

          • EnableInterContainerTrafficEncryption (boolean) --

            Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.

          • EnableNetworkIsolation (boolean) --

            Whether to allow inbound and outbound network calls to and from the containers used for the processing job.

          • VpcConfig (dict) --

            Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

            • SecurityGroupIds (list) --

              The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

              • (string) --

            • Subnets (list) --

              The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

              • (string) --

        • RoleArn (string) --

          The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

      • MonitoringJobDefinitionName (string) --

        The name of the monitoring job definition to schedule.

      • MonitoringType (string) --

        The type of the monitoring job definition to schedule.

    • EndpointName (string) --

      The name of the endpoint for the monitoring job.

    • LastMonitoringExecutionSummary (dict) --

      Describes metadata on the last execution to run, if there was one.

      • MonitoringScheduleName (string) --

        The name of the monitoring schedule.

      • ScheduledTime (datetime) --

        The time the monitoring job was scheduled.

      • CreationTime (datetime) --

        The time at which the monitoring job was created.

      • LastModifiedTime (datetime) --

        A timestamp that indicates the last time the monitoring job was modified.

      • MonitoringExecutionStatus (string) --

        The status of the monitoring job.

      • ProcessingJobArn (string) --

        The Amazon Resource Name (ARN) of the monitoring job.

      • EndpointName (string) --

        The name of the endpoint used to run the monitoring job.

      • FailureReason (string) --

        Contains the reason a monitoring job failed, if it failed.

      • MonitoringJobDefinitionName (string) --

        The name of the monitoring job.

      • MonitoringType (string) --

        The type of the monitoring job.

DescribeTransformJob (updated) Link ¶
Changes (response)
{'DataCaptureConfig': {'DestinationS3Uri': 'string',
                       'GenerateInferenceId': 'boolean',
                       'KmsKeyId': 'string'}}

Returns information about a transform job.

See also: AWS API Documentation

Request Syntax

client.describe_transform_job(
    TransformJobName='string'
)
type TransformJobName

string

param TransformJobName

[REQUIRED]

The name of the transform job that you want to view details of.

rtype

dict

returns

Response Syntax

{
    'TransformJobName': 'string',
    'TransformJobArn': 'string',
    'TransformJobStatus': 'InProgress'|'Completed'|'Failed'|'Stopping'|'Stopped',
    'FailureReason': 'string',
    'ModelName': 'string',
    'MaxConcurrentTransforms': 123,
    'ModelClientConfig': {
        'InvocationsTimeoutInSeconds': 123,
        'InvocationsMaxRetries': 123
    },
    'MaxPayloadInMB': 123,
    'BatchStrategy': 'MultiRecord'|'SingleRecord',
    'Environment': {
        'string': 'string'
    },
    'TransformInput': {
        'DataSource': {
            'S3DataSource': {
                'S3DataType': 'ManifestFile'|'S3Prefix'|'AugmentedManifestFile',
                'S3Uri': 'string'
            }
        },
        'ContentType': 'string',
        'CompressionType': 'None'|'Gzip',
        'SplitType': 'None'|'Line'|'RecordIO'|'TFRecord'
    },
    'TransformOutput': {
        'S3OutputPath': 'string',
        'Accept': 'string',
        'AssembleWith': 'None'|'Line',
        'KmsKeyId': 'string'
    },
    'DataCaptureConfig': {
        'DestinationS3Uri': 'string',
        'KmsKeyId': 'string',
        'GenerateInferenceId': True|False
    },
    'TransformResources': {
        'InstanceType': 'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
        'InstanceCount': 123,
        'VolumeKmsKeyId': 'string'
    },
    'CreationTime': datetime(2015, 1, 1),
    'TransformStartTime': datetime(2015, 1, 1),
    'TransformEndTime': datetime(2015, 1, 1),
    'LabelingJobArn': 'string',
    'AutoMLJobArn': 'string',
    'DataProcessing': {
        'InputFilter': 'string',
        'OutputFilter': 'string',
        'JoinSource': 'Input'|'None'
    },
    'ExperimentConfig': {
        'ExperimentName': 'string',
        'TrialName': 'string',
        'TrialComponentDisplayName': 'string'
    }
}

Response Structure

  • (dict) --

    • TransformJobName (string) --

      The name of the transform job.

    • TransformJobArn (string) --

      The Amazon Resource Name (ARN) of the transform job.

    • TransformJobStatus (string) --

      The status of the transform job. If the transform job failed, the reason is returned in the FailureReason field.

    • FailureReason (string) --

      If the transform job failed, FailureReason describes why it failed. A transform job creates a log file, which includes error messages, and stores it as an Amazon S3 object. For more information, see Log Amazon SageMaker Events with Amazon CloudWatch.

    • ModelName (string) --

      The name of the model used in the transform job.

    • MaxConcurrentTransforms (integer) --

      The maximum number of parallel requests on each instance node that can be launched in a transform job. The default value is 1.

    • ModelClientConfig (dict) --

      The timeout and maximum number of retries for processing a transform job invocation.

      • InvocationsTimeoutInSeconds (integer) --

        The timeout value in seconds for an invocation request. The default value is 600.

      • InvocationsMaxRetries (integer) --

        The maximum number of retries when invocation requests are failing. The default value is 3.

    • MaxPayloadInMB (integer) --

      The maximum payload size, in MB, used in the transform job.

    • BatchStrategy (string) --

      Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

      To enable the batch strategy, you must set SplitType to Line , RecordIO , or TFRecord .

    • Environment (dict) --

      The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

      • (string) --

        • (string) --

    • TransformInput (dict) --

      Describes the dataset to be transformed and the Amazon S3 location where it is stored.

      • DataSource (dict) --

        Describes the location of the channel data, that is, the S3 location of the input data that the model can consume.

        • S3DataSource (dict) --

          The S3 location of the data source that is associated with a channel.

          • S3DataType (string) --

            If you choose S3Prefix , S3Uri identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for batch transform.

            If you choose ManifestFile , S3Uri identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for batch transform.

            The following values are compatible: ManifestFile , S3Prefix

            The following value is not compatible: AugmentedManifestFile

          • S3Uri (string) --

            Depending on the value specified for the S3DataType , identifies either a key name prefix or a manifest. For example:

            • A key name prefix might look like this: s3://bucketname/exampleprefix .

            • A manifest might look like this: s3://bucketname/example.manifest The manifest is an S3 object which is a JSON file with the following format: [ {"prefix": "s3://customer_bucket/some/prefix/"}, "relative/path/to/custdata-1", "relative/path/custdata-2", ... "relative/path/custdata-N" ] The preceding JSON matches the following S3Uris : s3://customer_bucket/some/prefix/relative/path/to/custdata-1 s3://customer_bucket/some/prefix/relative/path/custdata-2 ... s3://customer_bucket/some/prefix/relative/path/custdata-N The complete set of S3Uris in this manifest constitutes the input data for the channel for this datasource. The object that each S3Uris points to must be readable by the IAM role that Amazon SageMaker uses to perform tasks on your behalf.

      • ContentType (string) --

        The multipurpose internet mail extension (MIME) type of the data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data to the transform job.

      • CompressionType (string) --

        If your transform data is compressed, specify the compression type. Amazon SageMaker automatically decompresses the data for the transform job accordingly. The default value is None .

      • SplitType (string) --

        The method to use to split the transform job's data files into smaller batches. Splitting is necessary when the total size of each object is too large to fit in a single request. You can also use data splitting to improve performance by processing multiple concurrent mini-batches. The default value for SplitType is None , which indicates that input data files are not split, and request payloads contain the entire contents of an input object. Set the value of this parameter to Line to split records on a newline character boundary. SplitType also supports a number of record-oriented binary data formats. Currently, the supported record formats are:

        • RecordIO

        • TFRecord

        When splitting is enabled, the size of a mini-batch depends on the values of the BatchStrategy and MaxPayloadInMB parameters. When the value of BatchStrategy is MultiRecord , Amazon SageMaker sends the maximum number of records in each request, up to the MaxPayloadInMB limit. If the value of BatchStrategy is SingleRecord , Amazon SageMaker sends individual records in each request.

        Note

        Some data formats represent a record as a binary payload wrapped with extra padding bytes. When splitting is applied to a binary data format, padding is removed if the value of BatchStrategy is set to SingleRecord . Padding is not removed if the value of BatchStrategy is set to MultiRecord .

        For more information about RecordIO , see Create a Dataset Using RecordIO in the MXNet documentation. For more information about TFRecord , see Consuming TFRecord data in the TensorFlow documentation.

    • TransformOutput (dict) --

      Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

      • S3OutputPath (string) --

        The Amazon S3 path where you want Amazon SageMaker to store the results of the transform job. For example, s3://bucket-name/key-name-prefix .

        For every S3 object used as input for the transform job, batch transform stores the transformed data with an .out suffix in a corresponding subfolder in the location in the output prefix. For example, for the input data stored at s3://bucket-name/input-name-prefix/dataset01/data.csv , batch transform stores the transformed data at s3://bucket-name/output-name-prefix/input-name-prefix/data.csv.out . Batch transform doesn't upload partially processed objects. For an input S3 object that contains multiple records, it creates an .out file only if the transform job succeeds on the entire file. When the input contains multiple S3 objects, the batch transform job processes the listed S3 objects and uploads only the output for successfully processed objects. If any object fails in the transform job, batch transform marks the job as failed to prompt investigation.

      • Accept (string) --

        The MIME type used to specify the output data. Amazon SageMaker uses the MIME type with each HTTP call to transfer data from the transform job.

      • AssembleWith (string) --

        Defines how to assemble the results of the transform job as a single S3 object. Choose a format that is most convenient to you. To concatenate the results in binary format, specify None . To add a newline character at the end of every transformed record, specify Line .

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId can be any of the following formats:

        • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

        • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

        • Alias name: alias/ExampleAlias

        • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

        If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.

        The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in Amazon Web Services KMS in the Amazon Web Services Key Management Service Developer Guide .

    • DataCaptureConfig (dict) --

      Configuration to control how SageMaker captures inference data.

      • DestinationS3Uri (string) --

        The Amazon S3 location being used to capture the data.

      • KmsKeyId (string) --

        The Amazon Resource Name (ARN) of an Amazon Web Services Key Management Service key that SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the batch transform job.

        The KmsKeyId can be any of the following formats:

        • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

        • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

        • Alias name: alias/ExampleAlias

        • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

      • GenerateInferenceId (boolean) --

        Flag that indicates whether to append the inference ID to the output.

    • TransformResources (dict) --

      Describes the resources, including ML instance types and ML instance count, to use for the transform job.

      • InstanceType (string) --

        The ML compute instance type for the transform job. If you are using built-in algorithms to transform moderately sized datasets, we recommend using ml.m4.xlarge or ml.m5.large instance types.

      • InstanceCount (integer) --

        The number of ML compute instances to use in the transform job. For distributed transform jobs, specify a value greater than 1. The default value is 1 .

      • VolumeKmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job.

        Note

        Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId when using an instance type with local storage.

        For a list of instance types that support local instance storage, see Instance Store Volumes.

        For more information about local instance storage encryption, see SSD Instance Store Volumes.

        The VolumeKmsKeyId can be any of the following formats:

        • Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab

        • Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

        • Alias name: alias/ExampleAlias

        • Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias

    • CreationTime (datetime) --

      A timestamp that shows when the transform job was created.

    • TransformStartTime (datetime) --

      Indicates when the transform job starts on ML instances. You are billed for the time interval between this time and the value of TransformEndTime .

    • TransformEndTime (datetime) --

      Indicates when the transform job has been completed, or has stopped or failed. You are billed for the time interval between this time and the value of TransformStartTime .

    • LabelingJobArn (string) --

      The Amazon Resource Name (ARN) of the Amazon SageMaker Ground Truth labeling job that created the transform or training job.

    • AutoMLJobArn (string) --

      The Amazon Resource Name (ARN) of the AutoML transform job.

    • DataProcessing (dict) --

      The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

      • InputFilter (string) --

        A JSONPath expression used to select a portion of the input data to pass to the algorithm. Use the InputFilter parameter to exclude fields, such as an ID column, from the input. If you want SageMaker to pass the entire input dataset to the algorithm, accept the default value $ .

        Examples: "$" , "$[1:]" , "$.features"

      • OutputFilter (string) --

        A JSONPath expression used to select a portion of the joined dataset to save in the output file for a batch transform job. If you want SageMaker to store the entire input dataset in the output file, leave the default value, $ . If you specify indexes that aren't within the dimension size of the joined dataset, you get an error.

        Examples: "$" , "$[0,5:]" , "$['id','SageMakerOutput']"

      • JoinSource (string) --

        Specifies the source of the data to join with the transformed data. The valid values are None and Input . The default value is None , which specifies not to join the input with the transformed data. If you want the batch transform job to join the original input data with the transformed data, set JoinSource to Input . You can specify OutputFilter as an additional filter to select a portion of the joined dataset and store it in the output file.

        For JSON or JSONLines objects, such as a JSON array, SageMaker adds the transformed data to the input JSON object in an attribute called SageMakerOutput . The joined result for JSON must be a key-value pair object. If the input is not a key-value pair object, SageMaker creates a new JSON file. In the new JSON file, the input data is stored under the SageMakerInput key and the results are stored in SageMakerOutput .

        For CSV data, SageMaker takes each row as a JSON array and joins the transformed data with the input by appending each transformed row to the end of the input. The joined data has the original input data followed by the transformed data and the output is a CSV file.

        For information on how joining is applied, see Workflow for Associating Inferences with Input Records.

    • ExperimentConfig (dict) --

      Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:

      • CreateProcessingJob

      • CreateTrainingJob

      • CreateTransformJob

      • ExperimentName (string) --

        The name of an existing experiment to associate the trial component with.

      • TrialName (string) --

        The name of an existing trial to associate the trial component with. If not specified, a new trial is created.

      • TrialComponentDisplayName (string) --

        The display name for the trial component. If this key isn't specified, the display name is the trial component name.
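The DataCaptureConfig block in the response is new in this release. As a minimal sketch (the helper below is illustrative, not part of the API, and the job name is hypothetical), you could pull the capture details out of a describe_transform_job response like this:

```python
def capture_summary(resp):
    """Return the data-capture details from a describe_transform_job
    response dict, or None when capture was not enabled for the job."""
    capture = resp.get("DataCaptureConfig")
    if capture is None:
        return None
    return {
        "s3_uri": capture["DestinationS3Uri"],
        "kms_key": capture.get("KmsKeyId"),
        "inference_id": capture.get("GenerateInferenceId", False),
    }

# With real AWS credentials this would be used as (job name hypothetical):
#   sm = boto3.client("sagemaker")
#   summary = capture_summary(
#       sm.describe_transform_job(TransformJobName="my-transform-job"))
```

Because DataCaptureConfig is only returned when data capture was configured at job creation, checking for its absence (rather than indexing into it directly) avoids a KeyError on older jobs.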

UpdateMonitoringSchedule (updated) Link ¶
Changes (request)
{'MonitoringScheduleConfig': {'MonitoringJobDefinition': {'MonitoringInputs': {'BatchTransformInput': {'DataCapturedDestinationS3Uri': 'string',
                                                                                                       'DatasetFormat': {'Csv': {'Header': 'boolean'},
                                                                                                                         'Json': {'Line': 'boolean'},
                                                                                                                         'Parquet': {}},
                                                                                                       'EndTimeOffset': 'string',
                                                                                                       'FeaturesAttribute': 'string',
                                                                                                       'InferenceAttribute': 'string',
                                                                                                       'LocalPath': 'string',
                                                                                                       'ProbabilityAttribute': 'string',
                                                                                                       'ProbabilityThresholdAttribute': 'double',
                                                                                                       'S3DataDistributionType': 'FullyReplicated '
                                                                                                                                 '| '
                                                                                                                                 'ShardedByS3Key',
                                                                                                       'S3InputMode': 'Pipe '
                                                                                                                      '| '
                                                                                                                      'File',
                                                                                                       'StartTimeOffset': 'string'}}}}}

Updates a previously created schedule.

See also: AWS API Documentation

Request Syntax

client.update_monitoring_schedule(
    MonitoringScheduleName='string',
    MonitoringScheduleConfig={
        'ScheduleConfig': {
            'ScheduleExpression': 'string'
        },
        'MonitoringJobDefinition': {
            'BaselineConfig': {
                'BaseliningJobName': 'string',
                'ConstraintsResource': {
                    'S3Uri': 'string'
                },
                'StatisticsResource': {
                    'S3Uri': 'string'
                }
            },
            'MonitoringInputs': [
                {
                    'EndpointInput': {
                        'EndpointName': 'string',
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    },
                    'BatchTransformInput': {
                        'DataCapturedDestinationS3Uri': 'string',
                        'DatasetFormat': {
                            'Csv': {
                                'Header': True|False
                            },
                            'Json': {
                                'Line': True|False
                            },
                            'Parquet': {}
                        },
                        'LocalPath': 'string',
                        'S3InputMode': 'Pipe'|'File',
                        'S3DataDistributionType': 'FullyReplicated'|'ShardedByS3Key',
                        'FeaturesAttribute': 'string',
                        'InferenceAttribute': 'string',
                        'ProbabilityAttribute': 'string',
                        'ProbabilityThresholdAttribute': 123.0,
                        'StartTimeOffset': 'string',
                        'EndTimeOffset': 'string'
                    }
                },
            ],
            'MonitoringOutputConfig': {
                'MonitoringOutputs': [
                    {
                        'S3Output': {
                            'S3Uri': 'string',
                            'LocalPath': 'string',
                            'S3UploadMode': 'Continuous'|'EndOfJob'
                        }
                    },
                ],
                'KmsKeyId': 'string'
            },
            'MonitoringResources': {
                'ClusterConfig': {
                    'InstanceCount': 123,
                    'InstanceType': 'ml.t3.medium'|'ml.t3.large'|'ml.t3.xlarge'|'ml.t3.2xlarge'|'ml.m4.xlarge'|'ml.m4.2xlarge'|'ml.m4.4xlarge'|'ml.m4.10xlarge'|'ml.m4.16xlarge'|'ml.c4.xlarge'|'ml.c4.2xlarge'|'ml.c4.4xlarge'|'ml.c4.8xlarge'|'ml.p2.xlarge'|'ml.p2.8xlarge'|'ml.p2.16xlarge'|'ml.p3.2xlarge'|'ml.p3.8xlarge'|'ml.p3.16xlarge'|'ml.c5.xlarge'|'ml.c5.2xlarge'|'ml.c5.4xlarge'|'ml.c5.9xlarge'|'ml.c5.18xlarge'|'ml.m5.large'|'ml.m5.xlarge'|'ml.m5.2xlarge'|'ml.m5.4xlarge'|'ml.m5.12xlarge'|'ml.m5.24xlarge'|'ml.r5.large'|'ml.r5.xlarge'|'ml.r5.2xlarge'|'ml.r5.4xlarge'|'ml.r5.8xlarge'|'ml.r5.12xlarge'|'ml.r5.16xlarge'|'ml.r5.24xlarge'|'ml.g4dn.xlarge'|'ml.g4dn.2xlarge'|'ml.g4dn.4xlarge'|'ml.g4dn.8xlarge'|'ml.g4dn.12xlarge'|'ml.g4dn.16xlarge',
                    'VolumeSizeInGB': 123,
                    'VolumeKmsKeyId': 'string'
                }
            },
            'MonitoringAppSpecification': {
                'ImageUri': 'string',
                'ContainerEntrypoint': [
                    'string',
                ],
                'ContainerArguments': [
                    'string',
                ],
                'RecordPreprocessorSourceUri': 'string',
                'PostAnalyticsProcessorSourceUri': 'string'
            },
            'StoppingCondition': {
                'MaxRuntimeInSeconds': 123
            },
            'Environment': {
                'string': 'string'
            },
            'NetworkConfig': {
                'EnableInterContainerTrafficEncryption': True|False,
                'EnableNetworkIsolation': True|False,
                'VpcConfig': {
                    'SecurityGroupIds': [
                        'string',
                    ],
                    'Subnets': [
                        'string',
                    ]
                }
            },
            'RoleArn': 'string'
        },
        'MonitoringJobDefinitionName': 'string',
        'MonitoringType': 'DataQuality'|'ModelQuality'|'ModelBias'|'ModelExplainability'
    }
)
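To exercise the new BatchTransformInput member, a sketch like the following builds the input shape (the bucket, local path, and helper name are hypothetical; per the request syntax above, DataCapturedDestinationS3Uri, DatasetFormat, and LocalPath are the required fields):

```python
def batch_transform_input(capture_s3_uri, local_path, dataset_format,
                          s3_input_mode="File"):
    """Assemble the BatchTransformInput shape for a MonitoringInput.

    dataset_format must be one of the DatasetFormat shapes, e.g.
    {"Csv": {"Header": True}}, {"Json": {"Line": True}}, or {"Parquet": {}}.
    """
    return {
        "DataCapturedDestinationS3Uri": capture_s3_uri,
        "DatasetFormat": dataset_format,
        "LocalPath": local_path,
        "S3InputMode": s3_input_mode,
    }

# Hypothetical values; pass the result inside the MonitoringInputs list when
# calling client.update_monitoring_schedule(MonitoringScheduleName=..., ...).
monitoring_input = {
    "BatchTransformInput": batch_transform_input(
        "s3://my-bucket/transform-capture",
        "/opt/ml/processing/input",
        {"Csv": {"Header": True}},
    )
}
```

Each MonitoringInput entry carries either an EndpointInput or a BatchTransformInput, so a schedule previously pointed at an endpoint can be switched to monitor captured batch transform data by swapping this member.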
type MonitoringScheduleName

string

param MonitoringScheduleName

[REQUIRED]

The name of the monitoring schedule. The name must be unique within an Amazon Web Services Region within an Amazon Web Services account.

type MonitoringScheduleConfig

dict

param MonitoringScheduleConfig

[REQUIRED]

The configuration object that specifies the monitoring schedule and defines the monitoring job.

  • ScheduleConfig (dict) --

    Configures the monitoring schedule.

    • ScheduleExpression (string) -- [REQUIRED]

      A cron expression that describes details about the monitoring schedule.

      Currently the only supported cron expressions are:

      • Hourly, at the start of every hour: cron(0 * ? * * *)

      • Daily, at the hour you specify: cron(0 [00-23] ? * * *)

      For example, the following are valid cron expressions:

      • Daily at noon UTC: cron(0 12 ? * * *)

      • Daily at midnight UTC: cron(0 0 ? * * *)

      To support running at other hourly intervals (for example, every 6 or 12 hours), the following form is also supported:

      cron(0 [00-23]/[01-24] ? * * *)

      For example, the following are valid cron expressions:

      • Every 12 hours, starting at 5pm UTC: cron(0 17/12 ? * * *)

      • Every two hours starting at midnight: cron(0 0/2 ? * * *)

      Note

      • Even if the cron expression is set to start at, for example, 5PM UTC, there could be a delay of 0-20 minutes from the requested time before the execution actually runs.

      • We recommend that if you would like a daily schedule, you do not provide this parameter. Amazon SageMaker will pick a time for running every day.

  • MonitoringJobDefinition (dict) --

    Defines the monitoring job.

    • BaselineConfig (dict) --

      Baseline configuration used to validate that the data conforms to the specified constraints and statistics

      • BaseliningJobName (string) --

        The name of the job that performs baselining for the monitoring job.

      • ConstraintsResource (dict) --

        The baseline constraint file in Amazon S3 that the current monitoring job should be validated against.

        • S3Uri (string) --

          The Amazon S3 URI for the constraints resource.

      • StatisticsResource (dict) --

        The baseline statistics file in Amazon S3 that the current monitoring job should be validated against.

        • S3Uri (string) --

          The Amazon S3 URI for the statistics resource.

    • MonitoringInputs (list) -- [REQUIRED]

      The array of inputs for the monitoring job. Currently we support monitoring an Amazon SageMaker Endpoint or a batch transform job.

      • (dict) --

        The inputs for a monitoring job.

        • EndpointInput (dict) --

          The endpoint for a monitoring job.

          • EndpointName (string) -- [REQUIRED]

            An endpoint in the customer's account which has DataCaptureConfig enabled.

          • LocalPath (string) -- [REQUIRED]

            Path to the filesystem where the endpoint data is available to the container.

          • S3InputMode (string) --

            Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

          • S3DataDistributionType (string) --

            Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated .

          • FeaturesAttribute (string) --

            The attributes of the input data that are the input features.

          • InferenceAttribute (string) --

            The attribute of the input data that represents the ground truth label.

          • ProbabilityAttribute (string) --

            In a classification problem, the attribute that represents the class probability.

          • ProbabilityThresholdAttribute (float) --

            The threshold for the class probability to be evaluated as a positive result.

          • StartTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

          • EndTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

        • BatchTransformInput (dict) --

          Input object for the batch transform job.

          • DataCapturedDestinationS3Uri (string) -- [REQUIRED]

            The Amazon S3 location being used to capture the data.

          • DatasetFormat (dict) -- [REQUIRED]

            The dataset format for your batch transform job.

            • Csv (dict) --

              The CSV dataset used in the monitoring job.

              • Header (boolean) --

                Indicates if the CSV data has a header.

            • Json (dict) --

              The JSON dataset used in the monitoring job.

              • Line (boolean) --

                Indicates if the file should be read as a JSON object per line.

            • Parquet (dict) --

              The Parquet dataset used in the monitoring job.

          • LocalPath (string) -- [REQUIRED]

            Path to the filesystem where the batch transform data is available to the container.

          • S3InputMode (string) --

            Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File .

          • S3DataDistributionType (string) --

            Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated .

          • FeaturesAttribute (string) --

            The attributes of the input data that are the input features.

          • InferenceAttribute (string) --

            The attribute of the input data that represents the ground truth label.

          • ProbabilityAttribute (string) --

            In a classification problem, the attribute that represents the class probability.

          • ProbabilityThresholdAttribute (float) --

            The threshold for the class probability to be evaluated as a positive result.

          • StartTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the start time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

          • EndTimeOffset (string) --

            If specified, monitoring jobs subtract this time from the end time. For information about using offsets for scheduling monitoring jobs, see Schedule Model Quality Monitoring Jobs.

    • MonitoringOutputConfig (dict) -- [REQUIRED]

      The array of outputs from the monitoring job to be uploaded to Amazon Simple Storage Service (Amazon S3).

      • MonitoringOutputs (list) -- [REQUIRED]

        Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.

        • (dict) --

          The output object for a monitoring job.

          • S3Output (dict) -- [REQUIRED]

            The Amazon S3 storage location where the results of a monitoring job are saved.

            • S3Uri (string) -- [REQUIRED]

              A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.

            • LocalPath (string) -- [REQUIRED]

              The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.

            • S3UploadMode (string) --

              Whether to upload the results of the monitoring job continuously or after the job completes.

      • KmsKeyId (string) --

        The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.

    • MonitoringResources (dict) -- [REQUIRED]

      Identifies the resources, ML compute instances, and ML storage volumes to deploy for a monitoring job. In distributed processing, you specify more than one instance.

      • ClusterConfig (dict) -- [REQUIRED]

        The configuration for the cluster resources used to run the processing job.

        • InstanceCount (integer) -- [REQUIRED]

          The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.

        • InstanceType (string) -- [REQUIRED]

          The ML compute instance type for the processing job.

        • VolumeSizeInGB (integer) -- [REQUIRED]

          The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.

        • VolumeKmsKeyId (string) --

          The Amazon Web Services Key Management Service (Amazon Web Services KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.

    • MonitoringAppSpecification (dict) -- [REQUIRED]

      Configures the monitoring job to run a specified Docker container image.

      • ImageUri (string) -- [REQUIRED]

        The container image to be run by the monitoring job.

      • ContainerEntrypoint (list) --

        Specifies the entrypoint for a container used to run the monitoring job.

        • (string) --

      • ContainerArguments (list) --

        An array of arguments for the container used to run the monitoring job.

        • (string) --

      • RecordPreprocessorSourceUri (string) --

        An Amazon S3 URI to a script that is called per row prior to running analysis. It can base64 decode the payload and convert it into flattened JSON so that the built-in container can use the converted data. Applicable only for the built-in (first party) containers.

      • PostAnalyticsProcessorSourceUri (string) --

        An Amazon S3 URI to a script that is called after analysis has been performed. Applicable only for the built-in (first party) containers.

    • StoppingCondition (dict) --

      Specifies a time limit for how long the monitoring job is allowed to run.

      • MaxRuntimeInSeconds (integer) -- [REQUIRED]

        The maximum runtime allowed in seconds.

        Note

        The MaxRuntimeInSeconds cannot exceed the frequency of the job. For data quality and model explainability, this can be up to 3600 seconds for an hourly schedule. For model bias and model quality hourly schedules, this can be up to 1800 seconds.

    • Environment (dict) --

      Sets the environment variables in the Docker container.

      • (string) --

        • (string) --

    • NetworkConfig (dict) --

      Specifies networking options for a monitoring job.

      • EnableInterContainerTrafficEncryption (boolean) --

        Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.

      • EnableNetworkIsolation (boolean) --

        Whether to allow inbound and outbound network calls to and from the containers used for the processing job.

      • VpcConfig (dict) --

        Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC. For more information, see Protect Endpoints by Using an Amazon Virtual Private Cloud and Protect Training Jobs by Using an Amazon Virtual Private Cloud.

        • SecurityGroupIds (list) -- [REQUIRED]

          The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

          • (string) --

        • Subnets (list) -- [REQUIRED]

          The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

          • (string) --

    • RoleArn (string) -- [REQUIRED]

      The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.

  • MonitoringJobDefinitionName (string) --

    The name of the monitoring job definition to schedule.

  • MonitoringType (string) --

    The type of the monitoring job definition to schedule.
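
The new BatchTransformInput block described above can be assembled as a plain dict before it is passed in the request. Here is a minimal sketch for a CSV dataset; the S3 URI and local path are hypothetical placeholders, and the offsets use the duration form (for example -PT1H) described in the scheduling docs:

```python
# A hypothetical BatchTransformInput block for a data quality monitoring job.
# The S3 URI and local path are placeholders, not real resources.
batch_transform_input = {
    'DataCapturedDestinationS3Uri': 's3://example-bucket/captured-data',
    'DatasetFormat': {'Csv': {'Header': True}},   # CSV data with a header row
    'LocalPath': '/opt/ml/processing/input',      # where the container reads the data
    'S3InputMode': 'File',       # File suits small datasets; Pipe suits large ones
    'S3DataDistributionType': 'FullyReplicated',
    'StartTimeOffset': '-PT1H',  # subtract one hour from the start time
    'EndTimeOffset': '-PT0H',
}

# The keys marked [REQUIRED] in the reference above must all be present.
required = {'DataCapturedDestinationS3Uri', 'DatasetFormat', 'LocalPath'}
assert required <= batch_transform_input.keys()
```

The same dict shape works for JSON or Parquet captures by swapping the DatasetFormat entry, e.g. {'Json': {'Line': True}} or {'Parquet': {}}.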

rtype

dict

returns

Response Syntax

{
    'MonitoringScheduleArn': 'string'
}

Response Structure

  • (dict) --

    • MonitoringScheduleArn (string) --

      The Amazon Resource Name (ARN) of the monitoring schedule.
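
Taken together, a hedged sketch of invoking this call from boto3 (judging from the MonitoringScheduleArn it returns, this appears to be create_monitoring_schedule). Every name and the cron expression below are placeholders, and the actual call needs valid AWS credentials and an existing job definition, so it is left commented out:

```python
# All names below are illustrative placeholders.
request = {
    'MonitoringScheduleName': 'example-data-quality-schedule',
    'MonitoringScheduleConfig': {
        'ScheduleConfig': {'ScheduleExpression': 'cron(0 * ? * * *)'},  # hourly
        'MonitoringJobDefinitionName': 'example-data-quality-job-definition',
        'MonitoringType': 'DataQuality',
    },
}

# Requires AWS credentials and a pre-created job definition, so not run here:
# import boto3
# client = boto3.client('sagemaker')
# response = client.create_monitoring_schedule(**request)
# print(response['MonitoringScheduleArn'])
```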