Amazon SageMaker Service

2021/05/05 - Amazon SageMaker Service - 3 updated API methods

Changes: Amazon SageMaker Autopilot now provides the ability to automatically deploy the best model to an endpoint.

CreateAutoMLJob (updated)
Changes (request)
{'ModelDeployConfig': {'AutoGenerateEndpointName': 'boolean',
                       'EndpointName': 'string'}}

Creates an Autopilot job.

Find the best-performing model after you run an Autopilot job by calling DescribeAutoMLJob.

For information about how to use Autopilot, see Automate Model Development with Amazon SageMaker Autopilot.

See also: AWS API Documentation

Request Syntax

client.create_auto_ml_job(
    AutoMLJobName='string',
    InputDataConfig=[
        {
            'DataSource': {
                'S3DataSource': {
                    'S3DataType': 'ManifestFile'|'S3Prefix',
                    'S3Uri': 'string'
                }
            },
            'CompressionType': 'None'|'Gzip',
            'TargetAttributeName': 'string'
        },
    ],
    OutputDataConfig={
        'KmsKeyId': 'string',
        'S3OutputPath': 'string'
    },
    ProblemType='BinaryClassification'|'MulticlassClassification'|'Regression',
    AutoMLJobObjective={
        'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'
    },
    AutoMLJobConfig={
        'CompletionCriteria': {
            'MaxCandidates': 123,
            'MaxRuntimePerTrainingJobInSeconds': 123,
            'MaxAutoMLJobRuntimeInSeconds': 123
        },
        'SecurityConfig': {
            'VolumeKmsKeyId': 'string',
            'EnableInterContainerTrafficEncryption': True|False,
            'VpcConfig': {
                'SecurityGroupIds': [
                    'string',
                ],
                'Subnets': [
                    'string',
                ]
            }
        }
    },
    RoleArn='string',
    GenerateCandidateDefinitionsOnly=True|False,
    Tags=[
        {
            'Key': 'string',
            'Value': 'string'
        },
    ],
    ModelDeployConfig={
        'AutoGenerateEndpointName': True|False,
        'EndpointName': 'string'
    }
)
type AutoMLJobName

string

param AutoMLJobName

[REQUIRED]

Identifies an Autopilot job. The name must be unique to your account and is case-insensitive.

type InputDataConfig

list

param InputDataConfig

[REQUIRED]

An array of channel objects that describes the input data and its location. Each channel is a named input source, similar to the InputDataConfig attribute in the CreateTrainingJob request. Format(s) supported: CSV. Minimum of 500 rows.

  • (dict) --

    A channel is a named input source that training algorithms can consume.

    • DataSource (dict) -- [REQUIRED]

      The data source for an AutoML channel.

      • S3DataSource (dict) -- [REQUIRED]

        The Amazon S3 location of the input data.

        Note

        The input data must be in CSV format and contain at least 500 rows.

        • S3DataType (string) -- [REQUIRED]

          The data type: either ManifestFile or S3Prefix .

        • S3Uri (string) -- [REQUIRED]

          The URL to the Amazon S3 data source.

    • CompressionType (string) --

      You can use Gzip or None . The default value is None .

    • TargetAttributeName (string) -- [REQUIRED]

      The name of the target variable in supervised learning, usually represented by 'y'.

type OutputDataConfig

dict

param OutputDataConfig

[REQUIRED]

Provides information about encryption and the Amazon S3 output path needed to store artifacts from an AutoML job. Format(s) supported: CSV.

  • KmsKeyId (string) --

    The AWS KMS encryption key ID.

  • S3OutputPath (string) -- [REQUIRED]

    The Amazon S3 output path. Must be 128 characters or less.

type ProblemType

string

param ProblemType

Defines the type of supervised learning available for the candidates. Options include: BinaryClassification , MulticlassClassification , and Regression . For more information, see Amazon SageMaker Autopilot problem types and algorithm support.

type AutoMLJobObjective

dict

param AutoMLJobObjective

Defines the objective metric used to measure the predictive quality of an AutoML job. You provide an AutoMLJobObjective$MetricName and Autopilot infers whether to minimize or maximize it.

  • MetricName (string) -- [REQUIRED]

    The name of the objective metric used to measure the predictive quality of a machine learning system. This metric is optimized during training to provide the best estimate for model parameter values from data.

    Here are the options:

    • MSE : The mean squared error (MSE) is the average of the squared differences between the predicted and actual values. It is used for regression. MSE values are always positive: the better a model is at predicting the actual values, the smaller the MSE value. When the data contains outliers, they tend to dominate the MSE, which might cause subpar prediction performance.

    • Accuracy : The ratio of the number of correctly classified items to the total number of (correctly and incorrectly) classified items. It is used for binary and multiclass classification. It measures how close the predicted class values are to the actual values. Accuracy values vary between zero and one: one indicates perfect accuracy and zero indicates perfect inaccuracy.

    • F1 : The F1 score is the harmonic mean of the precision and recall. It is used for binary classification into classes traditionally referred to as positive and negative. Predictions are said to be true when they match their actual (correct) class and false when they do not. Precision is the ratio of the true positive predictions to all positive predictions (including the false positives) in a data set and measures the quality of the prediction when it predicts the positive class. Recall (or sensitivity) is the ratio of the true positive predictions to all actual positive instances and measures how completely a model predicts the actual class members in a data set. The standard F1 score weighs precision and recall equally. But which metric is paramount typically depends on specific aspects of a problem. F1 scores vary between zero and one: one indicates the best possible performance and zero the worst.

    • AUC : The area under the curve (AUC) metric is used to compare and evaluate binary classification by algorithms such as logistic regression that return probabilities. A threshold is needed to map the probabilities into classifications. The relevant curve is the receiver operating characteristic curve that plots the true positive rate (TPR) of predictions (or recall) against the false positive rate (FPR) as a function of the threshold value, above which a prediction is considered positive. Increasing the threshold results in fewer false positives but more false negatives. AUC is the area under this receiver operating characteristic curve and so provides an aggregated measure of the model performance across all possible classification thresholds. The AUC score can also be interpreted as the probability that a randomly selected positive data point is more likely to be predicted positive than a randomly selected negative example. AUC scores vary between zero and one: a score of one indicates perfect accuracy and a score of one half indicates that the prediction is not better than a random classifier. Values under one half predict less accurately than a random predictor. But such consistently bad predictors can simply be inverted to obtain better than random predictors.

    • F1macro : The F1macro score applies F1 scoring to multiclass classification. In this context, you have multiple classes to predict. You just calculate the precision and recall for each class as you did for the positive class in binary classification. Then, use these values to calculate the F1 score for each class and average them to obtain the F1macro score. F1macro scores vary between zero and one: one indicates the best possible performance and zero the worst.

    If you do not specify a metric explicitly, the default behavior is to automatically use:

    • MSE : for regression.

    • F1 : for binary classification

    • Accuracy : for multiclass classification.
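The defaults above can be summarized in a small lookup. This is a sketch, not part of the SageMaker API; the helper name is hypothetical and simply mirrors the documented mapping from ProblemType to the default objective metric:

```python
# Hypothetical helper mirroring the documented defaults for
# AutoMLJobObjective.MetricName when none is specified explicitly.
DEFAULT_OBJECTIVE_METRIC = {
    "Regression": "MSE",
    "BinaryClassification": "F1",
    "MulticlassClassification": "Accuracy",
}


def default_metric(problem_type):
    """Return the objective metric Autopilot uses by default for a ProblemType."""
    try:
        return DEFAULT_OBJECTIVE_METRIC[problem_type]
    except KeyError:
        raise ValueError(f"Unknown ProblemType: {problem_type!r}")
```

Note that when ProblemType itself is omitted, Autopilot infers it from the data, so the resolved metric is only known after the job starts (see ResolvedAttributes in the DescribeAutoMLJob response).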

type AutoMLJobConfig

dict

param AutoMLJobConfig

Contains CompletionCriteria and SecurityConfig settings for the AutoML job.

  • CompletionCriteria (dict) --

    How long an AutoML job is allowed to run, or how many candidates a job is allowed to generate.

    • MaxCandidates (integer) --

      The maximum number of candidates the AutoML job is allowed to generate.

    • MaxRuntimePerTrainingJobInSeconds (integer) --

      The maximum time, in seconds, a job is allowed to run.

    • MaxAutoMLJobRuntimeInSeconds (integer) --

      The maximum time, in seconds, an AutoML job is allowed to wait for a trial to complete. It must be equal to or greater than MaxRuntimePerTrainingJobInSeconds .

  • SecurityConfig (dict) --

    The security configuration for traffic encryption or Amazon VPC settings.

    • VolumeKmsKeyId (string) --

      The key used to encrypt stored data.

    • EnableInterContainerTrafficEncryption (boolean) --

      Whether to use traffic encryption between the container layers.

    • VpcConfig (dict) --

      The VPC configuration.

      • SecurityGroupIds (list) -- [REQUIRED]

        The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

        • (string) --

      • Subnets (list) -- [REQUIRED]

        The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

        • (string) --
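The constraint between the two runtime limits in CompletionCriteria can be checked client-side before submitting the job. A minimal sketch (the function name is hypothetical; the rule is the one stated above, that MaxAutoMLJobRuntimeInSeconds must be equal to or greater than MaxRuntimePerTrainingJobInSeconds):

```python
def validate_completion_criteria(criteria):
    """Validate the documented constraint on an AutoMLJobConfig
    CompletionCriteria dict and return it unchanged if valid."""
    per_job = criteria.get("MaxRuntimePerTrainingJobInSeconds")
    total = criteria.get("MaxAutoMLJobRuntimeInSeconds")
    if per_job is not None and total is not None and total < per_job:
        raise ValueError(
            "MaxAutoMLJobRuntimeInSeconds must be equal to or greater "
            "than MaxRuntimePerTrainingJobInSeconds"
        )
    return criteria
```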

type RoleArn

string

param RoleArn

[REQUIRED]

The ARN of the role that is used to access the data.

type GenerateCandidateDefinitionsOnly

boolean

param GenerateCandidateDefinitionsOnly

Generates possible candidates without training the models. A candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.

type Tags

list

param Tags

Each tag consists of a key and an optional value. Tag keys must be unique per resource.

  • (dict) --

    Describes a tag.

    • Key (string) -- [REQUIRED]

      The tag key.

    • Value (string) -- [REQUIRED]

      The tag value.

type ModelDeployConfig

dict

param ModelDeployConfig

Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.

  • AutoGenerateEndpointName (boolean) --

    Set to True to automatically generate an endpoint name for a one-click Autopilot model deployment; set to False otherwise. The default value is True .

    Note

    If you set AutoGenerateEndpointName to True , do not specify the EndpointName ; otherwise a 400 error is thrown.

  • EndpointName (string) --

    Specifies the endpoint name to use for a one-click Autopilot model deployment if the endpoint name is not generated automatically.

    Note

    Specify the EndpointName if and only if you set AutoGenerateEndpointName to False ; otherwise a 400 error is thrown.
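The AutoGenerateEndpointName/EndpointName rule above is easy to get wrong, so a small builder can enforce it before the request is sent. This is a sketch, not part of the boto3 API; the helper name is hypothetical:

```python
def build_model_deploy_config(endpoint_name=None):
    """Build a ModelDeployConfig dict that respects the documented rule:
    EndpointName may be specified if and only if AutoGenerateEndpointName
    is False; otherwise CreateAutoMLJob returns a 400 error."""
    if endpoint_name is None:
        # Let Autopilot generate the endpoint name.
        return {"AutoGenerateEndpointName": True}
    return {"AutoGenerateEndpointName": False, "EndpointName": endpoint_name}


# In real use (placeholder values, requires AWS credentials):
# client.create_auto_ml_job(
#     AutoMLJobName="my-automl-job",
#     InputDataConfig=[...],
#     OutputDataConfig={...},
#     RoleArn="arn:aws:iam::111122223333:role/my-role",
#     ModelDeployConfig=build_model_deploy_config("my-endpoint"),
# )
```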

rtype

dict

returns

Response Syntax

{
    'AutoMLJobArn': 'string'
}

Response Structure

  • (dict) --

    • AutoMLJobArn (string) --

      The unique ARN that is assigned to the AutoML job when it is created.

DescribeAutoMLJob (updated)
Changes (response)
{'AutoMLJobSecondaryStatus': {'DeployingModel', 'ModelDeploymentError'},
 'ModelDeployConfig': {'AutoGenerateEndpointName': 'boolean',
                       'EndpointName': 'string'},
 'ModelDeployResult': {'EndpointName': 'string'}}

Returns information about an Amazon SageMaker AutoML job.

See also: AWS API Documentation

Request Syntax

client.describe_auto_ml_job(
    AutoMLJobName='string'
)
type AutoMLJobName

string

param AutoMLJobName

[REQUIRED]

Requests information about an AutoML job using its unique name.

rtype

dict

returns

Response Syntax

{
    'AutoMLJobName': 'string',
    'AutoMLJobArn': 'string',
    'InputDataConfig': [
        {
            'DataSource': {
                'S3DataSource': {
                    'S3DataType': 'ManifestFile'|'S3Prefix',
                    'S3Uri': 'string'
                }
            },
            'CompressionType': 'None'|'Gzip',
            'TargetAttributeName': 'string'
        },
    ],
    'OutputDataConfig': {
        'KmsKeyId': 'string',
        'S3OutputPath': 'string'
    },
    'RoleArn': 'string',
    'AutoMLJobObjective': {
        'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'
    },
    'ProblemType': 'BinaryClassification'|'MulticlassClassification'|'Regression',
    'AutoMLJobConfig': {
        'CompletionCriteria': {
            'MaxCandidates': 123,
            'MaxRuntimePerTrainingJobInSeconds': 123,
            'MaxAutoMLJobRuntimeInSeconds': 123
        },
        'SecurityConfig': {
            'VolumeKmsKeyId': 'string',
            'EnableInterContainerTrafficEncryption': True|False,
            'VpcConfig': {
                'SecurityGroupIds': [
                    'string',
                ],
                'Subnets': [
                    'string',
                ]
            }
        }
    },
    'CreationTime': datetime(2015, 1, 1),
    'EndTime': datetime(2015, 1, 1),
    'LastModifiedTime': datetime(2015, 1, 1),
    'FailureReason': 'string',
    'PartialFailureReasons': [
        {
            'PartialFailureMessage': 'string'
        },
    ],
    'BestCandidate': {
        'CandidateName': 'string',
        'FinalAutoMLJobObjectiveMetric': {
            'Type': 'Maximize'|'Minimize',
            'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC',
            'Value': ...
        },
        'ObjectiveStatus': 'Succeeded'|'Pending'|'Failed',
        'CandidateSteps': [
            {
                'CandidateStepType': 'AWS::SageMaker::TrainingJob'|'AWS::SageMaker::TransformJob'|'AWS::SageMaker::ProcessingJob',
                'CandidateStepArn': 'string',
                'CandidateStepName': 'string'
            },
        ],
        'CandidateStatus': 'Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
        'InferenceContainers': [
            {
                'Image': 'string',
                'ModelDataUrl': 'string',
                'Environment': {
                    'string': 'string'
                }
            },
        ],
        'CreationTime': datetime(2015, 1, 1),
        'EndTime': datetime(2015, 1, 1),
        'LastModifiedTime': datetime(2015, 1, 1),
        'FailureReason': 'string',
        'CandidateProperties': {
            'CandidateArtifactLocations': {
                'Explainability': 'string'
            }
        }
    },
    'AutoMLJobStatus': 'Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
    'AutoMLJobSecondaryStatus': 'Starting'|'AnalyzingData'|'FeatureEngineering'|'ModelTuning'|'MaxCandidatesReached'|'Failed'|'Stopped'|'MaxAutoMLJobRuntimeReached'|'Stopping'|'CandidateDefinitionsGenerated'|'GeneratingExplainabilityReport'|'Completed'|'ExplainabilityError'|'DeployingModel'|'ModelDeploymentError',
    'GenerateCandidateDefinitionsOnly': True|False,
    'AutoMLJobArtifacts': {
        'CandidateDefinitionNotebookLocation': 'string',
        'DataExplorationNotebookLocation': 'string'
    },
    'ResolvedAttributes': {
        'AutoMLJobObjective': {
            'MetricName': 'Accuracy'|'MSE'|'F1'|'F1macro'|'AUC'
        },
        'ProblemType': 'BinaryClassification'|'MulticlassClassification'|'Regression',
        'CompletionCriteria': {
            'MaxCandidates': 123,
            'MaxRuntimePerTrainingJobInSeconds': 123,
            'MaxAutoMLJobRuntimeInSeconds': 123
        }
    },
    'ModelDeployConfig': {
        'AutoGenerateEndpointName': True|False,
        'EndpointName': 'string'
    },
    'ModelDeployResult': {
        'EndpointName': 'string'
    }
}

Response Structure

  • (dict) --

    • AutoMLJobName (string) --

      Returns the name of the AutoML job.

    • AutoMLJobArn (string) --

      Returns the ARN of the AutoML job.

    • InputDataConfig (list) --

      Returns the input data configuration for the AutoML job.

      • (dict) --

        A channel is a named input source that training algorithms can consume.

        • DataSource (dict) --

          The data source for an AutoML channel.

          • S3DataSource (dict) --

            The Amazon S3 location of the input data.

            Note

            The input data must be in CSV format and contain at least 500 rows.

            • S3DataType (string) --

              The data type: either ManifestFile or S3Prefix .

            • S3Uri (string) --

              The URL to the Amazon S3 data source.

        • CompressionType (string) --

          You can use Gzip or None . The default value is None .

        • TargetAttributeName (string) --

          The name of the target variable in supervised learning, usually represented by 'y'.

    • OutputDataConfig (dict) --

      Returns the job's output data config.

      • KmsKeyId (string) --

        The AWS KMS encryption key ID.

      • S3OutputPath (string) --

        The Amazon S3 output path. Must be 128 characters or less.

    • RoleArn (string) --

      The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that has read permission to the input data location and write permission to the output data location in Amazon S3.

    • AutoMLJobObjective (dict) --

      Returns the job's objective.

      • MetricName (string) --

        The name of the objective metric used to measure the predictive quality of a machine learning system. This metric is optimized during training to provide the best estimate for model parameter values from data.

        Here are the options:

        • MSE : The mean squared error (MSE) is the average of the squared differences between the predicted and actual values. It is used for regression. MSE values are always positive: the better a model is at predicting the actual values, the smaller the MSE value. When the data contains outliers, they tend to dominate the MSE, which might cause subpar prediction performance.

        • Accuracy : The ratio of the number of correctly classified items to the total number of (correctly and incorrectly) classified items. It is used for binary and multiclass classification. It measures how close the predicted class values are to the actual values. Accuracy values vary between zero and one: one indicates perfect accuracy and zero indicates perfect inaccuracy.

        • F1 : The F1 score is the harmonic mean of the precision and recall. It is used for binary classification into classes traditionally referred to as positive and negative. Predictions are said to be true when they match their actual (correct) class and false when they do not. Precision is the ratio of the true positive predictions to all positive predictions (including the false positives) in a data set and measures the quality of the prediction when it predicts the positive class. Recall (or sensitivity) is the ratio of the true positive predictions to all actual positive instances and measures how completely a model predicts the actual class members in a data set. The standard F1 score weighs precision and recall equally. But which metric is paramount typically depends on specific aspects of a problem. F1 scores vary between zero and one: one indicates the best possible performance and zero the worst.

        • AUC : The area under the curve (AUC) metric is used to compare and evaluate binary classification by algorithms such as logistic regression that return probabilities. A threshold is needed to map the probabilities into classifications. The relevant curve is the receiver operating characteristic curve that plots the true positive rate (TPR) of predictions (or recall) against the false positive rate (FPR) as a function of the threshold value, above which a prediction is considered positive. Increasing the threshold results in fewer false positives but more false negatives. AUC is the area under this receiver operating characteristic curve and so provides an aggregated measure of the model performance across all possible classification thresholds. The AUC score can also be interpreted as the probability that a randomly selected positive data point is more likely to be predicted positive than a randomly selected negative example. AUC scores vary between zero and one: a score of one indicates perfect accuracy and a score of one half indicates that the prediction is not better than a random classifier. Values under one half predict less accurately than a random predictor. But such consistently bad predictors can simply be inverted to obtain better than random predictors.

        • F1macro : The F1macro score applies F1 scoring to multiclass classification. In this context, you have multiple classes to predict. You just calculate the precision and recall for each class as you did for the positive class in binary classification. Then, use these values to calculate the F1 score for each class and average them to obtain the F1macro score. F1macro scores vary between zero and one: one indicates the best possible performance and zero the worst.

        If you do not specify a metric explicitly, the default behavior is to automatically use:

        • MSE : for regression.

        • F1 : for binary classification

        • Accuracy : for multiclass classification.

    • ProblemType (string) --

      Returns the job's problem type.

    • AutoMLJobConfig (dict) --

      Returns the configuration for the AutoML job.

      • CompletionCriteria (dict) --

        How long an AutoML job is allowed to run, or how many candidates a job is allowed to generate.

        • MaxCandidates (integer) --

          The maximum number of candidates the AutoML job is allowed to generate.

        • MaxRuntimePerTrainingJobInSeconds (integer) --

          The maximum time, in seconds, a job is allowed to run.

        • MaxAutoMLJobRuntimeInSeconds (integer) --

          The maximum time, in seconds, an AutoML job is allowed to wait for a trial to complete. It must be equal to or greater than MaxRuntimePerTrainingJobInSeconds .

      • SecurityConfig (dict) --

        The security configuration for traffic encryption or Amazon VPC settings.

        • VolumeKmsKeyId (string) --

          The key used to encrypt stored data.

        • EnableInterContainerTrafficEncryption (boolean) --

          Whether to use traffic encryption between the container layers.

        • VpcConfig (dict) --

          The VPC configuration.

          • SecurityGroupIds (list) --

            The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.

            • (string) --

          • Subnets (list) --

            The IDs of the subnets in the VPC to which you want to connect your training job or model. For information about the availability of specific instance types, see Supported Instance Types and Availability Zones.

            • (string) --

    • CreationTime (datetime) --

      Returns the creation time of the AutoML job.

    • EndTime (datetime) --

      Returns the end time of the AutoML job.

    • LastModifiedTime (datetime) --

      Returns the job's last modified time.

    • FailureReason (string) --

      Returns the failure reason for an AutoML job, when applicable.

    • PartialFailureReasons (list) --

      Returns a list of reasons for partial failures within an AutoML job.

      • (dict) --

        The reason for a partial failure of an AutoML job.

        • PartialFailureMessage (string) --

          The message containing the reason for a partial failure of an AutoML job.

    • BestCandidate (dict) --

      Returns the job's best AutoMLCandidate .

      • CandidateName (string) --

        The name of the candidate.

      • FinalAutoMLJobObjectiveMetric (dict) --

        The best candidate result from an AutoML training job.

        • Type (string) --

          The type of metric with the best result.

        • MetricName (string) --

          The name of the metric with the best result. For a description of the possible objective metrics, see AutoMLJobObjective$MetricName.

        • Value (float) --

          The value of the metric with the best result.

      • ObjectiveStatus (string) --

        The objective's status.

      • CandidateSteps (list) --

        Information about the candidate's steps.

        • (dict) --

          Information about the steps for a candidate and what step it is working on.

          • CandidateStepType (string) --

            Whether the candidate is at the transform, training, or processing step.

          • CandidateStepArn (string) --

            The ARN for the candidate's step.

          • CandidateStepName (string) --

            The name for the candidate's step.

      • CandidateStatus (string) --

        The candidate's status.

      • InferenceContainers (list) --

        Information about the inference container definitions.

        • (dict) --

          A list of container definitions that describe the different containers that make up an AutoML candidate.

          • Image (string) --

            The Amazon ECR path of the container.

          • ModelDataUrl (string) --

            The location of the model artifacts.

          • Environment (dict) --

            The environment variables to set in the container.

            • (string) --

              • (string) --

      • CreationTime (datetime) --

        The creation time.

      • EndTime (datetime) --

        The end time.

      • LastModifiedTime (datetime) --

        The last modified time.

      • FailureReason (string) --

        The failure reason.

      • CandidateProperties (dict) --

        The AutoML candidate's properties.

        • CandidateArtifactLocations (dict) --

          The Amazon S3 prefix to the artifacts generated for an AutoML candidate.

          • Explainability (string) --

            The Amazon S3 prefix to the explainability artifacts generated for the AutoML candidate.

    • AutoMLJobStatus (string) --

      Returns the status of the AutoML job.

    • AutoMLJobSecondaryStatus (string) --

      Returns the secondary status of the AutoML job.

    • GenerateCandidateDefinitionsOnly (boolean) --

      Indicates whether the output for an AutoML job generates candidate definitions only.

    • AutoMLJobArtifacts (dict) --

      Returns information on the job's artifacts found in AutoMLJobArtifacts .

      • CandidateDefinitionNotebookLocation (string) --

        The URL of the candidate definition notebook location.

      • DataExplorationNotebookLocation (string) --

        The URL of the data exploration notebook location.

    • ResolvedAttributes (dict) --

      This contains ProblemType , AutoMLJobObjective and CompletionCriteria . If you do not provide these values, they are auto-inferred. If you do provide them, the values used are the ones you provide.

      • AutoMLJobObjective (dict) --

        Specifies a metric to minimize or maximize as the objective of a job.

        • MetricName (string) --

          The name of the objective metric used to measure the predictive quality of a machine learning system. This metric is optimized during training to provide the best estimate for model parameter values from data.

          Here are the options:

          • MSE : The mean squared error (MSE) is the average of the squared differences between the predicted and actual values. It is used for regression. MSE values are always positive: the better a model is at predicting the actual values, the smaller the MSE value. When the data contains outliers, they tend to dominate the MSE, which might cause subpar prediction performance.

          • Accuracy : The ratio of the number of correctly classified items to the total number of (correctly and incorrectly) classified items. It is used for binary and multiclass classification. It measures how close the predicted class values are to the actual values. Accuracy values vary between zero and one: one indicates perfect accuracy and zero indicates perfect inaccuracy.

          • F1 : The F1 score is the harmonic mean of the precision and recall. It is used for binary classification into classes traditionally referred to as positive and negative. Predictions are said to be true when they match their actual (correct) class and false when they do not. Precision is the ratio of the true positive predictions to all positive predictions (including the false positives) in a data set and measures the quality of the prediction when it predicts the positive class. Recall (or sensitivity) is the ratio of the true positive predictions to all actual positive instances and measures how completely a model predicts the actual class members in a data set. The standard F1 score weighs precision and recall equally. But which metric is paramount typically depends on specific aspects of a problem. F1 scores vary between zero and one: one indicates the best possible performance and zero the worst.

          • AUC : The area under the curve (AUC) metric is used to compare and evaluate binary classification by algorithms such as logistic regression that return probabilities. A threshold is needed to map the probabilities into classifications. The relevant curve is the receiver operating characteristic curve that plots the true positive rate (TPR) of predictions (or recall) against the false positive rate (FPR) as a function of the threshold value, above which a prediction is considered positive. Increasing the threshold results in fewer false positives but more false negatives. AUC is the area under this receiver operating characteristic curve and so provides an aggregated measure of the model performance across all possible classification thresholds. The AUC score can also be interpreted as the probability that a randomly selected positive data point is more likely to be predicted positive than a randomly selected negative example. AUC scores vary between zero and one: a score of one indicates perfect accuracy and a score of one half indicates that the prediction is not better than a random classifier. Values under one half predict less accurately than a random predictor. But such consistently bad predictors can simply be inverted to obtain better than random predictors.

          • F1macro : The F1macro score applies F1 scoring to multiclass classification. In this context, you have multiple classes to predict. You just calculate the precision and recall for each class as you did for the positive class in binary classification. Then, use these values to calculate the F1 score for each class and average them to obtain the F1macro score. F1macro scores vary between zero and one: one indicates the best possible performance and zero the worst.

          If you do not specify a metric explicitly, the default behavior is to automatically use:

          • MSE : for regression.

          • F1 : for binary classification

          • Accuracy : for multiclass classification.

      • ProblemType (string) --

        The problem type.

      • CompletionCriteria (dict) --

        How long a job is allowed to run, or how many candidates a job is allowed to generate.

        • MaxCandidates (integer) --

          The maximum number of candidates the AutoML job is allowed to generate.

        • MaxRuntimePerTrainingJobInSeconds (integer) --

          The maximum time, in seconds, that each training job launched by the AutoML job is allowed to run.

        • MaxAutoMLJobRuntimeInSeconds (integer) --

          The maximum total time, in seconds, that an AutoML job is allowed to run. It must be equal to or greater than MaxRuntimePerTrainingJobInSeconds .

    • ModelDeployConfig (dict) --

      Specifies how to generate the endpoint name for an automatic one-click Autopilot model deployment.

      • AutoGenerateEndpointName (boolean) --

        Set to True to automatically generate an endpoint name for a one-click Autopilot model deployment; set to False otherwise. The default value is False .

        Note

        If you set AutoGenerateEndpointName to True , do not specify the EndpointName ; otherwise a 400 error is thrown.

      • EndpointName (string) --

        Specifies the endpoint name to use for a one-click Autopilot model deployment if the endpoint name is not generated automatically.

        Note

        Specify the EndpointName if and only if you set AutoGenerateEndpointName to False ; otherwise a 400 error is thrown.

    • ModelDeployResult (dict) --

      Provides information about the endpoint for the model deployment.

      • EndpointName (string) --

        The name of the endpoint to which the model has been deployed.

        Note

        If model deployment fails, this field is omitted from the response.
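Taken together, the ModelDeployConfig fields above make one-click deployment an opt-in choice at job creation time. A minimal request sketch follows; the bucket, role ARN, job name, and target column are hypothetical placeholders, not values from this changelog.

```python
# Sketch of a CreateAutoMLJob request that enables one-click deployment.
# All names (bucket, role, job, target column) are hypothetical placeholders.
request = {
    "AutoMLJobName": "my-autopilot-job",
    "InputDataConfig": [
        {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://my-bucket/train/",
                }
            },
            "TargetAttributeName": "label",
        }
    ],
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/output/"},
    "RoleArn": "arn:aws:iam::123456789012:role/MySageMakerRole",
    # Let Autopilot generate the endpoint name. Per the note above, do not
    # also set EndpointName here, or the service returns a 400 error.
    "ModelDeployConfig": {"AutoGenerateEndpointName": True},
}

# With AWS credentials configured, the call would be:
#   import boto3
#   client = boto3.client("sagemaker")
#   response = client.create_auto_ml_job(**request)
```

To deploy to a caller-chosen name instead, set AutoGenerateEndpointName to False and supply EndpointName; the two options are mutually exclusive.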

ListAutoMLJobs (updated) Link ¶
Changes (response)
{'AutoMLJobSummaries': {'AutoMLJobSecondaryStatus': {'DeployingModel',
                                                     'ModelDeploymentError'}}}

Request a list of jobs.

See also: AWS API Documentation

Request Syntax

client.list_auto_ml_jobs(
    CreationTimeAfter=datetime(2015, 1, 1),
    CreationTimeBefore=datetime(2015, 1, 1),
    LastModifiedTimeAfter=datetime(2015, 1, 1),
    LastModifiedTimeBefore=datetime(2015, 1, 1),
    NameContains='string',
    StatusEquals='Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
    SortOrder='Ascending'|'Descending',
    SortBy='Name'|'CreationTime'|'Status',
    MaxResults=123,
    NextToken='string'
)
type CreationTimeAfter

datetime

param CreationTimeAfter

Request a list of jobs that were created after the specified time.

type CreationTimeBefore

datetime

param CreationTimeBefore

Request a list of jobs that were created before the specified time.

type LastModifiedTimeAfter

datetime

param LastModifiedTimeAfter

Request a list of jobs that were last modified after the specified time.

type LastModifiedTimeBefore

datetime

param LastModifiedTimeBefore

Request a list of jobs that were last modified before the specified time.

type NameContains

string

param NameContains

Request a list of jobs whose name contains the specified string.

type StatusEquals

string

param StatusEquals

Request a list of jobs, using a filter for status.

type SortOrder

string

param SortOrder

The sort order for the results. The default is Descending .

type SortBy

string

param SortBy

The parameter by which to sort the results. The default is Name .

type MaxResults

integer

param MaxResults

Request a list of jobs up to a specified limit.

type NextToken

string

param NextToken

If the previous response was truncated, you receive this token. Use it in your next request to receive the next set of results.

rtype

dict

returns

Response Syntax

{
    'AutoMLJobSummaries': [
        {
            'AutoMLJobName': 'string',
            'AutoMLJobArn': 'string',
            'AutoMLJobStatus': 'Completed'|'InProgress'|'Failed'|'Stopped'|'Stopping',
            'AutoMLJobSecondaryStatus': 'Starting'|'AnalyzingData'|'FeatureEngineering'|'ModelTuning'|'MaxCandidatesReached'|'Failed'|'Stopped'|'MaxAutoMLJobRuntimeReached'|'Stopping'|'CandidateDefinitionsGenerated'|'GeneratingExplainabilityReport'|'Completed'|'ExplainabilityError'|'DeployingModel'|'ModelDeploymentError',
            'CreationTime': datetime(2015, 1, 1),
            'EndTime': datetime(2015, 1, 1),
            'LastModifiedTime': datetime(2015, 1, 1),
            'FailureReason': 'string',
            'PartialFailureReasons': [
                {
                    'PartialFailureMessage': 'string'
                },
            ]
        },
    ],
    'NextToken': 'string'
}

Response Structure

  • (dict) --

    • AutoMLJobSummaries (list) --

      Returns a summary list of jobs.

      • (dict) --

        Provides a summary about an AutoML job.

        • AutoMLJobName (string) --

          The name of the AutoML job you are requesting.

        • AutoMLJobArn (string) --

          The ARN of the AutoML job.

        • AutoMLJobStatus (string) --

          The status of the AutoML job.

        • AutoMLJobSecondaryStatus (string) --

          The secondary status of the AutoML job.

        • CreationTime (datetime) --

          When the AutoML job was created.

        • EndTime (datetime) --

          The end time of an AutoML job.

        • LastModifiedTime (datetime) --

          When the AutoML job was last modified.

        • FailureReason (string) --

          The failure reason of an AutoML job.

        • PartialFailureReasons (list) --

          The list of reasons for partial failures within an AutoML job.

          • (dict) --

            The reason for a partial failure of an AutoML job.

            • PartialFailureMessage (string) --

              The message containing the reason for a partial failure of an AutoML job.

    • NextToken (string) --

      If the previous response was truncated, you receive this token. Use it in your next request to receive the next set of results.
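The NextToken handshake described above follows the standard paginated-list pattern: pass the token from each truncated response into the next request until no token is returned. A sketch of that loop (the filter values in the comment are illustrative):

```python
def list_all_auto_ml_jobs(client, **filters):
    """Collect AutoMLJobSummaries across pages by following NextToken."""
    summaries = []
    kwargs = dict(filters)
    while True:
        page = client.list_auto_ml_jobs(**kwargs)
        summaries.extend(page.get("AutoMLJobSummaries", []))
        token = page.get("NextToken")
        if not token:  # no token means the listing is complete
            return summaries
        kwargs["NextToken"] = token

# Example, given a configured boto3 SageMaker client:
#   jobs = list_all_auto_ml_jobs(client, StatusEquals="Completed",
#                                SortBy="CreationTime", SortOrder="Descending")
```

boto3 also exposes a built-in paginator for this operation (`client.get_paginator("list_auto_ml_jobs")`) that performs the same bookkeeping.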